
I have created a simple table called test3:

create table if not exists test3(
   Studies varchar(300) not null,
   Series varchar(500) not null
);

I have some JSON data:

{
        "Studies": [{
                "studyinstanceuid": "2.16.840.1.114151",
                "studydescription": "Some study",
                "studydatetime": "2014-10-03 08:36:00"
        }],
        "Series": [{
                "SeriesKey": "abc",
                "SeriesInstanceUid": "xyz",
                "studyinstanceuid": "2.16.840.1.114151",
                "SeriesDateTime": "2014-10-03 09:05:09"
        }, {
                "SeriesKey": "efg",
                "SeriesInstanceUid": "stw",
                "studyinstanceuid": "2.16.840.1.114151",
                "SeriesDateTime": "0001-01-01 00:00:00"
        }],

        "ExamKey": "exam-key",
}

and here is my JSONPaths file:

{
    "jsonpaths": [
        "$['Studies']",
        "$['Series']"
    ]
}

Both the JSON data and the JSONPaths file are uploaded to S3.
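(I uploaded both files with s3cmd; the exact commands aren't shown here, but they were roughly like the following, using the same bucket and prefix as in the COPY command below:)

s3cmd put input.json s3://mybucket/redshift_demo/input.json
s3cmd put json_path.json s3://mybucket/redshift_demo/json_path.json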

I try to execute the following COPY command in the Redshift console:

copy test3
from 's3://mybucket/redshift_demo/input.json'
credentials 'aws_access_key_id=my_key;aws_secret_access_key=my_access' 
json 's3://mybucket/redsift_demo/json_path.json'

I get the following error. Can anyone please help? I've been stuck on this for some time now.

[Amazon](500310) Invalid operation: Number of jsonpaths and the number of columns should match. JSONPath size: 1, Number of columns in table or column list: 2
Details: 
 -----------------------------------------------
  error:  Number of jsonpaths and the number of columns should match. JSONPath size: 1, Number of columns in table or column list: 2
  code:      8001
  context:   
  query:     1125432
  location:  s3_utility.cpp:670
  process:   padbmaster [pid=83747]
  -----------------------------------------------;
1 statement failed.

Execution time: 1.58s

  • The issue was with the S3 credentials and had nothing to do with Redshift; I was able to resolve this. Commented Feb 7, 2017 at 5:45

1 Answer

Redshift's error message is misleading. The real issue is that your input file is malformed JSON: there is an extra comma after the last entry.

COPY succeeds if you change "ExamKey": "exam-key", to "ExamKey": "exam-key" (that is, remove the trailing comma).
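In other words, the end of input.json should look like this (only the trailing comma is removed; everything else stays the same):

        }],

        "ExamKey": "exam-key"
}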


2 Comments

ketan, thanks for the comment. I found the error; it was my mistake, I had the wrong S3 credentials. The credentials used by s3cmd to upload the files to the S3 bucket and the credentials in the COPY command did not match. Once I fixed that, the above error went away.
But you would still have to remove the extra comma as I suggested; otherwise the COPY will fail.
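If it helps, a quick way to catch this kind of problem before uploading is to run the file through a strict JSON parser locally. For example, Python's built-in json.tool module (assuming Python is installed and the file is saved locally as input.json) reports a parse error as long as the trailing comma is present:

python -m json.tool input.json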
