Amazon Copy Command - amazon-web-services

I am trying to add the encoding option to the Amazon Redshift COPY command, taken from here. The COPY command is:
COPY Tier2.personloginactivity (PersonId,LoginDateTime,TimeSpentOnSite) FROM 's3://imdb-ftp-development/tier2.text.csv' credentials 'aws_access_key_id=dsdsdsdsdsdsd;aws_secret_access_key=dsdsdsdsdsds'
delimiter ',' IGNOREHEADER 1 encoding 'UTF-8';
The encoding portion is giving me an error. Can you please help?

Per the doc http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html:
Valid values for file_encoding are as follows:
UTF8
UTF16
UTF16LE
UTF16BE
So you need to remove the dash.
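For reference, the same command should run once the dash is removed (credentials abbreviated here):

COPY Tier2.personloginactivity (PersonId,LoginDateTime,TimeSpentOnSite)
FROM 's3://imdb-ftp-development/tier2.text.csv'
credentials 'aws_access_key_id=...;aws_secret_access_key=...'
delimiter ',' IGNOREHEADER 1
encoding 'UTF8';  -- UTF8, not UTF-8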

Related

Invalid null byte - field longer than 1 byte

I am copying data from a Redshift Manifest File stored in S3.
My copy command looks like
COPY <table name> FROM 's3://...' CREDENTIALS '<credentials>' FORMAT AS JSON 'auto' GZIP TRUNCATECOLUMNS ACCEPTINVCHARS EMPTYASNULL TIMEFORMAT AS 'auto' REGION '<region>' manifest;
The column in the table where I am facing this issue is of type varchar(255).
Value of this column in the s3 file looks like
"<column>":"\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000\\u0000..."
Error: Invalid null byte - field longer than 1 byte
I have tried using NULL AS '\0' as well. That didn't work. The error this gives is Invalid operation: NULL argument is not supported for JSON based COPY
It is not clear why you would want to store a bunch of ASCII zero characters in a string, so more information on what this is for will get you a more useful workaround. The basic answer is 'don't do this'.
ASCII zero is defined as the null terminator character (aka NUL, which is not the same thing as NULL), and this character has special meaning in data streams. It's a control character and as such has no business being in your strings.
If you are trying to represent binary data in a string you should base64 encode the data first.
If you are trying to represent NULL, this is done with a JSON null - "column":null
More information on what you are doing will be helpful in proposing a solution.
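As a sketch of both fixes, assuming the field really does hold sixteen zero bytes: represent a missing value with a JSON null, or base64-encode the bytes (sixteen zero bytes encode to a run of A characters plus padding):

"<column>":null
"<column>":"AAAAAAAAAAAAAAAAAAAAAA=="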

How to deal with line breaks in Redshift load?

I have a CSV which has line breaks in one of the columns. I get the error Delimiter not found.
If I rewrite the text as continuous, without line breaks, then it works. But how do I deal with the line breaks?
My COPY command:
COPY cat_crt_test_scores
from 's3://rds-cat-crt-test-score-table/checkcsv.csv'
iam_role 'arn:aws:iam::423639311527:role/RedshiftS3Access'
explicit_ids
delimiter '|'
TIMEFORMAT 'auto'
ESCAPE;
The error I get is: Delimiter not found after reading till Dear Conduira,
As suggested by John Rotenstein in the comments, using the CSV option is the right way to deal with this.
A more detailed answer is given here.
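A sketch of the command above with the CSV option added (and ESCAPE dropped, since Redshift rejects CSV combined with ESCAPE), assuming the fields that contain line breaks are enclosed in double quotes in the file:

COPY cat_crt_test_scores
FROM 's3://rds-cat-crt-test-score-table/checkcsv.csv'
iam_role 'arn:aws:iam::423639311527:role/RedshiftS3Access'
explicit_ids
TIMEFORMAT 'auto'
CSV
DELIMITER '|';  -- in CSV mode a quoted field is one value even when it spans lines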

Escaping delimiter in Amazon Redshift COPY command

I'm pulling data from Amazon S3 into a table in Amazon Redshift. The table contains various columns, where some column data might contain special characters.
The copy command has an option called Delimiter where we can specify the delimiter while pulling the data into the table.
The issue is two-fold:
When I export (UNLOAD command) to S3 using a delimiter - say , - it works fine, but when I try to import into Redshift from S3 the issue creeps in, because certain columns contain the ',' character, which the COPY command misinterprets as a delimiter and throws an error.
I tried various delimiters, but the data in my table seems to contain one special character or another that causes the above issue.
I even tried unloading using multiple delimiters - like #% or ~, but when loading from S3 using the COPY command, the dual delimiter is not supported.
Any solutions?
I think the delimiter can be escaped using \ but for some reason that isn't working either, or maybe I'm not using the right escaping syntax in the COPY command.
The following example shows the contents of a text file with the field values separated by commas.
12,Shows,Musicals,Musical theatre
13,Shows,Plays,All "non-musical" theatre
14,Shows,Opera,All opera, light, and "rock" opera
15,Concerts,Classical,All symphony, concerto, and choir concerts
If you load the file using the DELIMITER parameter to specify comma-delimited input, the COPY command will fail because some input fields contain commas. You can avoid that problem by using the CSV parameter and enclosing the fields that contain commas in quote characters. If the quote character appears within a quoted string, you need to escape it by doubling the quote character. The default quote character is a double quotation mark, so you will need to escape each double quotation mark with an additional double quotation mark. Your new input file will look something like this.
12,Shows,Musicals,Musical theatre
13,Shows,Plays,"All ""non-musical"" theatre"
14,Shows,Opera,"All opera, light, and ""rock"" opera"
15,Concerts,Classical,"All symphony, concerto, and choir concerts"
Source: Load Quote from a CSV File
What I use:
COPY tablename FROM 'S3-Path' CREDENTIALS '' MANIFEST CSV QUOTE '\"' DELIMITER ',' TRUNCATECOLUMNS ACCEPTINVCHARS MAXERROR 2
If I’ve made a bad assumption please comment and I’ll refocus my answer.
If the delimiter is appearing within fields, then use the ADDQUOTES parameter with the UNLOAD command:
Places quotation marks around each unloaded data field, so that Amazon Redshift can unload data values that contain the delimiter itself.
Then:
If you use ADDQUOTES, you must specify REMOVEQUOTES in the COPY if you reload the data.
A popular delimiter is the pipe character (|) that is rare in text files.
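A sketch of the round trip, with a placeholder table name, bucket path, and credentials:

UNLOAD ('SELECT * FROM tablename')
TO 's3://mybucket/unload/tablename_'
CREDENTIALS '<credentials>'
DELIMITER '|'
ADDQUOTES;  -- quotes every field so an embedded | cannot be mistaken for a delimiter

COPY tablename
FROM 's3://mybucket/unload/tablename_'
CREDENTIALS '<credentials>'
DELIMITER '|'
REMOVEQUOTES;  -- strips the quotes that ADDQUOTES added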
Adding CSV QUOTE as '\"' before the DELIMITER worked for me.

Redshift COPY command with "^A" delimiter

I am trying to load a Control-A ("^A") delimited file into Redshift using the COPY command. I see the default delimiter is pipe (|) and with CSV it is comma.
I couldn't find a way to use ^A; when I tried the COPY command with ^A or \x01, it threw the message below. Has anybody tried this before? The documentation says we can use a delimiter, but gives no clue on using ^A.
ERROR: COPY delimiter must be a single character
I have used '\\001' as a delimiter for Ctrl-A based field separation in Redshift and also in Pig.
Example :
copy redshiftinfo from 's3://mybucket/data/redshiftinfo.txt'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
delimiter '\\001'

Amazon Redshift - COPY from CSV - single Double Quote in row - Invalid quote formatting for CSV Error

I'm loading a CSV file from S3 into Redshift. This CSV file is analytics data which contains the PageUrl (which may contain user search info inside a query string for example).
It chokes on rows where there is a single double-quote character; for example, if there is a page for a 14" toy then the PageUrl would contain:
http://www.mywebsite.com/a-14"-toy/1234.html
Redshift understandably can't handle this as it is expecting a closing double quote character.
The way I see it my options are:
1. Pre-process the input and remove these characters
2. Configure the COPY command in Redshift to ignore these characters but still load the row
3. Set MAXERRORS to a high value and sweep up the errors using a separate process
Option 2 would be ideal, but I can't find it!
Any other suggestions if I'm just not looking hard enough?
Thanks
Duncan
It's 2017 and I ran into the same problem; happy to report there is now a way to get Redshift to load CSV files with the odd " in the data.
The trick is to use the ESCAPE keyword, and also to NOT use the CSV keyword.
I don't know why, but having the CSV and ESCAPE keywords together in a copy command resulted in failure with the error message "CSV is not compatible with ESCAPE;"
However with no change to the loaded data I was able to successfully load once I removed the CSV keyword from the COPY command.
You can also refer to this documentation for help:
http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-conversion.html#copy-escape
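A sketch of that approach, with hypothetical table and bucket names; note there is no CSV keyword:

COPY analytics_events
FROM 's3://mybucket/analytics.csv'
CREDENTIALS '<credentials>'
DELIMITER ','
ESCAPE;  -- without CSV a lone " is plain data; ESCAPE handles backslash-escaped delimiters and newlines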
Unfortunately, there is no way to fix this. You will need to pre-process the file before loading it into Amazon Redshift.
The closest options you have are CSV [ QUOTE [AS] 'quote_character' ] to wrap fields in an alternative quote character, and ESCAPE if the quote character is preceded by a backslash. Alas, both require the file to be in a particular format before loading.
See:
Redshift COPY Data Conversion Parameters
Redshift COPY Data Format Parameters
I have done this using DELIMITER ',' IGNOREHEADER 1; as a replacement for 'CSV' at the end of the COPY command. It's working fine.