Redshift: skip an entire file which contains errors - amazon-web-services

Is there any way, option, or workaround to skip an entire file that contains bad entries while loading data from S3 into Redshift?
Please note that I am not talking about skipping the invalid entries within a file, but the entire file which contains a bad entry or record.

By default, Redshift fails the entire file if you don't supply the MAXERROR option in the COPY command.
copy catdemo from 's3://awssampledbuswest2/tickit/category_pipe.txt' iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>' region 'us-west-2';
The above command will fail the entire file and will not load any data from it. Read the documentation here for more information.
If you specify the MAXERROR option, Redshift tolerates up to that number of bad records in a particular file and loads the remaining rows.
copy catdemo from 's3://awssampledbuswest2/tickit/category_pipe.txt' iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>' region 'us-west-2' MAXERROR 500;
In the above example, Redshift will tolerate up to 500 bad records.
I hope this answers your question, but if it doesn't, please update the question and I will refocus the answer.
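As a rough sketch (STL_LOAD_ERRORS is a standard system table; the one-hour window is just an example), you can check which input files produced errors after a tolerant load and then decide which files to discard or re-stage:
-- A minimal sketch: list the input files that produced load errors recently,
-- so the offending files can be excluded or re-staged before reloading.
select filename,
       err_reason,
       count(*) as error_count
from stl_load_errors
where starttime > dateadd(hour, -1, getdate())
group by filename, err_reason
order by error_count desc;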

Related

AWS COPY From S3 Command Fails As A Result of Truncation - RESOLVED

First of all, I would like to mention that I tried searching for this issue in existing SO questions but couldn't find the scenario I came across, hence this new question.
So I am trying to import data from S3 to Redshift.
The data in S3 is JSON, separated by the newline character \n (exported using the UNLOAD command from another Redshift cluster).
The copy command is -
copy redhist_table_name
from 's3://bucket/path/to/s3/json/file.json'
iam_role 'iam_role_info'
region 'region'
json 'auto';
STL_LOAD_ERRORS shows the error as "delimiter not found", but when I looked closely, I found that the copy command is copying only the first 1024 characters of the JSON row, which results in the above error.
I looked through all the options that the COPY command offers to see if there's a way to increase this limit, but I found none.
Any ideas where this limit is coming from? Or is this not the root cause of the issue?
I expect this is not the root cause. STL_LOAD_ERRORS only stores 1024 characters in the "raw_line" column. There may very well be a limit on how long a line can be, but I know that it is much longer than 1024 characters.
Which line in the file is COPY failing on? The first, or somewhere later in the file?
If it's a line deep in the file, there may be something off about it. UNLOAD to COPY should work correctly, but I can see that there may be some corner cases (like " in string values). If it is a specific line, then posting that line (sanitized if need be) would be helpful.
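To see exactly which line COPY is failing on, a query like this sketch against STL_LOAD_ERRORS can help (keep in mind that raw_line itself is truncated to 1024 characters):
-- A minimal sketch: show the file, line number, and (truncated) raw line
-- for the most recent load errors.
select query,
       filename,
       line_number,
       err_reason,
       raw_line    -- truncated to 1024 characters by the system table
from stl_load_errors
order by starttime desc
limit 5;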

Loading data into Amazon Redshift: Ignore last n rows

Is there an option to load a CSV into Redshift while skipping over a footer?
Just like we use IGNOREHEADER when we want to ignore initial rows, is there any way to ignore the last rows?
No. There is no parameter to tell the COPY command to ignore rows at the end of a file.
However, you could load the file with an error by specifying a MAXERROR of 1, which will allow the file to load with one bad row (or more, if required).
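For example, a sketch along these lines (the table name, S3 path, and role ARN are placeholders) lets a file with a single trailing footer row load cleanly:
-- A minimal sketch: tolerate one bad row, e.g. a trailing footer line.
copy my_table
from 's3://my-bucket/path/to/file.csv'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
csv
ignoreheader 1
maxerror 1;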

Rows missing while reading a file that was unloaded from Redshift

I unloaded a set of 200 million records from Redshift to S3 using SQLWorkbench. I got a message saying "Unload complete, 2,00,00,00,000 records complete". However, when I downloaded this file from S3 and opened it, there were only 40 million rows. There were no errors at any point. I am very confused and unable to proceed because of this issue.
What could be causing the issue?
An unload of this size will not be in one file. Each unloaded file is limited to 6.2 GB, or smaller if the MAXFILESIZE parameter is set. Also, if PARALLEL is ON (the default), each slice in Redshift will make its own set of files in S3. I expect you are only looking at one of many files that were created by the UNLOAD. Each file will have a slice number and a part number attached to the file base name you provided in the UNLOAD statement.
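One way to see every file an UNLOAD produced is to add the MANIFEST option, which writes a JSON manifest next to the data files (the table, prefix, and role below are placeholders):
-- A minimal sketch: the MANIFEST option records every data file the UNLOAD writes.
unload ('select * from my_table')
to 's3://my-bucket/exports/my_table_'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
manifest;
-- The manifest object (s3://my-bucket/exports/my_table_manifest) lists the URL
-- of each part file, so you can confirm how many files hold the unloaded rows.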

AWS S3: How to plug in a dynamic file name in the S3 directory in COPY command

I have a job in Redshift that is responsible for pulling 6 files every month from S3. The file names follow a standard naming convention of "file_label_MonthNameYYYY_Batch01.CSV". I'd like to modify the COPY command below so that the file naming in the S3 directory changes dynamically and I don't have to hard-code the month name, year, and batch number. Batch numbers range from 1 to 6.
Currently, here is what I have, which is not efficient:
COPY tbl_name ( column_name1, column_name2, column_name3 )
FROM 'S3://bucket_name/folder_name/Static_File_Label_July2021_Batch01.CSV'
CREDENTIALS 'aws_access_key_id = xxx;aws_secret_access_key = xxxxx'
removequotes
EMPTYASNULL
BLANKSASNULL
DATEFORMAT 'MM/DD/YYYY'
delimiter ','
IGNOREHEADER 1;
COPY tbl_name ( column_name1, column_name2, column_name3 )
FROM 'S3://bucket_name/folder_name/Static_File_Label_July2021_Batch02.CSV'
CREDENTIALS 'aws_access_key_id = xxx;aws_secret_access_key = xxxxx'
removequotes
EMPTYASNULL
BLANKSASNULL
DATEFORMAT 'MM/DD/YYYY'
delimiter ','
IGNOREHEADER 1;
The dynamic file names will change to August2021_Batch01 and August2021_Batch02 next month, and so forth. Is there a way to do this? Thank you in advance.
There are lots of approaches to this. Which one is best for your case will depend on your circumstances. You need a layer in your process that controls configuring the SQL for each month. Here are some ways to consider:
1. Use a manifest file - this file will have the S3 object names to load. Your processing / file prep can update this file (see the sketch after this list).
2. Use a fixed load folder where the files are located for COPY, then move these files to a permanent storage location after COPY.
3. Use variables in your bench to set the Month value and substitute it in when the SQL is issued to Redshift.
4. Write some code (Lambda?) to issue the SQL you are looking for.
5. Last I checked, you could leave the object name incomplete and all matching objects would be loaded. Leave off the batch number and suffix and load all the files with one text change.
It is desirable to load multiple files with a single COPY command (it uses more nodes in parallel), and options 1, 2, and 5 do this.
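As a rough sketch of option 1, the manifest is a small JSON object listing the exact files to load, and a single COPY picks it up with the MANIFEST keyword (the bucket, object names, and role ARN below are placeholders):
-- Hypothetical manifest object: s3://bucket_name/manifests/current_month.manifest
-- {
--   "entries": [
--     {"url": "s3://bucket_name/folder_name/Static_File_Label_July2021_Batch01.CSV", "mandatory": true},
--     {"url": "s3://bucket_name/folder_name/Static_File_Label_July2021_Batch02.CSV", "mandatory": true}
--   ]
-- }

-- A minimal sketch: one COPY loads every file named in the manifest.
copy tbl_name (column_name1, column_name2, column_name3)
from 's3://bucket_name/manifests/current_month.manifest'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
manifest
removequotes
emptyasnull
blanksasnull
dateformat 'MM/DD/YYYY'
delimiter ','
ignoreheader 1;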
When specifying the FROM location of files to load, you can specify a partial filename.
Here is an example from COPY examples - Amazon Redshift:
The following example loads the SALES table with tab-delimited data from lzop-compressed files in an Amazon EMR cluster. COPY loads every file in the myoutput/ folder that begins with part-.
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '\t' lzop;
Therefore, you could specify a key prefix:
FROM 'S3://bucket_name/folder_name/Static_File_Label_July2021_'
(For loads from S3, COPY treats the path as a key prefix and loads every object that begins with it, so no wildcard is needed.) You would just need to change the Month & Year identifier. All files with that prefix would be loaded in one batch.
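Putting it together, one COPY per month could replace the hard-coded batch commands; only the month/year part of the prefix changes (the role ARN below is a placeholder for whatever credentials you use):
-- A minimal sketch: a single COPY loads every batch file for the month by prefix.
copy tbl_name (column_name1, column_name2, column_name3)
from 's3://bucket_name/folder_name/Static_File_Label_August2021_'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
removequotes
emptyasnull
blanksasnull
dateformat 'MM/DD/YYYY'
delimiter ','
ignoreheader 1;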

Skip top N lines in snowflake load

My actual data in the CSV extracts starts from line 10. How can I skip the top few lines in a Snowflake load using COPY or any other utility? Do we have anything similar to SKIP_HEADER?
I have files on S3 and that is my stage. I would be creating a Snowpipe later on this data source.
Yes, there is a SKIP_HEADER option for CSV, allowing you to skip a specified number of rows when defining a file format. Please have a look here:
https://docs.snowflake.net/manuals/sql-reference/sql/create-file-format.html#type-csv
So you create a file format associated with the CSV files you have in mind and then use it when calling the COPY commands.
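As a minimal sketch (the stage, table, and file format names are hypothetical): since the data starts on line 10, skip the first 9 lines in the file format and reference it from COPY INTO; a later Snowpipe definition can reuse the same file format:
-- A minimal sketch: skip the first 9 lines so loading starts at line 10.
create or replace file format my_csv_format
  type = 'CSV'
  field_delimiter = ','
  skip_header = 9;

copy into my_table
from @my_s3_stage
file_format = (format_name = 'my_csv_format');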