AWS Glue Studio output schema - amazon-web-services

Is there a faster way in Glue Studio to get the output schema? Today I have to run a Data Preview just to see the schema.

Related

Optimize Glue job and comparison between Visual and Script mode, JDBC Connection parameters

I am working on a Glue job to read data from an Oracle database and write it into Redshift. I have crawled the tables from my Oracle source and Redshift target. When I use the Glue visual job, with an Oracle source and a write-to-Redshift component, it completes in around 7 minutes with G.1X and 5 workers. I tried other combinations and concluded this is the best combination I can use.
Now I want to optimize this further and am trying to write a PySpark script from scratch. I used a simple JDBC read and write, but it is taking more than 30 minutes to complete. I have 3M records in the source and have tried numPartitions 10 and fetchsize 30000. My questions are:
What are the default configs used by the Glue visual job, given that it finishes so much faster?
Is the fetch size already configured on the source side when reading through a JDBC connection? If the Glue visual job uses this and its value is higher than what I specified, could that be the reason for the faster execution?
Please let me know if you need any further details.
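(For comparison, a parallel JDBC read in plain PySpark looks roughly like the sketch below; the URL, table, column names, and credentials are all placeholders. Two things worth checking: without partitionColumn/lowerBound/upperBound, Spark reads everything through a single connection no matter what numPartitions is set to, and the Glue visual Redshift target loads data via COPY from an S3 staging area, which plain JDBC INSERTs cannot match.)
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# Sketch of a parallel JDBC read/write in plain PySpark; all connection
# details below are placeholders. The read is only split across
# numPartitions when partitionColumn/lowerBound/upperBound are also given.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//host:1521/service")
      .option("dbtable", "SCHEMA.SOURCE_TABLE")
      .option("user", "user").option("password", "password")
      .option("partitionColumn", "ID")        # numeric or date column
      .option("lowerBound", "1")
      .option("upperBound", "3000000")        # ~3M rows per the question
      .option("numPartitions", "10")
      .option("fetchsize", "30000")
      .load())
(df.write.format("jdbc")
   .option("url", "jdbc:redshift://cluster:5439/db")
   .option("dbtable", "target_table")
   .option("user", "user").option("password", "password")
   .option("batchsize", "10000")              # batch INSERTs on the write side
   .mode("append")
   .save())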

AWS Glue Crawler is not updating the table after the 1st crawl

I am adding a new file in Parquet format, created by AWS Glue DataBrew, to my S3 folder. The new file has the same schema as the previous file, but when I run the Crawler a 2nd time it neither updates the table nor creates a new one in the Data Catalog. When I crawl both files together, both of them do get added.
Log File is giving the following information:
INFO : Created partitions with values [[New file name]] for table
BENCHMARK : Finished writing to Catalog
I have tried with and without "Create a single schema for each S3 path", but the crawler is not updating the table with the new file. Soon I will be adding new files daily for my analysis. Any solution?
The best way to approach this issue, in my opinion, is to have AWS DataBrew write to the Data Catalog directly. The Data Catalog can be updated either by the crawler or by DataBrew, but the recommended practice is to use only one of those mechanisms, not both.
Can you try running the job with the Data Catalog as the output and letting DataBrew manage your catalog? It should update your catalog table with the right data/files.
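If it helps, a rough sketch of that setup through boto3 is below; every name and ARN is a placeholder, and the same Data Catalog output can be selected in the DataBrew console when configuring the job's output:
import boto3
databrew = boto3.client("databrew")
# Sketch (assumed names/ARNs): a DataBrew recipe job whose output goes
# straight to a Glue Data Catalog table, so DataBrew keeps the table in
# sync and no crawler is needed on this path.
databrew.create_recipe_job(
    Name="my-brew-job",
    RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",
    DatasetName="my-dataset",
    RecipeReference={"Name": "my-recipe", "RecipeVersion": "1.0"},
    DataCatalogOutputs=[{
        "DatabaseName": "my_database",   # existing Glue database
        "TableName": "my_table",         # table DataBrew will keep updated
        "S3Options": {"Location": {"Bucket": "my-bucket", "Key": "brew-output/"}},
        "Overwrite": False,              # keep adding new files
    }],
)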

AWS Athena JavaScript SDK - Create table from query result (CTAS) - Specify output format

I am trying to use the AWS JavaScript Node.JS SDK to run a query with AWS Athena and store the results in an AWS Glue table in Parquet format (not just a CSV file).
If I use the console, it is pretty simple with a CTAS query:
CREATE TABLE tablename
WITH (
external_location = 's3://bucket/tablename/',
FORMAT = 'parquet')
AS
SELECT *
FROM source
But with the AWS Athena JavaScript SDK I am only able to set an output file destination using the Workgroup or Output parameters and run a basic SELECT query; the results are output to a CSV file and are not indexed properly in AWS Glue, which breaks the bigger process this is part of. If I try to call that query using the JavaScript SDK I get:
Table properties [FORMAT] are not supported.
I would be able to call that DDL statement using the Java SDK JDBC driver connection option.
Is anyone familiar with a solution or workaround with the Javascript SDK for Node.JS?
There is no difference between running the SQL you posted in the Athena web console, the AWS SDK for JavaScript, the AWS SDK for Java, or the JDBC driver. None of these process the SQL themselves, so if the SQL works in one of them it will work in all of them; only the Athena service reads the SQL.
Check your SQL and make sure the query in your code really is the same as the one you tried in the web console. If they are indeed the same, the error is somewhere else in your code, so post that too.
Update: the problem is the upper-case FORMAT. If you paste the query you posted into the Athena web console, it bugs out and doesn't run the query, but if you run it with the CLI or an SDK you get the error you posted. You did not run the same SQL in the console as in the SDK; if you had, you would have gotten the same error in both.
Use lower case format and it will work.
This is definitely a bug in Athena; these properties should not be case sensitive.
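For reference, here is the corrected query submitted programmatically. The sketch below uses Python with boto3 rather than the JavaScript SDK, but it is the same StartQueryExecution API in both; the database and bucket names are placeholders:
import boto3
athena = boto3.client("athena")
# The CTAS from the question with the lower-case 'format' property;
# database and S3 locations are placeholders.
ctas = """
CREATE TABLE tablename
WITH (
  external_location = 's3://bucket/tablename/',
  format = 'parquet')
AS SELECT * FROM source
"""
athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://bucket/athena-results/"},
)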

Can I run Athena query from sql file stored in S3

I have a .sql file filled with Athena queries.
Is there a way I can tell Athena to run the sql queries saved in s3://my-bucket/path/to/queries.sql?
In MySQL you can do something like this (based on an SO answer), but I am curious if it is possible in Athena:
mysql> source \home\user\Desktop\test.sql;
Is there a way I can tell Athena to run the sql queries saved in s3://my-bucket/path/to/queries.sql?
I think there is no direct way to tell Athena to run a query stored in S3.
In MySQL you can do something like this (based on an SO answer), but I am curious if it is possible in Athena.
If you want to do it at all, then yes, you should be able to run the query using the AWS CLI.
Your steps would look like this (a rough boto3 sketch follows the list):
Get the query from S3 using the CLI and store it in a temp variable
Pass the query stored in the temp variable to the Athena query CLI
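The answer above describes the AWS CLI; the same two steps in Python with boto3 look roughly like this (bucket, key, database, and output location are placeholders):
import boto3
# Step 1: fetch the saved query text from S3.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="path/to/queries.sql")
query = obj["Body"].read().decode("utf-8")
# Step 2: hand the query to Athena. StartQueryExecution accepts a single
# statement, so a file containing several queries has to be split
# (e.g. on ';') and submitted one statement at a time.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)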
Hope this will help.

AWS Athena: HIVE_UNKNOWN_ERROR: Unable to create input format

I've crawled a couple of XML files on S3 using AWS Glue with a simple XML classifier. However, when I try running any query on that data using AWS Athena, I get the following error (note that it's the simplest possible query I'm running here):
HIVE_UNKNOWN_ERROR: Unable to create input format
Note that Athena can see my tables and it can see the columns; it just can't query them.
I noticed that someone has the same problem on the AWS Discussion forums (Athena XML Query Give HIVE Unknown Error), but it got no love from anyone.
I know there is a similar question here about this error, but the query in that case targeted an RDS database, unlike the S3 bucket I have here.
Has anyone got a solution for this?
Sadly, at this time (12/2018), Athena cannot query XML input, which is hard to understand when you may have heard that Athena, along with AWS Glue, can query XML.
The output you are seeing from the AWS crawler is correct, though; it's just not doing what you think it's doing! For example, after your crawler has run you can see the tables but cannot execute any Athena queries. Go into your AWS Glue Catalog, click Tables on the right, click your table, and edit its properties: notice that the input format is null. If you have any other tables you can look at their properties, or refer back to the input format documentation for Athena. This is why you receive the error.
Solutions:
convert your data to text/JSON/Avro/another supported format prior to upload
create an AWS Glue job which converts the XML source into a format Athena supports (hopefully compressed with ORC/Parquet); see the sketch below
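A minimal sketch of that second option as a Glue ETL script, assuming the crawler already created a catalog table for the XML (all database/table/path names are placeholders):
from pyspark.context import SparkContext
from awsglue.context import GlueContext
glueContext = GlueContext(SparkContext.getOrCreate())
# Read the table the XML crawler created (placeholder names).
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",
    table_name="my_xml_table",
)
# Rewrite it as Parquet; point a crawler (or a manual table definition)
# at the output path and Athena can query it.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/parquet-output/"},
    format="parquet",
)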