I would like to use Terraform to create an Athena database, including tables and views. I have already searched a lot and found some posts, e.g. here: Create AWS Athena view programmatically
I know that I can use Terraform provisioners to execute AWS CLI commands to create these resources, for example like this: AWS Athena Create table view with SQL
But I don't want to do that. I want to create everything (as far as possible) with Terraform so that I don't have to worry about lifecycle etc.
As far as I understand, an Athena database can be a Glue database, depending on the source you choose. If I choose AWSDataCatalog (Glue) as the data source in Athena, it should not matter whether I create an Athena database or a Glue database with Terraform, correct?
In Glue I can also create tables, but not views. Do Glue tables automatically correspond to Athena tables? How can I create Athena views? I would like to create everything with SQL DDL, just like you can in the AWS web console. How does this work via Terraform? If this functionality is not available, what is the best way to go? I am grateful for every tip and help!
Athena uses the Glue Data Catalog to store metadata about databases, tables, and views. All Athena tables are Glue tables. However, not all Glue tables work with Athena: you can create tables in Glue that won't be visible in Athena, and you can create tables that will be visible but won't work (for example, they cause runtime errors when you query them).
Athena also uses the Glue Data Catalog for views, but the format is very specific to Athena, unlike regular tables, which can be made interoperable with, for example, Spark.
In an answer to the question you link to, I explain the anatomy of an Athena view in detail. Using that information I have created views with CloudFormation, so it can be done with Terraform too. Unless you write code, though, you will have to jump through all the hoops and repeat most of the information as Presto metadata, unfortunately.
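To give a feel for what that involves, here is a minimal boto3 sketch of the view anatomy, with made-up database, table, and column names; the same fields would map onto Terraform's aws_glue_catalog_table resource:

```python
# Sketch of the metadata an Athena view needs in the Glue Data Catalog.
# All names here are hypothetical.
import base64
import json

import boto3

glue = boto3.client("glue")

view_sql = "SELECT id, name FROM example_db.example_table"

# Athena stores the view definition as base64-encoded JSON wrapped in a
# "Presto View" comment. Note that the column types inside it use Presto
# names ("integer", "varchar"), not the Glue/Hive names ("int", "string").
presto_metadata = {
    "originalSql": view_sql,
    "catalog": "awsdatacatalog",
    "schema": "example_db",
    "columns": [
        {"name": "id", "type": "integer"},
        {"name": "name", "type": "varchar"},
    ],
}
encoded = base64.b64encode(json.dumps(presto_metadata).encode()).decode()

glue.create_table(
    DatabaseName="example_db",
    TableInput={
        "Name": "example_view",
        "TableType": "VIRTUAL_VIEW",
        "Parameters": {"presto_view": "true", "comment": "Presto View"},
        # The columns are repeated here in Glue/Hive form.
        "StorageDescriptor": {
            "Columns": [
                {"Name": "id", "Type": "int"},
                {"Name": "name", "Type": "string"},
            ],
            "SerdeInfo": {},
        },
        "ViewOriginalText": f"/* Presto View: {encoded} */",
        "ViewExpandedText": "/* Presto View */",
    },
)
```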
Related
I am trying to build AWS QuickSight reports using AWS Athena, which builds the specific views for said reports. However, I seem to be able to select only a single table when creating the Glue job, despite being able to select all the tables I need for the crawler of the entire database from DynamoDB.
What is the simplest route to get a complete extract of all tables that is queryable in Athena?
I don't want to connect the reports directly to DynamoDB, as it is a production database, and I want to create some separation to avoid any performance degradation from a poor query, etc.
I have completed all the steps mentioned in this tutorial.
https://aws.amazon.com/blogs/big-data/improve-amazon-athena-query-performance-using-aws-glue-data-catalog-partition-indexes/
I am getting the expected results, but I would like to know if this is possible without Glue.
I would like to use only Athena (as well as S3) and nothing else to achieve the same results.
Athena uses the Glue Data Catalog to store table metadata. Technically you can use Athena federation to avoid the Glue Data Catalog, but for normal usage the Glue Data Catalog is necessary, just like S3 is.
Partition indexes are a feature of the Glue Data Catalog, and short of implementing something like them yourself and using federation, there is no equivalent feature in Athena itself, since Athena does not store table metadata on its own.
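For reference, this is roughly what adding such an index looks like through the Glue API; a sketch with made-up names:

```python
# Sketch: adding a partition index to an existing Glue table with boto3.
# Database, table, and key names are hypothetical.
import boto3

glue = boto3.client("glue")

glue.create_partition_index(
    DatabaseName="example_db",
    TableName="example_table",
    PartitionIndex={
        "Keys": ["year", "month"],  # must be existing partition keys
        "IndexName": "year_month_idx",
    },
)
```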
Perhaps you could explain in more detail why you don't want to use Glue Data Catalog?
I am trying to create an external table in Amazon Redshift using the statement mentioned at this link.
In my case I want the location to be parameterized instead of a static value.
I am using DBeaver for Amazon Redshift.
If your partitions are Hive-compatible (<partition_column_name>=<partition_column_value>) and your table is defined via Glue or Athena, then you can run MSCK REPAIR TABLE on the Athena table directly, which will add them. Read this thread for more info: https://forums.aws.amazon.com/thread.jspa?messageID=800945
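If you want to run that programmatically, here is a minimal boto3 sketch; the database name and output location are made up:

```python
# Sketch: running MSCK REPAIR TABLE through the Athena API with boto3.
# Database, table, and result-bucket names are hypothetical.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="MSCK REPAIR TABLE example_table",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```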
You can also try partition projection if you don't use Hive-compatible partitions; there you define the structure of the file locations in relation to the partition keys and their value ranges.
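As a sketch of what that looks like, assuming a made-up bucket, table, and a date-based partition key, the projection is configured entirely in the table's TBLPROPERTIES (run the DDL through Athena as in the previous sketch):

```python
# Sketch: a table that uses partition projection instead of Hive-style
# partitions. Bucket, table, and column names are hypothetical.
ddl = """
CREATE EXTERNAL TABLE example_db.example_logs (
  request_id string,
  status int
)
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://example-bucket/logs/'
TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.dt.type' = 'date',
  'projection.dt.range' = '2020/01/01,NOW',
  'projection.dt.format' = 'yyyy/MM/dd',
  'storage.location.template' = 's3://example-bucket/logs/${dt}/'
)
"""
```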
If those don't work for you, you can use AWS Glue crawlers, which supposedly detect partitions automatically: https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html
If that doesn't work for you either, then your problem is very specific. I suggest rolling up your sleeves and writing some code to deploy on Lambda or as an AWS Glue Python shell job (a minimal sketch follows the links below). Here are a couple of examples where other people tried that:
https://medium.com/swlh/add-newly-created-partitions-programmatically-into-aws-athena-schema-d773722a228e
https://medium.com/@alsmola/partitioning-cloudtrail-logs-in-athena-29add93ee070
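For a rough idea of the shape of such code, here is a sketch of a Lambda handler that registers one new partition; the names, partition value, and trigger are all hypothetical:

```python
# Sketch of a Lambda handler that adds a single new partition via Athena,
# e.g. triggered by an S3 event or a scheduled EventBridge rule.
# Table, bucket, and partition values are hypothetical.
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    athena.start_query_execution(
        QueryString=(
            "ALTER TABLE example_db.example_logs "
            "ADD IF NOT EXISTS PARTITION (dt = '2021-01-01') "
            "LOCATION 's3://example-bucket/logs/2021-01-01/'"
        ),
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={
            "OutputLocation": "s3://example-bucket/athena-results/"
        },
    )
```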
An AWS Glue crawler has a cost associated with it. How can I avoid use of the crawler in AWS Glue?
Is there any way we can avoid using the crawler and infer the schema some other way, so that the cost can be reduced?
In addition to what bdcloud has said, it's also possible to add tables to the data catalogue using the 'AWS::Glue::Table' resource in CloudFormation.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-table.html
It's easier to do this if you have a table schema you can use as a template (aws glue get-table --database-name <db name> --name <table name> will give you JSON that is pretty close to what CloudFormation is expecting).
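The same template trick works through the SDK if you prefer it over the CLI; a sketch with made-up names:

```python
# Sketch: fetch an existing table's definition with boto3 and reuse it
# as a template for a new table. All names are hypothetical.
import boto3

glue = boto3.client("glue")

table = glue.get_table(DatabaseName="example_db", Name="existing_table")["Table"]

# get_table returns read-only fields (CreatedBy, UpdateTime, ...) that
# CreateTable rejects, so keep only the keys TableInput accepts.
allowed = {
    "Name", "Description", "Owner", "Retention", "StorageDescriptor",
    "PartitionKeys", "TableType", "Parameters",
}
template = {k: v for k, v in table.items() if k in allowed}
template["Name"] = "new_table"
template["StorageDescriptor"]["Location"] = "s3://example-bucket/new-table/"

glue.create_table(DatabaseName="example_db", TableInput=template)
```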
Again, you need to know your schema in advance, but choose the approach that best fits the workflow you're going with.
You can use Athena to create tables in the Glue catalog, but to do so you need to know the schema of the file. Alternatively, you can get the DDL of an existing table by running SHOW CREATE TABLE <table-name> in Athena and then modify the statement according to your schema.
DDL queries are free in Athena and incur no charges.
One other way of doing it is by issuing a Glue CreateTable API call. Please refer to this for the Python syntax.
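As a rough sketch of that call, registering CSV data under a made-up S3 prefix (all names and columns are hypothetical):

```python
# Sketch: registering CSV data as a Glue/Athena table directly through
# the Glue CreateTable API. Bucket, database, and columns are hypothetical.
import boto3

glue = boto3.client("glue")

glue.create_table(
    DatabaseName="example_db",
    TableInput={
        "Name": "example_csv",
        "TableType": "EXTERNAL_TABLE",
        "Parameters": {
            "classification": "csv",
            "skip.header.line.count": "1",
        },
        "StorageDescriptor": {
            "Columns": [
                {"Name": "id", "Type": "int"},
                {"Name": "name", "Type": "string"},
            ],
            "Location": "s3://example-bucket/data/example_csv/",
            "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
                "Parameters": {"field.delim": ","},
            },
        },
    },
)
```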
Recently we started to store our backups in AWS S3. It is all CSV files that we need to query through AWS Athena.
We tried to create the tables one by one, but it's taking too long; it is a fair amount of data. Is there any API that we can use, or something that is already set up?
We were about to do something with Spark, but maybe there is a simpler way, or something that has already been done.
Thanks!
You can simply create an external table on top of CSV files with the required properties.
Reference: Create External Table on AWS Athena
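Since there are many tables, the DDL can be issued in a loop through the Athena API; a sketch, where the table list, columns, and bucket are all made up:

```python
# Sketch: creating one external table per CSV prefix in a loop, so the
# tables don't have to be created by hand one at a time. All names,
# columns, and the bucket are hypothetical.
import boto3

athena = boto3.client("athena")

tables = {
    "customers": "id int, name string, email string",
    "orders": "id int, customer_id int, total double",
}

for name, columns in tables.items():
    athena.start_query_execution(
        QueryString=(
            f"CREATE EXTERNAL TABLE IF NOT EXISTS backups.{name} ({columns}) "
            "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
            f"LOCATION 's3://example-backup-bucket/{name}/'"
        ),
        QueryExecutionContext={"Database": "backups"},
        ResultConfiguration={
            "OutputLocation": "s3://example-backup-bucket/athena-results/"
        },
    )
```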
You can also use a Glue crawler and configure it to populate the tables for you automatically.
Reference: Cataloging tables with a crawler
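A sketch of setting that up through the SDK, with a made-up crawler name, role, and bucket (the role needs Glue and S3 permissions):

```python
# Sketch: creating and starting a Glue crawler that populates one table
# per top-level prefix in the bucket. Names, role, and path are
# hypothetical.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="backups-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-crawler-role",
    DatabaseName="backups",
    Targets={"S3Targets": [{"Path": "s3://example-backup-bucket/"}]},
)
glue.start_crawler(Name="backups-crawler")
```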
There are different AWS SDKs available (here) to automate your tasks, like uploading files to S3, creating Athena tables, or cataloging tables through a Glue crawler.