Can't download or read Hive output in Amazon S3 bucket

I'm new to AWS and Hive, and I'm trying to use Hive to analyze Google Ngrams data. I tried to save a table as tab-delimited CSV in an S3 bucket, but now I don't know how to view it or download it to see if my job executed correctly.
The query I used to create the table was
CREATE EXTERNAL TABLE test_table2 (
gram string,
year int,
occurrences bigint,
pages bigint,
books bigint
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://mybucket/sub-bucket/test-table2.txt';
I then filled the table with data:
INSERT OVERWRITE TABLE test_table2
SELECT
gram,
year,
occurrences,
pages,
books
FROM
eng1m_5grams_normed
WHERE
gram = 'early bird gets the worm';
The query ran fine, and I think everything worked correctly. However, when I navigate to my bucket in the S3 Management Console, the text file appears as a folder containing a bunch of files. The files have long hexadecimal names and are 0 bytes in size.
Is this just the text file represented as a directory? Is there a way I can view or download the file to see if my query worked? I tried to make the directory public so I could download it, but the download button in the "Actions" dropdown menu is still greyed out.

In Hive/S3, think of S3 directories as tables. The files contained in those directories are the contents of those tables (i.e. the rows). The reason you have multiple files in the directory is that multiple reducers are writing the "table".
S3 Browser is a very nice tool for working with S3.

What probably happened is that very few (or no) rows qualified against the predicate in the WHERE clause, so few or no rows were selected and emitted into the output, hence the zero-sized files. EMR doesn't give you a simple way to download the result of a query.
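You can, however, list and pull down the part files yourself with a short script. Below is a minimal sketch using boto3 (the AWS SDK for Python); the bucket and prefix are taken from the LOCATION in the question, and it assumes your AWS credentials are already configured.
import os
import boto3

s3 = boto3.client("s3")
bucket = "mybucket"                      # bucket from the question's LOCATION
prefix = "sub-bucket/test-table2.txt/"   # Hive treats this "file" as a directory of part files

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])   # size 0 here means that reducer emitted no rows
        if obj["Size"] > 0:
            s3.download_file(bucket, obj["Key"], os.path.basename(obj["Key"]))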

Related

AWS Glue not deleting or deprecating tables generated over now removed S3 data

Due to user error, our S3 directory, which a Glue crawler runs over routinely, became flooded with .csv files. When Glue ran over the S3 directory, it created a table for each of the 200,000+ .csv files. I ran a script shortly afterwards that deleted the .csv files (the S3 bucket has versioning enabled), and re-ran the Glue crawler with the following settings:
Schema updates in the data store: Update the table definition in the data catalog.
Inherit schema from table: Update all new and existing partitions with metadata from the table.
Object deletion in the data store: Delete tables and partitions from the data catalog.
Within the CloudWatch logs, it's updating the tables matching the remaining data, but it's not deleting any of the tables generated from those .csv files. According to its configuration log in CloudWatch, it should be able to do so.
INFO : Crawler configured with Configuration
{
"Version": 1,
"Grouping": {
"TableGroupingPolicy": "CombineCompatibleSchemas"
}
}
and SchemaChangePolicy
{
"UpdateBehavior": "UPDATE_IN_DATABASE",
"DeleteBehavior": "DELETE_FROM_DATABASE"
I should add that there is another crawler set to crawl the same S3 bucket, but it hasn't been run in over a year, so I doubt it's a point of conflict.
I'm stumped on what the issue could be. As it stands, I can write a script to pattern-match the existing tables and drop those with csv in their name, or delete and rebuild the database by having Glue re-crawl S3, but if possible I'd much rather Glue drop the tables itself after identifying that they no longer point to any files in S3.
I'm currently taking the approach of writing a script to delete the tables created by Athena. All the table names generated by Athena queries are 49 characters long, have five _ characters for the results table and six _ for the metadata table, and generally end in _csv for the query results and _csv_metadata for the query metadata.
I'm getting a list of all the table names in my database and filtering it to only include those that are 49 characters long, end with _csv_metadata, and have six _ characters in them. I'm iterating over each string and deleting its corresponding table in the database. To get the matching results table that ends with _csv, I'm cutting off the trailing nine characters of the _csv_metadata string, which removes _metadata.
If I were to improve on this, I'd also query the table and ensure it has no data in it and matches certain column name definitions.
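For what it's worth, here is a rough sketch of that cleanup script using boto3's Glue client. The database name is a placeholder, and the 49-character / six-underscore / _csv_metadata filter is taken straight from the description above; I'd print the candidates first as a dry run before letting it actually delete anything.
import boto3

glue = boto3.client("glue")
database = "my_database"   # placeholder database name

# Collect every table name in the database.
names = []
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName=database):
    names.extend(t["Name"] for t in page["TableList"])

# Athena-generated metadata tables: 49 chars, six underscores, ending in _csv_metadata.
# The matching results table is the same name minus the trailing "_metadata".
for name in names:
    if len(name) == 49 and name.endswith("_csv_metadata") and name.count("_") == 6:
        results_name = name[:-len("_metadata")]
        for victim in (name, results_name):
            if victim in names:
                print("dropping", victim)
                glue.delete_table(DatabaseName=database, Name=victim)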

Moving a partitioned table across regions (from US to EU)

I'm trying to move a partitioned table from the US region to the EU region, but whenever I manage to do so, it doesn't partition the table on the correct column.
The current process that I'm taking is:
Create a Storage bucket in the region that I want the partitioned table to be in
Export the partitioned table over via CSV to the original bucket (within the old region)
Transfer the table across buckets (from the original bucket to the new one)
Create a new table using the CSV from the new bucket (auto-detect schema is on)
bq --location=eu load --autodetect --source_format=CSV table_test_set.test_table [project ID/test_table]
I expected the table to be partitioned on the DATE column, but instead it's partitioned on the column PARTITIONTIME.
Also, note that I'm currently doing this with CLI commands. This will need to be redone multiple times, so having reusable code is a must.
When I migrate data from one table to another, I follow this process:
I extract the data to GCS (CSV or other format)
I extract the schema of the source table with this command: bq show --schema <dataset>.<table>
I create the destination table via the GUI, using "Edit as text" for the schema, and paste the schema in. I manually define the partition field that I want to use from the schema.
I load the data from GCS to the destination table.
This process has two advantages:
When you import in CSV format, you define the real types that you want. Remember that with schema auto-detect, BigQuery looks at only the first 10 or 20 lines and deduces the schema from them. String fields are often set to INTEGER because the first lines of the file contain no letters, only numbers (serial numbers, for example).
You can define your partition fields properly
The process is quite easy to script. I use the GUI for creating the destination table, but the bq command line is great for doing the same thing.
After some more digging I managed to find the solution. By using --time_partitioning_field [column name] you are able to partition by a specific column. So the command would look like this:
bq --location=eu load --time_partitioning_field [column name] --schema [where your JSON schema file is] --source_format=NEWLINE_DELIMITED_JSON table_test_set.test_table [project ID/test_table]
I also found that using JSON files makes things easier.
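Since reusable code was a requirement, the same load can also be scripted with the BigQuery Python client instead of the GUI and bq. This is a minimal sketch, assuming a newline-delimited JSON export in GCS, a schema file produced by bq show --schema, and a placeholder partition column called date:
from google.cloud import bigquery

client = bigquery.Client(location="EU")   # destination region

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    schema=client.schema_from_json("schema.json"),          # output of `bq show --schema`
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="date",                                        # the column to partition on
    ),
)

load_job = client.load_table_from_uri(
    "gs://my-eu-bucket/test_table/*.json",                   # placeholder GCS path
    "table_test_set.test_table",
    job_config=job_config,
)
load_job.result()   # waits for the load to finish and raises on error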

Export athena table to S3 as one readable file

I am baffled: I cannot figure out how to export the output of a successfully run CREATE TABLE statement to a single CSV.
The query "saves" the result of my CREATE TABLE command in an appropriately named S3 bucket, partitioned into 60 (!) files. Alas, these files are not readable text files:
CREATE TABLE targetsmart_idl_data_pa_mi_deduped_maid AS
SELECT *
FROM targetsmart_idl_data_pa_mi_deduped_aaid
UNION ALL
SELECT *
FROM targetsmart_idl_data_pa_mi_deduped_idfa
How can I save this table to S3, as a single file, CSV format, without having to download and re-upload it?
If you want the result of a CTAS query statement written into a single file, you need to bucket by one of the columns in your resulting table. To get the resulting files in CSV format, you need to specify the table's format and field delimiter properties.
CREATE TABLE targetsmart_idl_data_pa_mi_deduped_maid
WITH (
format = 'TEXTFILE',
field_delimiter = ',',
external_location = 's3://my_athena_results/ctas_query_result_bucketed/',
bucketed_by = ARRAY['__SOME_COLUMN__'],
bucket_count = 1)
AS (
SELECT *
FROM targetsmart_idl_data_pa_mi_deduped_aaid
UNION ALL
SELECT *
FROM targetsmart_idl_data_pa_mi_deduped_idfa
);
Athena is a distributed system, and it scales the execution of your query by some unobservable mechanism. Note that even explicitly specifying a bucket count of one might still produce multiple files [1].
See the Athena documentation for more information on the syntax and what can be specified within the WITH directive. Also, don't forget about the considerations and limitations for CTAS queries, e.g. the external_location for storing CTAS query results in Amazon S3 must be empty, etc.
Update 2019-08-13
Apparently, the results of CTAS statements are compressed with the GZIP algorithm by default. I couldn't find in the documentation how to change this behavior. So all you need to do is uncompress the files after you have downloaded them locally. NOTE: the uncompressed files won't have a .csv file extension, but you will still be able to open them with a text editor.
Update 2019-08-14
You won't be able to preserve column names inside the files if you save them in CSV format. Instead, the column names are kept in the AWS Glue metadata catalog, together with other information about the newly created table.
If you want to preserve column names in the output files after executing CTAS queries, then you should consider file formats that inherently do that, e.g. JSON, Parquet, etc. You can do that by using the format property within the WITH clause. The choice of file format really depends on the use case and the size of the data. Go with JSON if your files are relatively small and you want to download them and be able to read their content virtually anywhere. If the files are big and you are planning to keep them on S3 and query them with Athena, then go with Parquet.
Athena stores query results in Amazon S3.
A results file is stored automatically in CSV format (*.csv), so results can be exported into a CSV file without a CREATE TABLE statement (https://docs.aws.amazon.com/athena/latest/ug/querying.html)
Execute the Athena query using the StartQueryExecution API, and the results .csv can be found at the output location specified in the API call
(https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html)
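A minimal boto3 sketch of that approach; the database name, output location, and query here are placeholders. After the query succeeds, a single <QueryExecutionId>.csv appears under the output location.
import time
import boto3

athena = boto3.client("athena")
output = "s3://my_athena_results/exports/"   # placeholder output location

qe = athena.start_query_execution(
    QueryString="""
        SELECT * FROM targetsmart_idl_data_pa_mi_deduped_aaid
        UNION ALL
        SELECT * FROM targetsmart_idl_data_pa_mi_deduped_idfa
    """,
    QueryExecutionContext={"Database": "my_database"},       # placeholder database
    ResultConfiguration={"OutputLocation": output},
)
qid = qe["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(state, "-> result file:", f"{output}{qid}.csv")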

Find the source of athena query result

We have thousands of files stored in S3. These files are exposed to Athena so that we can query them. While debugging, I found that Athena shows multiple blank lines when querying a specific id. Given that there are thousands of files, I am not sure where that data is coming from.
Is there a way I can see the source file for the respective rows in the Athena result?
There is a hidden column exposed by the Presto Hive connector: "$path".
This column exposes the path of the file that a particular row has been read from.
Note: the column name is actually $path, but you need to double-quote it in SQL, because $ is otherwise illegal in an identifier.
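For example, here is a short boto3 sketch (the table name, id value, database, and output location are hypothetical) that runs such a query and prints the distinct source files:
import time
import boto3

athena = boto3.client("athena")
query = "SELECT DISTINCT \"$path\" FROM my_table WHERE id = '12345'"   # hypothetical table and id

qid = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "my_database"},               # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
)["QueryExecutionId"]

# Wait for the query to finish, then read the result set.
while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
for row in rows[1:]:                        # the first row is the header
    print(row["Data"][0]["VarCharValue"])   # S3 path of a file the matching rows came from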

Getting s3 key name within EMR

I'm running a Hive script on EMR that's pulling data out of S3 keys. I can get all the data and put it in a table just fine. The problem is, some of the data I need is in the key name. How do I get the key name from within Hive and put it into the Hive table?
I faced a similar problem recently. From what I researched, it depends: you can get data out of the "directory" part of an S3 key, but not the "filename" part.
You can use partitions if the S3 keys are formatted properly. Partitions can be queried the same way as columns. Here is a link with some examples: Loading data with Hive, S3, EMR, and Recover Partitions
You can also specify the partitions yourself if the S3 files are already grouped properly. For example, I needed the date information, so my script looked like this:
create external table Example(Id string, PostalCode string, State string)
partitioned by (year int, month int, day int)
row format delimited fields terminated by ','
tblproperties ("skip.header.line.count"="1");
alter table Example add partition(year=2014,month=8,day=1) location 's3n://{BucketName}/myExampledata/2014/08/01/';
alter table Example add partition(year=2014,month=8,day=2) location 's3n://{BucketName}/myExampledata/2014/08/02/';
...keep going
The partition data must be part of the "directory name", not the "filename", because Hive loads data from a directory.
If you need to read some text out of the file name itself, I think you have to create a custom program to rename the objects so that the text you need ends up in the "directory name".
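A rough sketch of that kind of rename with boto3, assuming a hypothetical bucket, prefix, and file-naming convention (adjust the key parsing to whatever your file names actually contain). S3 has no real rename, so this is a copy followed by a delete:
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"          # hypothetical bucket
prefix = "myExampledata/"     # hypothetical prefix of the raw objects

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        filename = key.rsplit("/", 1)[-1]
        tag = filename.split("_")[0]                # e.g. "20140801_data.csv" -> "20140801"
        new_key = f"{prefix}tag={tag}/{filename}"   # Hive-style partition directory
        s3.copy_object(Bucket=bucket, Key=new_key, CopySource={"Bucket": bucket, "Key": key})
        s3.delete_object(Bucket=bucket, Key=key)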
Good luck!