I am trying to insert some values into a customer table:
INSERT into customer
(c_custkey,c_name,c_address,c_city,c_nation,c_region,c_phone,c_mktsegment)
VALUES (123,'ddf','sfvvc','ccxx','dddd','dddss','sszzs','sssaaaa');
I am using the Amazon Redshift Query Editor for this. The query is getting triggered twice, and I can see that in the STL_QUERY and STV_RECENTS tables.
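For reference, a query along these lines against STL_QUERY is how I see the two entries (the LIKE pattern just matches the statement text above):
-- recent queries whose text matches the insert statement above
select query, xid, pid, starttime, trim(querytxt) as querytxt
from stl_query
where querytxt like 'INSERT into customer%'
order by starttime desc;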
Can someone help me resolve this, and explain why it works this way?
After I create a query in Athena using the AWS console, how can I find the Query ID associated with that query?
If I run the query and it fails, I can see the Query ID in the AWS console by going to "Recent Queries" and clicking the "Failed" link next to the query run (https://docs.aws.amazon.com/athena/latest/ug/querying.html), but this doesn't work for queries where the status is "completed".
I can also use the API to first call ListNamedQueries, then iterate through that resulting list of query ids and call GetNamedQuery on each one to see if the name matches the query I'm looking for. But that's a really roundabout process just to get the query id for a query I just created.
Is there another way to simply find a named query's id via the console that I'm not seeing?
I am really new to GCP, and I am trying to write a query in GCP BigQuery that fetches all the data from one BigQuery table and inserts it into another BigQuery table.
I am trying the following query, where Project1.DataSet1.Table1 is the table I am trying to read the data from, and Project2.Dataset2.Table2 is the table I am trying to insert all the data into, keeping the same naming:
SELECT * FROM `Project1.DataSet1.Table1` LIMIT 1000
insert INTO `Project2.Dataset2.Table2`
But I am receiving a query error message.
Does anyone know how to solve this issue?
There may be a couple of comments here...
The syntax should be different: insert into <table> select ... and so on - see the DML statements in standard SQL.
Such an approach to data copying might not be very optimal in terms of time and cost. It might be better to use bq cp -f ... commands - see "BigQuery Copy — How to copy data efficiently between BigQuery environments" and the bq command-line tool reference - if that is possible in your case.
The correct syntax of the query is as suggested by @al-dann. I will try to explain further with a sample query below:
Query:
insert into `Project2.Dataset2.Table2`
select * from `Project1.DataSet1.Table1`
Input table: (screenshot of the rows in Project1.DataSet1.Table1)
This will insert the values into the second table, as below:
Output table: (screenshot of Project2.Dataset2.Table2 after the insert)
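If the destination table's columns do not line up exactly with the source, a variant with an explicit column list may be needed (the column names here are placeholders):
insert into `Project2.Dataset2.Table2` (col_a, col_b)
select col_a, col_b
from `Project1.DataSet1.Table1`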
I am trying to create a visualization using AWS QuickSight; I've done it before using the same data source and tables.
Right now, when I try to run a simple query
select * from table order by time desc limit 10;
it outputs "QuickSight could not generate any output column after applying transformation."
When I run the same query in AWS Athena, it works fine.
I have my data in SPICE.
EDIT: I've just re-created my dataset in QuickSight and it is now working... I still want to know what was wrong.
I got the same error in QuickSight when I created a new custom SQL data source pulling data out of Spectrum, and I also got it to work by re-creating the data source.
Is there a bug in the information_schema.views implementation in AWS Athena?
I'm getting 0 rows returned when running the query SELECT * FROM information_schema.views in AWS Athena even though the database I'm running on has tables and views in it.
Wanted to check if anyone else is facing the same issue.
I'm trying to fetch the view definition script for ALL views in the AWS Athena database as a single result set instead of using the SHOW CREATE VIEW statement.
You should be using
SHOW TABLES IN example_database; to get information about the tables in an Athena database.
Then loop through the results, using a DESCRIBE query to fetch the details of each one.
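A minimal sketch of the two steps (example_view stands in for one of the names returned by the first statement):
SHOW TABLES IN example_database;
-- then, for each name returned:
DESCRIBE example_database.example_view;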
Hope this will help!
I have a column in DynamoDB which is backed up to S3, and it looks something like this:
{"attributes":
{"m":{"isBirthday":{"s":"0"},
"party":{"m":{"cake":{"bOOL":false},"pepsi":{"bOOL":false},"chips":{"bOOL":false},"fries":{"bOOL":false},"puffs":{"bOOL":false}}},"gift":{"s":"yes"}}},
"createdDate":{"n":"1521772435189"},
"modifiedDate":{"n":"1521772435189"}}
I need to use this S3 file in Athena and run queries on the items in this "attributes" field.
I have tried multiple options to map it and then retrieve the values; however, it does not work.
Could someone please let me know how to map it to my Athena table so that I can run queries on it?
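For concreteness, this is the kind of mapping I am after, though I have not gotten a variant like it to work (the table name, the S3 path, and the choice of the OpenX JSON SerDe are my guesses, and the nested "party" map is left out for brevity):
-- assumes each backed-up item sits on a single line in the files under this location
CREATE EXTERNAL TABLE ddb_backup (
  attributes   struct<m: struct<isbirthday: struct<s: string>,
                                gift: struct<s: string>>>,
  createddate  struct<n: string>,
  modifieddate struct<n: string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-bucket/my-dynamodb-backup/';

-- and then query the nested fields like:
SELECT attributes.m.gift.s       AS gift,
       attributes.m.isbirthday.s AS is_birthday,
       from_unixtime(cast(createddate.n AS bigint) / 1000) AS created_at
FROM ddb_backup;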