Query DynamoDB from within AWS - amazon-web-services

I've been looking around and have not been able to find anywhere on the AWS console a place where I can query the tables I have created in DynamoDB.
Is it possible for me to run quick queries against any of my DynamoDB tables from within AWS itself, or will I actually have to go ahead and build a quick app that lets me run the queries?
I would have thought there would be some basic tool provided that lets me run queries against the tables. If there is, it's well hidden.
Any help would be great!

DynamoDB console -> Tables -> click the table you want to query -> select the Items tab
You will then see an interface to Scan or Query the table. You can change the first drop-down from "Scan" to "Query" based on what you want to do, and change the second drop-down to select the table index you want to query.
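The console's Query form is a thin front end over the DynamoDB Query API, so the same lookup can be done programmatically. A minimal sketch of the request the form builds for you; the table name, key names, index name, and values below are hypothetical examples, not anything from your account:

```python
# Sketch of the Query request the console's "Query" drop-down issues.
# All names/values here are hypothetical placeholders.
query_params = {
    "TableName": "Users",                       # table selected in the console
    "KeyConditionExpression": "UserId = :uid",  # partition key condition
    "ExpressionAttributeValues": {":uid": {"S": "user-123"}},
    "IndexName": "UserActive-index",            # optional: the index drop-down
}

# With boto3 installed and credentials configured, this would run as:
#   import boto3
#   response = boto3.client("dynamodb").query(**query_params)
#   items = response["Items"]
```

The second drop-down in the console corresponds to `IndexName`; omit it to query the base table.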

Related

Is there a way to query multiple partition keys in a DynamoDB table using the AWS dashboard?

I would like to know if there's an option to query with multiple partition keys from a DynamoDB table in the AWS dashboard. I was unable to find any article or similar request for the dashboard on the web. I will keep you posted if I find an answer.
Thanks in advance.
The Console doesn't support this directly, because there is no support in the underlying API. What you're looking for is the equivalent of the following SQL query:
select *
from table
where PK in ('value_1', 'value_2') /*equivalent to: PK = 'value_1' or PK = 'value_2' */
The console supports the Query and Scan operations. Query always operates on a single item collection, i.e., all items that share the same partition key, which means it can't be used for your use case.
Scan, on the other hand, is a full table scan that optionally lets you filter the results. The console's filter interface has no support for this kind of OR logic, so that won't really help you either. It will, however, let you view all items, which includes the ones you're looking for; but as I said, the console can't express this query directly.
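Outside the console, a common workaround is to issue one Query per partition key value and merge the results yourself. A sketch of that idea, with the actual DynamoDB call stubbed out by a plain dictionary; in a real script, `query_fn` would wrap a boto3 `query` call:

```python
def query_many_partition_keys(query_fn, pk_values):
    """Run one Query per partition key value and concatenate the results.

    query_fn: callable taking a single partition key value and returning
    a list of items (e.g. a thin wrapper around a boto3 Query call).
    """
    items = []
    for pk in pk_values:
        items.extend(query_fn(pk))
    return items

# Demo with a fake in-memory "table" standing in for DynamoDB:
fake_table = {
    "value_1": [{"PK": "value_1", "SK": "a"}],
    "value_2": [{"PK": "value_2", "SK": "b"}],
}
result = query_many_partition_keys(
    lambda pk: fake_table.get(pk, []), ["value_1", "value_2"]
)
```

This gives you the effect of `PK in ('value_1', 'value_2')` at the cost of one round trip per key.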

Extract script with which data was inserted into an old table and the owner in BigQuery - GCP

I am trying to find out which query was run to insert data into an empty table that had been created earlier.
It would also be useful to know which user created that table, so I could ask them about the script.
I tried "INFORMATION_SCHEMA.TABLES", but I only get the CREATE script of the table.
I would look into the BigQuery audit logs; most probably you can get the information you want there.
Here is the reference: https://cloud.google.com/bigquery/docs/reference/auditlogs/
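In the audit logs, a completed query job carries both the SQL text and the identity of the caller. A sketch of building a Cloud Logging filter for jobs that wrote into a given table; the field names follow the legacy AuditData schema as I understand it, and the dataset/table names are hypothetical, so verify the paths against the reference above:

```python
def audit_log_filter(dataset_id, table_id):
    """Cloud Logging filter for completed BigQuery query jobs that wrote
    into dataset_id.table_id. Field paths assume the AuditData log format;
    check the audit-log reference for your log version."""
    return (
        'resource.type="bigquery_resource" '
        'protoPayload.methodName="jobservice.jobcompleted" '
        'protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration'
        f'.query.destinationTable.datasetId="{dataset_id}" '
        'protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration'
        f'.query.destinationTable.tableId="{table_id}"'
    )

# The resulting string can be passed to `gcloud logging read "<filter>"`;
# matching entries include the query text and the caller's email.
log_filter = audit_log_filter("my_dataset", "my_table")
```

Note that audit logs only cover the retention window configured for your project, so very old inserts may no longer be visible.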

Can I set a true/false for every entry in a single column using DynamoDB AWS console?

I have a table in DynamoDB, and I'd essentially like to set a boolean to true/false for ALL rows or entries in that table, for just a single column. Let's say the column is called UserActive. I know I can do this by clicking the pencil/edit icon in the console for each individual row, but for thousands of entries that's just not feasible. I need to be able to do this from the AWS console.
How can I set them all to true/false in one go?
I need to be able to do this from the AWS console.
There is no way to edit multiple items at once from the console. Sorry.
What you can do is write a script in the language of your choice, using the AWS SDK, that scans through all your items and updates them.
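A minimal sketch of such a script, with the boto3 calls left in comments so the update logic stays visible. The attribute `UserActive` comes from the question; the table name and the `id` key attribute are hypothetical placeholders:

```python
def build_update(key, flag):
    """UpdateItem parameters that set UserActive on one item.
    'MyTable' and the 'id' key attribute are hypothetical examples."""
    return {
        "TableName": "MyTable",
        "Key": key,
        "UpdateExpression": "SET UserActive = :v",
        "ExpressionAttributeValues": {":v": {"BOOL": flag}},
    }

# With boto3 this would drive a paginated Scan plus one UpdateItem per item:
#   client = boto3.client("dynamodb")
#   pages = client.get_paginator("scan").paginate(
#       TableName="MyTable", ProjectionExpression="id")
#   for page in pages:
#       for item in page["Items"]:
#           client.update_item(**build_update({"id": item["id"]}, True))
params = build_update({"id": {"S": "item-1"}}, True)
```

For thousands of items this is still one write per item, so expect it to consume write capacity accordingly.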

How do I execute the SHOW PARTITIONS command on an Athena table?

I'm using AWS Athena with AWS Glue for the first time, with S3 providing a 'folder' structure which maps to the partitions in my data - I'm getting into the concepts so please excuse any mistaken description!
I'm looking at what happens when I add data to my S3 bucket, and I see that new folders are ignored. Digging deeper, I came across the 'SHOW PARTITIONS' command, described here: https://docs.aws.amazon.com/athena/latest/ug/show-partitions.html. I'm trying to execute it against my test tables using the Athena query editor, with a mind that I'll go on to use the 'ALTER TABLE ADD PARTITION' command to add the new S3 folders.
I'm trying to execute the 'SHOW PARTITIONS' command in the AWS Athena Console Query Editor:
SHOW PARTITIONS "froms3_customer-files"."unit"
but when I try to execute it I see this message:
line 1:17: missing {'from', 'in'} at '"froms3_customer-files"' (Service: AmazonAthena; Status Code: 400; Error Code: InvalidRequestException; Request ID: c0c0c351-2d42-4da4-b1f3-223b1733db65)
I'm struggling to understand what this is telling me, can anyone help me here?
Athena does not support hyphens in database names.
Athena table, view, database, and column names cannot contain special
characters, other than underscore (_).
Also remove the double quotes from the show partitions command.
SHOW PARTITIONS froms3_customer_files.unit
References:
Athena table and database naming convention
Athena show partitions
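Putting both fixes together (underscores instead of hyphens, no double quotes), a small helper that builds a valid statement. The sanitization rule is an assumption based on the naming restriction quoted above, and it only produces the right name if the database has actually been recreated with underscores:

```python
import re

def show_partitions_sql(database, table):
    """Build a SHOW PARTITIONS statement with Athena-safe identifiers:
    anything other than letters, digits, and underscores becomes '_'."""
    def safe(name):
        return re.sub(r"[^A-Za-z0-9_]", "_", name)
    return f"SHOW PARTITIONS {safe(database)}.{safe(table)}"

sql = show_partitions_sql("froms3_customer-files", "unit")
```

Running the result against the renamed database avoids both the hyphen and the quoting problems from the original attempt.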
If you want to see all the partitions that have been created so far, you can use the following command:
SHOW PARTITIONS DB_NAME.TABLE_NAME
If you want to view the keys along which the table is partitioned, you can do it through the UI as follows:
1. Click on the table menu options.
2. Click on "Show Properties".
3. Click on "Partitions" to see the partition keys.

AWS Glue crawler need to create one table from many files with identical schemas

We have a very large number of folders and files in S3, all under one particular folder, and we want to crawl for all the CSV files, and then query them from one table in Athena. The CSV files all have the same schema. The problem is that the crawler is generating a table for every file, instead of one table. Crawler configurations have a checkbox option to "Create a single schema for each S3 path" but this doesn't seem to do anything.
Is what I need possible? Thanks.
Glue crawlers claim to solve many problems but in fact solve few. If you're slightly outside the scope of what they were designed for, you're out of luck. There might be a way to configure one to do what you want, but in my experience, trying to make Glue crawlers do things that aren't perfectly aligned with their design is not worth the effort.
It sounds like you have a good idea of what the schema of your data is. When that is the case, Glue crawlers also provide very little value. You probably have a better idea of what the schema should look like than Glue will ever be able to figure out.
I suggest that you manually create the table and write a one-off script that lists all the partition locations on S3 that you want to include in the table, then generates ALTER TABLE ADD PARTITION … SQL or Glue API calls to add those partitions to the table.
To keep the table up to date when new partition locations are added, have a look at this answer for guidance: https://stackoverflow.com/a/56439429/1109
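The one-off script idea can be sketched as follows: turn each S3 partition prefix into an ALTER TABLE statement. The table name, the `dt` partition key, and the bucket are hypothetical examples standing in for your own layout:

```python
def add_partition_sql(table, partition_values, location):
    """One ALTER TABLE ADD PARTITION statement for a Hive-style partition.

    partition_values: dict of partition key -> value, e.g. {"dt": "2020-02-03"}
    """
    spec = ", ".join(f"{k} = '{v}'" for k, v in partition_values.items())
    return (f"ALTER TABLE {table} ADD IF NOT EXISTS "
            f"PARTITION ({spec}) LOCATION '{location}'")

# In a real script you would enumerate prefixes with boto3, e.g.:
#   s3 = boto3.client("s3")
#   resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="data/", Delimiter="/")
# and feed each common prefix through add_partition_sql.
stmt = add_partition_sql("my_table", {"dt": "2020-02-03"},
                         "s3://my-bucket/data/dt=2020-02-03/")
```

`ADD IF NOT EXISTS` makes the script safe to re-run over prefixes that were already registered.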
One way to do what you want is to use just one of the tables created by the crawler as an example, and create a similar table manually (in AWS Glue -> Tables -> Add tables, or in Athena itself) with:
CREATE EXTERNAL TABLE `tablename`(
  `column1` string,
  `column2` string, ...
)
-- plus the ROW FORMAT, LOCATION, and TBLPROPERTIES clauses taken from the example table
To use an existing table as an example, go to Database -> select your database from the Glue Data Catalog, click on the three dots in front of the one automatically-created crawler table you chose as an example, and click on the "Generate Create table DDL" option. It will generate a big query for you; modify it as necessary (I believe you mostly need to look at the LOCATION and TBLPROPERTIES parts).
When you run this modified query in Athena, a new table will appear in the Glue Data Catalog. But it will not have any information about your S3 files and partitions, and the crawler most likely will not update the metastore info for you. So in Athena you can run the "MSCK REPAIR TABLE tablename;" query (it's not very efficient, but it works for me), and it will add the missing partition information. In the Result tab you will see something like this (if you use partitions on S3, of course):
Partitions not in metastore: tablename:dt=2020-02-03 tablename:dt=2020-02-04
Repair: Added partition to metastore tablename:dt=2020-02-03
Repair: Added partition to metastore tablename:dt=2020-02-04
After that you should be able to run your Athena queries.
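If you'd rather script the MSCK REPAIR step than click through the console, it can be submitted through Athena's StartQueryExecution API. A sketch of the request parameters; the database name and results bucket are hypothetical placeholders:

```python
def msck_repair_request(table, database, output_s3):
    """Parameters for Athena's StartQueryExecution API running MSCK REPAIR.
    'database' and 'output_s3' are example placeholders."""
    return {
        "QueryString": f"MSCK REPAIR TABLE {table};",
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

# With boto3 and credentials configured:
#   boto3.client("athena").start_query_execution(**params)
params = msck_repair_request("tablename", "my_database",
                             "s3://my-bucket/athena-results/")
```

This makes it easy to run the repair on a schedule after new partition folders land in S3.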