The AWS Timestream database is queried through the Grafana API and the results are shown on dashboards.
Everything works well when we query a small number of data points, but queries fail while fetching data when we request too much, e.g. 1-2 months of data for 100 or more dimensions.
As stated in the AWS Timestream docs, there are some best practices that, if followed, make queries quite fast. I can vouch that, obeying those rules, you can return a huge data set (4M records) in under 40s.
In addition to the guidance below, I would also suggest avoiding high-cardinality dimensions. To explain: if you have a dimension, like time, or anything else whose values grow indefinitely, the indexes on that dimension will get out of hand and, soon, your queries will be too slow to be useful.
The original document can be found here. (Some links from the list are not pasted; consult the doc.)
The following are suggested best practices for queries with Amazon Timestream.

- Include only the measure and dimension names essential to the query. Adding extraneous columns will increase data scans, which impacts query performance.
- Where possible, push data computation to Timestream using the built-in aggregate and scalar functions in the SELECT and WHERE clauses, as applicable, to improve query performance and reduce cost. See SELECT and Aggregate functions.
- Where possible, use approximate functions. E.g., use APPROX_DISTINCT instead of COUNT(DISTINCT column_name) to optimize query performance and reduce query cost. See Aggregate functions.
- Use a CASE expression to perform complex aggregations instead of selecting from the same table multiple times. See The CASE statement.
- Where possible, include a time range in the WHERE clause of your query. This optimizes query performance and costs. For example, if you only need the last hour of data in your dataset, include a time predicate such as time > ago(1h). See SELECT and Interval and duration.
- When a query accesses a subset of measures in a table, always include the measure names in the WHERE clause of the query.
- Where possible, use the equality operator when comparing dimensions and measures in the WHERE clause of a query. An equality predicate on dimensions and measure names allows for improved query performance and reduced query costs.
- Wherever possible, avoid using functions in the WHERE clause, to optimize for cost.
- Refrain from using the LIKE clause multiple times. Rather, use regular expressions when you are filtering for multiple values on a string column. See Regular expression functions.
- Only use the necessary columns in the GROUP BY clause of a query.
- If the query result needs to be in a specific order, explicitly specify that order in the ORDER BY clause of the outermost query. If your query result does not require ordering, avoid using an ORDER BY clause, to improve query performance.
- Use a LIMIT clause if you only need the first N rows in your query. If you are using an ORDER BY clause to look at the top or bottom N values, use a LIMIT clause to reduce query costs.
- Use the pagination token from the returned response to retrieve the query results. For more information, see Query.
- If you've started running a query and realize that it will not return the results you're looking for, cancel the query to save cost. For more information, see CancelQuery.
- If your application experiences throttling, continue sending data to Amazon Timestream at the same rate, to enable Timestream to auto-scale to satisfy the query throughput needs of your application.
- If the query concurrency requirements of your applications exceed the default limits of Timestream, contact AWS Support for limit increases.
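As a minimal sketch of how several of the practices above combine, the helper below composes a query string with explicit columns, a time predicate, an equality filter on measure_name, an approximate aggregate, and a LIMIT. The database, table, dimension, and measure names are hypothetical; plain Python is used so nothing here depends on the AWS SDK.

```python
def build_timestream_query(database, table, hours=1):
    """Compose a Timestream query that limits the amount of data scanned."""
    return (
        "SELECT region, APPROX_DISTINCT(measure_value::double) AS approx_vals "
        f'FROM "{database}"."{table}" '
        f"WHERE time > ago({hours}h) "           # time predicate bounds the scan
        "AND measure_name = 'cpu_utilization' "  # equality on the measure name
        "GROUP BY region "                       # only the necessary GROUP BY column
        "LIMIT 100"                              # cap the rows returned
    )

query = build_timestream_query("metricsDb", "hostMetrics")
print(query)
```

The resulting string would then be passed to the query API, e.g. boto3's `timestream-query` client via `client.query(QueryString=query)`, paginating with the returned token as the list recommends.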
I have a client application which queries data in Spanner.
Let's say I have a table with 10 columns, and my client application can search on a combination of columns. Let's say I've added 5 indexes to optimise searching.
According to https://cloud.google.com/spanner/docs/sql-best-practices#secondary-indexes, which says:
In this scenario, Spanner automatically uses the secondary index SingersByLastName when executing the query (as long as three days have passed since database creation; see A note about new databases). However, it's best to explicitly tell Spanner to use that index by specifying an index directive in the FROM clause:
And also https://cloud.google.com/spanner/docs/secondary-indexes#index-directive suggests
When you use SQL to query a Spanner table, Spanner automatically uses any indexes that are likely to make the query more efficient. As a result, you don't need to specify an index for SQL queries. However, for queries that are critical for your workload, Google advises you to use FORCE_INDEX directives in your SQL statements for more consistent performance.
Both links suggest that you (the developer) should be supplying FORCE_INDEX on your queries. This means I now need business logic in my client to say something like:
if (object.SearchTermOne)
    queryBuilder.IndexToUse = "Idx_SearchTermOne"
This feels like I'm essentially trying to do the job of the optimiser by setting the index to use. It also means that if I add an extra index, I need a code change to make use of it.
So what are the best practices when it comes to using FORCE_INDEX in Spanner queries?
The best practice, at this time, is to use FORCE_INDEX as described in the documentation.
This feels like I'm essentially trying to do the job of the optimiser by setting the index to use.
I feel the same.
https://cloud.google.com/spanner/docs/secondary-indexes#index-directive
Note: The query optimizer requires up to three days to collect the databases statistics required to select a secondary index for a SQL query. During this time, Cloud Spanner will not automatically use any indexes.
As this note points out, even after enough data has been added for the index to function effectively, it may take up to three days for the optimizer to figure that out.
Queries during that time will probably be full scans.
If you want to prevent this other than by using FORCE_INDEX, you will need to run the ANALYZE DDL statement manually.
https://cloud.google.com/blog/products/databases/a-technical-overview-of-cloud-spanners-query-optimizer
But none of this changes the fact that we are essentially trying to do the optimizer's job...
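One way to keep the "optimizer's job" out of the application's branching logic is to centralize the field-to-index mapping in a single table, so adding an index means editing one dictionary rather than scattered if-statements. This is a minimal sketch; the table, index, and column names are hypothetical, and the `@{FORCE_INDEX=...}` directive in the FROM clause is the syntax the Spanner docs describe.

```python
# Hypothetical mapping from search field to the secondary index that serves it.
INDEX_FOR_FIELD = {
    "LastName": "SingersByLastName",
    "BirthDate": "SingersByBirthDate",
}

def build_query(search_field):
    """Build a Spanner query, adding a FORCE_INDEX directive when one is known."""
    index = INDEX_FOR_FIELD.get(search_field)
    # Fall back to letting the optimizer choose when no index is registered.
    directive = "@{FORCE_INDEX=" + index + "}" if index else ""
    return (
        f"SELECT SingerId, FirstName, LastName FROM Singers{directive} "
        f"WHERE {search_field} = @value"
    )

print(build_query("LastName"))
```

The SQL string would then be executed with the regular Spanner client, binding `@value` as a query parameter.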
I am new to AWS. While reading the docs and the example here, I learned that the sort key is not only used to sort the data within partitions, but also to enhance the search criteria on a DynamoDB table. But we can do the same with a filter condition, so what is the difference?
Also, according to the example given, we can use the sort/range key as in withKeyConditionExpression("CreateDate = :v_date and begins_with(IssueId, :v_issue)"),
but when I tried it, I got an exception:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Query key condition not supported
The point of the sort key is to limit the Items returned, rather than returning all Items with a particular HASH key.
There are two different ways we can handle this
The ideal way is to build the element we want to query into the RANGE key. This allows us to use Key Expressions to query our data, allowing DynamoDB to quickly find the Items that satisfy our Query.
A second way to handle this is with filtering based on non-key attributes. This is less efficient than Key Expressions but can still be helpful in the right situations. Filter expressions are used to apply server-side filters on Item attributes before they are returned to the client making the call. Filtering is applied after the DynamoDB Query is completed: if you retrieve 100KB of data in the Query step but filter it down to 1KB of data, you will consume the Read Capacity Units for the full 100KB.
The moral is: filtering and projection expressions aren't a magic bullet - they won't make it easy to quickly query your data in additional ways. However, they can save network transfer time by limiting the number and size of items transferred back to your network. They can also simplify application complexity by pre-filtering your results rather than requiring application-side filtering.
From dynamodbguide
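The contrast above can be sketched as two request shapes. The helper below builds a low-level Query request: the key condition is what DynamoDB uses to narrow the read, while the optional filter expression only trims items after the read capacity has already been consumed. Table and attribute names are hypothetical. (The "Query key condition not supported" exception in the question typically indicates that the attributes in the key condition do not match the table's actual partition/sort key schema.)

```python
def build_query_request(table, date, issue_prefix, status=None):
    """Build a DynamoDB Query request with a key condition and optional filter."""
    request = {
        "TableName": table,
        # Key condition: evaluated by DynamoDB to locate items efficiently.
        "KeyConditionExpression": (
            "CreateDate = :v_date AND begins_with(IssueId, :v_issue)"
        ),
        "ExpressionAttributeValues": {
            ":v_date": {"S": date},
            ":v_issue": {"S": issue_prefix},
        },
    }
    if status is not None:
        # Filter: applied AFTER the read; capacity is still consumed
        # for the items the filter discards.
        request["FilterExpression"] = "IssueStatus = :v_status"
        request["ExpressionAttributeValues"][":v_status"] = {"S": status}
    return request

req = build_query_request("Issues", "2014-11-01", "A-101", status="Open")
```

The resulting dict would be passed to the low-level boto3 client as `client.query(**req)`.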
I save my order data in a DynamoDB table. The partition key is orderId and the sort key is timestamp. Each order has many other attributes, like category, userName, price, items, and status. I am going to build a filter service to let clients query orders based on these attributes. I'd also like to add a limit on the query for pagination. But I've found some limitations in DynamoDB.
In order to support querying different fields, I have two options:
Create a GSI for each attribute. This is very expensive, but it makes querying each attribute performant. This solution doesn't support combining multiple attributes in the filter.
Attach a filter expression to a SCAN to apply the attribute condition. SCAN is not very performant in the first place. Also, the filter expression is applied after the limit, which means the response is likely to contain fewer items than the requested limit.
So what is a good way to achieve this in DynamoDB?
There is unfortunately no magic way to solve your problems; there is no DynamoDB feature which you missed. Indeed, as you said, making each of the attributes available for efficient queries requires a GSI, which will cost you additional money - but that's reasonable. Indeed, as you said, there is no efficient way to search for an intersection of requirements on two different attributes. And indeed, the "limit" feature doesn't quite do what you want, and you'll need to emulate your page-size requirement in the client code (asking for more pages until the desired amount is received), potentially with unacceptably high latency.
It sounds like what you really need is a search engine. These have exactly the features that you asked for. You'll still be paying for these features (indexing of individual columns still takes up CPU and disk space, and intersecting multiple attribute searches still requires significant work at query time), but search engines are designed for exactly these operations, and do them more efficiently and with lower latency (which is important for interactive searches, the bread and butter of search engines).
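The client-side page-size emulation described above can be sketched as a loop that keeps fetching pages until enough post-filter items have arrived. `fetch_page` here is a stand-in for a real DynamoDB Query/Scan call that honors `LastEvaluatedKey`; the fake implementation below only exists to make the sketch runnable.

```python
def collect_page(fetch_page, predicate, page_size):
    """Accumulate items matching predicate until page_size is reached."""
    items, start_key = [], None
    while len(items) < page_size:
        page = fetch_page(start_key)          # one DynamoDB request
        items.extend(i for i in page["Items"] if predicate(i))
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:                 # table exhausted
            break
    return items[:page_size]

# Stand-in data source: 10 fake orders, served 4 per page.
def make_fake_fetch():
    data = [{"id": i, "status": "open" if i % 2 == 0 else "closed"}
            for i in range(10)]
    def fetch(start_key):
        start = start_key or 0
        end = min(start + 4, len(data))
        page = {"Items": data[start:end]}
        if end < len(data):
            page["LastEvaluatedKey"] = end
        return page
    return fetch

result = collect_page(make_fake_fetch(), lambda i: i["status"] == "open", 3)
```

Note that each extra page is a full round trip, which is the latency cost the answer warns about.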
You can add a limit for pagination using the Limit attribute in the query. But can you please be more specific about your access patterns: are your clients going to query all the orders, or only the orders belonging to them?
I have created a DynamoDB table and a Global Secondary Index on that table. I need to fetch all data from the GSI of that table.
There are two options:
A Scan operation with no filter expression.
A Query operation with no condition.
I need to find out which one has better performance, so that I start my implementation.
I have read a lot about the DynamoDB Scan and Query operations, but could not resolve my question. Please help me resolve it.
They will both impose the same performance overhead. So choosing either should be okay.
You should think about adding optimizations on top of whichever approach you use - for instance, performing parallel scans, as mentioned in the best practices:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScanGuidelines.html
or caching data in your application
Do note that parallel scans will eat into your provisioned throughput.
Another thing to watch out for while making your decision: how likely is the query pattern to change? Do you plan on adding filters in the future? If so, Query would be better, since Scan loads all the data (consuming provisioned read capacity) and then filters the results.
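A parallel scan splits the table into segments and runs one worker per segment, which is the optimization the linked guidelines describe. The sketch below shows the fan-out/merge shape; `scan_segment` is a stand-in for a real boto3 `scan` call, which would receive the same `Segment` and `TotalSegments` parameters.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_scan(scan_segment, total_segments):
    """Run one scan worker per segment and merge the results."""
    with ThreadPoolExecutor(max_workers=total_segments) as pool:
        futures = [
            pool.submit(scan_segment, seg, total_segments)
            for seg in range(total_segments)
        ]
        items = []
        for f in futures:
            items.extend(f.result())
        return items

# Stand-in for a real scan: pretend each segment holds two items.
def fake_scan(segment, total_segments):
    return [f"item-{segment}-{n}" for n in range(2)]

all_items = parallel_scan(fake_scan, 4)
```

In a real implementation, each worker would also follow `LastEvaluatedKey` within its segment, and the worker count should be chosen with the table's provisioned throughput in mind, per the caveat above.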
Question to all Cassandra experts out there.
I have a column family with about a million records.
I would like to query these records in such a way that I should be able to perform a Not-Equal-To kind of operation.
I Googled this, and it seems I have to use some sort of MapReduce.
Can somebody tell me what options are available in this regard?
I can suggest a few approaches.
1) If you have a limited number of values that you would like to test for inequality, consider modeling them as boolean columns (e.g. a column isEqualToUnitedStates with true or false).
2) Otherwise, consider emulating the unsupported query != X by combining results of two separate queries, < X and > X on the client-side.
3) If your schema cannot support either type of query above, you may have to resort to writing custom routines that will do client-side filtering and construct the not-equal set dynamically. This will work if you can first narrow down your search space to manageable proportions, such that it's relatively cheap to run the query without the not-equal.
So let's say you're interested in all purchases of a particular customer of every product type except Widget. An ideal query could look something like SELECT * FROM purchases WHERE customer = 'Bob' AND item != 'Widget'; Now of course, you cannot run this, but in this case you should be able to run SELECT * FROM purchases WHERE customer = 'Bob' without wasting too many resources and filter item != 'Widget' in the client application.
4) Finally, if there is no way to restrict the data in a meaningful way before doing the scan (querying without the equality check would return too many rows to handle comfortably), you may have to resort to MapReduce. This means running a distributed job that would scan all rows in the table across the cluster. Such jobs will obviously run a lot slower than native queries, and are quite complex to set up. If you want to go this way, please look into Cassandra Hadoop integration.
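Approach 3 above can be sketched as follows: issue the supported equality query, then drop the unwanted value on the client. `run_query` is a stand-in for a real Cassandra driver call (e.g. a `Session.execute` from the DataStax Python driver); the fake implementation exists only to make the sketch runnable.

```python
def purchases_excluding(run_query, customer, excluded_item):
    """Fetch a customer's purchases, applying item != excluded_item locally."""
    # Only the equality predicate goes to Cassandra; the != part is client-side.
    rows = run_query("SELECT * FROM purchases WHERE customer = %s", (customer,))
    return [row for row in rows if row["item"] != excluded_item]

# Stand-in data source for illustration.
def fake_run_query(cql, params):
    return [
        {"customer": "Bob", "item": "Widget"},
        {"customer": "Bob", "item": "Gadget"},
    ]

result = purchases_excluding(fake_run_query, "Bob", "Widget")
```

This is only cheap when the equality predicate already narrows the result to a manageable size, as the answer stresses.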
If you want to use a not-equals operator on a specific partition key and get all other data from the table, you can use a combination of range queries and the TOKEN function from CQL to achieve this.
For example, if you want to fetch all rows except the ones with partition key 'abc', execute the two queries below:
select <column1>,<column2> from <keyspace1>.<table1> where TOKEN(<partition_key_column_name>) < TOKEN('abc');
select <column1>,<column2> from <keyspace1>.<table1> where TOKEN(<partition_key_column_name>) > TOKEN('abc');
But beware that the result is going to be huge (depending on the size of the table and the fields you need), so you might want to use this in conjunction with a utility like dsbulk. Also note that there is no guarantee of ordering in your result. This is just a data dump, which will most probably be useful in one-time scenarios such as data migration.