I want to get the maximum and the average of the API call count within 10-second periods from CloudWatch Logs Insights. I have the query below. Is it correct, and what are the differences between the count, max, and avg values it returns?
stats count(*) as count, max(count) as max, avg(count) as avg by bin(10s)
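For comparison, a minimal sketch of one way to get both the per-bin call count and the maximum/average of those counts, assuming the goal is "API calls per 10-second bin" and assuming the query engine supports chaining a second stats command (a newer Logs Insights feature; the alias names are illustrative):

stats count(*) as calls by bin(10s)
| stats max(calls) as maxCalls, avg(calls) as avgCalls

In the original query, max(count) and avg(count) are evaluated in the same stats pass that defines count, so they are unlikely to produce a second level of aggregation over the per-bin counts; that requires a separate aggregation step as sketched above.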
I am currently learning about AWS Logs Insights, and I was wondering if the following is possible.
Let's say I gather logs from Route53, so I have an event for each query that reaches the AWS DNS servers. Now, I know I can count the number of queries per resolverIp like this:
stats count(*) by resolverIp
I also know that I can count the number of queries per resolverIp that returned the NXDOMAIN responseCode, like this:
filter responseCode="NXDOMAIN" | stats count(*) by resolverIp
My question is, is there a way to get the percentage of the latter (the number of queries that returned NXDOMAIN per resolverIp) out of the former (the total number of queries per resolverIp)?
This query gives you that percentage per resolverIp (strcontains returns 1 when the field contains the substring and 0 otherwise, so the sum counts the NXDOMAIN responses):

stats sum(strcontains(responseCode, "NXDOMAIN")) / count(*) * 100 by resolverIp
Are Google Datastore queries slower when put into a transaction? Assuming the query is exactly the same, would the run time of a transaction + query be slower than the query not in a transaction?
Does the setup of the transaction add any execution time?
Here's some data from running a single document get 100 times sequentially.
type              avg    p99
transactional     46ms   86ms
nontransactional  16ms   27ms
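For context, a minimal sketch of how such a comparison could be run with the Python client library (google-cloud-datastore); the kind and key name are placeholders, and a real benchmark should also account for client warm-up and network variance:

import time
from google.cloud import datastore

client = datastore.Client()
key = client.key("BenchmarkKind", "some-id")  # hypothetical kind and key name

def time_gets(transactional, runs=100):
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        if transactional:
            # each read runs inside its own read-write transaction
            with client.transaction():
                client.get(key)
        else:
            client.get(key)
        durations.append(time.perf_counter() - start)
    durations.sort()
    avg_ms = 1000 * sum(durations) / len(durations)
    p99_ms = 1000 * durations[int(len(durations) * 0.99) - 1]
    return avg_ms, p99_ms

for label, txn in (("nontransactional", False), ("transactional", True)):
    avg_ms, p99_ms = time_gets(txn)
    print(f"{label}: avg={avg_ms:.0f}ms p99={p99_ms:.0f}ms")

The extra latency in the transactional case is consistent with the additional round trips needed to begin and commit the transaction around the read.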
The scenario is to update column descriptions in tables (about 1,500 columns across 50 tables). Due to multiple restrictions I have been asked to use the bq query command, through the Cloud CLI, to execute the ALTER TABLE SQL that updates the column descriptions. The query:

bq query --nouse_legacy_sql \
  'ALTER TABLE `<Table>` ALTER COLUMN <columnname> SET OPTIONS(DESCRIPTION="<Updated Description>")'
The issue is that bunching the bq queries together for 1,500 columns means 1,500 SQL statements.
This causes the standard "Exceeded rate limits: too many table update operations for this table" error.
Any suggestions on how to execute this better?
You are hitting the rate limit:
Maximum rate of table metadata update operations per table: 5 operations per 10 seconds
You will need to stagger the updates to stay within batches of 5 operations per 10 seconds. You could also alter all of the columns in a single table with a single statement to reduce the number of calls required.
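A sketch of the single-statement idea, assuming BigQuery accepts several ALTER COLUMN actions separated by commas in one ALTER TABLE statement (table and column names below are placeholders; verify the exact syntax against the current DDL reference):

ALTER TABLE `project.dataset.my_table`
  ALTER COLUMN column_a SET OPTIONS (description = 'Updated description for column_a'),
  ALTER COLUMN column_b SET OPTIONS (description = 'Updated description for column_b');

If that is accepted, one such statement per table should count as a single metadata update, so 50 tables issued this way would sit comfortably under the 5-operations-per-10-seconds limit, even before any staggering.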
Is there a way to measure the impact on a Kusto cluster when we run a query from Power BI? The query I use in Power BI might pull a large amount of data even for a limited time range. I am aware of the setting to limit the number of query result records, but I would like to measure the cluster impact of specific queries.
Do I need to use the metrics under Data Explorer monitoring? Is there a best way to do this, and any specific metrics? Thanks.
You can use .show queries or the Query diagnostic logs - these show you the resource utilization per query (e.g. total CPU time and memory peak), and you can filter to a specific user or application name (e.g. Power BI).
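For example, a sketch along those lines (the exact Application string reported by Power BI can vary, so inspect the column first; the filter below is an assumption):

.show queries
| where StartedOn > ago(1d) and Application contains "PowerBI"
| project StartedOn, User, Application, Duration, TotalCpu, MemoryPeak, Text
| order by TotalCpu desc

TotalCpu and MemoryPeak give a per-query view of cluster cost, which complements the cluster-level metrics under Data Explorer monitoring.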
I created a test table in Cloud Spanner and populated it with 120 million rows. I have created a composite primary key for the table.
When I run a simple "select count(*) from <table>" query, it takes approximately a minute for the Cloud Spanner web UI to return results.
Is anyone else facing similar problem?
Cloud Spanner does not materialize counts, so a query like "select count(*) ..." will scan the entire table to return the row count, hence the longer execution time.
If you require faster counts, I recommend keeping a sharded counter that is updated transactionally along with changes to the table.
#samiz - your answer recommends "keeping a sharded counter updated transactionally with changes to the table".
How can we determine how many counter shards are needed for the table? There is no retry in the transaction...
Thank you
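To illustrate the sharded-counter suggestion, a minimal sketch under stated assumptions (table and column names are hypothetical; the number of shards is a tuning choice, roughly proportional to how many concurrent writers you need to support, since every insert or delete must also update one shard row in the same transaction):

-- Hypothetical counter table: one row per shard, pre-created with RowCount = 0
CREATE TABLE TestTableRowCount (
  ShardId  INT64 NOT NULL,
  RowCount INT64 NOT NULL,
) PRIMARY KEY (ShardId);

-- In the same read-write transaction as an insert into the base table,
-- bump one shard derived from the key, e.g. @shard = hash(key) mod number_of_shards
UPDATE TestTableRowCount SET RowCount = RowCount + 1 WHERE ShardId = @shard;

-- Fast count: sum a handful of shard rows instead of scanning 120 million rows
SELECT SUM(RowCount) FROM TestTableRowCount;

Starting with a small number of shards and increasing it only if the counter rows become a write hotspot is one pragmatic way to pick the shard count.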