We're trying to debug why our Presto query run times vary significantly over the day. We see several significant spikes, some during working hours and some outside of them. We're using EMR version 5.14 and Presto version 0.194. Our data is stored in S3 as Parquet files created by Hive. The graph below shows the run times for the same query over time, executed via the Presto CLI. Any ideas or suggestions on what we should focus on, or what could potentially cause these spikes, would be much appreciated. Thanks!
Posting this in case anyone else has this issue. We ended up disabling Hive statistics in hive.properties, and that improved performance.
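As a rough sketch only: we haven't verified the exact property name against 0.194, and it varies by Presto release, but in the Hive connector the statistics toggle is exposed along these lines (treat the name as an assumption and check the connector docs for your version):

    # hive.properties (Hive catalog); property name is an assumption,
    # verify against the Hive connector docs for your Presto release
    hive.table-statistics-enabled=false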
We are using Spark History Server 3.2.1 to monitor our Spark applications.
We have thousands of daily jobs (running on Kubernetes) that write event logs to an S3 bucket (in a dedicated folder).
We use the history server to analyze and compare completed jobs (incomplete running jobs never appear in the UI, but that's not a requirement right now).
Recently I noticed an increase in ListBucket API operations in the AWS Cost Explorer. This cost is higher than the StandardStorage cost (the price we pay for storing the data itself). It's up to a few hundred dollars per month!
Running the history server with DEBUG log level exposed the "problem": every 10s the history server lists the bucket to get all logs, and then it iterates over each folder to get its contents. So if I want to keep the last 10,000 jobs, I'll have to pay for 10,101 ListBucket requests every 10s!
Here is one example (out of the 10k), reproduced locally with MinIO as S3:
22/02/20 06:44:31 DEBUG wire: http-outgoing-57 << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>local-audience</Name><Prefix>history-logs/eventlog_v2_spark-ffffdf5903c841259f28b53981746b76/</Prefix><KeyCount>2</KeyCount><MaxKeys>5000</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Contents><Key>history-logs/eventlog_v2_spark-ffffdf5903c841259f28b53981746b76/appstatus_spark-ffffdf5903c841259f28b53981746b76</Key><LastModified>2022-02-12T17:00:15.304Z</LastModified><ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag><Size>0</Size><Owner><ID></ID><DisplayName></DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>history-logs/eventlog_v2_spark-ffffdf5903c841259f28b53981746b76/events_1_spark-ffffdf5903c841259f28b53981746b76</Key><LastModified>2022-02-12T17:00:15.136Z</LastModified><ETag>"f91cc774d92c6f6c2ca4d0e1a1e76e13"</ETag><Size>868837</Size><Owner><ID></ID><DisplayName></DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult>"
To confirm that the cost comes from the history server, I turned it off for a day, and there have been no ListBucket charges since.
To mitigate the problem (because we still need the history server), I can set spark.history.fs.update.interval to a higher value (such as 3600s or so), as sketched below. But since we only check the history server once a day, even that polling is overkill and not worth the cost.
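A minimal sketch of that change in the history server's spark-defaults.conf (the log directory path here is a placeholder, not our real bucket):

    # Poll the event log directory once an hour instead of every 10 seconds
    spark.history.fs.update.interval   3600s
    # Where the history server reads event logs from (placeholder path)
    spark.history.fs.logDirectory      s3a://my-bucket/history-logs/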
Why does it scan the completed jobs every time (over and over again) and not only the new ones? Is there a way to configure this behavior to avoid those ListBucket operations?
If I only care about completed jobs, and assuming I can wait a few minutes to see the list, is there a mode that loads the list only when I log in to the UI (rather than polling periodically for nothing)?
P.S. I'm using AWS lifecycle rules (rather than the server's cleaning feature) to clean this folder, by expiring objects after a few days.
Tree-walking in S3 is (a) expensive and (b) horribly slow, especially given that a deep listing call exists. If you want to fix this and can write Scala code, see if you can patch the server to switch to a deep listing by moving to FileSystem.listFiles(path, true). Yes, that involves coding, but the OSS community depends on everyone fixing their own personal issues and sharing the outcome.
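For anyone unfamiliar with that call, here is a rough Scala sketch of what a deep listing looks like against the Hadoop FileSystem API (the bucket path is a placeholder; the actual fix would live inside the history provider's scanning code):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Placeholder log directory; the history server gets this from
    // spark.history.fs.logDirectory.
    val logDir = new Path("s3a://my-bucket/history-logs/")
    val fs: FileSystem = logDir.getFileSystem(new Configuration())

    // Recursive listing: every file under the prefix in one iterator.
    val files = fs.listFiles(logDir, true)
    while (files.hasNext) {
      val status = files.next()
      println(s"${status.getPath} (${status.getLen} bytes)")
    }

On S3A the recursive form is served by flat, paginated LIST requests on the prefix instead of one LIST per simulated directory, which is what makes it much cheaper for deep trees.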
After digging into this issue, I decided to stop using the "rolling" feature for now, as my application's jobs are relatively small.
I removed the:
spark.eventLog.rolling.enabled: true
spark.eventLog.rolling.maxFileSize: 16m
from the spark-submit command and the cost is now back to normal...
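For context, this is roughly how those settings look when passed on a spark-submit command line (a sketch only; the jar, class, and paths are placeholders). The two spark.eventLog.rolling.* lines are the ones that got dropped:

    spark-submit \
      --conf spark.eventLog.enabled=true \
      --conf spark.eventLog.dir=s3a://my-bucket/history-logs/ \
      --conf spark.eventLog.rolling.enabled=true \
      --conf spark.eventLog.rolling.maxFileSize=16m \
      --class com.example.MyJob \
      my-job.jar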
I also wrote about it here.
#stevel thanks for your answer - I will try to contribute and fix that! :)
I'm using a single-node Bigtable cluster for my sample application running on GKE. The autoscaling feature has been incorporated into the client code.
Sometimes I experience slowness (>80ms) on GET calls. To investigate this further, I need some clarity on the following Bigtable behaviour.
I have cached the Bigtable table object to ensure faster GET calls. Is the table object persistent on GKE? I have learned that objects are not persistent on Cloud Functions. Should I expect similar behaviour on GKE?
I'm using service account authentication, but how frequently do auth tokens get refreshed? I have seen frequent refresh logs from the gRPC Java client. I suspect Bigtable won't be able to serve requests during this token-refresh period (4-5 seconds).
What if the client machine/instance doesn't scale enough? Will that cause slowness for GET calls?
Bigtable client libraries use connection pooling. How frequently do connections/channels close themselves? I have read that connections are closed after minutes of inactivity (15 minutes or more).
I'm planning to read only the needed columns instead of the entire row. This can be achieved by specifying the row key as well as a column qualifier filter (see the sketch below). Can I expect some performance improvement by not reading the entire row?
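For reference, a minimal sketch of that kind of restricted read using the Cloud Bigtable Java client from Scala (project, instance, table, column family, and qualifier names are all placeholders):

    import com.google.cloud.bigtable.data.v2.BigtableDataClient
    import com.google.cloud.bigtable.data.v2.models.Filters.FILTERS
    import com.google.cloud.bigtable.data.v2.models.Query

    // Placeholders, not real resource names.
    val client = BigtableDataClient.create("my-project", "my-instance")

    // Restrict the read to one column family and one qualifier
    // instead of pulling back the whole row.
    val filter = FILTERS.chain()
      .filter(FILTERS.family().exactMatch("cf"))
      .filter(FILTERS.qualifier().exactMatch("needed-column"))

    val rows = client.readRows(
      Query.create("my-table").rowKey("row-key-1").filter(filter))

    val it = rows.iterator()
    while (it.hasNext) {
      val row = it.next()
      println(s"${row.getKey.toStringUtf8}: ${row.getCells.size()} cells returned")
    }
    client.close()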
According to the official GCP docs, you can find here the causes of slower Bigtable performance. I would suggest going through those docs, as they might be helpful. Also see Troubleshooting performance issues.
We are running into quota limits for a small dataset (less than 1 GB) in BigQuery. Google Cloud gives us no indication of what queries are running on the backend, which makes it hard to tune the setup. We have a BigQuery dataset and a dashboard built in Data Studio that queries the dataset.
I've used relational databases like Oracle in the past, and they have excellent tooling to diagnose issues. But with BigQuery, I feel like I am staring into the dark.
I'd appreciate any help/pointers you can give.
The concurrent queries limit refers to the number of statements that are executed simultaneously in BigQuery. The quota limit for on-demand, interactive queries is 100 concurrent queries (updated).
Based on this, it seems that your Data Studio dashboard is hitting this quota when running your reports, in which case it is suggested to redesign your dashboard so that it avoids exceeding those limits.
Additionally, you can use the bq ls -j -a PROJECTNAME command to list the jobs that have been run in your project in order to identify the queries you need to work with, as mentioned by Elliott Brossard.
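For example (PROJECTNAME and JOB_ID are placeholders):

    # List recent jobs from all users in the project
    bq ls -j -a --max_results=50 PROJECTNAME

    # Then inspect a single job to see its query text and statistics
    bq show -j --format=prettyjson JOB_ID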
Online Spanner schema updates take minutes, even for very small tables (tens of rows).
That is: adding/dropping/altering columns, adding tables, etc.
This can be quite frustrating for development processes and new version deployments.
Any plans for improvement?
A few more questions:
Does anyone know of a third-party schema comparison tool for Spanner? I couldn't find any.
What about data backups, in order to save historical snapshots?
Thanks in advance
Schema Updates:
Since Cloud Spanner is a distributed database, it has to apply the change across all the moving parts of the system, which accounts for the latency you describe.
As a suggestion, you could batch the schema updates. This keeps latency low (a batch completes in roughly the time of a single schema update) and can be done through the API or the gcloud command-line tool.
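A minimal sketch of a batched DDL update using the Java client from Scala (instance ID, database ID, and the DDL statements themselves are placeholders):

    import com.google.cloud.spanner.{Spanner, SpannerOptions}
    import scala.jdk.CollectionConverters._

    val spanner: Spanner = SpannerOptions.newBuilder().build().getService
    val dbAdmin = spanner.getDatabaseAdminClient

    // One batched request: the whole batch is applied in roughly the
    // time of a single schema update.
    val statements = List(
      "ALTER TABLE Users ADD COLUMN LastLogin TIMESTAMP",
      "CREATE TABLE AuditLog (Id INT64, Payload STRING(MAX)) PRIMARY KEY (Id)",
      "ALTER TABLE Orders ADD COLUMN Notes STRING(1024)"
    ).asJava

    // Blocks until the whole batch has been applied.
    dbAdmin.updateDatabaseDdl("my-instance", "my-database", statements, null).get()

The same batch can be submitted with gcloud spanner databases ddl update by passing several statements in one --ddl argument.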
Schema Comparison Tool:
You could use the getDatabaseDdl API to maintain a history of your schema changes and use your tool of choice to diff them.
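For example, a short Scala sketch that dumps the current schema so it can be checked into version control and diffed between releases (instance and database IDs are placeholders):

    import com.google.cloud.spanner.{Spanner, SpannerOptions}
    import scala.jdk.CollectionConverters._

    val spanner: Spanner = SpannerOptions.newBuilder().build().getService

    // Returns the database's DDL statements as a list of strings.
    val ddl = spanner.getDatabaseAdminClient
      .getDatabaseDdl("my-instance", "my-database")
      .asScala

    ddl.foreach(statement => println(statement + ";"))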
We've noticed that some of our queries have seen degraded performance in the last couple of weeks. We suspect this is due to some combination of:
Increased data in the tables
Increased data in some results
Inefficient or over-aggressive use of transactions
Any advice on how to diagnose the performance of a particular query?
When running an interactive query against your database in the Google Cloud Platform online management console, you can request generation of a plan explanation with the tab below the 'Run Query' button. This explanation may help you understand why your query is running slowly.
One common reason for performance regressions is that you have recently deleted or updated a lot of data. It can take several days for deleted/overwritten data to be garbage-collected, and in the interim it can slow down operations since this old data must still be scanned for queries over its key-range.