com.microsoft.sqlserver.jdbc.SQLServerException: 110806 A distributed query failed - azure-sqldw

An ETL developer was running an UPDATE statement on a relatively large table and reported that it failed with:
com.microsoft.sqlserver.jdbc.SQLServerException: 110806;A distributed query failed: Distribution(s):[1-60]The Microsoft Distributed Transaction Coordinator (MS DTC) has cancelled the distributed transaction.
Operation cancelled by user.
Was this caused by a client-initiated abort (for example, a runtime limit being exceeded on the client), or did the error originate in the database?
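For what it's worth, the "Operation cancelled by user" wording is the kind of message you see when the cancellation comes from the client side, for example when a query timeout set through the JDBC driver (or a framework wrapping it) expires and the driver cancels the statement. A minimal sketch of that pattern, with a placeholder connection string, timeout, and UPDATE statement rather than the ETL job's actual settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DwUpdateWithClientTimeout {
    public static void main(String[] args) {
        // Placeholder connection string; the real job's settings are not shown above.
        String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=yourdw;user=etl;password=secret";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // If the client sets a query timeout, the driver cancels the running
            // statement once it expires; on Azure SQL DW that cancellation can be
            // reported back as the MS DTC "cancelled the distributed transaction" /
            // "Operation cancelled by user" error rather than a plain timeout.
            stmt.setQueryTimeout(1800); // seconds; placeholder runtime limit

            stmt.executeUpdate("UPDATE dbo.large_table SET col1 = col1"); // placeholder statement
        } catch (SQLException e) {
            System.err.println("Error " + e.getErrorCode() + ": " + e.getMessage());
        }
    }
}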

Related

Informatica powercenter power exchange PWX-00267 DBAPI error

I am executing a workflow in Informatica which is supposed to insert values into a target file.
Some of the records are getting inserted, but I get an error after a few insertions saying:
[Informatica][ODBC PWX Driver] PWX-00267 DBAPI error for file……… Write error on record 119775 Requested 370 SQLSTATE [08S01]
Is this because of file constraints on how the record can be structured, or due to some other reason?
I'm not sure if this is exactly the case, but looking up the error code 08S01, I found a site that lists Data Provider Error Codes. Under SQLCODE 370 (assuming that is what your error message indicates) I found:
Message: There are insufficient resources on the target system to
complete the command. Contact your server administrator.
Reason: The resource limits reached reply message indicates that the
command could not be completed due to insufficient server resources
(e.g. memory, lock, buffer).
Action: Verify the connection and command parameters, and then
re-attempt the connection and command request. Review a client network
trace to determine if the server returned a SQL communications area
reply data (SQLCARD) with an optional reason code or other optional
diagnostic information.

What determines a "transient error" in AWS Athena query `FAILED` states?

The documentation at https://docs.aws.amazon.com/athena/latest/APIReference/API_QueryExecutionStatus.html states:
Athena automatically retries your queries in cases of certain transient errors. As a result, you may see the query state transition from RUNNING or FAILED to QUEUED.
As such, when a query execution is in a FAILED state, how can one determine (ideally from the API) if it is a transient error (and thus will transition back to RUNNING or QUEUED) or not?
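As far as I can tell, GetQueryExecution does not return an explicit "transient" flag; the only observable signals are the state itself and StateChangeReason, so about all a client can do is keep polling for a while and treat a move out of FAILED back to QUEUED or RUNNING as evidence that Athena retried. A rough sketch with the AWS SDK for Java (the query execution ID, polling interval, and polling window are placeholders):

import com.amazonaws.services.athena.AmazonAthena;
import com.amazonaws.services.athena.AmazonAthenaClientBuilder;
import com.amazonaws.services.athena.model.GetQueryExecutionRequest;
import com.amazonaws.services.athena.model.QueryExecutionStatus;

public class AthenaStateWatcher {
    public static void main(String[] args) throws InterruptedException {
        AmazonAthena athena = AmazonAthenaClientBuilder.defaultClient();
        String queryExecutionId = args[0]; // placeholder: the execution to watch

        String previousState = null;
        for (int i = 0; i < 60; i++) {            // placeholder polling window
            QueryExecutionStatus status = athena
                    .getQueryExecution(new GetQueryExecutionRequest()
                            .withQueryExecutionId(queryExecutionId))
                    .getQueryExecution()
                    .getStatus();

            String state = status.getState();     // QUEUED, RUNNING, SUCCEEDED, FAILED, CANCELLED
            if (!state.equals(previousState)) {
                // A FAILED -> QUEUED/RUNNING transition is the outward sign that
                // Athena treated the failure as transient and retried the query.
                System.out.println(state + " : " + status.getStateChangeReason());
                previousState = state;
            }
            if ("SUCCEEDED".equals(state) || "CANCELLED".equals(state)) {
                break;                            // terminal states that never flip back
            }
            Thread.sleep(5_000);                  // placeholder polling interval
        }
        // If the state stays FAILED for the whole window, it very likely was not
        // a transient error that Athena will retry on its own.
    }
}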

WSO2 AM-Analytics worker error java.lang.OutOfMemoryError: GC overhead limit exceeded

I'm using APIM Analytics 2.6 and configured the database (Oracle 12c) following these docs (https://docs.wso2.com/display/AM260/Configuring+APIM+Analytics#standardsetup).
My worker ran for a few days, then the following errors occurred:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /d01/WSO2/wso2am-2.6.0/wso2am-analytics-2.6.0/wso2/worker/logs/heap-dump.hprof ...
Unable to create /d01/WSO2/wso2am-2.6.0/wso2am-analytics-2.6.0/wso2/worker/logs/heap-dump.hprof: File exists
Exception in thread "MVStore background writer nio:/d01/WSO2/wso2am-2.6.0/wso2am-analytics-2.6.0/wso2/worker/database/WSO2_CARBON_DB.mv.db" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.h2.mvstore.Page.create(Page.java:122)
at org.h2.mvstore.Page.createEmpty(Page.java:101)
at org.h2.mvstore.MVMap.<init>(MVMap.java:75)
at org.h2.mvstore.MVMap.openReadOnly(MVMap.java:1156)
at org.h2.mvstore.MVStore.getMetaMap(MVStore.java:527)
at org.h2.mvstore.MVStore.openMapVersion(MVStore.java:409)
at org.h2.mvstore.MVMap.openVersion(MVMap.java:1133)
at org.h2.mvstore.MVMap.rewrite(MVMap.java:780)
at org.h2.mvstore.MVStore.compactRewrite(MVStore.java:1918)
at org.h2.mvstore.MVStore.compact(MVStore.java:1810)
at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2512)
at org.h2.mvstore.MVStore$BackgroundWriterThread.run(MVStore.java:2720)
[2019-11-12 22:16:27,292] ERROR {org.wso2.siddhi.core.util.Scheduler} - java.lang.OutOfMemoryError: GC overhead limit exceeded
[2019-11-12 22:16:34,477] ERROR {org.wso2.siddhi.core.util.Scheduler} - java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "MVStore background writer nio:/d01/WSO2/wso2am-2.6.0/wso2am-analytics-2.6.0/wso2/dashboard/database/MESSAGE_TRACING_DB.mv.db" java.lang.OutOfMemoryError: GC overhead limit exceeded
[2019-11-12 22:17:12,793] INFO {org.wso2.extension.siddhi.io.mgwfile.task.MGWFileCleanUpTask} - Uploaded API Usage data in the db will be cleaned up to : 2019-11-07 22:16:25.014
[2019-11-12 22:17:24,591] ERROR {org.wso2.siddhi.core.util.Scheduler} - java.lang.OutOfMemoryError: GC overhead limit exceeded
[2019-11-12 22:17:52,813] ERROR {org.wso2.siddhi.core.util.Scheduler} - java.lang.OutOfMemoryError: GC overhead limit exceeded
[2019-11-12 22:17:45,545] ERROR {org.wso2.carbon.deployment.engine.internal.SchedulerTask} - Error occurred while scanning deployment repository java.lang.OutOfMemoryError: GC overhead limit exceeded
Please let me know if you have any solutions for my case.
I suspect this is because you are using H2 as the database for Analytics as well.
However, since a large amount of analytics-related data will be persisted in the database, it is always recommended to use one of the following (otherwise the analytics instance can simply fail because the H2 database cannot reliably handle that load of data):
Postgres 9.5 and later
MySQL 5.6
MySQL 5.7
Oracle 12c
MS SQL Server 2012
DB2
You could follow the doc [1] (check the standard-setup tab, especially the database configuration steps) and make the proper deployment.
[1] - https://docs.wso2.com/display/AM260/Configuring+APIM+Analytics#ConfiguringAPIMAnalytics-Step4-Configurethedatabases
Cheers!

BigQuery unable to insert job. Workflow failed

I need to run a batch job from GCS to BigQuery via Dataflow and Beam. All my files are Avro with the same schema.
I've created a Dataflow Java application that is successful on a smaller set of data (~1 GB, about 5 files).
But when I try to run it on a bigger set of data (>500 GB, >1000 files), I receive an error message:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: Failed to create load job with id prefix 1b83679a4f5d48c5b45ff20b2b822728_6e48345728d4da6cb51353f0dc550c1b_00001_00000, reached max retries: 3, last failed load job: ...
After 3 retries it terminates with:
Workflow failed. Causes: S57....... A work item was attempted 4 times without success....
This step is the load to BigQuery.
Stackdriver says the processing is stuck in step ....for 10m00s... and
Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes.....
I looked up the 409 error code, which indicates that I might have an existing job, dataset, or table. I've removed all the tables and re-run the application, but it still shows the same error message.
I am currently limited to 65 workers and I have them using n1-standard-4 CPUs.
I believe there are other ways to move the data from GCS to BQ, but I need to demonstrate Dataflow.
"java.lang.RuntimeException: Failed to create job with prefix beam_load_csvtobigqueryxxxxxxxxxxxxxx, reached max retries: 3, last failed job: null.
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers$PendingJob.runJob(BigQueryHelpers.java:198)..... "
One possible cause could be a privilege issue. Ensure that the user account which interacts with BigQuery has the "bigquery.jobs.create" permission, which is included in the predefined "BigQuery User" role.
Posting the comment of @DeaconDesperado as community wiki: they experienced the same error, and what they did was remove the restricted characters from the table name (per the documentation, table names may contain Unicode letters, marks, numbers, connectors, dashes or spaces), and the error was gone.
I got the same problem using "roles/bigquery.jobUser", "roles/bigquery.dataViewer", and "roles/bigquery.user". But only when granting "roles/bigquery.admin" did the issue get resolved.
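For context, the load job in the error above is created by the BigQueryIO write step of the pipeline. Below is a minimal sketch of what such a GCS-Avro-to-BigQuery write stage typically looks like, with a hypothetical toTableRow() converter and placeholder bucket, schema, and table names (this is not the asker's actual code):

import com.google.api.services.bigquery.model.TableRow;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptor;

public class AvroToBigQuery {
    // Hypothetical converter; real schema handling depends on the Avro schema in use.
    static TableRow toTableRow(GenericRecord record) {
        TableRow row = new TableRow();
        record.getSchema().getFields()
              .forEach(f -> row.set(f.name(), String.valueOf(record.get(f.name()))));
        return row;
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        String avroSchemaJson = "{...}"; // placeholder: the shared Avro schema of the input files

        p.apply("ReadAvro", AvroIO.readGenericRecords(avroSchemaJson)
                .from("gs://your-bucket/input/*.avro"))               // placeholder path
         .apply("ToTableRow", MapElements
                .into(TypeDescriptor.of(TableRow.class))
                .via((GenericRecord r) -> toTableRow(r)))
         .apply("WriteToBQ", BigQueryIO.writeTableRows()
                .to("your-project:your_dataset.your_table")           // placeholder table
                .withJsonSchema("{...}")                              // placeholder BigQuery schema
                .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(WriteDisposition.WRITE_APPEND)
                // With batch loads, each bundle of files is imported via BigQuery load
                // jobs; "Failed to create load job ... reached max retries: 3" and any
                // 409 / bigquery.jobs.create permission problem surface at this step.
                .withMethod(BigQueryIO.Write.Method.FILE_LOADS));

        p.run();
    }
}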

A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool'

I have a series of Azure SQL Data Warehouse databases (for our development/evaluation purposes). After a recent unplanned extended outage (caused by an issue with the Tenant Ring associated with some of these databases), I decided to resume the canary queries I had been running before, which I had quiesced for a couple of months because of frequent exceptions.
The canary queries do not run particularly frequently on any specific database, say every 15 minutes. On one database, I've received two indications within 24 hours of issues completing the canary query. The error is:
Msg 110802, Level 16, State 1, Server adwscdev1, Line 1
110802;An internal DMS error occurred that caused this operation to fail. Details: A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool' (2000000007). Rerun the query.
This database is under essentially no load, running at more than 100 DWU.
Other databases on the same logical server may be running under a load, but I have not seen the error on them.
What is the explanation for this error?
Please open a support ticket for this issue; support will have full access to the DMS logs and will be able to see exactly what is going on. This behavior is not expected.
While I agree a support case would be reasonable, I think you should also try scaling up to, say, DWU400 and retrying. I would also consider trying largerc or xlargerc on DWU100 and DWU400, as described here. Note that a larger resource class gets more memory and resources per query.
Run the following then retry your query:
EXEC sp_addrolemember 'largerc', 'yourLoginName'
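In case it helps, here is a hedged JDBC sketch of applying that suggestion end to end (the connection strings, login, and canary query are placeholders; as far as I know the new resource class only applies to sessions opened after the role membership change):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ApplyLargerResourceClass {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection strings for the SQL Data Warehouse database.
        String adminUrl  = "jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=yourdw;user=admin;password=secret";
        String canaryUrl = "jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=yourdw;user=yourLoginName;password=secret";

        // Step 1: as an administrator, add the canary login to the larger resource class.
        try (Connection admin = DriverManager.getConnection(adminUrl);
             Statement stmt = admin.createStatement()) {
            stmt.execute("EXEC sp_addrolemember 'largerc', 'yourLoginName'");
        }

        // Step 2: reconnect as that login and rerun the canary query; the resource-class
        // change only takes effect for sessions started after the membership change.
        try (Connection conn = DriverManager.getConnection(canaryUrl);
             Statement stmt = conn.createStatement()) {
            stmt.executeQuery("SELECT COUNT(*) FROM dbo.your_canary_table"); // placeholder canary query
        }
    }
}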