Informatica PowerCenter PowerExchange PWX-00267 DBAPI error - informatica

I am executing a workflow in Informatica that is supposed to insert values into a target file.
Some of the records get inserted, but after a few insertions I get the following error:
[Informatica][ODBC PWX Driver] PWX-00267 DBAPI error for file……… Write error on record 119775 Requested 370 SQLSTATE [08S01]
Is this because of file constraints on how the record can be structured, or due to some other reason?

I'm not sure if this is exactly the case, but looking up the error code 08S01 I found this site that lists Data Provider Error Codes. Under SQLCODE 370 (assuming this is what your error message indicates) I found:
Message: There are insufficient resources on the target system to complete the command. Contact your server administrator.
Reason: The resource limits reached reply message indicates that the command could not be completed due to insufficient server resources (e.g. memory, lock, buffer).
Action: Verify the connection and command parameters, and then re-attempt the connection and command request. Review a client network trace to determine if the server returned a SQL communications area reply data (SQLCARD) with an optional reason code or other optional diagnostic information.
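The "re-attempt the connection and command request" part of that action is the key point: SQLSTATE 08S01 is a communication-class error, so once the server has resources free again a retry on a fresh connection often succeeds. Purely as an illustration of that retry pattern at the ODBC level (outside Informatica, with a hypothetical DSN and insert statement), a minimal pyodbc sketch:

    import time
    import pyodbc

    # Hypothetical DSN and statement; the point is the retry-on-08S01 pattern,
    # not the exact PowerExchange connection string.
    CONN_STR = "DSN=PWX_TARGET;UID=user;PWD=secret"
    INSERT_SQL = "INSERT INTO target_file (col1, col2) VALUES (?, ?)"

    def insert_with_retry(row, attempts=3, wait=5):
        for attempt in range(1, attempts + 1):
            conn = pyodbc.connect(CONN_STR)  # fresh connection on every attempt
            try:
                conn.cursor().execute(INSERT_SQL, row)
                conn.commit()
                return
            except pyodbc.Error as exc:
                sqlstate = str(exc.args[0])
                # 08xxx = communication/resource failures such as 08S01;
                # anything else is a real data problem, so re-raise immediately.
                if not sqlstate.startswith("08") or attempt == attempts:
                    raise
                time.sleep(wait)
            finally:
                conn.close()

    insert_with_retry(("value1", "value2"))

Inside PowerCenter itself the equivalent is simply re-running the session once the server-side resource shortage is resolved.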

Is it normal to wait 18 hours for a log message to show up?

Is this an issue everyone just has to put up with? It takes log messages at least 12-18 hours to appear in Cloud Logging. We never have any idea what our Google VMs are doing. Most of the time we never see our log messages at all, just millions of lines of this stuff:
2021-11-09T18:52:38.887509624Z Unable to export to Monitoring service because: GaxError RPC failed, caused by 3:Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.. debug_error_string:{"created":"#1636483958.886243136","description":"Error received from peer ipv4:172.217.11.202:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.","grpc_status":3}
2021-11-09T18:52:38.903761Z 2021-11-09T18:52:38.903512137Z container kill 714e6b93c8c67b5a54569aed3bd4a5985591600230af8ed860b5789e686e77aa (image=gcr.io/stackdriver-agents/stackdriver-logging-agent:1.8.9, name=stackdriver-logging-agent, signal=23)
2021-11-09T18:52:38.917711Z 2021-11-09T18:52:38.917505171Z container kill 714e6b93c8c67b5a54569aed3bd4a5985591600230af8ed860b5789e686e77aa (image=gcr.io/stackdriver-agents/stackdriver-logging-agent:1.8.9, name=stackdriver-logging-agent, signal=23)
2021-11-09T18:53:38.890751337Z Unable to export to Monitoring service because: GaxError RPC failed, caused by 3:Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.. debug_error_string:{"created":"#1636484018.889384555","description":"Error received from peer ipv4:172.217.11.202:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.","grpc_status":3}
2021-11-09T18:54:38.879268968Z Unable to export to Monitoring service because: GaxError RPC failed, caused by 3:Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.. debug_error_string:{"created":"#1636484078.878076512","description":"Error received from peer ipv4:172.217.11.202:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.","grpc_status":3}
2021-11-09T18:55:38.889891450Z Unable to export to Monitoring service because: GaxError RPC failed, caused by 3:Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.. debug_error_string:{"created":"#1636484138.888728523","description":"Error received from peer ipv4:172.217.11.202:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"Field timeSeries[0].points[0].interval.end_time had an invalid value of "2021-10-30T16:09:38.770613-07:00": Data points cannot be written more than 24h in the past.","grpc_status":3}
If our log messages do appear, they're always vastly outnumbered by these error messages.
Google's troubleshooting instructions (https://cloud.google.com/logging/docs/agent/logging/troubleshooting) are useless; right away I get "sudo: service: command not found" using their standard "container-optimized" VM image.
Please check this. They mention a fix in the next release (mentioned 4 days ago), and the workaround is to use the Ops Agent.
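If you want to confirm that the delay is in the legacy stackdriver-logging-agent rather than in Cloud Logging itself, one quick test is to write an entry through the Logging API directly and see how quickly it shows up in the Logs Explorer. A minimal sketch (the log name is arbitrary; it assumes Application Default Credentials on the VM):

    from google.cloud import logging as cloud_logging  # pip install google-cloud-logging

    # Writes directly through the Cloud Logging API, bypassing the VM's logging agent.
    client = cloud_logging.Client()
    logger = client.logger("agent-bypass-test")  # arbitrary log name
    logger.log_text("direct API test entry")

If that entry appears within seconds, the backlog is coming from the agent (which would fit the "more than 24h in the past" errors above), and switching to the Ops Agent is the right move.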

Unable to create environments on Google Cloud Composer

I tried to create a Google Cloud Composer environment but in the page to set it up I get the following errors:
Service Error: Failed to load GKE machine types. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load regions. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load zones. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load service accounts. Please leave the field empty to apply default values or retry later.
The only parameters GCP lets me change are the region and the number of nodes, but it still lets me create the environment. After 30 minutes the environment creation fails with the following error:
CREATE operation on this environment failed 1 day ago with the following error message:
Http error status code: 400
Http error message: BAD REQUEST
Errors in: [Web server]; Error messages:
Failed to deploy the Airflow web server. This might be a temporary issue. You can retry the operation later.
If the issue persists, it might be caused by problems with permissions or network configuration. For more information, see https://cloud.google.com/composer/docs/troubleshooting-environment-creation.
An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>2021-07-20T14:31:23.047Z7050.xd.0: Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
Got error "Another operation failed." during CP_DEPLOYMENT_CREATING_STANDARD []
Is it a problem with permissions? If so, what permissions do I need? Thank you!
It looks like more of a temporary issue:
The first set of errors says the console could not load the metadata (regions list, zones list, and so on); you don't have a clear PERMISSION_DENIED error.
The second error also suggests it: "This might be a temporary issue."
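If you want to rule out a plain permission problem on the account you are using, you can call testIamPermissions on the project: an empty result means the permission really is missing; otherwise the errors above are most likely transient. A sketch (the project id is a placeholder; composer.environments.create is the permission behind environment creation):

    from googleapiclient import discovery  # pip install google-api-python-client

    PROJECT = "my-project"  # placeholder project id

    crm = discovery.build("cloudresourcemanager", "v1")
    resp = crm.projects().testIamPermissions(
        resource=PROJECT,
        body={"permissions": ["composer.environments.create"]},
    ).execute()
    # Prints the subset of the requested permissions the caller actually has.
    print(resp.get("permissions", []))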

BigQuery unable to insert job. Workflow failed

I need to run a batch job from GCS to BigQuery via Dataflow and Beam. All my files are Avro with the same schema.
I've created a Dataflow Java application that runs successfully on a smaller set of data (~1 GB, about 5 files).
But when I try to run it on a bigger set of data (>500 GB, >1000 files), I receive an error message:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: Failed to create load job with id prefix 1b83679a4f5d48c5b45ff20b2b822728_6e48345728d4da6cb51353f0dc550c1b_00001_00000, reached max retries: 3, last failed load job: ...
After 3 retries it terminates with:
Workflow failed. Causes: S57....... A work item was attempted 4 times without success....
This step is the load to BigQuery.
Stackdriver says the processing is stuck in step ....for 10m00s... and
Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes.....
I looked up the 409 error code, which indicates that I might have a conflicting existing job, dataset, or table. I've removed all the tables and re-run the application, but it still shows the same error message.
I am currently limited to 65 workers, using n1-standard-4 machines.
I believe there are other ways to move the data from GCS to BQ, but I need to demonstrate Dataflow.
"java.lang.RuntimeException: Failed to create job with prefix beam_load_csvtobigqueryxxxxxxxxxxxxxx, reached max retries: 3, last failed job: null.
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers$PendingJob.runJob(BigQueryHelpers.java:198)..... "
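For reference, the same GCS-Avro-to-BigQuery batch load can be sketched in a few lines of Beam. The question uses the Java SDK, but the pipeline shape and the BigQuery load step (the one that is failing) are the same; the bucket, project, table, and schema below are placeholders:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    SOURCE = "gs://my-bucket/avro/*.avro"        # placeholder input files
    TABLE = "my-project:my_dataset.my_table"     # placeholder destination table

    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadAvro" >> beam.io.ReadFromAvro(SOURCE)  # each Avro record becomes a dict
            | "LoadToBQ" >> beam.io.WriteToBigQuery(
                TABLE,
                schema="id:INTEGER,name:STRING",  # placeholder schema
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                # Use batch load jobs (the step that fails above) rather than streaming inserts.
                method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
            )
        )

Whether it is written in Java or Python, this write ultimately issues BigQuery load jobs, so the job-creation permission discussed below is what matters.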
One possible cause is a privilege issue. Ensure the user account that interacts with BigQuery has the "bigquery.jobs.create" permission, which is included in the predefined role "BigQuery User".
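A quick way to test that permission in isolation is to run a trivial query with the same credentials the Dataflow workers use (any query creates a job, so it needs bigquery.jobs.create). The project id below is a placeholder:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    # Use the worker service account's credentials, e.g. via GOOGLE_APPLICATION_CREDENTIALS.
    client = bigquery.Client(project="my-project")  # placeholder project id

    # Fails with 403 if the account lacks bigquery.jobs.create.
    job = client.query("SELECT 1")
    print(list(job.result()))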
Posting the comment of @DeaconDesperado as community wiki: they experienced the same error, and removing the characters that are not allowed in table names (only Unicode letters, marks, numbers, connectors, dashes, or spaces are permitted) made the error go away.
I got the same problem using "roles/bigquery.jobUser", "roles/bigquery.dataViewer", and "roles/bigquery.user". But only when granting "roles/bigquery.admin" did the issue get resolved.
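Whichever of these causes applies, the underlying load job's own error is usually more specific than the wrapped Beam exception. If you can see the load job id in the worker logs, you can pull it directly; the job id and location below are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project id

    # Job id as it appears in the Dataflow worker logs / Beam error message.
    job = client.get_job("beam_load_xxxxxxxxxxxx", location="US")
    print(job.state, job.error_result)
    for err in job.errors or []:
        print(err.get("reason"), err.get("message"))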

Route53 Domain Transfer - Registry error - 2400 : Command failed (421 SESSION TIMEOUT)

I am trying to transfer a domain using Route53 and after a few minutes I receive an email with the following error.
Registry error - 2400 : Command failed (421 SESSION TIMEOUT)
Anyone have any ideas what this means or how to get around it?
I have never seen your error. There is a document on transferring domains with error messages. The reason I am responding is that I have seen domain transfers to Route 53 fail without ever learning why. Maybe this will help you.
NSI Registry Registrar Protocol (RRP)
421 Command failed due to server error. Client should try again.
A transient server error has caused RRP command failure. A subsequent retry may produce successful results.
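Since the 421 is described as transient, the practical workaround is simply to retry the transfer. Before retrying you can check what Route 53 recorded for the failed operation and confirm the domain is still transferable. A boto3 sketch (the operation id and domain are placeholders):

    import boto3  # pip install boto3

    # The Route 53 Domains API is only served from us-east-1.
    client = boto3.client("route53domains", region_name="us-east-1")

    # Operation id from the failure email or the Route 53 console.
    detail = client.get_operation_detail(OperationId="your-operation-id")
    print(detail["Status"], detail.get("Message"))

    # Confirm nothing else is blocking the transfer before submitting it again.
    check = client.check_domain_transferability(DomainName="example.com")
    print(check["Transferability"]["Transferable"])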

Getting error in Siebel EIM import process

SBL-EIM-00205: Failed to load the application dictionary.
SBL-SVR-01042: Internal: Communication protocol error while instantiating new task SBL-EIM-00205: Failed to load the application dictionary.
This is a generic message; Siebel throws it whenever there is an issue in the repository. You will have to go through the log file to find the actual error message. Increase the component log level for the EIM component to the maximum and re-submit the job to capture the logs.
The structure of the EIM tables has to match that of the target tables, so a schema mismatch could cause this error.
SBL-SVR-01042 is a generic error raised while attempting to instantiate a new instance of a given component. To understand why it occurred, review the accompanying error messages, which provide context and more detailed information.
SBL-EIM-00205: There can be many reasons for this error. It is typically caused by incorrect ODBC registry entries or by an issue with the foreign key mapping.
You should increase the log level and get more details.