I have created a table in BQ and set up a Cloud Storage data transfer job (ref - https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer), but the job is throwing the following error.
Job bqts_6091b5c4-0000-2b62-bad0-089e08e4f7e1 (table abc) failed with error INTERNAL: An internal error occurred and the request could not be completed. Error: 80038528; JobID: 742067653276:bqts_6091b5c4-0000-2b62-bad0-089e08e4f7e1
The reason for the job failure (Error: 80038528) is that not enough slots were available. With respect to resource allocation, there is no fixed resource availability guarantee when using the on-demand model for running queries in BigQuery. The only way to make sure a certain number of slots is always available is to move to a flat-rate model [1].
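To check whether slot contention is actually the cause, you could look at the slot consumption of recent jobs via INFORMATION_SCHEMA. Below is a minimal sketch with the Python client; the `region-us` qualifier and the one-day window are assumptions, so adjust them to your dataset's location and time frame.

from google.cloud import bigquery

client = bigquery.Client()

# Slot usage of the most slot-hungry jobs over the last day
query = """
    SELECT job_id, state, total_slot_ms, error_result
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    ORDER BY total_slot_ms DESC
    LIMIT 20
"""
for row in client.query(query).result():
    print(row.job_id, row.state, row.total_slot_ms)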
If slots are not the issue, then you can follow what @Sakshi Gatyan mentioned. That is the right way to get an exact solution for a BigQuery internal error.
[1] https://cloud.google.com/bigquery/docs/slots
Since it's an internalError, there can be multiple reasons for your job failure depending on your environment; check the error table for details.
The job needs to be inspected internally by the Google support team, so I would recommend filing a case with support.
Related
I have written a Cloud Storage-triggered Cloud Function. I have 10-15 files landing at 5-second intervals in the Cloud Storage bucket, and the function loads the data into a BigQuery table (truncate and load).
While there are 10 files in the bucket, I want the Cloud Function to process them sequentially, i.e. one file at a time, since all the files access the same table.
Currently the Cloud Function is triggered for multiple files at a time, and the BigQuery operation fails because multiple files are trying to access the same table.
Is there any way to configure this in the Cloud Function?
Thanks in advance!
You can achieve this by using Pub/Sub and the max instances parameter on Cloud Functions.
First, use the notification capability of Google Cloud Storage to sink the events into a Pub/Sub topic.
You will then receive a message every time an event occurs on the bucket. If you want to filter on file creation only (object finalize), you can apply a filter on the subscription. I wrote an article on this.
Then, create an HTTP function (an HTTP function is required if you want to apply a filter) with max instances set to 1. That way, only one instance of the function can run at a time, so there is no concurrency.
Finally, create a Pub/Sub push subscription on the topic, with or without a filter, to call your function over HTTP.
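As a rough sketch of the receiving side, assuming the Python Functions Framework and the standard bucketId/objectId attributes that Cloud Storage notifications attach to the Pub/Sub message (the destination table and the load logic are placeholders):

import functions_framework
from google.cloud import bigquery

client = bigquery.Client()

@functions_framework.http
def handle_gcs_event(request):
    # Pub/Sub push delivery wraps the Cloud Storage notification in an envelope
    envelope = request.get_json(silent=True)
    attributes = envelope["message"].get("attributes", {})
    bucket = attributes["bucketId"]
    name = attributes["objectId"]

    # Placeholder: load gs://bucket/name into BigQuery and wait for completion
    job = client.load_table_from_uri(f"gs://{bucket}/{name}", "my_dataset.my_table")
    job.result()

    return ("", 204)

Deploying this function with max instances set to 1 (for example with the --max-instances flag of gcloud) ensures that only one file is processed at a time.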
EDIT
Thanks to your code, I understood what happens. In fact, BigQuery is an asynchronous system: when you perform a request or a load job, a job is created and runs in the background.
In Python you can explicitly wait for the job to finish, but with pandas I didn't find how!
I just found a Google Cloud page explaining how to migrate from pandas to the BigQuery client library. As you can see, there is a line at the end:
# Wait for the load job to complete.
job.result()
which waits for the job to finish.
You did this correctly in the _insert_into_bigquery_dwh function, but it's not the case in the staging _insert_into_bigquery_staging one. This can lead to two issues:
The dwh function works on old data, because the staging load isn't finished yet when you trigger that job.
If the staging load takes, say, 10 seconds and runs in the "background" (you don't wait for it explicitly in your code) and the dwh step takes 1 second, the next file is processed at the end of the dwh function even though the staging load is still running in the background. And that leads to your issue.
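As a minimal sketch of what the staging step could look like with the BigQuery client library instead of pandas (the function name mirrors the one mentioned above; the table ID and job configuration are assumptions):

from google.cloud import bigquery

client = bigquery.Client()

def _insert_into_bigquery_staging(df, table_id):
    # table_id like "project.dataset.staging_table" (name is an example)
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    # Block until the load job completes so the dwh step sees the new data
    job.result()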
The architecture you describe isn't the same as the one in the documentation you linked. Note that in the flow diagram and the code samples the storage events trigger the Cloud Function, which streams the data directly to the destination table. Since BigQuery allows multiple concurrent streaming inserts, several function instances can execute at the same time without problems. In your use case, the intermediate table loaded with write-truncate for data cleaning makes a big difference, because each execution needs the previous one to finish, which requires a sequential processing approach.
I would like to point out that Pub/Sub doesn't let you configure the rate at which messages are delivered: if 10 messages arrive at the topic, they will all be sent to the subscriber, even if they are processed one at a time. Limiting the function to one instance may lead to overhead for that reason and could increase latency as well. That said, since the expected workload is 15-30 files a day, this may not be a big concern.
If you'd like to have parallel executions, you may try creating a new table for each message and setting a short expiration on it with the table.expires setter, so that multiple executions don't conflict with each other. Here is the related library reference. Otherwise, the great answer from Guillaume would completely get the job done.
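A minimal sketch of that per-message table idea with the google-cloud-bigquery client (the naming scheme and the one-hour expiration are arbitrary choices):

from datetime import datetime, timedelta, timezone
from google.cloud import bigquery

client = bigquery.Client()

def create_per_message_table(dataset_id, schema, message_id):
    # One staging table per Pub/Sub message; the name scheme is illustrative
    table_id = f"{client.project}.{dataset_id}.staging_{message_id}"
    table = bigquery.Table(table_id, schema=schema)
    # BigQuery deletes the table automatically once it expires
    table.expires = datetime.now(timezone.utc) + timedelta(hours=1)
    return client.create_table(table)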
Since I'm not allowed to ask my question in the same thread where another person has the same problem (but isn't using a template), I'm creating this new thread.
The problem: I'm creating a Dataflow job from a template in GCP to ingest data from Pub/Sub into BQ. This works fine until the job executes: the job gets "stuck" and does not write anything to BQ.
I can't do much because I can't choose the Beam version in the template. This is the error:
Processing stuck in step WriteSuccessfulRecords/StreamingInserts/StreamingWriteTables/StreamingWrite for at least 01h00m00s without outputting or completing in state finish
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:803)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:867)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.flushRows(StreamingWriteFn.java:140)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.finishBundle(StreamingWriteFn.java:112)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn$DoFnInvoker.invokeFinishBundle(Unknown Source)
Any ideas how to get this to work?
The issue is coming from the step WriteSuccessfulRecords/StreamingInserts/StreamingWriteTables/StreamingWrite, which suggests a problem while writing the data.
Your error can be replicated by (using either the Pub/Sub Subscription to BigQuery or the Pub/Sub Topic to BigQuery template):
Configuring the template with a table that doesn't exist.
Starting the template with a correct table and deleting it during the job execution.
In both cases the "stuck" message happens because the data is being read from Pub/Sub, but the pipeline is waiting for the table to become available before inserting the data. The error is reported every 5 minutes and it gets resolved when the table is created.
To verify the table configured in your template, see the property outputTableSpec in the PipelineOptions in the Dataflow UI.
I had the same issue before. The problem was that I used NestedValueProviders to evaluate the Pub/Sub topic/subscription, and this is not supported for templated pipelines.
I was getting the same error, and the reason was that I created an empty BigQuery table without specifying a schema. Make sure to create the BQ table with a schema before you load data via Dataflow.
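For example, a minimal sketch with the BigQuery Python client; the project, dataset, table and field names are placeholders, and the schema must match what the template writes:

from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("id", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
]
table = bigquery.Table("my-project.my_dataset.my_table", schema=schema)
# exists_ok avoids an error if the table has already been created
client.create_table(table, exists_ok=True)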
Recently I was trying to get the status of an operation by calling operations.get in the GCP API Explorer: https://cloud.google.com/resource-manager/reference/rest/v1/operations/get
The request throws:
field [name] has issue [invalid operation name]
I tried the same request using the Node.js library for GCP and got the same result.
The operation name used is in the format below:
operations/operation-1552901443197-5845b0ae4997f-496bcbdb-xxxxxx
Has anyone encountered this error before?
The issue is that the operation you are trying to fetch belongs to a different API than the one you are calling.
Most Cloud products have their own APIs, and each one tracks its own operations under its own resource.
In this case you are trying to get your operation through the Resource Manager API, while it actually comes from Compute Engine, so the Compute Engine operations API (or its global operations API, if the operation is global to your project) should be used instead.
Using that API instead will solve the issue.
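For instance, if the operation turns out to be a global Compute Engine operation, a sketch with the Python Google API client could look like this (the project ID is a placeholder; use zoneOperations/regionOperations for zonal or regional operations):

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Pass only the operation ID, without the "operations/" prefix
result = compute.globalOperations().get(
    project="my-project",
    operation="operation-1552901443197-5845b0ae4997f-496bcbdb-xxxxxx",
).execute()
print(result["status"])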
I agree that the response message could be improved to point users to the actual issue when they face errors like the one in this question.
That's why I have opened a feature request, which you can see at the following link. You can star it to give it more visibility and receive notifications on updates, and add comments if you have any additional information for this improvement.
After modifying my job to start using timestampLabel when reading from Pub/Sub, resource setup seems to break every time I try to start the job, with the following error:
(c8bce90672926e26): Workflow failed. Causes: (5743e5d17dd7bfb7): Step setup_resource_/subscriptions/project-name/subscription-name__streaming_dataflow_internal25: Set up of resource /subscriptions/project-name/subscription-name__streaming_dataflow_internal failed
where project-name and subscription-name represent the actual values of my project and the Pub/Sub subscription I'm trying to read from. Before trying to attach timestampLabel to the message entries, the job was working correctly, consuming messages from the specified Pub/Sub subscription, which should mean that my API/network settings are OK.
I'm also noticing two warnings with the payload
Internal Issue (119d3b54af281acf): 65177287:8503
but no more information can be found in the worker logs. For the few seconds that my job is setting up I can see the timestampLabel being set in the first step of the pipeline. Unfortunately I can't find any other cases or documentation about this error.
When using the timestampLabel feature, a second subscription is created for tracking purposes. Double-check the permission settings on your topic to make sure they match the permissions required.
I have a problem connecting to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException:
Requested resource not found (Service: AmazonDynamoDB; Status Code:
400; Error Code: ResourceNotFoundException; Request ID: ..
But I do have the table, and the region is correct.
From the docs, it means either you don't have a table with that name or it is in CREATING status.
I would double-check that the table does in fact exist, in the correct region, and that you're using an access key that can reach it.
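A quick way to check with boto3 (the region and table name are placeholders):

import boto3

# Use the region you expect the table to live in
client = boto3.client("dynamodb", region_name="us-east-1")

# List the tables visible to this access key in that region
print(client.list_tables()["TableNames"])

# describe_table raises ResourceNotFoundException if the name/region is wrong,
# and otherwise reports whether the table is still in CREATING status
print(client.describe_table(TableName="my-table")["Table"]["TableStatus"])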
My problem was silly, but maybe someone has the same one... I had recently changed the default AWS credentials (~/.aws/credentials) while testing in another account and forgot to roll back the values to the regular account.
I spent 1 day researching the problem in my project and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message means that your client can't reach a table in your DB.
You should check the following things:
1. Your database is running.
2. Your accessKey and secretKey are valid for the database.
3. Your DB endpoint is valid and contains the correct protocol ("http://" or "https://"), hostname, and port.
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set as a parameter in your credentials. (This one is optional, because some database environments, e.g. Testcontainers Dynalite, don't validate the region, so any non-empty region value will be accepted.)
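As a sketch of points 3 and 5 with boto3 against a local/test database (the endpoint and region shown are placeholders):

import boto3

# For a local or test database (e.g. DynamoDB Local or Dynalite), the endpoint
# must include the protocol, hostname and port; the region only needs to be non-empty
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
)
print(list(dynamodb.tables.all()))  # the table you created should appear here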
In my case, the problem was that I couldn't save and load data from a table in tests where DynamoDB was substituted by Testcontainers and Dynalite. I found out that in our project the tables are created by a Spring component marked with the @Component annotation. In tests we use a global setting that lazily loads components, so our component wasn't loaded by default because nothing called it explicitly in the test. ¯\_(ツ)_/¯
If the DynamoDB table is in a different region, make sure to set it before initializing the DynamoDB client:
AWS.config.update({region: "your-dynamoDB-region" });
This works for me:)
Always ensure that you do one of the following:
The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you are working on.
The better choice is to specify these constants explicitly in a separate class/config in your project, import it in your code, and use it in the boto3 calls. This provides flexibility if you need to add or change regions based on enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
boto3.resource("dynamodb", region_name='us-east-2')
I was getting this issue in my .NET Core application.
The following fixed the issue for me, in the Startup class's ConfigureServices method:
services.AddDefaultAWSOptions(
    new AWSOptions
    {
        Region = RegionEndpoint.GetBySystemName("eu-west-2")
    });
I got this error warning from Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
I spent a week fixing the issue.
The root cause and the steps to find and fix it are described in the GitHub issue thread below:
https://github.com/soto-project/soto/issues/595