I am trying to figure out how to specify the dataset location in a BigQuery API query using v0.27 of the BigQuery API.
I have a dataset located in northamerica-northeast1 and the BigQuery API is returning 404 errors since this is not the default multi-regional location "US."
I am using the run_async_query method to execute my queries, but based on the documentation I am unsure how to make these calls location-aware.
I have also previously tried updating my client instantiation like this:
def _get_client(self):
    bigquery.Client.SCOPE = (
        'https://www.googleapis.com/auth/bigquery',
        'https://www.googleapis.com/auth/cloud-platform',
        'https://www.googleapis.com/auth/drive')
    client = bigquery.Client.from_service_account_json(_KEY_FILE)

    if self._params['bq_data_location'].strip():
        client.location = self._params['bq_data_location']

    return client
However, it does not appear that this is the correct way to inform the BigQuery API of a dataset location.
For additional context, the SQL I am passing to the BigQuery API already specifies PROJECT_ID.DATASET_ID.TABLE_ID; however, this does not seem to be sufficient to find regional data.
Furthermore, I am making this request from Google App Engine using the CRMint open source data flow platform.
Can you please help me with an example of how a location can be specified in BigQuery API calls for v0.27 so that the API does not return a 404?
Thank you!
From the code sample it seems you're likely talking about google-cloud-bigquery 0.27, which was released in Aug 2017 and predates location support (as well as many other features).
Your best bet is to update that dependency to something more recent.
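For reference, in more recent releases of google-cloud-bigquery the job location can be passed explicitly when the query is started. A minimal sketch, assuming a 1.x-or-later client and reusing the _KEY_FILE and region from the question:

from google.cloud import bigquery

# Assumes google-cloud-bigquery >= 1.x; _KEY_FILE is the same service account key as in the question.
client = bigquery.Client.from_service_account_json(_KEY_FILE)

# The location must match the dataset's region.
query_job = client.query(
    'SELECT COUNT(*) FROM `PROJECT_ID.DATASET_ID.TABLE_ID`',
    location='northamerica-northeast1',
)
rows = query_job.result()  # blocks until the query job completes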
Related
I'm trying to connect to HDFS from ADF (Azure Data Factory). I created a folder and a sample file (ORC format) and put the file in the newly created folder.
Then in ADF I successfully created a linked service for HDFS using my Windows credentials (the same user that was used to create the sample file).
But when trying to browse the data through a dataset, I'm getting an error: "The response content from the data store is not expected, and cannot be parsed."
Is there something I'm doing wrong, or is it some kind of permissions issue?
Please advise.
This appears to be a generic issue: you need to point to a file with an appropriate extension rather than to the folder itself. Also make sure you are using a supported data store activity.
You can follow the official MS doc on using an HDFS server with Azure Data Factory.
Is there a way to bulk-tag BigQuery tables with the Python google.cloud.datacatalog client?
If you want to take a look at sample code that uses the Python google.cloud.datacatalog client library, I've put together an open source utility script that creates tags in bulk using a CSV as the source. If you want to use a different source, you may use this script as a reference; hope it helps.
create bulk tags from csv
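In case it helps to see the shape of such a script, here is a minimal sketch of the bulk-tagging loop, assuming google-cloud-datacatalog 3.x, an existing tag template with a single string field named source, and a hypothetical tables.csv with project_id, dataset_id, table_id and source columns:

import csv

from google.cloud import datacatalog_v1

datacatalog = datacatalog_v1.DataCatalogClient()

# Hypothetical: full resource name of an existing tag template with a "source" string field.
template_name = "projects/my-project/locations/us-central1/tagTemplates/demo_template"

with open("tables.csv", newline="") as csv_file:
    for row in csv.DictReader(csv_file):
        # Look up the Data Catalog entry that corresponds to the BigQuery table.
        resource = (
            "//bigquery.googleapis.com/projects/{project_id}"
            "/datasets/{dataset_id}/tables/{table_id}"
        ).format(**row)
        entry = datacatalog.lookup_entry(request={"linked_resource": resource})

        # Build and attach the tag.
        tag = datacatalog_v1.types.Tag()
        tag.template = template_name
        tag.fields["source"] = datacatalog_v1.types.TagField()
        tag.fields["source"].string_value = row["source"]
        datacatalog.create_tag(parent=entry.name, tag=tag)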
For this purpose you may consider using the DataCatalogClient() class, which is included in the google.cloud.datacatalog_v1 module of the PyPI google-cloud-datacatalog package, leveraging the Google Cloud Data Catalog API service.

1. First, you have to enable the Data Catalog and BigQuery APIs in your project;

2. Install the Python Cloud Client Library for the Data Catalog API:

pip install --upgrade google-cloud-datacatalog

3. Set up authentication by exporting the GOOGLE_APPLICATION_CREDENTIALS environment variable, pointing it at the JSON file that contains your service account key:

export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"

4. Refer to this example from the official documentation, which clearly shows how to create a Data Catalog tag template and attach tags with the appropriate fields to the target BigQuery table using the create_tag_template() function.
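As a rough illustration of the create_tag_template() step mentioned above, here is a minimal sketch based on the official quickstart, assuming google-cloud-datacatalog 3.x and placeholder project and location values:

from google.cloud import datacatalog_v1

datacatalog = datacatalog_v1.DataCatalogClient()
project_id = "my-project"  # placeholder

# Define a template with a single string field named "source".
template = datacatalog_v1.types.TagTemplate()
template.display_name = "Demo Tag Template"
template.fields["source"] = datacatalog_v1.types.TagTemplateField()
template.fields["source"].display_name = "Source of data asset"
template.fields["source"].type_.primitive_type = (
    datacatalog_v1.types.FieldType.PrimitiveType.STRING
)

template = datacatalog.create_tag_template(
    parent=f"projects/{project_id}/locations/us-central1",
    tag_template_id="demo_template",
    tag_template=template,
)

# template.name can then be used when attaching tags to BigQuery entries
# with create_tag(), as in the CSV sketch in the previous answer.
print(template.name)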
If you have any doubts, feel free to extend your initial question or add a comment below this answer, so we can address your particular use case according to your needs.
I am trying to learn how to use the Cloud Billing API and playing around with its methods. I copied a code snippet in Java that shows how to use the updateBillingInfo method. I have a project in my cloud account with a billing account associated with it, and I wanted to change it to a different billing account.
Here's what I tried:
String name = "projects/My project";
ProjectBillingInfo info = new ProjectBillingInfo();
info.setBillingAccountName("billingAccounts/$BILLING_ID");
Cloudbilling.Projects.UpdateBillingInfo request = cloudbillingService.projects().updateBillingInfo(name, info);
ProjectBillingInfo response = request.execute();
and my problem is that request.execute() (as well as the API browser explorer) throws an exception with code "500 - internal error encountered".
Am I not using it correctly? It was my understanding that after this, when I check my project in GCP, I should see my project listed under the new billing account. Help is much appreciated.
You are using an invalid project ID, since GCP project IDs have no spaces in them. Note that project IDs and project names are different things. It needs to be the ID as seen here. The rest of your code snippet seems fine; just make sure you pass the actual project ID, like this: projects/your-project-id
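In case a runnable reference helps, here is a minimal Python sketch of the same call using the google-api-python-client discovery client (Application Default Credentials and placeholder IDs are assumed; the Java Cloudbilling client in the question wraps the same cloudbilling v1 API):

from googleapiclient import discovery

# Builds the Cloud Billing v1 client using Application Default Credentials.
service = discovery.build("cloudbilling", "v1")

body = {"billingAccountName": "billingAccounts/XXXXXX-XXXXXX-XXXXXX"}  # placeholder
request = service.projects().updateBillingInfo(name="projects/your-project-id", body=body)
response = request.execute()
print(response)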
I am currently using Protégé 5.0 and have created a very simple ontology (the pizza example). I was wondering how I would export this ontology to DynamoDB on AWS. I was hoping someone could post a link to a good tutorial on Protégé 5.0 or walk me through this. Thanks!
If you are using DynamoDB just to store the content of a file and to be able to access the file at a specific URL, then the process required is the same as for any other file type you would store on DynamoDB. The default way for Protégé and most other OWL-related tools to access an ontology is a simple HTTP GET from a provided IRI.
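As a small illustration of that last point, here is a sketch of loading an ontology over HTTP from its IRI; the owlready2 library and the URL are assumptions for the example, not part of the original answer:

from owlready2 import get_ontology

# Hypothetical URL: wherever the exported .owl file ends up being served from.
onto = get_ontology("http://example.com/ontologies/pizza.owl").load()

print(list(onto.classes()))  # classes defined in the pizza ontology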
It looks like Parse.com stores PFFile objects on AWS S3 and only keeps a reference to the actual S3 files in Parse for the PFFile object types.
So my problem here is that I only get an AWS S3 link for my PFFile if I export the data using the out-of-the-box Parse.com export functionality. After I import the same data into my Parse application, for some reason the security settings on those PFFiles on S3 are changed in a way that makes all PFFiles inaccessible to me after the import, due to a security error.
My question is: does anyone know how the security is being set on the PFFiles? Here's a link to the PFFile docs, https://parse.com/docs/osx/api/Classes/PFFile.html, but I guess this is rather an advanced topic and isn't covered on that page.
I'm also looking for a solution to this; all I found is this from their forum:
In this case, the PFFiles are stored in a different app. You might need to download these files and upload them again to the new app and update the pointers. I know this is not a great answer but we're working on making this process more straightforward.
https://www.parse.com/questions/import-pffile-object-not-working-in-iphone-application
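For what it's worth, that "download and re-upload" workaround can be scripted against the classic Parse REST API. A rough Python sketch, with placeholder keys, URLs and class names, and with the endpoint details taken from the old Parse REST docs (treat them as assumptions):

import requests

NEW_APP_ID = "NEW_APP_ID"          # placeholder
NEW_REST_KEY = "NEW_REST_API_KEY"  # placeholder

# 1. Download the file from the old app's S3 URL (taken from the exported data).
old_url = "https://files.parsetfss.com/old-app/photo.png"  # placeholder
data = requests.get(old_url).content

# 2. Re-upload it to the new app via the Parse REST files endpoint.
resp = requests.post(
    "https://api.parse.com/1/files/photo.png",
    headers={
        "X-Parse-Application-Id": NEW_APP_ID,
        "X-Parse-REST-API-Key": NEW_REST_KEY,
        "Content-Type": "image/png",
    },
    data=data,
)
new_file = resp.json()  # contains the "name" (and "url") of the new file

# 3. Update the object that pointed at the old file so it references the new one.
requests.put(
    "https://api.parse.com/1/classes/Photo/OBJECT_ID",  # placeholder class/object id
    headers={
        "X-Parse-Application-Id": NEW_APP_ID,
        "X-Parse-REST-API-Key": NEW_REST_KEY,
        "Content-Type": "application/json",
    },
    json={"image": {"__type": "File", "name": new_file["name"]}},
)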