BigQuery - How to instantiate BigQueryTemplate without env variables? Getting "The Application Default Credentials are not available"

I'm trying to instantiate BigQueryTemplate without the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Steps tried:
1. Implemented CredentialsSupplier by instantiating Credentials and setting the location to the service account json file.
2. Instantiated a BigQuery bean using BigQueryOptions::newBuilder(), setting the credentials and project id.
3. Instantiated a BigQueryTemplate bean using the BigQuery bean created in step 2.
spring-cloud-gcp-dependencies version 3.4.0 is used.
The application is executing in a VM (non-GCP env). A sketch of these steps follows below.
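For reference, a minimal sketch of the explicit-credentials wiring described above (the key path and project id are placeholders; this is an illustration rather than the exact code from the question). One likely gotcha: writeJsonStream goes through the Storage Write API's BigQueryWriteClient, which falls back to ADC unless it is also given the credentials explicitly:
import java.io.FileInputStream;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.storage.v1.BigQueryWriteClient;
import com.google.cloud.bigquery.storage.v1.BigQueryWriteSettings;

// Load the service account key explicitly (path is a placeholder)
GoogleCredentials credentials = GoogleCredentials.fromStream(
        new FileInputStream("/path/to/json"));

// Step 2: BigQuery client built with explicit credentials
BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setProjectId("project-id")
        .setCredentials(credentials)
        .build()
        .getService();

// writeJsonStream uses this client under the hood, so it needs
// the same credentials or it will try ADC and fail off-GCP
BigQueryWriteClient writeClient = BigQueryWriteClient.create(
        BigQueryWriteSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build());
Both clients would then be passed into the BigQueryTemplate constructor (the exact signature varies across spring-cloud-gcp releases, so check the 3.4.0 javadoc).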
Another option I tried is adding the below properties:
spring.cloud.gcp.bigquery.dataset-name=datasetname
spring.cloud.gcp.bigquery.credentials.location=file:/path/to/json
spring.cloud.gcp.bigquery.project-id=project-id
I'm getting the below error:
com.google.cloud.spring.bigquery.core.BigQueryTemplate,
applog.mthd=lambda$writeJsonStream$0,
applog.line=299, applog.msg=Error:
The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
Please let me know if I have missed anything.
Thanks in advance.

Related

java.io.IOException: The Application Default Credentials are not available

I am fairly new to GCP API functions.
I am currently trying to use the text-to-speech module following these steps: https://cloud.google.com/text-to-speech/docs/libraries
I did not set up the environment variable, since I used authExplicit(String jsonPath) for authentication: https://cloud.google.com/docs/authentication/production
My code looks like the following:
public void main() throws Exception {
    String jsonPath = "/User/xxx/xxxx/xxxxxx/xxxx.json";
    authExplicit(jsonPath);
    // calling the text-to-speech function from the above link
    text2speech("some text");
}
authExplicit(jsonPath) goes through without any problem and prints a bucket, so I assumed the credential key in the JSON had been validated. However, the text2speech function returns the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I want to get the text2speech function to work by calling Google Cloud API functions.
Please let me know how to solve this issue.
Your advice would be highly appreciated.
It's confusing.
Application Default Credentials (ADC) is a process that looks for the credentials in various places, including the env var GOOGLE_APPLICATION_CREDENTIALS.
If GOOGLE_APPLICATION_CREDENTIALS is unset and the code is running on a Google Cloud Platform (GCP) compute service (e.g. Compute Engine), it uses the Metadata service to determine the credentials. If not, ADC fails and raises an error.
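In code, that lookup is exactly what the following call performs (a one-line illustration, not part of the question's code):
import com.google.auth.oauth2.GoogleCredentials;

// Resolves credentials via the ADC lookup order described above
GoogleCredentials adc = GoogleCredentials.getApplicationDefault();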
Your code fails because authExplicit does not use ADC; it loads the Service Account key from the file and creates a Storage client using those credentials. Only the Storage client is thus authenticated.
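If you did want to keep explicit key-file credentials instead of ADC, the Text-to-Speech client can take them too. A minimal sketch, reusing jsonPath from your code (this is an alternative approach, not what the linked sample does):
import java.io.FileInputStream;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.texttospeech.v1.TextToSpeechClient;
import com.google.cloud.texttospeech.v1.TextToSpeechSettings;

// Wire the same key file into the Text-to-Speech client explicitly
TextToSpeechSettings settings = TextToSpeechSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(
                GoogleCredentials.fromStream(new FileInputStream(jsonPath))))
        .build();
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(settings);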
That said, I recommend a simpler solution: use ADC and have the Storage and Text2Speech clients both rely on it.
You will need to set the GOOGLE_APPLICATION_CREDENTIALS env var to the path of a key if you run your code off GCP (i.e. not on GCE or similar), but when it runs on GCP, it will leverage the service's credentials.
You will need to create both the Storage and Text2Speech clients to use ADC:
See:
Cloud Storage
Text-to-Speech
Storage storage = StorageOptions.getDefaultInstance().getService();
...
And:
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create();
...

Druid can not see/read GOOGLE_APPLICATION_CREDENTIALS defined on env path

I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled "druid-google-extensions" by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account json, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the druid services.
However, I get the following error when I try to ingest data from GCS buckets:
Error: Cannot construct instance of org.apache.druid.data.input.google.GoogleCloudStorageInputSource, problem: Unable to provision, see the following errors:
1) Error in custom provider, java.io.IOException: The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  while locating com.google.api.client.http.HttpRequestInitializer for the 3rd parameter of org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  at org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.storage.google.GoogleStorageDruidModule)
  while locating org.apache.druid.storage.google.GoogleStorage
1 error
at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 180] (through reference chain: org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"] -> org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"] -> org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I can not see any verified solution to that case. Please help me.
We want to move data from GCP to an on-prem Druid cluster; we don't want to run the cluster in GCP, which is why we need to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or environment changes (see the sketch below).
GOOGLE_APPLICATION_CREDENTIALS must be defined to point to a file path, not to contain the file content.
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it, and to set the env var to point to that volume.
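For the systemd case, a drop-in like the following would work (the unit name and key path are assumptions for illustration):
# /etc/systemd/system/druid.service.d/credentials.conf
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/opt/druid/conf/gcp-key.json"
After adding it, run systemctl daemon-reload and restart the Druid services.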

AWS DotNet Core credentials

I have an existing dotnet core web application in which I need to use a profile other than [default] when I'm developing locally.
I'm running into an issue in that the location of the credential file appears not to be defaulted yet to ~/.aws/credentials. Based on the credential lookup sequence, check 2 should work if I set the value of AWSConfigs.AWSProfileName before creating the SSM Client, but it doesn't; it just falls through the remaining flow and throws an error saying it can't find the EC2 Meta Data. The same is the case for check 3. When the credentials are in the [default] definition, check 4 will succeed, which I expected to fail as well if defaults haven't been initialized yet.
I have multiple AWS accounts that I get temporary security tokens for from an SSO system based on the config file, and because of the temporary token requirement I can't use the [default] profile, as I need to be able to switch between accounts to run the same code base.
I've been able to get around this by explicitly accessing the credential store and generating a set of credentials to pass into the constructor for the SSM Client.
Amazon.Runtime.CredentialManagement.CredentialProfile developerProfile;
AmazonSimpleSystemsManagementClient ssmClient;

// Test to determine if we have a local credentials file with the requested profile
if (new Amazon.Runtime.CredentialManagement.SharedCredentialsFile()
        .TryGetProfile(Configuration["AWS:Profile"], out developerProfile))
{
    // Build session credentials from the profile (access key, secret key, session token)
    AWSCredentials credentials = new Amazon.Runtime.SessionAWSCredentials(
        developerProfile.Options.AccessKey,
        developerProfile.Options.SecretKey,
        developerProfile.Options.Token);
    ssmClient = new AmazonSimpleSystemsManagementClient(credentials, developerProfile.Region);
}
else
{
    // No local profile found: assume EC2/ECS and let the SDK use instance metadata
    ssmClient = new AmazonSimpleSystemsManagementClient(Region);
}
The above snippet is designed to allow running locally with a specific profile and file location; when either does not exist, it assumes it's running in an EC2 or ECS environment and can source the credentials from the metadata.
The code that needs access to AWS Parameter Store is located in the Startup method so other properties can be initialized before the ConfigureServices method is run. I have additional AWS services that I initialize clients for that work as expected after ConfigureServices has run. Should I not expect the credential provider to be properly initialized before the ConfigureServices method is run?

How to specify the GCP Credential Location in application.properties file (for using the Pub/Sub in GCP)?

This seems straightforward: pass the Service Account key file (generated from the GCP console) by specifying the file location in the application.properties file. However, I tried all the following options:
1. spring.cloud.gcp.credentials.location=file:/home/my_user_id/mp6key.json
2. spring.cloud.gcp.credentials.location=file:src/main/resources/mp6key.json
3. spring.cloud.gcp.credentials.location=file:./main/resources/mp6key.json
4. spring.cloud.gcp.credentials.location=file:/src/main/resources/mp6key.json
It all ended up with the same error:
java.io.FileNotFoundException: /home/my_user_id/mp6key.json (No such file or directory)
Could anyone advise where I should put the key file and then how should I specify the path to the file properly?
The same programs run successfully in Eclipse, with messages published and subscribed using Pub/Sub processing from GCP (using the Project Id/Service Account key generated in GCP), but they are now stuck with the above issue after being deployed to run on GCP.
As mentioned in the official documentation, the credentials file can be obtained from a number of different locations such as the file system, classpath, URL, etc.
For example, if the service account key file is stored in the classpath as src/main/resources/key.json, pass the following property:
spring.cloud.gcp.credentials.location=classpath:key.json
If the key file is stored somewhere else in your local file system, use the file prefix in the property value:
spring.cloud.gcp.credentials.location=file:<path to key file>
My line looks like this:
spring.cloud.gcp.credentials.location=file:src/main/resources/[my_json_file]
And this works.
The following also works if I put it in the root of the project directory:
spring.cloud.gcp.credentials.location=file:./[my_json_file]
Have you tried following this quickstart? Please try to follow it carefully, and explain if you get any error finishing the quickstart.
Anyway, before running your Java script, try running the following on the console (please modify it with the exact path where you store your key):
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/mp6key.json"
How are you authenticating your credentials in your Java script?
My answer is easy: if you run your code on GCP, you don't have to use a service account key file. Problem eliminated, problem solved!
More seriously, have a look at service identity. I don't know what your current service is (Compute? Function? Cloud Run?). Anyway, you can attach a service account to any GCP component. Then, when you code, simply use the default credentials: the component identity is loaded automatically. No key to manage, no key to store securely, no key to rotate! (See the sketch below.)
If you provide more detail on your target platform, I can give you some guidance to achieve this.
Keep in mind that service account key files are designed to be used by automated apps (no user account involved) hosted outside GCP (on prem, another cloud provider, a CI/CD, Apigee, ...).
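To illustrate, a minimal Pub/Sub sketch (project and topic names are placeholders): once a service account is attached to the component, or ADC is otherwise set up, no credential wiring is needed at all:
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.TopicName;

// No credentials passed: the client resolves them via ADC
// (attached service account on GCP, GOOGLE_APPLICATION_CREDENTIALS elsewhere)
Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic")).build();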
UPDATE
When you use your personal account, you can also use the default credential.
Install gcloud SDK on your computer
Use the command gcloud auth application-default login
Follow the instructions
Enjoy!
If it doesn't work, get the <path> displayed after the login command and set this value in the environment variable named GOOGLE_APPLICATION_CREDENTIALS.
If you definitely want to use a service account key file (which is a security issue, for the reasons above, but...), you can use it locally:
Either set the json key file path into the GOOGLE_APPLICATION_CREDENTIALS environment variable
Or run this command gcloud auth activate-service-account --key-file=<path to your json key file>
Provided your file is in the resources folder, try:
file://mp6key.json
Using file:// instead of file:/ works for me, at least.

specifying aws region while building S3 client using java property file?

I am using AmazonS3ClientBuilder.defaultClient() to build the AmazonS3 client in a Java application.
When I run this, it gives the following error:
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I want to supply the region via a Java property file. I referred to many posts but didn't find what I am looking for.
My question is: what is the variable name I should use to specify the region in a Java property file?
I am not looking for an environment variable or credentials file. Can anyone help me with this?
Try creating your client using the builder pattern:
AmazonS3ClientBuilder.standard().withRegion("YOUR_REGION_STRING_HERE").build();
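If the region should come from a property file, one option is to load the file yourself and feed the value into the builder. A minimal sketch (the app.properties file name and the aws.region key are arbitrary choices for illustration, not something the SDK reads on its own):
import java.io.FileInputStream;
import java.util.Properties;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

Properties props = new Properties();
try (FileInputStream in = new FileInputStream("app.properties")) {
    props.load(in);   // e.g. the file contains: aws.region=us-east-1
}
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(props.getProperty("aws.region"))
        .build();
Alternatively, if a JVM system property would do, the v1 SDK's default region provider chain also checks the aws.region system property (e.g. -Daws.region=us-east-1).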