I am using AmazonS3ClientBuilder.defaultClient() to build an AmazonS3 client in a Java application.
When I run this, it gives the following error:
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I want to supply the region via a Java properties file. I looked through many posts but didn't find what I am looking for.
My question is: what is the property name I should use to specify the region in a Java properties file?
I am not looking for an environment variable or a credentials file. Can anyone help me with this?
Try creating your client using the builder pattern:
AmazonS3ClientBuilder.standard().withRegion("YOUR_REGION_STRING_HERE").build();
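Since you want the region to come from a properties file, you can read it yourself and pass it to the builder. A minimal sketch (the file name and property key here are illustrative, not anything the SDK looks for on its own); note that recent SDK versions also check the aws.region JVM system property as part of the default region provider chain, which is another option:

// Minimal sketch: read the region from an application properties file
// and pass it to the builder. File name and key are illustrative.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class S3ClientFactory {
    public static AmazonS3 createClient() throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("application.properties")) {
            props.load(in);
        }
        // e.g. application.properties contains: s3.region=us-east-1
        String region = props.getProperty("s3.region");
        return AmazonS3ClientBuilder.standard()
                .withRegion(region)
                .build();
    }
}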
I'm trying to instantiate BigQueryTemplate without the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Steps tried:
Implemented CredentialsSupplier by instantiating Credentials and setting its location to the service account JSON file.
Instantiated a BigQuery bean using BigQueryOptions::newBuilder(), setting the credentials and project id (see the sketch after this list).
Instantiated a BigQueryTemplate bean using the BigQuery bean created in step 2.
spring-cloud-gcp-dependencies version 3.4.0 is used.
The application executes in a VM (non-GCP environment).
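Step 2 looks roughly like this (a minimal sketch; the key path and project id are placeholders, and the surrounding Spring configuration is omitted):

// Sketch of the BigQuery bean wiring described above; paths and ids are placeholders.
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

import java.io.FileInputStream;
import java.io.IOException;

public class BigQueryConfig {
    public BigQuery bigQuery() throws IOException {
        // Load the service account key explicitly instead of relying on
        // the GOOGLE_APPLICATION_CREDENTIALS environment variable.
        GoogleCredentials credentials;
        try (FileInputStream keyStream = new FileInputStream("/path/to/json")) {
            credentials = GoogleCredentials.fromStream(keyStream);
        }
        return BigQueryOptions.newBuilder()
                .setCredentials(credentials)
                .setProjectId("project-id")
                .build()
                .getService();
    }
}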
Another option I tried is adding the properties below:
spring.cloud.gcp.bigquery.dataset-name=datasetname
spring.cloud.gcp.bigquery.credentials.location=file:/path/to/json
spring.cloud.gcp.bigquery.project-id=project-id
I'm getting the error below:
com.google.cloud.spring.bigquery.core.BigQueryTemplate,
applog.mthd=lambda$writeJsonStream$0,
applog.line=299, applog.msg=Error:
The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
Please let me know if I have missed anything.
Thanks in advance.
I am fairly new to GCP API functions.
I am currently trying to use the Text-to-Speech module following these steps: https://cloud.google.com/text-to-speech/docs/libraries
I did not set up the environment variable, since I used authExplicit(String jsonPath) for authentication: https://cloud.google.com/docs/authentication/production
My code looks like the following:
public void main() throws Exception {
    String jsonPath = "/User/xxx/xxxx/xxxxxx/xxxx.json";
    authExplicit(jsonPath);
    // Calling the text-to-speech function from the above link.
    text2speech("some text");
}
authExplicit(jsonPath) goes through without any problem and prints a bucket, so I assumed the credential key in the JSON file was validated. However, the text2speech function returns the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I want to get the text2speech function to work by calling Google Cloud API functions.
Please let me know how to solve this issue.
Your advice would be highly appreciated.
It's confusing.
Application Default Credentials (ADC) is a process that looks for credentials in various places, including the environment variable GOOGLE_APPLICATION_CREDENTIALS.
If GOOGLE_APPLICATION_CREDENTIALS is unset and the code is running on a Google Cloud Platform (GCP) compute service (e.g. Compute Engine), then it uses the Metadata service to determine the credentials. If not, ADC fails and raises an error.
Your code fails because authExplicit does not use ADC; it loads the Service Account key from the file and creates a Storage client using those credentials. Only the Storage client is thus authenticated.
I recommend a (simpler) solution: use ADC and have the Storage and Text2Speech clients both use it.
You will need to set the GOOGLE_APPLICATION_CREDENTIALS env var to the path of a key file if you run your code off GCP (i.e. not on GCE or similar); when it runs on GCP, it will leverage the service's credentials.
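For example, off GCP (the key path is illustrative):

# Point ADC at a service account key; only needed off GCP.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json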
You will need to create both the Storage and Text2Speech clients to use ADC:
See:
Cloud Storage
Text-to-Speech
Storage storage = StorageOptions.getDefaultInstance().getService();
...
And:
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create();
...
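Put together, a runnable sketch (assuming ADC can resolve credentials in your environment):

// Minimal sketch: both clients authenticate via Application Default Credentials.
// Off GCP, set GOOGLE_APPLICATION_CREDENTIALS to a key file path first.
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.texttospeech.v1.TextToSpeechClient;

public class AdcClients {
    public static void main(String[] args) throws Exception {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create()) {
            // Both clients now share the same Application Default Credentials.
            System.out.println("Clients created for project: " + storage.getOptions().getProjectId());
        }
    }
}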
I installed apache-druid-0.22.1 as a cluster (master, data, and query nodes) and enabled "druid-google-extensions" by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as described in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I got the following error when I try to ingest data from GCS buckets:
Error: Cannot construct instance of org.apache.druid.data.input.google.GoogleCloudStorageInputSource,
problem: Unable to provision, see the following errors:

1) Error in custom provider, java.io.IOException: The Application Default Credentials are not available.
They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file
defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials
for more information.
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
  (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
  (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  while locating com.google.api.client.http.HttpRequestInitializer
  for the 3rd parameter of org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  at org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.storage.google.GoogleStorageDruidModule)
  while locating org.apache.druid.storage.google.GoogleStorage

1 error

at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 180]
(through reference chain: org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]
 -> org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]
 -> org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A previously reported case on this matter caught my attention, but I cannot see any verified solution to it. Please help me.
We want to move data from GCP to on-prem Druid; we don't want to run the cluster in GCP, which is why we need to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or environment changes.
You must define GOOGLE_APPLICATION_CREDENTIALS so that it points to a file path, and does not contain the file content.
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it, and to set the env var to point to that volume.
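For example, a systemd drop-in for the Druid service unit might look like this (the unit and key path are illustrative):

# /etc/systemd/system/druid-middlemanager.service.d/override.conf (hypothetical)
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/etc/druid/service-account.json"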
When I try to download all log files from an RDS instance, in some cases I find this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
where rds is the boto3 RDS client.
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the proposed solution provided by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team are aware of this issue and are working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
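For illustration, a minimal sketch of the same call with the AWS SDK for Java v1 (the instance identifier and log file name are placeholders):

// Sketch: page through a log file with Marker / AdditionalDataPending.
import com.amazonaws.services.rds.AmazonRDS;
import com.amazonaws.services.rds.AmazonRDSClientBuilder;
import com.amazonaws.services.rds.model.DownloadDBLogFilePortionRequest;
import com.amazonaws.services.rds.model.DownloadDBLogFilePortionResult;

public class LogDownloader {
    public static String downloadFullLog(String instanceId, String logFileName) {
        AmazonRDS rds = AmazonRDSClientBuilder.defaultClient();
        StringBuilder data = new StringBuilder();
        String marker = "0"; // "0" starts from the beginning of the file
        DownloadDBLogFilePortionResult result;
        do {
            result = rds.downloadDBLogFilePortion(new DownloadDBLogFilePortionRequest()
                    .withDBInstanceIdentifier(instanceId)
                    .withLogFileName(logFileName)
                    .withMarker(marker)
                    .withNumberOfLines(1000));
            data.append(result.getLogFileData());
            marker = result.getMarker();
        } while (Boolean.TRUE.equals(result.getAdditionalDataPending()));
        return data.toString();
    }
}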
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data passed does not comply with the API's requirements. Probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
DBInstanceIdentifier='string',
LogFileName='string',
Marker='string',
NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well, for mismatched types or out-of-range values. Skip them at first (they are not required), then test with them later.
I am trying to use AWS PowerShell with Eucalyptus.
I can do this with the AWS CLI using the --endpoint-url parameter.
Is it possible to set the endpoint URL in AWS PowerShell?
Can I create a custom region with my own endpoint URL in AWS PowerShell?
--UPDATE--
The newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0 according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter for the relevant commands.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the aforementioned bug has been fixed.
Good stuff!
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
  "version": 2,
  "endpoints": {
    "*/*": {
      "endpoint": "your_endpoint_here"
    }
  }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects this solution and the solution @HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section that revealed that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find anywhere where the format of such a file is documented.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code: if a file isn't explicitly specified via AWSConfigs.EndpointDefinition, it ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json).
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a service URL; it seems instead to opt for a simple string Region that sets the service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config = New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client = [Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID, $secretKeyID, $config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property. I would imagine that this is on most AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property will be ignored if the RegionEndpoint is set. I'm not sure if this is still the case with v2.