I am integrating my application with AWS Parameter Store. For local development, which may have no access to AWS, I need to disable fetching property values from AWS and use the values from application.yml. The issue seems to be not application.yml but the dependencies: as soon as the AWS starter appears in the POM, the AWS integration is initialized and Spring tries to use AwsParamStorePropertySourceLocator. I guess what I need to do is force my application to use Spring's standard property source locator regardless of the AWS jar being on the classpath. I am not sure how to do that.
For Parameter Store it is quite easy: the AwsParamStoreBootstrapConfiguration bean is conditional on the property aws.paramstore.enabled. Setting aws.paramstore.enabled to false (e.g. via an environment variable) disables the AWS Parameter Store integration.
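For example, as YAML (note that this configuration is read during the bootstrap phase, so depending on your Spring Cloud version it may need to live in bootstrap.yml rather than application.yml):
aws:
  paramstore:
    enabled: false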
I also tried disabling AWS Secrets Manager, and setting aws.secretsmanager.enabled to false was not sufficient. To fully disable it I had to exclude auto-configuration for a few classes:
import org.springframework.cloud.aws.autoconfigure.context.ContextCredentialsAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextRegionProviderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextResourceLoaderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.mail.MailSenderAutoConfiguration;
@Configuration
@Profile("local")
@EnableAutoConfiguration(exclude = { ContextCredentialsAutoConfiguration.class,
        ContextInstanceDataAutoConfiguration.class, ContextRegionProviderAutoConfiguration.class,
        ContextResourceLoaderAutoConfiguration.class, ContextStackAutoConfiguration.class,
        MailSenderAutoConfiguration.class })
public class LocalScanConfig {
}
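These exclusions only take effect when the local profile is active, which can be done, for example, with a command-line flag when starting the application:
java -jar app.jar --spring.profiles.active=local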
I'm trying to instantiate BigQueryTemplate without the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Steps tried:
1. Implemented CredentialsSupplier by instantiating Credentials and setting its location to the service account JSON file.
2. Instantiated a BigQuery bean using BigQueryOptions.newBuilder() and setting the credentials and project ID.
3. Instantiated a BigQueryTemplate bean using the BigQuery bean created in step 2.
spring-cloud-gcp-dependencies version 3.4.0 is used.
The application is running in a VM (a non-GCP environment).
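Step 2, for example, amounts to a bean definition along these lines (the key path and project ID are placeholders):
import java.io.FileInputStream;
import java.io.IOException;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import org.springframework.context.annotation.Bean;

// Build a BigQuery client from an explicit service account key
// instead of relying on Application Default Credentials.
@Bean
public BigQuery bigQuery() throws IOException {
    GoogleCredentials credentials = GoogleCredentials
            .fromStream(new FileInputStream("/path/to/service-account.json"));
    return BigQueryOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId("project-id")
            .build()
            .getService();
}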
Another option I tried is adding the properties below:
spring.cloud.gcp.bigquery.dataset-name=datasetname
spring.cloud.gcp.bigquery.credentials.location=file:/path/to/json
spring.cloud.gcp.bigquery.project-id=project-id
I'm getting the error below:
com.google.cloud.spring.bigquery.core.BigQueryTemplate,
applog.mthd=lambda$writeJsonStream$0,
applog.line=299, applog.msg=Error:
The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
Please let me know if I have missed anything.
Thanks in advance.
I am fairly new to GCP API functions.
I am currently trying to use the Text-to-Speech module following these steps: https://cloud.google.com/text-to-speech/docs/libraries
I did not set up the environment variable, since I used authExplicit(String jsonPath) for authentication: https://cloud.google.com/docs/authentication/production
My code looks like the following:
public void main() throws Exception {
    String jsonPath = "/User/xxx/xxxx/xxxxxx/xxxx.json";
    authExplicit(jsonPath);
    // calling the text-to-speech function from the above link
    text2speech("some text");
}
authExplicit(jsonPath) goes through without any problem and prints a bucket, so I assumed the credential key in the JSON file had been validated. However, the text2speech function returns the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I want to get the text2speech function to work by calling Google Cloud API functions.
Please let me know how to solve this issue.
Your advice would be highly appreciated.
It's confusing.
Application Default Credentials (ADC) is a process that looks for credentials in various places, including the environment variable GOOGLE_APPLICATION_CREDENTIALS.
If GOOGLE_APPLICATION_CREDENTIALS is unset and the code is running on a Google Cloud Platform (GCP) compute service (e.g. Compute Engine), it uses the Metadata service to determine the credentials. If not, ADC fails and raises an error.
Your code fails because authExplicit does not use ADC; it loads the Service Account key from the file and creates a Storage client using those credentials. Only the Storage client is thus authenticated.
I recommend a simpler solution: have both the Storage and Text2Speech clients use ADC.
You will need to set the GOOGLE_APPLICATION_CREDENTIALS env var to the path to a key if you run your code off GCP (i.e. not on GCE or similar) but when it runs on GCP, it will leverage the service's credentials.
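For example, when running off GCP you would set it before starting the application (assuming a POSIX shell; the path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json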
You will need to create both the Storage and Text2Speech clients to use ADC:
See:
Cloud Storage
Text-to-Speech
Storage storage = StorageOptions.getDefaultInstance().getService();
...
And:
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create();
...
I stored my MySQL DB credentials in AWS Secrets Manager using the "Credentials for other database" option. I want to import these credentials into my application.properties file. Based on a few answers I found in this thread, I did the following:
1. Added the dependency spring-cloud-starter-aws-secrets-manager-config
2. Added spring.application.name = <application name> and spring.config.import = aws-secretsmanager: <Secret name> in application.properties
3. Used secret keys as placeholders in the following properties:
spring.datasource.url = jdbc:mysql://${host}:3306/db_name
spring.datasource.username=${username}
spring.datasource.password=${password}
I am getting the following error while running the application:
java.lang.IllegalStateException: Unable to load config data from 'aws-secretsmanager:<secret_name>'
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/' or File.separator
First, is the process I am following correct? If yes, what does this error mean and how can I resolve it?
I found the problem that was causing the error. Apparently I was adding the wrong dependency.
According to the latest docs, the configuration support for using spring.config.import to import AWS secrets has been moved to io.awspring.cloud from org.springframework.cloud. So the updated dependency would be io.awspring.cloud:spring-cloud-starter-aws-secrets-manager-config:2.3.3 and NOT org.springframework.cloud:spring-cloud-starter-aws-secrets-manager-config:2.2.6
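In Maven terms, using the coordinates above:
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-secrets-manager-config</artifactId>
    <version>2.3.3</version>
</dependency>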
You are trying to use spring.config.import, and support for this was introduced in Spring Cloud AWS 2.3:
https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available
Secrets Manager
Support loading properties through spring.config.import, introduced in Spring Cloud 2020.0. Read more about integrating your Spring Cloud application with the AWS Secrets Manager.
Removed the dependency to auto-configure module #526.
Dropped the dependency to javax.validation:validation-api.
Allow Secrets Manager prefix without “/” in the front #736.
In spring-cloud 2020.0.0 (aka Ilford), the bootstrap phase is no longer enabled by default. In order to enable it you need an additional dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
    <version>{spring-cloud-version}</version>
</dependency>
However, starting with spring-cloud-aws 2.3, you can import the default AWS Secrets Manager keys (spring.config.import=aws-secretsmanager:) or individual keys (spring.config.import=aws-secretsmanager:secret-key;other-secret-key):
https://github.com/spring-cloud/spring-cloud-aws/blob/main/docs/src/main/asciidoc/secrets-manager.adoc
application.yml
spring.config.import: aws-secretsmanager:/secrets/spring-cloud-aws-sample-app
Or try leaving it empty:
spring.config.import=aws-secretsmanager:
As such, it will take spring.application.name by default.
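For example (my-app is a placeholder; the secret name is then derived from the application name):
spring.application.name=my-app
spring.config.import=aws-secretsmanager: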
App:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class App {

    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @Bean
    ApplicationRunner applicationRunner(@Value("${password}") String password) {
        return args -> LOGGER.info("`password` loaded from the AWS Secret Manager: {}", password);
    }
}
I am trying to have a serviceHost stage variable set for each request from API Gateway, exactly like the picture attached below.
According to the doc https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-set-stage-variables-aws-console.html we can do this from the console, but since my app is entirely on CDK, I wanted to figure out a way to configure it through CDK itself.
I couldn't find it in https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-apigateway.IntegrationOptions.html or anywhere else.
Is it possible to achieve this through CDK somehow?
You can set the stage variables when declaring a stage. As per the documentation:
import aws_cdk.aws_apigateway as apigateway

# A Stage must be attached to a Deployment of a RestApi;
# "my_rest_api" below stands for your existing RestApi construct.
deployment = apigateway.Deployment(self, "my_deployment", api=my_rest_api)

my_stage = apigateway.Stage(
    self,
    "my_stage",
    deployment=deployment,
    variables={"serviceHost": "my_value"},
)
I'm attempting to use the Python package awswrangler to access a non-AWS S3 service.
The AWS Data Wrangler docs state that you need to create a boto3.Session() object.
The problem is that boto3.client() supports setting the endpoint_url, but boto3.Session() does not (docs here).
In my previous uses of boto3 I've always used the client for this reason.
Is there a way to create a boto3.Session() with a custom endpoint_url or otherwise configure awswrangler to accept the custom endpoint?
I finally found the configuration for awswrangler:
import awswrangler as wr
wr.config.s3_endpoint_url = 'https://custom.endpoint'
Any configuration variables for awswrangler can be overwritten directly using the wr.config config object as you stated in your answer, but it may be cleaner or preferable in some use cases to use environment variables.
In that case, simply set WR_S3_ENDPOINT_URL to your custom endpoint, and the configuration will reflect that when you import the library.
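For example, in a POSIX shell, before starting Python:
export WR_S3_ENDPOINT_URL=https://custom.endpoint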
Once you create your session, you can use the client as well. For example:
import boto3

session = boto3.Session()
# The endpoint_url on the client overrides the default AWS S3 endpoint.
s3 = session.client('s3', endpoint_url='<custom-endpoint>')