Importing secrets in Spring Boot application from AWS Secrets Manager

I stored my MySQL DB credentials in AWS secrets manager using the Credentials for other database option. I want to import these credentials in my application.properties file. Based on a few answers I found in this thread, I did the following:
Added the dependency spring-cloud-starter-aws-secrets-manager-config
Added spring.application.name = <application name> and spring.config.import = aws-secretsmanager: <Secret name> in application.properties
Used secret keys as placeholders in the following properties:
spring.datasource.url = jdbc:mysql://${host}:3306/db_name
spring.datasource.username=${username}
spring.datasource.password=${password}
I am getting the following error while running the application:
java.lang.IllegalStateException: Unable to load config data from 'aws-secretsmanager:<secret_name>'
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/' or File.separator
First, is the process I am following correct? If so, what does this error mean and how do I resolve it?

I found the problem that was causing the error. Apparently I was adding the wrong dependency.
According to the latest docs, the configuration support for using spring.config.import to import AWS secrets has been moved to io.awspring.cloud from org.springframework.cloud. So the updated dependency would be io.awspring.cloud:spring-cloud-starter-aws-secrets-manager-config:2.3.3 and NOT org.springframework.cloud:spring-cloud-starter-aws-secrets-manager-config:2.2.6
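For Maven users, the corrected coordinates from the answer above would look like this (version 2.3.3 is the one cited here; check for newer releases):

```xml
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-secrets-manager-config</artifactId>
    <version>2.3.3</version>
</dependency>
```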

You are trying to use spring.config.import, and support for this was introduced in Spring Cloud AWS 2.3:
https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available
Secrets Manager
Support loading properties through spring.config.import, introduced in Spring Cloud 2020.0. Read more about integrating your Spring Cloud application with the AWS Secrets Manager.
Removed the dependency on the auto-configure module #526.
Dropped the dependency on javax.validation:validation-api.
Allow Secrets Manager prefix without “/” in the front #736.
In spring-cloud 2020.0.0 (aka Ilford), the bootstrap phase is no longer enabled by default. In order to enable it you need an additional dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
    <version>{spring-cloud-version}</version>
</dependency>
However, starting with spring-cloud-aws 2.3, you can import the default AWS Secrets Manager keys (spring.config.import=aws-secretsmanager:) or individual keys (spring.config.import=aws-secretsmanager:secret-key;other-secret-key).
https://github.com/spring-cloud/spring-cloud-aws/blob/main/docs/src/main/asciidoc/secrets-manager.adoc
application.yml
spring.config.import: aws-secretsmanager:/secrets/spring-cloud-aws-sample-app
Or try to leave it empty:
spring.config.import=aws-secretsmanager:
If left empty, it will use spring.application.name by default.
App:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class App {
    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @Bean
    ApplicationRunner applicationRunner(@Value("${password}") String password) {
        return args -> {
            LOGGER.info("`password` loaded from the AWS Secret Manager: {}", password);
        };
    }
}

Related

java.io.IOException: The Application Default Credentials are not available

I am fairly new to GCP API functions.
I am currently trying to use the text-to-speech module following these steps: https://cloud.google.com/text-to-speech/docs/libraries
I did not set up the environment variable since I used authExplicit(String jsonPath) for authentication: https://cloud.google.com/docs/authentication/production
my code looks like following;
public void main() throws Exception {
    String jsonPath = "/User/xxx/xxxx/xxxxxx/xxxx.json";
    authExplicit(jsonPath);
    // calling the text-to-speech function from the above link.
    text2speech("some text");
}
authExplicit(jsonPath) goes through without any problem and prints a bucket. I thought the credential key in JSON was checked. However, text2speech function returns the error as follows:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I want to get the text2speech function to work by calling Google Cloud API functions.
Please let me know how to solve this issue.
Your advice would be highly appreciated.
It's confusing.
Application Default Credentials (ADC) is a process that looks for credentials in various places, including the env var GOOGLE_APPLICATION_CREDENTIALS.
If GOOGLE_APPLICATION_CREDENTIALS is unset and the code is running on a Google Cloud Platform (GCP) service (e.g. Compute Engine), then it uses the Metadata service to determine the credentials. If not, ADC fails and raises an error.
Your code fails because authExplicit does not use ADC but loads the Service Account key from the file and creates a Storage client using these credentials. Only the Storage client is thus authenticated.
I recommend a (simpler) solution: Use ADC and have Storage and Text2Speech clients both use ADC.
You will need to set the GOOGLE_APPLICATION_CREDENTIALS env var to the path to a key if you run your code off GCP (i.e. not on GCE or similar) but when it runs on GCP, it will leverage the service's credentials.
You will need to create both the Storage and Text2Speech clients to use ADC:
See:
Cloud Storage
Text-to-Speech
Storage storage = StorageOptions.getDefaultInstance().getService();
...
And:
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create();
...

AWS DotNet Core credentials

I have an existing dotnet core web application that I need to use a profile other than [default] when I'm developing locally.
I'm running into an issue in that the location of the credentials file appears not to default to ~/.aws/credentials yet. Based on the credential lookup sequence, check 2 should work if I set the value of AWSConfigs.AWSProfileName before creating the SSM client, but it doesn't; it just falls through the remaining flow and throws an error saying it can't find the EC2 metadata. The same is the case for check 3. When the credentials are in the [default] definition, check 4 will succeed, which I expected to fail as well if defaults haven't been initialized yet. I have multiple AWS accounts that I get temporary security tokens for from an SSO system based on the config file, and because of the temporary-token requirement I can't use the [default] profile, as I need to be able to switch between accounts while running the same code base.
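For reference, a named profile in ~/.aws/credentials uses INI syntax like this (the profile name and values here are placeholders, not real credentials):

```ini
[dev-profile]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
aws_session_token = exampleSessionToken
```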
I've been able to get around this by explicitly accessing the credential store and generating a set of credentials to pass into the constructor for the SSM Client.
Amazon.Runtime.CredentialManagement.CredentialProfile developerProfile;
AmazonSimpleSystemsManagementClient ssmClient;
// Test to determine if we have a local credentials file with a profile
if (new Amazon.Runtime.CredentialManagement.SharedCredentialsFile().TryGetProfile(Configuration["AWS:Profile"], out developerProfile))
{
    AWSCredentials credentials = new Amazon.Runtime.SessionAWSCredentials(developerProfile.Options.AccessKey, developerProfile.Options.SecretKey, developerProfile.Options.Token);
    ssmClient = new AmazonSimpleSystemsManagementClient(credentials, developerProfile.Region);
}
else
{
    ssmClient = new AmazonSimpleSystemsManagementClient(Region);
}
The above snippet is designed to allow for running locally with a specific profile and file location and when either do not exist assumes that it's running in an EC2 or ECS environment and can source the credentials from the metadata.
The code that needs access to AWS' Parameter Store is located in the Startup method so other properties can be initialized before the ConfigureServices method runs. I have additional AWS services whose clients I initialize that work as expected after ConfigureServices has run. Should I not expect the credential provider to be properly initialized before the ConfigureServices method is run?

Spring Cloud: using AWS Parameter Store only with certain profiles

I am integrating my application with AWS parameter store. For local development which may have no access to AWS I need to disable fetching property values from AWS and use values from application.yml. The issue seems to be not application.yml, but the dependencies: as soon as AWS starter appears in POM, AWS integration is being initialized: Spring is trying to use AwsParamStorePropertySourceLocator. I guess what I need to do is to force my application to use Spring's property source locator regardless of AWS jar being on the class path. Not sure how to do that.
For Parameter Store it is quite easy: the AwsParamStoreBootstrapConfiguration bean is conditional on the property aws.paramstore.enabled. Setting aws.paramstore.enabled to false (as a property or environment variable) disables the AWS Parameter Store integration.
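A minimal sketch of wiring this to a local Spring profile (the file name follows Spring's profile-specific convention; the "local" profile name is just an example):

```yaml
# application-local.yml -- picked up when the "local" profile is active
aws:
  paramstore:
    enabled: false
```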
I also tried disabling AWS Secrets Manager, and setting aws.secretsmanager.enabled to false is not sufficient. To fully disable it I had to disable auto-configuration for a few classes:
import org.springframework.cloud.aws.autoconfigure.context.ContextCredentialsAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextRegionProviderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextResourceLoaderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.mail.MailSenderAutoConfiguration;
@Configuration
@Profile("local")
@EnableAutoConfiguration(exclude = { ContextCredentialsAutoConfiguration.class,
        ContextInstanceDataAutoConfiguration.class, ContextRegionProviderAutoConfiguration.class,
        ContextResourceLoaderAutoConfiguration.class, ContextStackAutoConfiguration.class,
        MailSenderAutoConfiguration.class })
public class LocalScanConfig {
}

ef core migration can't use secret manager

When I create .net core web applications, I use the secret manager during testing. I am generally able to create a new web project (mvc and web api), right click on the project and select "manage user secrets". This opens a json file where I add the secrets. I then use this in my startup.cs something like this:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseMySql(Configuration["connectionString"]));
The website works fine with this and connects well to the database. However, when I try using EF Core migration commands such as add-migration, they don't seem to be able to access the connection string from the secret manager. I get an error saying "connection string can't be null". The error is gone when I replace Configuration["connectionString"] with the hard-coded string. I have checked online and checked the .csproj file; it already contains the following lines:
<UserSecretsId>My app name</UserSecretsId>
And later:
<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.1" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>
Is there anything I need to add so the migrations can access the connection string?
Update
I only have one constructor in the context class:
public ApplicationDBContext(DbContextOptions<ApplicationDBContext> options) : base(options)
{
}
I am currently coming across this exact problem as well. I have come up with a solution that works for now, but one that may be considered messy at best.
I have created a Configuration class that provides the IConfiguration interface when requested:
public static class Configuration
{
    public static IConfiguration GetConfiguration()
    {
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();
    }
}
In the Migration, you can then get the Configuration File and access its UserSecrets like this:
protected override void Up(MigrationBuilder migrationBuilder)
{
    var conf = Configuration.GetConfiguration();
    var secret = conf["Secret"];
}
I have tested creating a SQL Script with these User Secrets, and it works (you obviously wouldn't want to keep the Script laying around since it would expose the actual secret).
Update
The above config can also be set up into Program.cs class in the BuildWebHost method:
var config = new ConfigurationBuilder().AddUserSecrets<Startup>().Build();
return WebHost.CreateDefaultBuilder(args).UseConfiguration(config)...Build()
Or in the Startup Constructor if using that Convention
Update 2 (explanation)
It turns out this issue arises because the migration scripts run with the environment set to "Production". The secret manager is pre-set to only work in the "Development" environment (for a good reason). The .AddUserSecrets<Startup>() function simply adds the secrets for all environments.
To ensure that this isn't set to your production server, there are two solutions I have noticed, one is suggested here: https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell
Set env:ASPNETCORE_ENVIRONMENT before running to specify the ASP.NET Core environment.
This solution means there is no need to set .AddUserSecrets<Startup>() on every project created on the computer in future. However, if you happen to share this project across other computers, this needs to be configured on each computer.
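In PowerShell, for example, setting the variable for the current session before running the EF tools would look like this:

```powershell
$env:ASPNETCORE_ENVIRONMENT = "Development"
```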
The second solution is to set the .AddUserSecrets<Startup>() only on debug build like this:
return new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
#if DEBUG
    .AddUserSecrets<Startup>()
#endif
    .AddEnvironmentVariables()
    .Build();
Additional Info
The Configuration Interface can be passed to Controllers in their Constructor, i.e.
private readonly IConfiguration _configuration;

public TestController(IConfiguration configuration)
{
    _configuration = configuration;
}
Thus, any Secrets and Application Setting are accessible in that Controller by accessing _configuration["secret"].
However, if you want to access application secrets from, for example, a migration file, which exists outside of the web application itself, you need to adhere to the original answer, because there's no easy way (that I know of) to access those secrets otherwise (one use case I can think of would be seeding the database with an admin user and a master password).
To use migrations in .NET Core with user secrets, we can also add a class (SqlContextFactory) that creates its own instance of the SqlContext using a specified config builder. This way we do not have to create some kind of workaround in our Program or Startup classes. In the example below, SqlContext is an implementation of DbContext/IdentityDbContext.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
using Microsoft.Extensions.Configuration;
public class SqlContextFactory : IDesignTimeDbContextFactory<SqlContext>
{
    public SqlContext CreateDbContext(string[] args)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();

        var builder = new DbContextOptionsBuilder<SqlContext>();
        builder.UseSqlServer(config.GetConnectionString("DefaultConnection"));
        return new SqlContext(builder.Options);
    }
}
Since I have noticed a lot of people running into this confusion, I am writing a simplified version of this resolution.
The Problem/Confusion
The secret manager in .net core is designed to work only in the Development environment. When running your app, your launchSettings.json file ensures that your ASPNETCORE_ENVIRONMENT variable is set to "Development". However, when you run EF migrations it doesn't use this file. As a result, when you run migrations, your web app does not run on the Development environment and thus no access to the secret manager. This often causes confusion as to why EF migrations can't use the secret manager.
The Resolution
Make sure your environment variable "ASPNETCORE_ENVIRONMENT" is set to "Development" in your computer.
Using .AddUserSecrets<Startup>() will create a circular reference if we have our DbContext in a separate class library and use a design-time factory.
The clean way of doing it is:
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<AppDbContext>
{
    public AppDbContext CreateDbContext(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
#if DEBUG
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{project path}/appsettings.Development.json",
                optional: true, reloadOnChange: true)
#else
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{startup project path}/appsettings.json",
                optional: true, reloadOnChange: true)
#endif
            .AddEnvironmentVariables()
            .Build();

        var connectionString = configuration.GetConnectionString("DefaultConnection");
        var builder = new DbContextOptionsBuilder<AppDbContext>();
        Console.WriteLine(connectionString);
        builder.UseSqlServer(connectionString);
        return new AppDbContext(builder.Options);
    }
}
The Explanation:
Secret Manager is meant to be used at development time only, so this will not affect the migration if you run it in a pipeline in QA or Production stages; to handle that, we use the dev connection string from appsettings.Development.json inside the #if DEBUG branch.
The benefit of using this way is to decouple referencing the Web project Startup class while using class library as your Data infrastructure.

kinesis stream account incorrect

I have set up my PC with Python and connections to AWS. This has been successfully tested using the s3_sample.py file; I had to create an IAM user account with the credentials in a file, which worked fine for S3 buckets.
My next task was to create an mqtt bridge and put some data in a stream in kinesis using the awslab - awslabs/mqtt-kinesis-bridge.
This seems to be all ok except I get an error when I run the bridge.py. The error is:
Could not find ACTIVE stream:my_first_stream error:Stream my_first_stream under account 673480824415 not found.
Strangely, this is not the account I use in the .boto file that is suggested to be set up for this bridge, which contains the same credentials I used for the S3 bucket:
[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis
It would seem to me that bridge.py has a hardcoded account, but I cannot see it, and I can't see where it is pointing to the .boto file for credentials.
Thanks in Advance
So the issue of not finding the ACTIVE stream for the account is resolved by:
ensuring you are hooked into the US-EAST-1 region, as this is the default region for bridge.py
creating your stream; you will only need 1 shard
The next problem stems from the specific versions of MQTT and the Python library paho-mqtt I installed. The bridge application was written against the API of MQTT 1.2.1 using paho-mqtt 0.4.91.
The new version available for download on their website interacts with the paho-mqtt library differently and passes an additional "flags" object to the on_connect callback. This generates the error I was experiencing, since the old callback is not expecting the fifth argument.
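The failure itself is just Python's arity check on the callback; a minimal sketch of the same mismatch (generic functions, not the actual paho-mqtt code):

```python
# A callback written against the old API: expects 3 arguments.
def on_connect(mqttc, userdata, msg):
    return "connected"

# The newer library calls it with an extra `flags` argument,
# which raises a TypeError before any bridge logic runs.
try:
    on_connect("client", None, {"session present": 0}, "msg")
except TypeError as e:
    print("callback arity mismatch:", e)
```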
You should be able to fix it by making the following change to bridge.py
Line 104 currently looks like this:
def on_connect(self, mqttc, userdata, msg):
Simply add flags, after userdata, so that the callback function looks like this:
def on_connect(self, mqttc, userdata, flags, msg):
This should resolve the issue of the final error of the incorrect number of arguments being passed.
Hope this helps others, and thanks for the efforts.
When you call the Python SDK for an AWS service, there is a line in bridge.py that imports the boto module for AWS services:
import boto
By default, boto reads its settings, including credentials, from the .boto file. Here is the explanation from the Boto Config documentation:
Details
A boto config file is a text file formatted like an .ini configuration file that specifies values for options that control the behavior of the boto library. On Unix/Linux systems, at startup, the boto library looks for configuration files in the following locations and in the following order:
/etc/boto.cfg - for site-wide settings that all users on this machine will use
~/.boto - for user-specific settings
~/.aws/credentials - for credentials shared between SDKs
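Since the .boto file is standard INI syntax, a quick stdlib-only sketch can show which credentials boto would pick up from such a file (the values are the example ones quoted in the question, not real keys):

```python
import configparser

# Example .boto contents -- same INI shape as the [Credentials] block above.
BOTO_EXAMPLE = """
[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis
"""

config = configparser.ConfigParser()
config.read_string(BOTO_EXAMPLE)

# boto reads these two options from the [Credentials] section.
access_key = config["Credentials"]["aws_access_key_id"]
secret_key = config["Credentials"]["aws_secret_access_key"]
print(access_key)  # the key ID the bridge would authenticate with
```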
Of course, you can set the environment variables directly:
export AWS_ACCESS_KEY_ID="Your AWS Access Key ID"
export AWS_SECRET_ACCESS_KEY="Your AWS Secret Access Key"