Spring Boot with KMS

My Spring Boot microservice runs in a Docker container. It requires an encryption key for encrypting the incoming payload. I thought of using AWS KMS to store the keys, read them at runtime, and encrypt the payload.
I was trying to find libraries that can be used to access AWS KMS from a Spring Boot microservice. Searching on Google turns up the GitHub projects below.
https://github.com/zalando/spring-cloud-config-aws-kms
https://github.com/kinow/spring-boot-aws-kms-configuration
There is an SDK from AWS as well.
https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/java-example-code.html
I am a little confused about which one I should use. The two GitHub projects seem like a more open approach to me than using the AWS SDK. Also, the "zalando" project was last updated in May 2020, so it appears to be active.
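For reference, the direct SDK route mentioned above looks roughly like this - a minimal sketch using the AWS SDK for Java v2, where the key alias and region are illustrative:

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.EncryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptResponse;

public class KmsEncryptSketch {
    public static void main(String[] args) {
        // Build a KMS client; credentials come from the default provider chain.
        try (KmsClient kms = KmsClient.builder().region(Region.US_EAST_1).build()) {
            EncryptRequest request = EncryptRequest.builder()
                    .keyId("alias/payload-key") // illustrative key alias
                    .plaintext(SdkBytes.fromUtf8String("incoming payload"))
                    .build();
            EncryptResponse response = kms.encrypt(request);
            byte[] ciphertext = response.ciphertextBlob().asByteArray();
            System.out.println("ciphertext bytes: " + ciphertext.length);
        }
    }
}

Note that a direct KMS Encrypt call is limited to 4 KB of plaintext; for larger payloads the usual pattern is envelope encryption with a generated data key, which is what the AWS Encryption SDK linked above automates.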

Add the needed dependency (a sample is sketched at the end of this answer).
Create some secrets in AWS as 'other type of secret'. Name those secrets based on your project:
/secret/application for properties shared across all services
/secret/{spring.application.name} for the properties specific to this service
The above can be changed via Spring configuration properties - see section 3.3 of the reference documentation.
Then just inject them as you would any other property:
@Value("${verySpecialKey}")
private String verySpecialKey;
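For completeness, with Maven the dependency referred to in the first step would look something like this (a sketch assuming the Spring Cloud AWS 2.x starter; the exact coordinates and version vary by release):

<!-- Sketch: Spring Cloud AWS starter for Secrets Manager-backed configuration. -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-secrets-manager-config</artifactId>
    <version>2.2.6.RELEASE</version> <!-- example version; pick one matching your Spring Cloud release -->
</dependency>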

Related

How would you access Google Secret Manager from an external environment?

I have googled quite heavily over the last couple of hours to see if I could use Google Secret Manager from an external service like AWS Lambda or my local PC. I could not find anything helpful, or anything that properly describes the steps to do so.
I do not want to play with the APIs and end up doing the authentication via OAuth myself; I wish to use the client library. How would I go about doing so?
I have so far referred to the following links:
https://cloud.google.com/secret-manager/docs/configuring-secret-manager - Describes setting up secret manager, and prompts you to set up Google Cloud SDK.
https://cloud.google.com/sdk/docs/initializing - Describes setting up the Cloud SDK (it doesn't seem like I get some kind of config file that points my client library at the correct GCP project)
The issue is that I don't seem to get access to any form of credential that I can use with the client library to consume the Secret Manager service of a particular GCP project - something like a service account token, or some other means of authenticating and consuming the service from an external environment.
Any help is appreciated, it just feels like I'm missing something. Or is it simply impossible to do so?
PS: Why am I using GCP Secret Manager when AWS offers a similar service? The latter is too expensive.
I think your question applies to all GCP services; there isn't anything specific to Secret Manager here.
As you mentioned, https://cloud.google.com/docs/authentication/getting-started documents how to create and use a service account. But this approach has the downside that you now need to figure out how to store the service account key (yet another secret!).
If you're planning to access GCP Secret Manager from AWS, you can consider using https://cloud.google.com/iam/docs/configuring-workload-identity-federation#aws, which uses identity federation to map an AWS identity to a GCP service account, without the need to store an extra secret somewhere.
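As a concrete illustration of the service-account route: once GOOGLE_APPLICATION_CREDENTIALS points at the downloaded key file, the client library authenticates on its own. A minimal sketch with the Java client library, where the project and secret IDs are illustrative:

// Assumes: export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
import com.google.cloud.secretmanager.v1.AccessSecretVersionResponse;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;

public class SecretManagerSketch {
    public static void main(String[] args) throws Exception {
        // "my-project" and "my-secret" are illustrative identifiers.
        SecretVersionName name = SecretVersionName.of("my-project", "my-secret", "latest");
        try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
            AccessSecretVersionResponse response = client.accessSecretVersion(name);
            System.out.println(response.getPayload().getData().toStringUtf8());
        }
    }
}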

Override default AWS profile for a Spring Boot application

I'm running Spring Boot applications that use AWS resources from two different AWS accounts (depending on the project - each application only needs resources from one of the two accounts).
I have two different profiles set up in my AWS config file (a default one and a secondary one). When I use the AWS CLI, I just specify --profile=secondary and everything works happily.
I can't seem to find any way to specify the secondary profile for a Spring Boot application using the AWS Java SDK. What are my options?
This can be achieved using ProfileCredentialsProvider(String profile), where profile is, in the question's case, secondary.
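In code that would look roughly like the following - a sketch with the AWS SDK for Java v1, using an S3 client purely as an illustrative example:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SecondaryProfileSketch {
    public static void main(String[] args) {
        // Reads the "secondary" profile from the shared AWS credentials/config files.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider("secondary"))
                .withRegion("us-east-1") // illustrative region
                .build();
        s3.listBuckets().forEach(b -> System.out.println(b.getName()));
    }
}

Alternatively, setting the AWS_PROFILE environment variable switches the profile picked up by the default credentials chain without any code changes.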

Can AWS Secrets Manager be accessed from Tomcat using JNDI?

We have a WAR file deployed on Tomcat, and the database credentials are fetched through JNDI. This WAR now has to be moved to the AWS cloud, and the requirement is that the DB credentials have to be stored in AWS Secrets Manager. My question is: can I continue using JNDI/Tomcat along with Secrets Manager? I understand AWS Secrets Manager has an API and SDKs to access it; can that be integrated with JNDI/Tomcat somehow? All the posts I have seen mention using the API/SDK directly from code; none that I have found say anything about server integration. Is accessing Secrets Manager from code really the best way to do it? Thanks.
A side note - for some reason unknown to me, we cannot use Elastic Beanstalk; it is just Tomcat on an EC2 instance.
Maybe you could use the JDBC driver wrapper: https://github.com/aws/aws-secretsmanager-jdbc. If you are using a connection pool manager you can follow the example in the README and replace the JDBC library with the wrapper library, specifying the secret in the configuration. The wrapper will then retrieve the secret and pass it to the real JDBC library.
If you are not using a connection pool manager, you could still replace the existing JDBC driver with the wrapper, but this would take some code modifications.
By using the wrapper, you can also turn on auto-rotation for the DB password, since the wrapper knows to re-fetch the secret after it changes.
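As a sketch of how that could look as a JNDI resource in Tomcat's context.xml, adapted from the wrapper's pooling examples (the driver class, URL scheme, and secret name shown are for MySQL and are illustrative):

<!-- Sketch: the "username" carries the Secrets Manager secret name/ARN; the wrapper
     fetches the real credentials and hands them to the underlying MySQL driver. -->
<Resource name="jdbc/MyDB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.amazonaws.secretsmanager.sql.AWSSecretsManagerMySQLDriver"
          url="jdbc-secretsmanager:mysql://mydb.example.com:3306/mydb"
          username="/prod/db-credentials"/>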

Listing available shared VPCs in a project

I have a host project with 2 VPCs, both of which are shared with a service project that has no VPCs of its own. In the console everything works great, but I want to automate this. I am not able to list the VPCs from the service project. I am trying to use
https://www.googleapis.com/compute/v1/projects/{project}/aggregated/subnetworks/listUsable
From the documentation:
Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project.
but I am getting an empty result set.
What am I missing?
You have to be relatively careful with the permissions and which user you authenticate as. You will only be able to see subnetworks where the calling user has the appropriate compute.subnetworks.* permissions.
If you're looking at the Cloud Console, you will be acting with your Google account, which most likely has owner or at least roles/compute.networkUser access.
Depending on how you authenticate your API calls, you are most likely using a service account. Ensure that this service account has the required roles as well.
For further debugging, you can also try using the gcloud CLI tool. It has a handy option, --log-http, that will show you all the HTTP calls made. This is often a great help when piecing together functionality in external code.
I have looked at how the GCP console does it:
1. It queries to see if there is a host project.
2. If there is a host project, it sends a query to the host project to list the subnets.
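A rough equivalent of what the console does, using the gcloud CLI (project IDs are illustrative, and --log-http can be appended to inspect the underlying API calls):

# 1. Find the Shared VPC host project attached to the service project.
gcloud compute shared-vpc get-host-project my-service-project
# 2. List the usable subnetworks against the host project.
gcloud compute networks subnets list-usable --project my-host-project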

Targeting AWS services locally

I was wondering if it's possible to target AWS services, for example DynamoDB, from outside of AWS - for example, from code that runs on my personal computer.
All I could find was how to create a mock of DynamoDB locally and configure the code against it, but not a way to configure the code to target the real thing.
Thanks.
By "target" I mean using only the SDK of the language to access the service, not some kind of REST API.
OK, so after more searching, and as @JohnRotenstein recommended, I looked for a way to configure the credentials.
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
The link above shows how to configure all the needed credentials.
Of course, there is an IAM user with an access key and secret key behind this.
Cheers.
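To make that concrete: with a credentials file in place, the default provider chain finds the keys automatically and no endpoint override is needed, unlike with a local mock. A minimal sketch with the AWS SDK for Java v2, where the region is illustrative:

// Assumes ~/.aws/credentials contains:
//   [default]
//   aws_access_key_id = <your key>
//   aws_secret_access_key = <your secret>
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class RealDynamoDbSketch {
    public static void main(String[] args) {
        // No endpoint override here, so this targets the real DynamoDB service.
        try (DynamoDbClient dynamo = DynamoDbClient.builder()
                .region(Region.US_EAST_1) // illustrative region
                .build()) {
            dynamo.listTables().tableNames().forEach(System.out::println);
        }
    }
}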