Connecting to AWS RDS from Java without exposing the password - amazon-web-services

I was successfully able to connect to RDS like any other database connection.
I use Spring Data JPA (repositories) to do CRUD operations on a Postgres DB.
Currently I provide the DB URL and the credentials in the properties file:
spring:
  datasource:
    url: jdbc:postgresql://<rds-endpoint>:5432/<dbschema>
    username: <dbuser>
    password: <dbpassword>
However, this is not an option when connecting to production or preproduction.
What is the best practice here?
Does AWS provide any built-in mechanism to read these details from an endpoint, as in the case of accessing S3?
My intention is to not expose the password.

Several options are available to you:
Use the recently announced IAM access to Postgres RDS
Use Systems Manager Parameter Store to store the password
Use Secrets Manager to store the password and automatically rotate credentials
For 2 and 3, look up the password on application start in Spring using a PropertyPlaceholderConfiguration and the AWSSimpleSystemsManagement client (GetParameter request). SystemsManager can proxy requests to SecretsManager to keep a single interface in your code to access parameters.
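As a minimal sketch of option 2, assuming the AWS SDK for Java v1 and a hypothetical SecureString parameter named /myapp/db/password:

import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.GetParameterRequest;

public class DbPasswordLookup {
    public static String fetchDbPassword() {
        // Uses the default credential chain (instance profile, env vars, etc.)
        AWSSimpleSystemsManagement ssm =
                AWSSimpleSystemsManagementClientBuilder.defaultClient();
        GetParameterRequest request = new GetParameterRequest()
                .withName("/myapp/db/password") // hypothetical parameter name
                .withWithDecryption(true);      // decrypt the SecureString value with KMS
        return ssm.getParameter(request).getParameter().getValue();
    }
}

The returned value can then be fed into the Spring datasource properties at startup instead of a hard-coded password.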
IAM credentials are more secure in that:
If using EC2 instance profiles, access to the database uses short lived temporary credentials.
If not on EC2 you can generate short lived authentication tokens.
The password is not stored in your configuration.
If you have a separate database team they can manage access independent of the application user.
Removing access can be done via IAM

Another generic option I found was to use AWS Secrets Manager
(doc link)
An RDS-specific solution is to connect to the DB instance using the AWS SDK with IAM DB authentication (IAMDBAuth).
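A rough sketch of reading such a secret with the AWS SDK for Java v1; the secret name prod/myapp/db is hypothetical, and RDS-managed secrets are stored as a JSON string containing username and password fields:

import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;

public class DbSecretLookup {
    public static String fetchDbSecretJson() {
        // Default credential chain; on EC2 this is the instance profile role
        AWSSecretsManager client = AWSSecretsManagerClientBuilder.defaultClient();
        GetSecretValueRequest request = new GetSecretValueRequest()
                .withSecretId("prod/myapp/db"); // hypothetical secret name
        // Typically a JSON document, e.g. {"username":"dbuser","password":"..."}
        return client.getSecretValue(request).getSecretString();
    }
}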

Related

How can an app deployed on GKE deploy other apps in the same GCP project without authentication

I have a Java application that is deployed on a GKE cluster. Let's call it the "orchestrator".
The application should be able to deploy other applications in the same GCP project where the "orchestrator" app is running (it can be the same GKE cluster or a different one), using helm CLI commands.
We were able to do that using Google Service Account authentication, where the JSON key is provided to the "orchestrator" and we could use it to generate tokens.
My question is: since both the "orchestrator" and the other apps are running in the same GCP project (sometimes on the same GKE cluster), is there a way to use some default credentials auto-discovered by GCP, instead of generating and providing a Service Account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system and the authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do to make this use case work seamlessly?
I will elaborate on my comment.
When you are using a Service Account, you have to use keys to authenticate - each service account is associated with a public/private RSA key pair. As you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing SA?
According to Best practices for using and managing service accounts all non-human accounts should be represented by Service Account:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to provide some permissions to applications, you should use Service Account.
In Types of keys for service accounts you can find the information that all Service Accounts need an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
You could also think about Workload Identity, but I am not sure if this would fulfill your needs as there are still many unknowns about your environment.
Just as additional information, there was something called Basic Authentication which could have been an option for you, but for security reasons it has not been supported since GKE 1.19. This was mentioned in another Stack Overflow case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
Best Practice to provide permissions for non-human accounts is to use Service Account. Each service account requires a pair of RSA Keys and you can create multiple keys.
Good Practice is also to use Workload Identity if you have this option, but due to lack of details it is hard to determine if this would work in your scenario.
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
One way to achieve that is to use the default credentials approach mentioned here:
Finding credentials automatically. Instead of exposing the SA key to our app, the GCP admin can attach the same SA to the GKE cluster resource (see attached screenshot), and the default credentials mechanism will use that SA's credentials to access the APIs and resources (depending on the SA's roles and permissions).
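As a minimal sketch of that approach in Java, assuming the google-auth-library dependency is on the classpath; on GKE, Application Default Credentials resolve to the attached service account (or the Workload Identity service account), so no JSON key file is needed:

import com.google.auth.oauth2.AccessToken;
import com.google.auth.oauth2.GoogleCredentials;

import java.io.IOException;
import java.util.Collections;

public class DefaultCredentialsExample {
    public static AccessToken getToken() throws IOException {
        // Resolves credentials from the environment: on GKE this is the
        // node or Workload Identity service account, no JSON key required.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
                .createScoped(Collections.singletonList(
                        "https://www.googleapis.com/auth/cloud-platform"));
        credentials.refreshIfExpired();
        return credentials.getAccessToken();
    }
}

The resulting token can then be passed to helm or the Kubernetes API client in place of a key generated from a downloaded JSON file.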

Use AWS API and RDS to securely access data

I want an API which can query a MySQL database and return the desired data. Currently, for development, I am using an AWS Lambda. I am passing an access token in the request, so I am able to verify that a valid user is making the request. However, when I fetch data from the database, I am logging on to the database using a username and password, and the database is open for public access. I think this application has security vulnerabilities because if anyone knows the database endpoint, they can brute-force the username and password.
Is there a more secure approach to accomplishing this? The general workflow is:
API Gateway -> Lambda -> RDS (MySQL) -> Client
And the vulnerability I would like to avoid is the open access of the RDS MySQL DB.
You should configure the Lambda function to run inside the VPC with the RDS instance, and then disable public access to the RDS instance.

Azure DevOps Pipelines

I am new to working with Azure DevOps. I am trying to create a pipeline using Azure DevOps for deploying my Terraform code onto AWS. For authentication, I am aware that we can use service principals, but that will mean I will need to specify my access and secret keys in Azure DevOps, which I do not want to do. So I wanted to check whether there are any other ways of doing this?
For accessing/storing these kinds of secrets you can try the Azure Key Vault
Store all your secrets in Azure Key Vault secrets.
When you want to access secrets:
Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies, then Add new.
In the Add access policy blade, choose Select principal and select the service principal for your client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are checked (ticked).
Choose OK to save the changes.
Reference
You can use:
Secure Azure DevOps Variables or Variable Groups
Azure Key Vault
If you use a Service Principal, then you need a password / certificate as well to authenticate. Maybe you can also try to work with MSI (Managed Service Identity). In that case, the AAD will take care of the secret storage.
If you don't want to store credentials in Azure DevOps itself, the best way is to store them in a credential store (Azure Key Vault) and access it through a service connection. I assume that you are using YAML-based pipelines. If so, use the following steps to integrate your pipeline with the key vault:
Prerequisites,
Azure key vault is set up and keys are securely stored
Steps,
In edit mode of the pipeline click on the kebab menu (three dots on upper right corner) and select Triggers
On the opened menu click on the Variables tab and then Variable Groups
Open Manage variable groups in a new tab
Click on + Variable group button to add a new variable
Give a name and a description. Switch on the Link secrets from an Azure key vault as variables toggle.
Add a new service connection and once authenticated select the key vault name
Now add variables in to the variable group
Once done save the variable group and go back to the previous tab in step 2 and link the new variable group.
Once done save the pipeline
Important: You need to grant secret read permission to the service connection's service principal from your key vault.
Reference: Link secrets from an Azure key vault
Perhaps use the Azure DevOps Library > Variable Groups to securely store your keys.
Alternatively you may be able to use Project Settings > Service connections, perhaps using a credentials connection or a generic one.
Service principals are the industry standard for this case. You should create a specific service principal for Azure DevOps and limit its scope to only what's necessary.
You can write variables into your PowerShell script file and use a PowerShell task in your pipeline. Give the PowerShell file path to this task and just pass the variable names. It will work like a charm.
For a service principal connection, you need to have:
a service principal ID and a service principal key
the service principal ID is the same as the application ID
the service principal key is found under Certificates & secrets
You can use Azure Key Vault for storing all your keys and secrets. Give permission to your Azure pipeline to fetch keys from Key Vault.
The following link will guide you from scratch through developing a pipeline and fetching keys:
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/
The only method to truly not store AWS credentials in Azure/Azure DevOps would be to create a hosted build pool inside your AWS account. These machines will have the Azure DevOps agent installed and registered to your organization and to a specific agent pool. Then add the needed permissions to the IAM instance profile attached to these build servers. When running your Terraform commands using this agent pool, Terraform will have access to the credentials on the instance. The same concept works for a container-based build pool in AWS ECS.
You can use Managed identity in your pipeline to authenticate with the Azure Key Vault.
You can read more on Managed Identity here and Azure Key Vault here
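If the code (or a self-hosted agent) runs with a managed identity, a minimal sketch of reading a Key Vault secret with the Azure SDK for Java (azure-identity and azure-security-keyvault-secrets) might look like the following; the vault URL and secret name are hypothetical:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class KeyVaultExample {
    public static String readAwsSecretKey() {
        // DefaultAzureCredential falls back to the managed identity when
        // running on an Azure-hosted resource, so no secret is stored in code.
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://my-vault.vault.azure.net") // hypothetical vault URL
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
        return client.getSecret("aws-secret-access-key").getValue(); // hypothetical secret name
    }
}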
You have to create a private key for the DevOps pipeline with limited services on your AWS machine,
store the key in the secure library of the DevOps pipeline,
and from your AWS firewall disable SSH connections from unknown IP addresses and whitelist the DevOps agents' IP addresses; to get the list of the IPs check this link https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges

Using EC2 instance profile with IAM authentication in RDS

I set up IAM authentication on an RDS instance, and I'm able to use IAM to get database passwords that work for 15 minutes. This is fine for accessing the database for backups, but this database backs a web application, so currently after 15 minutes the password used by the app to connect to the DB becomes invalid and the app crashes as it can no longer access the DB.
However, in the RDS IAM docs there's this line:
For applications running on Amazon EC2, you can use EC2 instance profile credentials to access the database, so you don't need to use database passwords on your EC2 instance.
This implies that on EC2 there's no need to use the IAM temporary DB password, which would mean that my app should be able to connect to the DB as long as it's running on EC2 and I set up the role permissions (which I think I did correctly). However, I can't get my app running on EC2 to be able to connect to the RDS DB except by using the 15-minute temporary password. If I try connecting with a normal MySQL connection with no password I get permission denied. Is there something special that needs to be done to connect to RDS using the EC2 instance profile, or is it not possible without using 15-minute temporary passwords?
According to the documentation you linked (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html), you need to perform the following steps (See under "Authenticating to a DB Instance or DB Cluster Using IAM Database Authentication"):
Use the AWS SDK for Java or AWS CLI to get an authentication token you can use to identify the IAM user or role. To learn how to get an authentication token, see Getting an Authentication Token.
Connect to the database using an SSL connection, specifying the IAM user or role as the database user account and the authentication token as the password. For more information, see Connecting to a DB Instance or DB Cluster Using IAM Database Authentication.
That means for every connection you intend to open, you need to get a valid Token using the AWS SDK. This is where using the correct instance profile with the RDS permission is needed. See also the code examples further down the AWS documentation page.
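A rough sketch of that flow with the AWS SDK for Java v1 and its RdsIamAuthTokenGenerator; the endpoint, region, and database user below are placeholders:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class IamDbConnection {
    public static Connection connect() throws SQLException {
        // Generates a signed token valid for 15 minutes, using the
        // instance profile credentials picked up by the default chain.
        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new DefaultAWSCredentialsProviderChain())
                .region("us-east-1")                                   // placeholder region
                .build();
        String token = generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname("mydb.xxxxxxxx.us-east-1.rds.amazonaws.com") // placeholder endpoint
                .port(3306)
                .userName("iam_db_user")                               // placeholder DB user
                .build());

        Properties props = new Properties();
        props.setProperty("user", "iam_db_user");
        props.setProperty("password", token); // the token is used as the password
        props.setProperty("useSSL", "true");  // IAM authentication requires SSL
        return DriverManager.getConnection(
                "jdbc:mysql://mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:3306/mydb", props);
    }
}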
I think however this requires quite a bit of effort on your side, to always get a valid token before opening a connection. It makes using an off-the-shelf connection pool difficult. Probably once open, the connection will remain open even after the token expires, but you still need to handle the case where more connections need to be opened at a later time.
I would stick with a normal user/password access for the application, using IAM for this case seems to be too much effort.
For applications running on Amazon EC2, you can use EC2 instance profile credentials to access the database, so you don't need to use database passwords on your EC2 instance.
You're misinterpreting what this means. It means you don't have to use static passwords or store them on the instance.
The idea is that you generate a new authentication token each time you establish a connection to the database. The token is generated on your instance, using the instance role credentials. It can only be used to authenticate for 15 minutes, but once connected, you don't lose your database connection after 15 minutes. You remain connected.
If your application doesn't reuse database connections, then you likely have a design flaw there.

Password encryption for use by an AWS automated process

I have an application that creates automatically some AWS instances and runs a script on them.
Each script tries to connect to a remote DB for which I need to provide the Public DNS Hostname, DB password, DB Username, etc...
What is the most secure way to do that without having to store the plain password?
And without risking somebody else running the same script being able to get those credentials?
Thanks a lot
You could use the AWS SSM service's Parameter Store:
Parameter Store centralizes the management of configuration data - such as passwords, license keys, or database connection strings - that you commonly reference in scripts, commands, or other automation and configuration workflows. With granular security controls for managing user access and strong encryption for sensitive data such as passwords, Parameter Store improves the overall security posture of your managed instances. Encrypting parameters with Parameter Store is not supported in all regions.
You would create an IAM role that has access to the Parameter Store values, and assign that role to the EC2 instances you are dynamically creating. Then the script would be able to use the AWS SDK/CLI to retrieve those values from the Parameter Store.
Alternatively, if the database is an RDS database that supports IAM authentication (only MySQL and Aurora at this time) then you could create an IAM role that has direct access to the database and assign that role to the EC2 instances.