Azure DevOps Pipelines - amazon-web-services

I am new to working with Azure DevOps. I am trying to create a pipeline in Azure DevOps that deploys my Terraform code onto AWS. For authentication, I am aware that we can use service principals, but that would mean specifying my access and secret keys in Azure DevOps, which I do not want to do. So I wanted to check: are there any other ways of doing this?

For accessing/storing these kinds of secrets, you can try Azure Key Vault.
Store all your secrets as Azure Key Vault secrets.
When you want to access the secrets:
Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies, then Add new.
In the Add access policy blade, choose Select principal and select the service principal for your client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are checked (ticked).
Choose OK to save the changes.
Reference
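For a YAML pipeline, fetching the secrets with the built-in AzureKeyVault task could then look roughly like this (a minimal sketch; the service connection, vault, and secret names are assumptions):
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-azure-service-connection'  # assumed service connection name
    KeyVaultName: 'my-key-vault'                      # assumed vault name
    SecretsFilter: 'aws-access-key-id,aws-secret-access-key'
    RunAsPreJob: false
# each fetched secret becomes a pipeline variable named after the secret
- script: terraform init && terraform apply -auto-approve
  env:
    AWS_ACCESS_KEY_ID: $(aws-access-key-id)
    AWS_SECRET_ACCESS_KEY: $(aws-secret-access-key)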

You can use:
Secure Azure DevOps variables or variable groups
Azure Key Vault
If you use a service principal, then you also need a password / certificate to authenticate. You could also try MSI (Managed Service Identity); in that case, Azure AD takes care of the secret storage.
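Note that secret variables, wherever they are defined, are not exposed to scripts automatically; you have to map them into the environment explicitly. A minimal sketch, with the variable names assumed:
steps:
- script: terraform plan
  env:
    AWS_ACCESS_KEY_ID: $(AwsAccessKeyId)      # assumed names of secret pipeline variables
    AWS_SECRET_ACCESS_KEY: $(AwsSecretKey)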

If you don't want to store credentials in Azure DevOps itself, the best way is to store them in a credential store (Azure Key Vault) and access them through a service connection. I assume that you are using YAML-based pipelines. If so, use the following steps to integrate your pipeline with the key vault.
Prerequisites:
An Azure key vault is set up and the keys are securely stored
Steps:
In edit mode of the pipeline, click the kebab menu (the three dots in the upper right corner) and select Triggers
In the menu that opens, click the Variables tab and then Variable Groups
Open Manage variable groups in a new tab
Click the + Variable group button to add a new variable group
Give it a name and a description. Switch on the Link secrets from an Azure key vault as variables toggle
Add a new service connection and, once authenticated, select the key vault name
Now add variables to the variable group
Once done, save the variable group, go back to the tab from step 2, and link the new variable group
Once done, save the pipeline
Important: You need to grant secret read permission to the service connection's service principal from your key vault.
Reference: Link secrets from an Azure key vault
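Once the variable group is linked, consuming it from the YAML pipeline could look roughly like this (the group and secret names are assumptions):
variables:
- group: aws-terraform-secrets    # assumed name of the variable group created above

steps:
- script: |
    terraform init
    terraform apply -auto-approve
  env:
    AWS_ACCESS_KEY_ID: $(aws-access-key-id)         # assumed names of the Key Vault secrets
    AWS_SECRET_ACCESS_KEY: $(aws-secret-access-key)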

Perhaps use the Azure DevOps Library > Variable Groups to securely store your keys.
Alternatively, you may be able to use Project Settings > Service connections, perhaps using a credentials connection or a generic one.

Service principals are the industry standard for this case. You should create a dedicated service principal for Azure DevOps and limit its scope to only what's necessary.

You can reference variables in your PowerShell script file and use a PowerShell task in your pipeline. Give the PowerShell file path to this task and pass in the variable names. It will work like a charm.
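For example, with the built-in PowerShell task (a sketch; the script path and variable names are assumptions, and the values themselves still have to be defined as secret pipeline variables somewhere):
steps:
- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/deploy.ps1'                                        # assumed script path
    arguments: '-AccessKey $(AwsAccessKeyId) -SecretKey $(AwsSecretKey)'  # assumed variable names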

For a service principal connection, you need to have:
a service principal ID and a service principal key
the service principal ID is the same as the application ID
the service principal key is found under Certificates & secrets

You can use Azure Key Vault for storing all your keys and secrets. Give your Azure pipeline permission to fetch keys from Key Vault.
The following link will guide you from scratch through developing a pipeline and fetching keys:
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/

The only method to truly not store AWS credentials in Azure/Azure DevOps is to create a self-hosted build pool inside your AWS account. These machines have the Azure DevOps agent installed and registered to your organization and to a specific agent pool. Then add the needed permissions to the IAM instance profile attached to these build servers. When you run your Terraform commands using this agent pool, Terraform has access to the credentials on the instance. The same concept works for a container-based build pool in AWS ECS.
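On the pipeline side this is just a pool selection; a minimal sketch, assuming a self-hosted agent pool named aws-self-hosted has been registered:
pool:
  name: 'aws-self-hosted'   # assumed name of the agent pool running on EC2 in your account

steps:
- script: |
    terraform init
    terraform apply -auto-approve
  # no AWS credentials are set here: the AWS provider resolves them from the
  # EC2 instance profile via the instance metadata service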

You can use a managed identity in your pipeline to authenticate with Azure Key Vault.
You can read more about managed identities here and Azure Key Vault here

Create a private key for the DevOps pipeline with limited permissions on your AWS machine.
Store the key in the secure files library of the DevOps pipeline.
In your AWS firewall, disable SSH connections from unknown IP addresses and whitelist the DevOps agents' IP addresses. To get the list of IPs, check this link: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges
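A sketch of how a pipeline could consume such a key from the secure files library (the file name, reference name, host, and command are assumptions):
steps:
- task: DownloadSecureFile@1
  name: deployKey                     # reference name used to read the output variable below
  inputs:
    secureFile: 'aws-deploy-key.pem'  # assumed name of the uploaded secure file
- script: ssh -i $(deployKey.secureFilePath) ec2-user@my-aws-host './deploy.sh'  # assumed host and script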

Related

How to secure service account key for an application that is NOT running on google cloud

I have a request from users to be able to connect to my datasets and tables in BigQuery to fetch the data and manipulate it programmatically outside of GCP.
The situation now is that I created a service account with credentials to view data, and I share the JSON key of this service account with users by email.
I want to avoid users having to use the key inside their code.
What is the best way to secure sharing this key with them?
The best way to share your application outside Google Cloud is through Workload Identity Federation. Although creating public/private key pairs is also a secure way to use and share your user-managed service account, it can still pose a threat and security risk if not correctly managed.
Just run through this documentation and use IAM external identities to impersonate a service account to avoid any security issues with your service account keys, even without maintaining them.

Amazon Marketplace Web Services in Azure Data Factory - Error multiple values AWSAccessKeyId?

We are struggling to connect Azure Data Factory to Amazon Marketplace Web Services.
It seems that we have all the information required; however, we are getting the following error:
Parameter AWSAccessKeyId cannot have multiple values.
All the data seems to be correct. However, we find it strange that an Access Key ID and Secret Access Key are needed to connect to the Marketplace Web Services. Both keys come from the AWS environment, which is currently not connected to anything.
Any help is appreciated.
Kind regards,
Jens
Yes, you need an Access Key ID and Secret Access Key while creating the Amazon Marketplace Web Service linked service in Azure Data Factory. There should be only one access key assigned per user in AWS Marketplace. Apart from this, other properties are also required; some are mandatory and others are not.
To allow people in your company to sign in to the AWS Marketplace Management Portal, create an IAM user for each person who needs access.
To create IAM users
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Users and then choose Create New Users.
In the numbered text boxes, enter a name for each user that you want to create.
Clear the Generate an access key for each user check box and then choose Create.
You will now pass this key in the linked service in ADF.
Also, for better security, you can save the secret key in Azure Key Vault and use an Azure Key Vault linked service to access it. Refer to Store credentials in Azure Key Vault.

Rotate service accounts with Vault in GCP

I'm in the process of implementing Vault in my organization. We run our services on GCP on Compute Engine instances as Docker containers.
Each compute node can run multiple services, and hence we use JSON service account keys to authenticate against other Google services (Dataproc, Google Cloud Storage, etc.).
One of the challenges that we are facing right now is that we generate these JSON keys using Terraform, and they are baked into the machines when the infrastructure is provisioned.
Once provisioned, these keys live on forever, which is a bad way of handling keys: if any key gets compromised, we are at high risk.
To reduce the surface area, we are planning to put key rotation in place, for which we are looking into Vault. Vault will also help us have centralized secrets (instead of secrets in GitLab variables) and dynamic database credentials for MySQL.
From reading Vault's documentation, the architecture is as follows:
You authenticate with Vault using a service account.
Based on the membership of the service account in a group, you have different policies assigned to you.
Those policies have role sets, based on which ephemeral service accounts are generated.
You use the ephemeral service account, which has a lease and can be revoked centrally.
Now, from what I understand, you need a service account to authenticate with Vault so that you can get a service account from Vault. This seems like a chicken-and-egg problem:
I want a service account from Vault, but to get that I need a service account to authenticate.
So how will I get my first service account? Let's say I bake in the first service account via Terraform; I couldn't find a way to rotate it.
Am I missing something in my understanding of Vault?

How do I configure multiple AWS Connect instances from different accounts with AWS Single Sign On in a top level account?

I am setting up our telephony system in AWS and we're utilizing AWS Single Sign-On for our primary SAML authentication. This has worked fine for normal CLI and console access but has been kind of a struggle for implementing Amazon Connect through the SSO cloud applications configuration.
Background
I have done a proof of concept with a single Amazon Connect instance and was able to federate login with a number of different permission sets to simulate admin, developer, and user access for the single instance. This worked fine until I started adding additional instances; now, each time a user with any permission set tries to log in to Amazon Connect, they get Session Expired on the Connect screen.
Our setup is as follows:
Root account contains AWS SSO Directory
Dev Account has 1 Connect instance in the east
QA Account has 2 Connect instances total in east and west
Prod account has 2 Connect instances total in east and west
A lot of the documentation I've been reading seems to assume the Amazon Connect instances are in the same account as the AWS SSO service. Additionally, the documentation mentions creating additional IAM identity providers for each Amazon Connect instance's SAML metadata file, and an associated role that allows the SSO user to access that instance. I see how this would work in a single account, but I don't understand how to adopt the access role and implement it as a permissions policy in AWS SSO for the user group that's logging into the instance.
I've configured everything as close as possible to the Amazon Connect SAML Setup Guide, and I'm working on troubleshooting the permissions policy configuration, but I'm at a loss.
If anyone has previous AWS SSO experience, or has done something similar with Amazon Connect, that would be greatly appreciated. I just want to validate whether this is feasible in the current iteration of AWS SSO (granted, it's a newer service), or whether we need to architect and integrate a third-party SSO for Amazon Connect.
Thanks!
We recently implemented this kind of setup and these requirements. We are still in the testing phase, but so far it is working as expected.
In the Amazon Connect SAML Guide that you linked, there's a missing piece of information with regard to the Attribute Mappings (Step 10).
Change From:
Field: https://aws.amazon.com/SAML/Attributes/Role
Value:
arn:aws:iam::<12-digit-account_id>:saml-provider/,arn:aws:iam::<12-digit-account_id>:role/
To This:
Field: https://aws.amazon.com/SAML/Attributes/Role
Value:
arn:aws:iam::ACCOUNT-ID:saml-provider/IDP_PROVIDER_NAME,arn:aws:iam::ACCOUNT-ID:role/ROLE_NAME
Sample Value:
arn:aws:iam::123456301789:saml-provider/AWSSSO_DevelopmentConnect,arn:aws:iam::123456301789:role/AmazonConnect_Development_Role
The Setup:
Root AWS
Configured with AWS SSO
On the AWS SSO page, you can have one or more Amazon Connect applications here:
AmazonConnect-Development
AmazonConnect-QAEast
AmazonConnect-QAWest
Dev AWS:
You have set up Amazon Connect
AmazonConnect-Development as the instance name (record the ARN)
Create a new Identity Provider (for ex: AWSSSO_DevelopmentConnect)
Create a Policy (to be attached to the Role)
Create a Role (for ex: AmazonConnect_Development_Role)
See more here for the content of the Policy
In Root AWS, configure your AmazonConnect-Development application to have the same Attribute Mapping pattern as my example value above.
You can also specify the Relay State URL if you want the users to be redirected to a specific Amazon Connect application.
xxx AWS:
The same steps as above apply.
Key Points:
For each AWS Account:
You will need to create an Identity Provider; name it with a consistent pattern
Create a Policy to be attached to the Role
Create a Role and choose SAML 2.0 Federation
Check: Allow programmatic and AWS Management Console access
Link the Identity Provider with the Role
For the applications that you configure on the AWS SSO page, make sure the additional Attribute Mappings have the correct values

Connecting to AWS RDS from java without exposing password

I was successfully able to connect to RDS like any other database connection.
I use Spring Data JPA (repositories) to do CRUD operations on a Postgres DB.
Currently, I provide the DB URL and the credentials in the properties file:
spring:
  datasource:
    url: jdbc:postgresql://<rds-endpoint>:5432/<dbschema>
    username: <dbuser>
    password: <dbpassword>
However, this is not an option when connecting to production or preproduction.
What is the best practice here?
Does AWS provide any built-in mechanism to read these details from an endpoint, as in the case of accessing S3?
My intention is to not expose the password.
Several options are available to you:
Use the recently announced IAM access to Postgres RDS
Use Systems Manager Parameter Store to store the password
Use Secrets Manager to store the password and automatically rotate credentials
For 2 and 3, look up the password on application start in Spring using a PropertyPlaceholderConfigurer and the AWSSimpleSystemsManagement client (GetParameter request). Systems Manager can proxy requests to Secrets Manager so that you keep a single interface in your code for accessing parameters.
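For option 2, if you are on Spring Boot and add the Spring Cloud AWS (io.awspring.cloud) Parameter Store starter, you can also skip wiring the client manually and pull the values in through spring.config.import; a sketch, with the parameter path assumed:
spring:
  config:
    import: "aws-parameterstore:/myapp/prod/"   # assumed parameter path prefix
  datasource:
    url: jdbc:postgresql://<rds-endpoint>:5432/<dbschema>
    username: ${dbuser}       # resolved from the /myapp/prod/dbuser parameter
    password: ${dbpassword}   # resolved from the /myapp/prod/dbpassword SecureString parameter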
IAM authentication is more secure in that:
If using EC2 instance profiles, access to the database uses short lived temporary credentials.
If not on EC2 you can generate short lived authentication tokens.
The password is not stored in your configuration.
If you have a separate database team they can manage access independent of the application user.
Removing access can be done via IAM.
Another generic option I found was to use AWS Secrets Manager (doc link).
An RDS-specific solution is to connect to the DB instance using the AWS SDK with IAM DB authentication (IAMDBAuth).