I have installed Airflow 1.10.15 on a standalone server and am trying to integrate AWS Secrets Manager with it, but the values are not being picked up.
I have added backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend and backend_kwargs = {"connections_prefix": "airflow/test"} under [secrets] in airflow.cfg. I have also attached a role to the EC2 server that has Secrets Manager read/write access, but it is still not taking values from Secrets Manager.
You can use an Airflow secrets backend with AWS Secrets Manager by configuring the SecretsManagerBackend in the [secrets] section of airflow.cfg and then creating your secrets under the configured prefixes.
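For example, a minimal sketch of that configuration for Airflow 1.10.15 (the prefix values are illustrative; adjust them to your naming scheme):
[secrets]
backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend
backend_kwargs = {"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}
With this in place, a connection with conn_id my_conn would be looked up under the secret named airflow/connections/my_conn.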
Related
I'm trying to incorporate Secret Manager into my projects for security, but I'm running into issues setting it up. I currently have a service account in project-b where I downloaded the JSON credential keys, and I have been using those to access my BigQuery table in my backend code.
My current setup:
I have project-a that uses Cloud Run to host my code.
I have project-b that uses BigQuery to hold some data for me.
From project-a, I'm trying to access the BigQuery table in project-b just like I've been doing with the JSON keys.
I keep running into this error:
PermissionDenied: 403 Permission 'secretmanager.versions.access' denied for resource 'projects/project-b/secrets/stockdata-secret/versions/1' (or it may not exist).
I have assigned the Secret Manager Secret Accessor and Secret Manager Viewer roles to a couple of my accounts but it still doesn't seem to work.
The client_email from the keys is set to the top service account in the screenshot below:
Permissions for the secret
Here is the relevant part of my back-end code:
# Grabbing keys from Secret Manager, got this code from Google docs
from google.cloud import secretmanager

def access_secret_version(project_id, secret_id, version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(request={"name": name})
    payload = response.payload.data.decode("UTF-8")
    return payload
---
# Routing to the page
@app.route('/projects/random-page')
def random_page():
    payload = access_secret_version("project-b", "stockdata-secret", "1")
    # Authenticating service account.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload

    # Old way, which worked:
    # google_cloud_service_account = "creds.json"
    # os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = google_cloud_service_account
I'm facing an issue; the status code is 401:
"creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1"
The instance code:
resource "aws_instance" "test-EC2" {
  instance_type = "t2.micro"
  ami           = "ami-07ffb2f4d65357b42"
}
I have checked the AMI region, but it is still not working.
Any help would be appreciated.
I am looking for a way to create and destroy tokens via the management console provided by AWS. I am learning about the Terraform AWS provider, which requires an access key, a secret key, and a token.
As stated in the error message:
creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1
it is clear that Terraform is not able to authenticate itself via the Terraform AWS provider.
You have to have a provider block in your Terraform configuration and use one of the supported ways to authenticate.
provider "aws" {
region = var.aws_region
}
In general, the following are the ways to authenticate to AWS via the Terraform AWS provider:
- Parameters in the provider configuration
- Environment variables (see the sketch below)
- Shared credentials files
- Shared configuration files
- Container credentials
- Instance profile credentials and region
For more details, please take a look at: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
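For example, a minimal sketch of the environment-variable approach (the values shown are placeholders, not real credentials):
export AWS_ACCESS_KEY_ID="AKIAxxxxxxxxxxxxxxxx"     # placeholder
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxx" # placeholder
export AWS_DEFAULT_REGION="us-east-1"
terraform plan
With these set, the provider block itself only needs the region.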
By default, if you are already programmatically signed in to your AWS account, the Terraform AWS provider will use those credentials.
For example:
If you are using aws_access_key_id and aws_secret_access_key to authenticate yourself, you might have a profile for these credentials. You can check this in your $HOME/.aws/credentials config file.
Export the profile using the command below and you are good to go.
export AWS_PROFILE="name_of_profile_using_secrets"
If you have an SSO user for authentication:
You might have an SSO profile available in $HOME/.aws/config. In that case you need to sign in with the respective AWS SSO profile using the command below.
aws sso login --profile <sso_profile_name>
If you don't have an SSO profile yet, you can also configure one using the commands below and then export it.
aws configure sso
[....] # configure your SSO
export AWS_PROFILE=<your_sso_profile>
Do you have an aws provider block defined in your Terraform configuration?
provider "aws" {
region = var.aws_region
profile = var.aws_profile
}
If you are running this locally, have an IAM user profile set (use aws configure) and export that profile in your current session.
aws configure --profile xxx
export AWS_PROFILE=xxx
Once you have the profile set, this should work.
If you are running this deployment in a pipeline such as GitHub Actions, you could also make use of OpenID Connect to avoid any access key and secret key.
Please find the detailed setup for OpenId connect here.
I am trying to add an Airflow connection for GCP (the SA key should be fetched from Secret Manager), but in my Airflow UI (version 2.1.4) I couldn't find an option for adding one using Secret Manager. Is this because of a version problem?
If so, can we add the Airflow connection (using Secret Manager) via the command line (gcloud) or programmatically?
I tried via the command line but it throws the below error:
gcloud composer environments run project_id --location europe-west2 connections add -- edw_test --conn-type=google_cloud_platform --conn-extra '{"extra__google_cloud_platform__project": "proejct", "extra__google_cloud_platform__key_secret_name": "test_edw","extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for europe-west2--902058d8-gke.
Unable to connect to the server: dial tcp 172.16.10.2:443: i/o timeout
ERROR: (gcloud.composer.environments.run) kubectl returned non-zero status code.
I have upgraded both the Composer and Airflow versions, which paved the way for creating the Airflow connection by keeping the keys in Secret Manager.
You can do this by configuring airflow to use Secret Manager as a secrets backend. For this to work, however, the service account you use to access the backend needs to have permission to access secrets.
Secrets Backend
For example, you can set the value directly in airflow.cfg:
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
Via environment variable:
export AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
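If you need to override the defaults, backend_kwargs can be set as well; a sketch (the project_id and prefix values here are illustrative):
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
backend_kwargs = {"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "project_id": "your-project"}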
Creating Connection
Then you can create a secret directly in Secret Manager. If you have configured your Airflow instance to use Secret Manager as the secrets backend, it will pick up any secrets that have the correct prefix.
The default prefixes are:
- airflow-connections
- airflow-variables
- airflow-config
In your case, you would create a secret named airflow-connections-edw_test, and set the value to google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform
Note that the parameters have to be url encoded.
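For example, a quick way to produce the URL-encoded URI with Python's standard library (a sketch mirroring the connection above):
from urllib.parse import quote

scope = "https://www.googleapis.com/auth/cloud-platform"
uri = (
    "google-cloud-platform://?"
    "extra__google_cloud_platform__project=project"
    "&extra__google_cloud_platform__key_secret_name=test_edw"
    "&extra__google_cloud_platform__scope=" + quote(scope, safe="")
)
print(uri)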
More info:
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/secrets-backends/google-cloud-secret-manager-backend.html#enabling-the-secret-backend
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/connections/gcp.html
I am using the AWS Secrets Manager service to retrieve confidential information such as SMTP details or connection strings. However, to get a secret value from Secrets Manager, it seems we need to pass the access key and secret key in addition to which secret we want to retrieve, so I am maintaining those values in a config file.
public AwsSecretManagerService(IOptions<AwsAppSettings> settings)
{
    awsAppSettings = settings.Value;
    amazonSecretsManagerClient = new AmazonSecretsManagerClient(
        awsAppSettings.Accesskey,
        awsAppSettings.SecretKey,
        RegionEndpoint.GetBySystemName(awsAppSettings.Region));
}

public async Task<SecretValueResponse> GetSecretValueAsync(SecretValueRequest secretValueRequest)
{
    return _mapper.Map<SecretValueResponse>(
        await amazonSecretsManagerClient.GetSecretValueAsync(
            _mapper.Map<GetSecretValueRequest>(secretValueRequest)));
}
So I am thinking I am defeating the whole purpose of using Secrets Manager by maintaining the AWS credentials in the app settings file. I am wondering what the right way to do this is.
It is not a good practice to pass or hard-code the AWS credentials of an IAM user (access key and secret access key) in the code.
Instead, don't pass them and update your code as follows:
amazonSecretsManagerClient = new AmazonSecretsManagerClient(
    RegionEndpoint.GetBySystemName(awsAppSettings.Region));
Question: Then how would it access the AWS services?
Answer: If you are going to execute your code on your local system, install and configure the AWS CLI instead of passing AWS credentials via CLI or Terminal; the SDK will use those configured credentials to access the AWS services.
Reference for AWS CLI Installation: Installing the AWS CLI
Reference for AWS CLI Configuration: Configuring the AWS CLI
If you are going to execute your code on an AWS service (e.g., an EC2 instance), attach an IAM role with sufficient permissions to that AWS resource (e.g., the EC2 instance); the SDK will use that IAM role to access the AWS services.
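For illustration, a minimal Terraform sketch of attaching a role to an instance (the names and AMI are placeholders, and the role would still need a policy granting secretsmanager:GetSecretValue):
resource "aws_iam_role" "app" {
  name = "app-secrets-role"   # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_instance_profile" "app" {
  name = "app-secrets-profile"   # hypothetical name
  role = aws_iam_role.app.name
}

resource "aws_instance" "app" {
  ami                  = "ami-07ffb2f4d65357b42"   # placeholder AMI
  instance_type        = "t2.micro"
  iam_instance_profile = aws_iam_instance_profile.app.name
}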
For different AWS services, I need different IAM users to secure access control. Sometimes I even need to use different IAM user credentials within a single project on an EC2 instance. What's the proper way to manage this, and how can I deploy/attach these IAM user credentials to a single EC2 instance?
While I fully agree with the accepted answer that using static credentials is one way of solving this problem, I would like to suggest some improvements over it (and over the proposed Secrets Manager).
What I would advise as an architectural step forward, to achieve full isolation of credentials, keep them dynamic, and avoid storing them in a central place (the Secrets Manager proposed above), is dockerizing the application and running it on AWS Elastic Container Service (ECS). This way you can assign a different IAM role to different ECS tasks.
Benefits over the Secrets Manager solution:
- The use case of someone tampering with credentials in Secrets Manager is fully avoided, as the credentials are dynamic (temporary, and automatically assumed through the SDKs).
- Credentials are managed on the AWS side for you.
- Only the ECS service can assume this IAM role, meaning you can't have an actual person stealing the credentials, or a developer connecting to the production environment from their local machine with these credentials.
AWS Official Documentation for Task Roles
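For illustration, a minimal sketch of assigning a role to an ECS task in Terraform (the family, image, and sizes are placeholders, and it assumes an aws_iam_role.app_task role defined elsewhere):
resource "aws_ecs_task_definition" "app" {
  family        = "app"                       # placeholder family name
  task_role_arn = aws_iam_role.app_task.arn   # role the app containers assume
  container_definitions = jsonencode([{
    name      = "app"
    image     = "app:latest"   # placeholder image
    memory    = 512
    essential = true
  }])
}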
The normal way to provide credentials to applications running on an Amazon EC2 instance is to assign an IAM Role to the instance. Temporary credentials associated with the role will then be provided via Instance Metadata. The AWS SDKs will automatically use these credentials.
However, this only works for one set of credentials. If you wish to use more than one credential, you will need to provide the credentials in a credentials file.
The AWS credentials file can contain multiple profiles, e.g.:
[default]
aws_access_key_id = AKIAaaaaa
aws_secret_access_key = abcdefg
[user2]
aws_access_key_id = AKIAbbbb
aws_secret_access_key = xyzzy
As a convenience, this can also be configured via the AWS CLI:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAbbbb
AWS Secret Access Key [None]: xyzzy
Default region name [None]: us-east-1
Default output format [None]: text
The profile to use can be set via an Environment Variable:
Linux: export AWS_PROFILE="user2"
Windows: set AWS_PROFILE="user2"
Alternatively, when calling AWS services via an SDK, simply specify the Profile to use. Here is an example with Python from Credentials — Boto 3 documentation:
import boto3

session = boto3.Session(profile_name='user2')

# Any clients created from this session will use credentials
# from the [user2] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
There is an equivalent capability in the SDKs for other languages, too.
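For instance, in .NET, a sketch using CredentialProfileStoreChain from the AWS SDK for .NET (the profile name and region are placeholders):
using Amazon;
using Amazon.Runtime.CredentialManagement;
using Amazon.S3;

var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("user2", out var credentials))
{
    // Clients created with these credentials use the [user2] profile.
    var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USEast1);
}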