Terraform Import AWS Secrets Manager Secret Version - amazon-web-services

AWS maintains a secret versioning system: a new version is created whenever the secret value is updated or the secret is rotated.
I am in the process of bringing existing secrets in AWS under the purview of Terraform. As step 1, I declared all the Terraform resources I needed:
resource "aws_secretsmanager_secret" "secret" {
  name                    = var.secret_name
  description             = var.secret_description
  kms_key_id              = aws_kms_key.main.id
  recovery_window_in_days = var.recovery_window_in_days
  tags                    = var.secret_tags
  policy                  = data.aws_iam_policy_document.secret_access_policy.json
}

// AWS Secrets Manager secret version
resource "aws_secretsmanager_secret_version" "secret" {
  secret_id     = aws_secretsmanager_secret.secret.id
  secret_string = jsonencode(var.secret_name_in_secrets_file)
}
Next I imported.
Import the secret into state:

terraform import module.<module_name>.aws_secretsmanager_secret.secret arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>

Import the secret version into state:

terraform import module.<module_name>.aws_secretsmanager_secret_version.secret arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>|<version_id of AWSCURRENT>
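For reference, the version ID to append after the pipe (the one currently staged AWSCURRENT) can be looked up with something like the following; the secret name is a placeholder:

```shell
# List the secret's versions and print the VersionId labeled AWSCURRENT
aws secretsmanager list-secret-version-ids --secret-id <secret_name> \
  --query "Versions[?contains(VersionStages, 'AWSCURRENT')].VersionId" \
  --output text
```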
After this, I expected the Terraform plan to only involve changes to the resource policy. But Terraform tried to destroy and recreate the secret version, which did not make sense to me.
After going ahead with the plan, the version that was initially associated with the AWSCURRENT staging label (the one I used above in the import) moved to the AWSPREVIOUS staging label, and a new AWSCURRENT version was created.
Before terraform import :
{
    "Versions": [
        {
            "VersionId": "initial-current",
            "VersionStages": [
                "AWSCURRENT"
            ],
            "LastAccessedDate": "xxxx",
            "CreatedDate": "xxx"
        },
        {
            "VersionId": "initial-previous",
            "VersionStages": [
                "AWSPREVIOUS"
            ],
            "LastAccessedDate": "xxxx",
            "CreatedDate": "xxxx"
        }
    ],
    "ARN": "xxxx",
    "Name": "xxxx"
}
After TF import and apply:
{
    "Versions": [
        {
            "VersionId": "post-import-current",
            "VersionStages": [
                "AWSCURRENT"
            ],
            "LastAccessedDate": "xxxx",
            "CreatedDate": "xxx"
        },
        {
            "VersionId": "initial-current",
            "VersionStages": [
                "AWSPREVIOUS"
            ],
            "LastAccessedDate": "xxxx",
            "CreatedDate": "xxxx"
        }
    ],
    "ARN": "xxxx",
    "Name": "xxxx"
}
I was expecting initial-current to remain under the AWSCURRENT stage. Why did AWS move the version ID I imported from AWSCURRENT to AWSPREVIOUS and create a new AWSCURRENT version, given that nothing changed in terms of value or rotation? I expected no changes on that front since Terraform imported the version.

Related

How to remove bigquery dataset permission via CLI

I want to remove multiple dataset permissions via the CLI (if possible in one go). Is there a way to do this via the CLI?
For example:
abc@gmail.com with roles/bigquery.dataOwner on dataset demo_aa
xyzGroupSA@gmail.com with roles/bigquery.user on dataset demo_bb
I want to remove each email's permission from the respective dataset via the CLI with bq.
[I went through the reference https://cloud.google.com/bigquery/docs/dataset-access-controls#bq_1 ; it relies on a local file reference and is very lengthy. But what about when you have a jump server in a production environment and need to do this by running commands?]
You can delegate this responsibility to CI/CD, for example.
Solution 1:
You can create a project:
my-project
------dataset_accesses.json
Run your script in Cloud Shell on your production project, or in a Docker container based on the gcloud-sdk image.
Use a service account that has the permissions to update dataset accesses.
Run the bq update command with the --source flag:
bq update --source dataset_accesses.json mydataset
dataset_accesses.json:
{
    "access": [
        {
            "role": "READER",
            "specialGroup": "projectReaders"
        },
        {
            "role": "WRITER",
            "specialGroup": "projectWriters"
        },
        {
            "role": "OWNER",
            "specialGroup": "projectOwners"
        },
        {
            "role": "READER",
            "specialGroup": "allAuthenticatedUsers"
        },
        {
            "role": "READER",
            "domain": "domain_name"
        },
        {
            "role": "WRITER",
            "userByEmail": "user_email"
        },
        {
            "role": "WRITER",
            "userByEmail": "service_account_email"
        },
        {
            "role": "READER",
            "groupByEmail": "group_email"
        }
    ],
    ...
}
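Putting Solution 1 together, a typical session from a jump server might look like this; the project and dataset names are placeholders:

```shell
# Dump the dataset's current metadata (including the "access" array) to a file
bq show --format=prettyjson my_project:demo_aa > dataset_accesses.json

# Edit dataset_accesses.json and delete the unwanted entries from the "access"
# array, e.g. {"role": "OWNER", "userByEmail": "abc@gmail.com"}, then apply:
bq update --source dataset_accesses.json my_project:demo_aa
```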
Solution 2:
Use Terraform to update the permissions on your dataset:

resource "google_bigquery_dataset_access" "access" {
  dataset_id    = google_bigquery_dataset.dataset.dataset_id
  role          = "OWNER"
  user_by_email = google_service_account.bqowner.email
}

With Terraform, it is also easy to pass in a list and apply the resource over it:
JSON file:

{
    "datasets_members": {
        "dataset_your_group1": {
            "dataset_id": "your_dataset",
            "member": "group:your_group@loreal.com",
            "role": "bigquery.dataViewer"
        },
        "dataset_your_group2": {
            "dataset_id": "your_dataset",
            "member": "group:your_group2@loreal.com",
            "role": "bigquery.dataViewer"
        }
    }
}
locals.tf:

locals {
  datasets_members = jsondecode(file("${path.module}/resource/datasets_members.json"))["datasets_members"]
}

resource "google_bigquery_dataset_access" "accesses" {
  for_each       = local.datasets_members
  dataset_id     = each.value["dataset_id"]
  role           = each.value["role"]
  group_by_email = each.value["member"]
}

This also works with google_bigquery_dataset_iam_binding.
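For reference, a sketch of the same loop with google_bigquery_dataset_iam_binding; note that this resource takes a list of members and an IAM-style role name, and the local values are assumed to be the ones from the JSON above:

```hcl
resource "google_bigquery_dataset_iam_binding" "accesses" {
  for_each = local.datasets_members

  dataset_id = each.value["dataset_id"]
  role       = "roles/bigquery.dataViewer"
  members    = [each.value["member"]]
}
```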

Terraform v0.11 accessing remote state empty outputs

I have a few Terraform state files with empty attributes, and a few with some values:
{
    "version": 3,
    "terraform_version": "0.11.14",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
I want to fall back to a default value, say "Null", if the attributes don't exist. My remote state config is:

data "terraform_remote_state" "name" {
  backend = "s3"
  config {
    .....
  }
}

If I use "${lookup(data.terraform_remote_state.name.outputs, "attribute", "Null")}", it throws the error: 'data.terraform_remote_state.name' does not have attribute 'outputs'.
Can someone guide me to solve this?
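One option, assuming the `defaults` argument of the `terraform_remote_state` data source (which supplies fallback values when the remote state lacks an output) and the v0.11 convention of reading outputs directly as attributes of the data source:

```hcl
data "terraform_remote_state" "name" {
  backend = "s3"

  config {
    # ... same backend config as above ...
  }

  # Used when the remote state has no such output
  defaults = {
    attribute = "Null"
  }
}

# v0.11 syntax: outputs are attributes of the data source itself
# value = "${data.terraform_remote_state.name.attribute}"
```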

Cannot assume role by code pipeline on code pipeline action AWS CDK

I have been playing with AWS CDK and was building a CodePipeline stack in my AWS Educate account. The user I am using has enough permissions to access and use CodePipeline. My problem is that AWS CDK generates a role for the CodePipeline action whose Principal is the ARN of the account root, so the pipeline does not have permission to assume that role.
Action code:
{
  stageName: "Build",
  actions: [
    new codepipelineActions.CodeBuildAction({
      actionName: "Build",
      input: sourceOutput,
      project: builder,
    }),
  ],
}
CloudFormation template output:
"devPipelineBuildCodePipelineActionRole8696D056": {
    "Type": "AWS::IAM::Role",
    "Properties": {
        "AssumeRolePolicyDocument": {
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": {
                            "Fn::Join": [
                                "",
                                [
                                    "arn:",
                                    { "Ref": "AWS::Partition" },
                                    ":iam::",
                                    { "Ref": "AWS::AccountId" },
                                    ":root"
                                ]
                            ]
                        }
                    }
                }
            ],
            "Version": "2012-10-17"
        }
    },
    "Metadata": {
        "aws:cdk:path": "PipeLineStack/dev-Pipeline/Build/Build/CodePipelineActionRole/Resource"
    }
}
...
{
    "Actions": [
        {
            "ActionTypeId": {
                "Category": "Build",
                "Owner": "AWS",
                "Provider": "CodeBuild",
                "Version": "1"
            },
            "Configuration": {
                "ProjectName": { "Ref": "BuildAndTestB9A2F419" }
            },
            "InputArtifacts": [
                { "Name": "SourceOutput" }
            ],
            "Name": "Build",
            "RoleArn": {
                "Fn::GetAtt": [
                    "devPipelineBuildCodePipelineActionRole8696D056",
                    "Arn"
                ]
            },
            "RunOrder": 1
        }
    ],
    "Name": "Build"
}
This will throw the error:

arn:aws:iam::acount_id:role/PipeLineStack-devPipelineRole5B29FEBC-1JK24J0K5N1UG is not authorized to perform AssumeRole on role arn:aws:iam::acount_id:role/PipeLineStack-devPipelineBuildCodePipelineActionRo-17ETJU1KZCCNQ (Service: AWSCodePipeline; Status Code: 400; Error Code: InvalidStructureException; Request ID: c8c8af89-2409-4cc1-aad8-4de553a1764f; Proxy: null)

If I remove the RoleArn from the action and execute the template, it works.
My question is: how do I prevent CDK from adding a default role whose Principal is the account root, or is there a workaround?
It looks like actions are not allowed to assume any role in AWS Educate currently. As a workaround, and to remove the manual overhead, use CDK L1 constructs to modify the generated CloudFormation.
The pipeline can be created like:
// Custom role to pass in to the pipeline
const pipeLineRole = new iam.Role(this, "CodePipeLineRole", {
  assumedBy: new iam.ServicePrincipal("codepipeline.amazonaws.com"),
});
pipeLineRole.addToPolicy(
  // Required policy for each action to run
);
const pipeline = new codepipeline.Pipeline(this, "Pipeline", {
  role: pipeLineRole,
  stages: [
    // ...
    {
      actions: [action1, action2],
    },
    // ...
  ],
});
// Altering cloudformation to remove role arn from actions
const pipelineCfn = pipeline.node.defaultChild as cdk.CfnResource;
// addDeletionOverride removes the property from the cloudformation itself
// Delete action arn for every stage and action created
pipelineCfn.addDeletionOverride("Properties.Stages.1.Actions.0.RoleArn");
pipelineCfn.addDeletionOverride("Properties.Stages.2.Actions.0.RoleArn");
pipelineCfn.addDeletionOverride("Properties.Stages.3.Actions.0.RoleArn");
This is a workaround and it works, but it still leaves unwanted, dangling policies and roles that were created for the individual actions and are no longer attached to any service.
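If the pipeline has many stages, the hard-coded override paths can be generated instead of listed by hand; a small sketch, where the per-stage action counts are an assumption about your pipeline:

```typescript
// Build the full override path for every action's RoleArn, given how many
// actions each stage has (index 0 is the first stage, e.g. Source).
function roleArnOverridePaths(actionsPerStage: number[]): string[] {
  const paths: string[] = [];
  actionsPerStage.forEach((actionCount, stageIdx) => {
    for (let a = 0; a < actionCount; a++) {
      paths.push(`Properties.Stages.${stageIdx}.Actions.${a}.RoleArn`);
    }
  });
  return paths;
}

// Usage sketch: skip the Source stage, one action in each of three later stages
// roleArnOverridePaths([0, 1, 1, 1]).forEach(p => pipelineCfn.addDeletionOverride(p));
```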
The following code in pipeline configuration:
"RoleArn": {
    "Fn::GetAtt": [
        "devPipelineBuildCodePipelineActionRole8696D056",
        "Arn"
    ]
},
... means that when the CodePipeline service invokes the "Build" action, it will assume the role "devPipelineBuildCodePipelineActionRole8696D056". But this role does not have a trust policy with the "codepipeline.amazonaws.com" service, hence the error.
The RoleArn property under the action is useful when you have a cross-account action (the CodeBuild project is in another account), so unless that is the case, it is better to drop this property.
We will need to see the cdk code to answer your question:
How do I prevent CDK to prevent adding default role with Principle using the root account or a work around to it?
Subesh's code works in removing RoleArn, but in my AWS environment RoleArn is still required. I am trying to replace it with an existing role, but my code still only removes RoleArn. What is wrong with it?
pipelineCfn.addDeletionOverride("Properties.Stages.1.Actions.0.RoleArn");
pipelineCfn.addDeletionOverride("Properties.Stages.2.Actions.0.RoleArn");
pipelineCfn.addPropertyOverride(
"Properties.Stages.1.Actions.0.RoleArn",
pipeline_role_arn
);
pipelineCfn.addPropertyOverride(
"Properties.Stages.2.Actions.0.RoleArn",
pipeline_role_arn
);
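A likely cause, assuming CDK's override semantics: addPropertyOverride paths are already scoped under Properties (it is sugar for addOverride("Properties." + path, value)), while addDeletionOverride takes the full path from the resource root. Repeating the "Properties." prefix in addPropertyOverride therefore writes to a non-existent Properties.Properties... key. A sketch of the corrected calls:

```typescript
// addDeletionOverride takes the full path from the resource root...
pipelineCfn.addDeletionOverride("Properties.Stages.1.Actions.0.RoleArn");
// ...but addPropertyOverride is already relative to Properties,
// so drop the "Properties." prefix when setting a replacement role:
pipelineCfn.addPropertyOverride("Stages.1.Actions.0.RoleArn", pipeline_role_arn);
```

You may also need to drop the deletion override for the same path, since both overrides target the same key.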

How to check that AWS Secrets Manager rotation completed successfully

I created a secret in AWS Secrets Manager and enabled automatic rotation with a Lambda function.
When I trigger rotation for the first time from the CLI, it doesn't complete. This is the initial state of the secret after updating it manually in the AWS console:
# aws secretsmanager list-secret-version-ids --secret-id ******
{
    "Versions": [
        {
            "VersionId": "9e82b9e2-d074-478e-83a5-baf4e578cb49",
            "VersionStages": [
                "AWSCURRENT"
            ],
            "LastAccessedDate": 1592870400.0,
            "CreatedDate": 1592889913.431
        },
        {
            "VersionId": "e32ddaf8-7f21-40e2-adf8-f976b8f3f104",
            "VersionStages": [
                "AWSPREVIOUS"
            ],
            "LastAccessedDate": 1592870400.0,
            "CreatedDate": 1592887518.46
        }
    ],
    "ARN": "arn:aws:secretsmanager:us-east-1:***********:secret:***********",
    "Name": "*******"
}
Now I triggered rotation from the AWS CLI:
aws secretsmanager rotate-secret --secret-id ******
# aws secretsmanager list-secret-version-ids --secret-id ********
{
    "Versions": [
        {
            "VersionId": "704102f3-b36d-4529-b257-0457354d3c93",
            "VersionStages": [
                "AWSPENDING"
            ],
            "CreatedDate": 1592890351.334
        },
        {
            "VersionId": "e32ddaf8-7f21-40e2-adf8-f976b8f3f104",
            "VersionStages": [
                "AWSPREVIOUS"
            ],
            "LastAccessedDate": 1592870400.0,
            "CreatedDate": 1592887518.46
        },
        {
            "VersionId": "9e82b9e2-d074-478e-83a5-baf4e578cb49",
            "VersionStages": [
                "AWSCURRENT"
            ],
            "LastAccessedDate": 1592870400.0,
            "CreatedDate": 1592889913.431
        }
    ],
    "ARN": "arn:aws:secretsmanager:us-east-1:**********:secret:********",
    "Name": "********"
}
The CloudWatch log stops at createSecret: Successfully put secret for ARN arn:aws:secretsmanager:xxxxxxx... It looks like only the createSecret function is called.
When I rotate the secret again, I get this output in the CLI:

An error occurred (InvalidRequestException) when calling the RotateSecret operation: A previous rotation isn't complete. That rotation will be reattempted.

I am unable to understand what's happening. Can someone help?
Unfortunately there is no out-of-the-box way to do that, as Secrets Manager has neither a built-in SNS notification nor a CloudWatch Events event for when rotation completes.
Thus, you have to construct a solution yourself, which can be done using the SDK or the CLI.
With the CLI you can call describe-secret and poll the secret details in a loop. In the loop, you have to look at which versions the AWSPENDING and AWSCURRENT labels are attached to.
From the docs:
If instead the AWSPENDING staging label is present but is not attached to the same version as AWSCURRENT then any later invocation of RotateSecret assumes that a previous rotation request is still in progress and returns an error.
So basically, looking at your output:
{
    "VersionId": "704102f3-b36d-4529-b257-0457354d3c93",
    "VersionStages": [
        "AWSPENDING"
    ],
    "CreatedDate": 1592890351.334
}
you have a version with the AWSPENDING label that is not attached to the same version as AWSCURRENT. This indicates that the rotation is still in progress.
The rotation is complete when the secret is in one of two states:
the AWSPENDING and AWSCURRENT staging labels are attached to the same version of the secret, or the AWSPENDING staging label is not attached to any version of the secret.
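The two completion states above can be encoded as a small check over the VersionIdsToStages map returned by describe-secret; this is a minimal sketch with the polling loop and error handling left out:

```python
def rotation_complete(version_ids_to_stages):
    """Return True if rotation is done, given the VersionIdsToStages map
    from `aws secretsmanager describe-secret` (version id -> list of labels)."""
    pending = {v for v, stages in version_ids_to_stages.items() if "AWSPENDING" in stages}
    current = {v for v, stages in version_ids_to_stages.items() if "AWSCURRENT" in stages}
    # Done when no AWSPENDING version exists, or AWSPENDING and AWSCURRENT
    # are attached to the same version.
    return not pending or bool(pending & current)

# In-progress example from the question: AWSPENDING on its own version
in_progress = {
    "704102f3-b36d-4529-b257-0457354d3c93": ["AWSPENDING"],
    "9e82b9e2-d074-478e-83a5-baf4e578cb49": ["AWSCURRENT"],
}
print(rotation_complete(in_progress))  # False
```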
Secrets Manager publishes a 'RotationSucceeded' event via CloudTrail when a rotation succeeds.
See this for more information on how to set up a CloudWatch alarm off that CloudTrail event: https://docs.aws.amazon.com/secretsmanager/latest/userguide/monitoring.html

AWS CodePipeline: Execute deploy action in a different region than the one the pipeline is triggered in

I'm setting up a pipeline to automate CloudFormation stack template deployments.
The pipeline itself is created in the AWS eu-west-1 region, but the CloudFormation stack templates should be deployable in any other region.
I know how to execute a pipeline action in a different account, but I don't see where to specify the region I would like my template to be deployed in, like we do with the AWS CLI: aws cloudformation deploy --region .....
Is there any way to trigger a pipeline in one region and execute a deploy action in another region?
The action configuration properties don't offer such a possibility...
A workaround would be to run the aws cli deploy command in the CodeBuild container and specify the desired region, but I would like to know if there is a more elegant way to do it.
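For reference, the CodeBuild workaround mentioned above amounts to something like the following; the region, stack name, and template file are placeholders:

```shell
# Hypothetical buildspec step: deploy to a region other than the pipeline's own
aws cloudformation deploy \
  --region us-west-2 \
  --stack-name my-cross-region-stack \
  --template-file template.yml \
  --capabilities CAPABILITY_IAM
```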
If you're looking to deploy to multiple regions, one after the other, you could create a Code Pipeline pipeline in every region you want to deploy to, and set up S3 cross-region replication so that the output of the first pipeline becomes the input to a pipeline in the next region.
Here's a blog post explaining this further: https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/
Since late Nov 2018, CodePipeline supports cross-region deploys. However, it still leaves a lot to be desired, as you need to create artifact buckets in each region and copy the deployment artifacts to them (e.g. in the CodeBuild container, as you mentioned) before the Deploy action is triggered. So it's not as automated as it could be, but if you go through the process of setting it up, it works well.
CodePipeline now supports cross-region deployment; to run an action in a different region, specify the "Region": "us-west-2" property on the CloudFormation action, which will trigger the deployment in that specific region.
Steps to follow for this setup:
Create two buckets in two different regions, for example a bucket in us-east-1 and a bucket in us-west-2 (you can also use the buckets already created by CodePipeline when you first set up a pipeline in each region).
Configure the pipeline so that it uses the respective bucket when taking an action in the respective region.
Specify the region in the action for CodePipeline.
Note: I have attached a sample CloudFormation template which will help you do the cross-region CloudFormation deployment.
{
    "Parameters": {
        "BranchName": {
            "Description": "CodeCommit branch name for all the resources",
            "Type": "String",
            "Default": "master"
        },
        "RepositoryName": {
            "Description": "CodeCommit repository name",
            "Type": "String",
            "Default": "aws-account-resources"
        },
        "CFNServiceRoleDeployA": {
            "Description": "CFN service role for creating resources for account-A",
            "Type": "String",
            "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/CloudFormation-service-role-cp"
        },
        "CodePipelineServiceRole": {
            "Description": "Service role for CodePipeline",
            "Type": "String",
            "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/AWS-CodePipeline-Service"
        },
        "CodePipelineArtifactStoreBucket1": {
            "Description": "S3 bucket to store the artifacts",
            "Type": "String",
            "Default": "bucket-us-east-1"
        },
        "CodePipelineArtifactStoreBucket2": {
            "Description": "S3 bucket to store the artifacts",
            "Type": "String",
            "Default": "bucket-us-west-2"
        }
    },
    "Resources": {
        "AppPipeline": {
            "Type": "AWS::CodePipeline::Pipeline",
            "Properties": {
                "Name": { "Fn::Sub": "${AWS::StackName}-cross-account-pipeline" },
                "ArtifactStores": [
                    {
                        "ArtifactStore": {
                            "Type": "S3",
                            "Location": { "Ref": "CodePipelineArtifactStoreBucket1" }
                        },
                        "Region": "us-east-1"
                    },
                    {
                        "ArtifactStore": {
                            "Type": "S3",
                            "Location": { "Ref": "CodePipelineArtifactStoreBucket2" }
                        },
                        "Region": "us-west-2"
                    }
                ],
                "RoleArn": { "Ref": "CodePipelineServiceRole" },
                "Stages": [
                    {
                        "Name": "Source",
                        "Actions": [
                            {
                                "Name": "SourceAction",
                                "ActionTypeId": {
                                    "Category": "Source",
                                    "Owner": "AWS",
                                    "Version": 1,
                                    "Provider": "CodeCommit"
                                },
                                "OutputArtifacts": [
                                    { "Name": "SourceOutput" }
                                ],
                                "Configuration": {
                                    "BranchName": { "Ref": "BranchName" },
                                    "RepositoryName": { "Ref": "RepositoryName" },
                                    "PollForSourceChanges": true
                                },
                                "RunOrder": 1
                            }
                        ]
                    },
                    {
                        "Name": "Deploy-to-account-A",
                        "Actions": [
                            {
                                "Name": "stage-1",
                                "InputArtifacts": [
                                    { "Name": "SourceOutput" }
                                ],
                                "ActionTypeId": {
                                    "Category": "Deploy",
                                    "Owner": "AWS",
                                    "Version": 1,
                                    "Provider": "CloudFormation"
                                },
                                "Configuration": {
                                    "ActionMode": "CREATE_UPDATE",
                                    "StackName": "cloudformation-stack-name-account-A",
                                    "TemplatePath": "SourceOutput::accountA.json",
                                    "Capabilities": "CAPABILITY_IAM",
                                    "RoleArn": { "Ref": "CFNServiceRoleDeployA" }
                                },
                                "RunOrder": 2,
                                "Region": "us-west-2"
                            }
                        ]
                    }
                ]
            }
        }
    }
}