Get secret from AWS in systemd service file

I have a systemd service on Ubuntu and I want to retrieve the secret in the Environment section rather than hardcoding it, like this:
Environment="PASSWORD=`aws secretsmanager get-secret-value --secret-id aws_pwds | jq -r '.SecretString | fromjson | .test_pwd'`"
The command works in the shell, but the service fails to get it. Any suggestions why?
I also tried using $().
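systemd does not run Environment= values through a shell, so the backticks (and $()) are stored as literal text rather than executed; that is why the same command works in an interactive shell but not in the unit. A common workaround is a small wrapper script that resolves the secret and then execs the real process. A minimal sketch, assuming a hypothetical wrapper path and service binary (the secret lookup is the one from the question):
#!/bin/bash
# /usr/local/bin/my-service-wrapper.sh (hypothetical path)
set -euo pipefail
# Resolve the secret at start time, then hand off to the real service process
PASSWORD=$(aws secretsmanager get-secret-value --secret-id aws_pwds \
  | jq -r '.SecretString | fromjson | .test_pwd')
export PASSWORD
exec /usr/local/bin/my-service
The unit's ExecStart= then points at the wrapper instead of the binary, so no shell expansion is needed in the unit file itself.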

Related

Any way to populate an Azure DevOps library variable from a shell script?

While performing an Azure DevOps release, is it possible to populate an Azure DevOps library variable from a shell script?
My end goal is to use it in the "Replace tokens" task in the release pipeline so I can put the secret in a YAML file (much cleaner than what I currently have). Replace Tokens only works with ADO library variables.
My current workaround is using sed to replace what the secret gives me and output that to another yaml which I use to deploy Kubernetes. Any alternatives to this would be great!
Here is what I have now -
# Let's get the DB and Redis PW from AWS Secrets - used so we only have to set or change the passwords in one place - AWS Secrets
# Note that the AWS_secret_arn is different between stage and release and the variable is set in the library AppConfigs_xxxxx
DB_PW=$(aws secretsmanager get-secret-value --secret-id $(AWS_secret_arn) | jq -r '.SecretString' | jq -r '.db_pw')
echo " *** The secret is - " $DB_PW
# We are replacing the db_password with the one we acquired from AWS secrets
sed "s/db_pw_placeholder/$DB_PW/g" service.yaml > service-final.yaml
echo "### kubectl apply now running the service manifest ###"
kubectl apply -f service-final.yaml
I would also like to use the same methodology to get other parameters over from AWS to populate the ADO variable library - like an RDS DB endpoint.
If you use this Replace Tokens task, all you actually need is the AWS Secrets Manager Get Secret task. It maps a secret from AWS Secrets Manager into a secret variable, and since Replace Tokens works with secret variables, you should be fine.
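For reference, a sketch of what that step could look like in pipeline YAML. The task comes from the AWS Toolkit for Azure DevOps extension, and the task and input names below are assumptions, so check the extension's documentation for the exact identifiers:
steps:
- task: SecretsManagerGetSecret@1        # assumed task name from the AWS Toolkit extension
  inputs:
    awsCredentials: 'my-aws-connection'  # hypothetical service connection name
    regionName: 'eu-west-2'              # hypothetical region
    secretIdOrName: '$(AWS_secret_arn)'  # assumed input name; secret ID reused from the question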
I would like to share another method.
In addition to Azure DevOps library variables, pipeline variables defined during the build process can also be used with the Replace Tokens task.
You can make some modifications to your shell script:
For example:
DB_PW=$(aws secretsmanager get-secret-value --secret-id $(AWS_secret_arn) | jq -r '.SecretString' | jq -r '.db_pw')
echo " *** The secret is - " $DB_PW
echo "##vso[task.setvariable variable=test]$DB_PW"
Then the variable $(test) can be used directly in the Replace Tokens task (#{test}#).
This method uses the logging command to define variables in the pipeline.
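If the value should stay masked in the pipeline logs, the setvariable logging command also accepts an issecret flag (same variable name as in the answer above):
echo "##vso[task.setvariable variable=test;issecret=true]$DB_PW"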

Question about verifying if the credential rotation Lambda function for Secrets Manager is successful

I created a Lambda rotation function manually and configured it in the Secrets Manager console (enabled the rotation and told SM to use this newly created function). Everything looks fine so far, but I don't know how to verify that the rotation is working now.
I found this document and was going to follow step 4, 'Verify Successful Rotation', but the command they provide is not for the AWS CLI:
secret=$(aws secretsmanager get-secret-value --secret-id xxxxxxx | jq .SecretString | jq fromjson)
I got an error when I tried it in the AWS CLI:
'secret' is not recognized as an internal or external command,
operable program or batch file.
Their approach is to use the MySQL client; is there a way to test it in the AWS CLI or the command prompt? Many thanks.
You can use the AWS CLI to verify that the credentials were rotated.
You should also check with the MySQL client that you can use the rotated credentials to access the database - https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_db-rotate.html#tut-db-rotate-step5
This command - secret=$(aws secretsmanager get-secret-value --secret-id xxxxxxx | jq .SecretString | jq fromjson) - is a Linux shell command that uses the AWS CLI to retrieve the secret value and assign it to a shell variable called 'secret'. It won't run in the Windows command prompt, which is why you see the 'not recognized' error.
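One way to confirm from a Linux shell that a rotation actually produced a new value (a sketch; it assumes the rotation has run at least once so an AWSPREVIOUS version stage exists, and keeps the xxxxxxx placeholder from the question):
previous=$(aws secretsmanager get-secret-value --secret-id xxxxxxx --version-stage AWSPREVIOUS | jq -r '.SecretString')
current=$(aws secretsmanager get-secret-value --secret-id xxxxxxx --version-stage AWSCURRENT | jq -r '.SecretString')
# If rotation succeeded, the two stages hold different values
[ "$previous" != "$current" ] && echo "secret was rotated"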

Deleting CloudWatch log groups via the CLI

I need to delete CloudWatch log groups via the AWS CLI and am running this script:
aws logs describe-log-groups --region eu-west-2 | \
jq -r .logGroups[].logGroupName | \
xargs -L 1 -I {} aws logs delete-log-group --log-group-name {}
The first bit works fine, returning the log group names. Unfortunately, I can't seem to pipe it to xargs, and it returns the following error:
An error occurred (ResourceNotFoundException) when calling the DeleteLogGroup operation: The specified log group does not exist.
I would appreciate any pointers.
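One thing worth checking first (a guess based on the script shown, not something confirmed in the thread): describe-log-groups is pinned to eu-west-2, but delete-log-group is not, so the delete runs against your default region, where those groups may not exist - which would produce exactly this ResourceNotFoundException. Passing the region to both calls avoids the mismatch:
aws logs describe-log-groups --region eu-west-2 \
| jq -r '.logGroups[].logGroupName' \
| xargs -L 1 -I {} aws logs delete-log-group --region eu-west-2 --log-group-name {}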
As an alternative to using the AWS CLI, here's a Python script to delete log groups:
import boto3

logs_client = boto3.client('logs')

# Note: describe_log_groups returns one page of results (up to 50 groups);
# use the describe_log_groups paginator if you have more
response = logs_client.describe_log_groups()
for log in response['logGroups']:
    logs_client.delete_log_group(logGroupName=log['logGroupName'])

Launch ECS container instance to cluster and run task definition using userdata

I am trying to launch an ECS container instance, passing user data to register it to a cluster and also run a task definition.
When the task is complete the instance will be terminated.
I am using the guide on AWS docs to start a task at container launch.
Below is the user data (cluster and task definition params omitted):
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=my_cluster
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
When the instance is created, it is launched into the default cluster, not the one I specify in the user data, and no tasks are started.
I have deconstructed the above script to work out where it is failing, but I've had no luck.
Any help would be appreciated.
From the AWS documentation:
Configure your Amazon ECS container instance with user data, such as the agent environment variables from Amazon ECS Container Agent Configuration. Amazon EC2 user data scripts are executed only one time, when the instance is first launched.
By default, your container instance launches into your default cluster. To launch into a non-default cluster, choose the Advanced Details list. Then, paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
So, in order for you to be able to add that EC2 instance to your ECS cluster, you should change this variable to the name of your cluster:
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
Change your_cluster_name to your cluster's actual name.
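To check on the instance whether the agent actually picked the value up (a quick diagnostic using the same agent metadata endpoint as the script above):
# Confirm the cluster name was written before the agent started
cat /etc/ecs/ecs.config
# Ask the running agent which cluster it registered with
curl -s http://localhost:51678/v1/metadata | jq -r '.Cluster'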

AWS EC2 user data docker system prune before start ecs task

I have followed the below code from AWS to start an ECS task when the EC2 instance launches. This works great.
However, my containers only run for a few minutes (max ten), and once finished the EC2 instance is shut down using a CloudWatch rule.
The problem I am finding is that, because the instances shut down straight after the task is finished, the automatic clean-up of the Docker containers doesn't happen, resulting in the EC2 instance filling up and other tasks failing. I have tried lowering the time between clean-ups, but it can still be a bit flaky.
My next idea was to add docker system prune -a -f to the user data of the EC2 instance, but it doesn't seem to get run. I think it's because I am putting it in the wrong part of the user data; I have searched through the docs for this but can't find anything that helps.
Question: where can I put the docker prune command in the user data to ensure the prune is run at each launch? (My current user data is below, with a suggested placement after it.)
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
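Two things stand out (suggestions based on the script above, not confirmed in the thread). First, as the AWS documentation quoted earlier says, user data shell scripts are executed only once, when the instance is first launched, so a prune placed in the x-shellscript part will never run on later boots of the same instance. Second, Docker may not be up yet when that part runs. The upstart job, by contrast, runs on every boot once the ECS agent starts, so one placement that avoids both problems is inside the upstart script, after the agent responds and before start-task - a sketch:
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Docker and the ECS agent are up at this point; clear leftovers from earlier boots
docker system prune -a -f
# ...then run the existing aws ecs start-task command as before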
I hadn't considered terminating and then creating a new instance.
I currently use CloudFormation to create the EC2 instance.
What's the best workflow for terminating an EC2 instance after the task definition has completed, then creating a new one on a schedule and registering it to the ECS cluster?
A CloudWatch scheduled rule to start a Lambda that creates the EC2 instance, which then registers to the cluster?
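That workflow is workable. A rough CLI-level sketch of the pieces (the rule name, AMI ID, and instance type are hypothetical placeholders; wiring the Lambda up as the rule's target is omitted):
# Scheduled rule that triggers the Lambda
aws events put-rule --name run-nightly-task --schedule-expression "rate(1 day)"
# Inside the Lambda, launch the instance with the same multipart user data as above.
# The shutdown-behavior flag makes a shutdown initiated from inside the instance
# terminate it rather than stop it, so finished instances clean themselves up.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro \
--user-data file://userdata.mime \
--instance-initiated-shutdown-behavior terminate
The ECS_CLUSTER line in the user data takes care of registering the new instance to the cluster.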