I want a list of my S3 buckets in TF.
With the AWS CLI I can get that list and format it into JSON:
my-mac$ eval aws s3 ls | awk '{print $3}' | jq -R -s -c 'split("\n")' | jq '.[:-1]'
[
"xxx-bucket-1",
"xxx-bucket-2"
]
But TF won't fetch it, and it's driving me mad.
Here's my data source config:
data "external" "bucket_objects" {
program = ["bash", "cli.sh"]
}
The shell script file contents:
#!/bin/bash
set -e
eval aws s3 ls | awk '{print $3}' | jq -R -s -c 'split("\n")' | jq '.[:-1]'
And finally, the error message:
│ Error: Unexpected External Program Results
│
│ with data.external.bucket_objects,
│ on data.tf line 22, in data "external" "bucket_objects":
│ 22: program = ["bash", "cli.sh"]
│
│ The data source received unexpected results after executing the program.
│
│ Program output must be a JSON encoded map of string keys and string values.
│
│ If the error is unclear, the output can be viewed by enabling Terraform's
│ logging at TRACE level. Terraform documentation on logging:
│ https://www.terraform.io/internals/debugging
│
│ Program: /bin/bash
│ Result Error: json: cannot unmarshal array into Go value of type
│ map[string]string
My shell scripting game is weak and it's going to be something dumb, but I don't know what that is.
Here is an example that worked in my case:
#!/bin/bash
set -e
BUCKET_NAMES=$(aws s3 ls | awk '{print $3}' | jq -R -s -c 'split("\n")' | jq '.[:-1]')
jq -n --arg bucket_names "$BUCKET_NAMES" '{"bucket_names":$bucket_names}'
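Why this works: as the error says, the external data source only accepts a JSON object of string keys and string values, so the array has to be passed through as an encoded string. On the Terraform side you can decode it back into a list; a minimal sketch (the locals/output names here are just illustrative):
locals {
  # Decode the JSON-encoded string back into a Terraform list.
  bucket_names = jsondecode(data.external.bucket_objects.result.bucket_names)
}

output "bucket_names" {
  value = local.bucket_names
}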
Related
I am able to display the OSConfig guest policies that are applied to a Google Cloud Platform (GCP) Compute Engine (GCE) instance ($GCE_INSTANCE_NAME) using the Cloud SDK (gcloud):
gcloud beta compute os-config guest-policies lookup \
$GCE_INSTANCE_NAME \
--zone=$GCE_INSTANCE_ZONE
#=>
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ SOFTWARE RECIPES │
├───────────────────────────────────────────────────────────┬────────────────────┬─────────┬───────────────┤
│ SOURCE │ NAME │ VERSION │ DESIRED_STATE │
├───────────────────────────────────────────────────────────┼────────────────────┼─────────┼───────────────┤
│ projects/$GCP_PROJECT_ID/guestPolicies/. . . │ . . . │ . . . │ . . . │
│ projects/$GCP_PROJECT_ID/guestPolicies/$GUEST_POLICY_NAME │ $GUEST_POLICY_NAME │ 1.0 │ INSTALLED │
│ projects/$GCP_PROJECT_ID/guestPolicies/. . . │ . . . │ . . . │ . . . │
└───────────────────────────────────────────────────────────┴────────────────────┴─────────┴───────────────┘
How would I retrieve the same response using the REST API? The lookup method seems to be missing from the projects.guestPolicies resource page here.
You're looking for the projects.zones.instances.lookupEffectiveGuestPolicy REST method, found here.
An example for a guest policy that installs software on any version of Ubuntu:
curl \
--data-raw '{ "osArchitecture": "", "osShortName": "UBUNTU", "osVersion": "" }' \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header 'Content-Type: text/plain' \
--location \
--request POST \
"https://osconfig.googleapis.com/v1beta/projects/$GCP_PROJECT_NUMBER/zones/$GCE_INSTANCE_ZONE/instances/$GCE_INSTANCE_NAME:lookupEffectiveGuestPolicy"
#=>
{
"softwareRecipes": [
. . .
{
"source": "projects/$GCP_PROJECT_NUMBER/guestPolicies/$GUEST_POLICY_NAME",
"softwareRecipe": {
"name": "$GUEST_POLICY_NAME",
"version": "1.0",
. . .
"desiredState": "INSTALLED"
}
},
. . .
]
}
Note: $GCP_PROJECT_NUMBER is different from $GCP_PROJECT_ID:
gcloud projects describe $GCP_PROJECT_ID
#=>
. . .
projectId: $GCP_PROJECT_ID
projectNumber: "$GCP_PROJECT_NUMBER"
Note: The values for the POST body keys
"osArchitecture"
"osShortName"
"osVersion"
can be found for $GUEST_POLICY_NAME using either gcloud:
gcloud beta compute os-config guest-policies describe \
$GUEST_POLICY_NAME \
--flatten="assignment.osTypes" \
--format="table[box](assignment.osTypes.osArchitecture,
assignment.osTypes.osShortName,
assignment.osTypes.osVersion)"
#=>
┌─────────────────┬───────────────┬────────────┐
│ OS_ARCHITECTURE │ OS_SHORT_NAME │ OS_VERSION │
├─────────────────┼───────────────┼────────────┤
│ . . . │ . . . │ . . . │
│ │ UBUNTU │ │
│ . . . │ . . . │ . . . │
└─────────────────┴───────────────┴────────────┘
or the REST API:
curl \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--location \
--request GET \
"https://osconfig.googleapis.com/v1beta/projects/$GCP_PROJECT_NUMBER/guestPolicies/$GUEST_POLICY_NAME"
#=>
{
. . .
"assignment": {
. . .
"osTypes": [
. . .
{
"osShortName": "UBUNTU"
}
. . .
]
. . .
},
. . .
}
Note: If "osArchitecture" and/or "osVersion" are missing or blank, you should leave those values as empty strings ("") when using the REST method above.
I'm building a Terraform config and I'm stuck with a tricky AMI selection. My CI builds AMIs and adds a MyVersionTag to them based on the current build of the app on the CI. I would like to select the AMI based on this tag, sorted by version (X.Y.Z format), so I can take the latest.
Here is my command line using the AWS CLI to select the AMI I want to use:
aws ec2 describe-images --filters 'Name=tag-key,Values=MyVersionTag' --query 'reverse(sort_by(Images[].{TagValue:Tags|[0].Value,ImageId:ImageId},&TagValue))|[0].ImageId'
I'm searching for a way to configure an EC2 instance with this AMI ID. I see 2 possible ways (please correct me):
aws_instance with a raw AMI ID as text (the result of a command line, passed in through a variable)
aws_ami with filters
Any ideas?
I finally made it using the external keyword. Here is my solution:
# example.tf
resource "aws_instance" "my-instance" {
ami = data.external.latest_ami.result.ImageId
# Other config
}
data "external" "latest_ami" {
program = ["sh", "latest_ami_id.sh"]
# Or simply
program = ["aws", "ec2", "describe-images", "--filters", "Name=tag-key,Values=MyVersionTag", "--query", "reverse(sort_by(Images[].{TagValue:Tags|[0].Value,ImageId:ImageId},&TagValue))|[0].ImageId"]
}
# latest_ami_id.sh
#!/bin/bash
# It returns the latest ImageId as a JSON string
aws ec2 describe-images --filters 'Name=tag-key,Values=MyVersionTag' --query 'reverse(sort_by(Images[].{TagValue:Tags|[0].Value,ImageId:ImageId},&TagValue))|[0].ImageId'
Hope it will help someone else. 👌🏻
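For completeness, option 2 from the question (aws_ami with filters) would look roughly like the sketch below. One caveat: most_recent picks the newest AMI by creation date, not by the X.Y.Z value of MyVersionTag, so it only matches the CLI query when builds are created in version order:
data "aws_ami" "latest" {
  most_recent = true
  owners      = ["self"] # assumption: the CI-built AMIs live in the same account

  filter {
    name   = "tag-key"
    values = ["MyVersionTag"]
  }
}

# Then reference it as: ami = data.aws_ami.latest.id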
I tried with your command but I got this error:
Error: Unexpected External Program Results
│
│ with data.external.latest_ami,
│ on main.tf line 25, in data "external" "latest_ami":
│ 25: program = ["sh", "latest_ami.sh"]
│
│ The data source received unexpected results after executing the program.
│
│ Program output must be a JSON encoded map of string keys and string values.
│
│ If the error is unclear, the output can be viewed by enabling Terraform's logging at TRACE level. Terraform documentation on logging:
│ https://www.terraform.io/internals/debugging
│
│ Program: /usr/bin/sh
│ Result Error: json: cannot unmarshal string into Go value of type map[string]string
Here's my main.tf:
resource "aws_instance" "web" {
ami = data.external.latest_ami.result.ImageId
instance_type = "t3.micro"
}
data "external" "latest_ami" {
program = ["sh", "latest_ami.sh"]
}
and here's my latest_ami.sh:
# latest_ami.sh
#!/bin/bash
# It returns a json with following keys :
#. ImageId, Description, Tags (version actually)
aws ec2 describe-images --filters 'Name=name,Values=packer-2022-04-06' --query 'reverse(sort_by(Images[].{TagValue:Tags|[0].Value,ImageId:ImageId},&TagValue))|[0].ImageId'
If I try running ./latest_ami.sh, it works:
"ami-xxxxxxx"
I'm interested in creating a backup of the current application as an application version. To do that, I'd like to track down the S3 bucket where the app versions are saved, so I can save a new file to that location.
Does anyone know how this is done?
To do this:
1. Get the instance ID.
2. Get the environment ID.
3. Get the app name.
4. Get the app versions.
The S3 bucket name is in the app versions response.
Based on the above, here is my entire backup script. It will figure out steps 1-4 above, then zip up the current app directory, send it to S3, and create a new app version based on it.
#!/bin/bash
# Minimal error helper used by the checks below.
die() { echo "$1" >&2; exit 1; }
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
EB_ENV_ID="`aws ec2 describe-instances --instance-ids $EC2_INSTANCE_ID --region $EC2_REGION | grep -B 1 \"elasticbeanstalk:environment-id\" | grep -Po '\"Value\":\\s+\"[^\"]+\"' | cut -d':' -f 2 | grep -Po '[^\"]+'`"
EB_APP_NAME="`aws elasticbeanstalk describe-environments --environment-id $EB_ENV_ID --region $EC2_REGION | grep 'ApplicationName' | cut -d':' -f 2 | grep -Po '[^\\",]+'`"
EB_BUCKET_NAME="`aws elasticbeanstalk describe-application-versions --application-name $EB_APP_NAME --region $EC2_REGION | grep -m 1 'S3Bucket' | cut -d':' -f 2 | grep -Po '[^\", ]+'`"
DATE=$(date -d "today" +"%Y%m%d%H%M")
FILENAME=application.$DATE.zip
cd /var/app/current
sudo -u webapp zip -r -J /tmp/$FILENAME *
sudo mv /tmp/$FILENAME ~/.
sudo chmod 755 ~/$FILENAME
cd ~
aws s3 cp $FILENAME s3://$EB_BUCKET_NAME/
rm -f ~/$FILENAME
aws elasticbeanstalk create-application-version --application-name $EB_APP_NAME --version-label "$DATE-$FILENAME" --description "Auto-Save $DATE" --source-bundle S3Bucket="$EB_BUCKET_NAME",S3Key="$FILENAME" --no-auto-create-application --region $EC2_REGION
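If you later need to roll the environment back to one of these saved versions, update-environment can deploy it by label, reusing the variables the script already resolved:
aws elasticbeanstalk update-environment --environment-id $EB_ENV_ID --version-label "$DATE-$FILENAME" --region $EC2_REGION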
tl;dr: most probably you want:
jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)'
Detailed version:
Add this to any of your .ebextensions:
packages:
yum:
jq: []
(assuming you are using Amazon Linux or CentOS)
and run
export BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)' || sudo !!)"
echo bucketname: $BUCKET
then the full S3 path would be accessible via:
bash -c 'source /etc/elasticbeanstalk/.aws-eb-stack.properties; echo s3://${BUCKET} -r ${region:-eu-east-3}'
I'm trying to write a script that, with the help of Jenkins, will look at the updated files in git, download them, and encrypt them using AWS KMS. I have a working script that does it all, and the file is downloaded to the Jenkins repository on the local server. But my problem is encrypting this file in the Jenkins repo. Basically, when I encrypt files on my local computer, I use the command:
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://file.json --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
and all is OK, but if I try to do it with Jenkins I get an error that the aws command was not found.
Does somebody know how to solve this problem and how to make the AWS CLI run through Jenkins?
Here is my working code which breaks down on the last line:
bom_sniffer() {
head -c3 "$1" | LC_ALL=C grep -qP '\xef\xbb\xbf';
if [ $? -eq 0 ]
then
echo "BOM SNIFFER DETECTED BOM CHARACTER IN FILE \"$1\""
exit 1
fi
}
check_rc() {
# exit if passed in value is not = 0
# $1 = return code
# $2 = command / label
if [ $1 -ne 0 ]
then
echo "$2 command failed"
exit 1
fi
}
# finding files that differ from this commit and master
echo 'git fetch'
check_rc $? 'echo git fetch'
git fetch
check_rc $? 'git fetch'
echo 'git diff --name-only origin/master'
check_rc $? 'echo git diff'
diff_files=`git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT | xargs`
check_rc $? 'git diff'
for x in ${diff_files}
do
echo "${x}"
cat ${x}
bom_sniffer "${x}"
check_rc $? "BOM character detected in ${x},"
aws configure kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
done
After discussion with you, this is how this issue was resolved:
First, corrected the command by removing configure from it.
Installed the awscli for the jenkins user:
pip install awscli --user
Used the absolute path of aws in your script:
For example, if it's in ~/.local/bin/, use ~/.local/bin/aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json in your script.
Or add the path of aws to PATH, as sketched below.
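For the PATH option, a minimal sketch (assuming pip install --user put the binary in ~/.local/bin for the jenkins user), placed at the top of the Jenkins build script:
# Prepend the user-local bin dir so plain `aws` invocations resolve.
export PATH="$HOME/.local/bin:$PATH"
aws --version # sanity check; should now print the CLI version instead of "command not found"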
For example, what if I wish to mount a particular volume that is defined by an environment variable?
I ended up using the following code:
---
files:
"/opt/elasticbeanstalk/hooks/appdeploy/pre/02my_setup.sh":
owner: root
group: root
mode: "000755"
content: |
#!/bin/bash
set -e
. /opt/elasticbeanstalk/hooks/common.sh
EB_CONFIG_APP_CURRENT=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
EB_SUPPORT_FILES_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)
# load env vars
eval $($EB_SUPPORT_FILES_DIR/generate_env | sed 's/$/;/')
You can use /opt/elasticbeanstalk/bin/get-config environment in a bash script
Example:
# .ebextensions/efs_mount.config
commands:
01_mount:
command: "/tmp/mount-efs.sh"
files:
"/tmp/mount-efs.sh":
mode: "000755"
content: |
#!/bin/bash
EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')
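The snippet above only reads the variables into the shell; the mount itself would presumably follow. A sketch of that step, assuming the standard region-qualified EFS DNS name from the AWS docs:
mkdir -p "${EFS_MOUNT_DIR}"
# Mount the EFS volume over NFSv4.1 using its DNS name.
mount -t nfs4 -o nfsvers=4.1 "${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/" "${EFS_MOUNT_DIR}"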