CloudFormation error output

I have this function in my template in CloudFormation:
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WebServerWaitHandle" }, "'\n",
" exit 1\n",
"}\n",
The message shows up in the stack's resource events when an error occurs, because of this line:
sudo puppet apply --verbose --debug --environment=qa /puppet.pp > /tmp/puppet.log 2>&1 || error_exit 'failed to apply puppet manifests'\n",
But I want it to show what the actual error in /tmp/puppet.log is, something like the output of grep Error /tmp/puppet.log.
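One way to do that (a sketch along the same lines, untested): build the reason string from the log inside error_exit itself. The reason string passed with -r has a length limit, so it is worth truncating whatever grep finds:
"function error_exit\n",
"{\n",
"  cfn-signal -e 1 -r \"$1: $(grep -i error /tmp/puppet.log | tail -c 200)\" '", { "Ref" : "WebServerWaitHandle" }, "'\n",
"  exit 1\n",
"}\n",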

How to make Terraform continue with execution and ignore an error with resource creation during terraform apply?

I am installing the CNI addon using a null_resource in Terraform. If the CNI addon is already installed, the Terraform script fails with this error:
exit status 254. Output: │ An error occurred (ResourceInUseException) when calling the CreateAddon │ operation: Addon already exists.
How can I make Terraform continue with execution if the CNI addon is already installed, rather than failing?
Below is my configuration for installing the CNI addon:
### Installing CNI Addon ###
resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
aws eks create-addon \
--cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} \
--addon-name vpc-cni \
--addon-version v1.11.2-eksbuild.1 \
--service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} \
--resolve-conflicts OVERWRITE
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
}
You can handle the error based on the response: if the command output contains "Addon already exists", exit 0; otherwise return an error, since it could be an AWS CLI permission problem or a wrong command.
resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
RESULT=$(aws eks create-addon --cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} --addon-name vpc-cni --addon-version v1.11.2-eksbuild.1 --service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} --resolve-conflicts OVERWRITE 2>&1)
if [ $? -eq 0 ]
then
echo "Addon installed successfully $RESULT"
exit 0
elif [[ "$RESULT" =~ .*"Addon already exists".* ]]
then
echo "Plugin already exists $RESULT" >&2
exit 0
else
echo "Encounter error $RESULT" >&2
exit 1
fi
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
}
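Alternatively (not from the answer above, just a sketch), you could probe with aws eks describe-addon first and only create the addon when it is absent. The Terraform interpolations are replaced with placeholder shell variables here for readability:
#!/bin/bash
# Hypothetical pre-check: describe-addon exits non-zero when the addon does not exist.
CLUSTER_NAME="my-cluster"                               # stands in for the Terraform interpolation
ROLE_ARN="arn:aws:iam::123456789012:role/my-cni-role"   # placeholder

if aws eks describe-addon --cluster-name "$CLUSTER_NAME" --addon-name vpc-cni >/dev/null 2>&1; then
  echo "vpc-cni addon already installed, nothing to do"
else
  aws eks create-addon \
    --cluster-name "$CLUSTER_NAME" \
    --addon-name vpc-cni \
    --addon-version v1.11.2-eksbuild.1 \
    --service-account-role-arn "$ROLE_ARN" \
    --resolve-conflicts OVERWRITE
fi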

AWS CloudFormation - mount to existing file system

Currently, I have a JSON template that creates an EFS file system and mounts it in an Auto Scaling group. However, how can I make it mount an existing EFS file system that was created previously (so I can pre-load data)?
This is the current setup:
"FileSystem": {
"Type": "AWS::EFS::FileSystem",
"Properties": {
"PerformanceMode": "generalPurpose",
"FileSystemTags": [
{
"Key": "Name",
"Value": { "Ref" : "VolumeName" }
}
]
}
},
"MountTarget": {
"Type": "AWS::EFS::MountTarget",
"Properties": {
"FileSystemId": { "Ref": "FileSystem" },
"SubnetId": { "Ref": "Subnet" },
"SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
}
},
As MaiKaY suggested, just pass the FileSystemId in as a parameter.
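In template terms that means dropping the FileSystem resource and referencing a parameter instead. A minimal sketch ("ExistingFileSystemId" is a parameter name I made up):
"Parameters": {
    "ExistingFileSystemId": {
        "Type": "String",
        "Description": "ID of the pre-created EFS file system, e.g. fs-12345678"
    }
},
"Resources": {
    "MountTarget": {
        "Type": "AWS::EFS::MountTarget",
        "Properties": {
            "FileSystemId": { "Ref": "ExistingFileSystemId" },
            "SubnetId": { "Ref": "Subnet" },
            "SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
        }
    }
}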
Here is the mounting script:
container_commands:
  1chown:
    command: "chown webapp:webapp /wpfiles"
  2create:
    command: "sudo -u webapp mkdir -p wp-content/uploads"
  3link:
    command: "sudo -u webapp ln -s /wpfiles wp-content/uploads"
option_settings:
  aws:elasticbeanstalk:application:environment:
    FILE_SYSTEM_ID: 'FileSystemId' # Provide your FileSystemId
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'
packages:
  yum:
    nfs-utils: []
    jq: []
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      EFS_FILE_SYSTEM_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.FILE_SYSTEM_ID')
      # Build the regional DNS name; a plain nfs4 mount needs it, the bare file system ID will not resolve
      EFS_DNS_NAME=${EFS_FILE_SYSTEM_ID}.efs.${EFS_REGION}.amazonaws.com
      echo "Mounting EFS filesystem ${EFS_DNS_NAME} to directory ${EFS_MOUNT_DIR} ..."
      echo 'Stopping NFS ID Mapper...'
      service rpcidmapd status &> /dev/null
      if [ $? -ne 0 ] ; then
        echo 'rpc.idmapd is already stopped!'
      else
        service rpcidmapd stop
        if [ $? -ne 0 ] ; then
          echo 'ERROR: Failed to stop NFS ID Mapper!'
          exit 1
        fi
      fi
      echo 'Checking if EFS mount directory exists...'
      if [ ! -d ${EFS_MOUNT_DIR} ]; then
        echo "Creating directory ${EFS_MOUNT_DIR} ..."
        mkdir -p ${EFS_MOUNT_DIR}
        if [ $? -ne 0 ]; then
          echo 'ERROR: Directory creation failed!'
          exit 1
        fi
      else
        echo "Directory ${EFS_MOUNT_DIR} already exists!"
      fi
      mountpoint -q ${EFS_MOUNT_DIR}
      if [ $? -ne 0 ]; then
        echo "mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${EFS_DNS_NAME}:/ ${EFS_MOUNT_DIR}"
        mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${EFS_DNS_NAME}:/ ${EFS_MOUNT_DIR}
        if [ $? -ne 0 ] ; then
          echo 'ERROR: Mount command failed!'
          exit 1
        fi
        chmod 777 ${EFS_MOUNT_DIR}
        runuser -l ec2-user -c "touch ${EFS_MOUNT_DIR}/it_works"
        if [[ $? -ne 0 ]]; then
          echo 'ERROR: Permission Error!'
          exit 1
        else
          runuser -l ec2-user -c "rm -f ${EFS_MOUNT_DIR}/it_works"
        fi
      else
        echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
      fi
      echo 'EFS mount complete.'
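As a side note (not part of the original answer): if the amazon-efs-utils package is available on the platform, the EFS mount helper can replace the manual DNS handling entirely:
#!/bin/bash
# Sketch: the mount helper accepts the bare file system ID and resolves
# the endpoint itself (assumes amazon-efs-utils can be installed via yum).
yum install -y amazon-efs-utils
mount -t efs "${EFS_FILE_SYSTEM_ID}":/ "${EFS_MOUNT_DIR}"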

Not able to SSH into the EC2 Instance with CloudFormation Template

I want to start a task at container instance launch time, so I followed the Starting a task at container instance launch time document, which provides a MIME multi-part user data script. I created a CloudFormation template that launches an instance with that MIME multi-part user data script.
The EC2 resource is created by the CloudFormation template, but I am not able to SSH into the instance, and I cannot see the system logs in the EC2 management console either.
CloudFormation Template
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "ECS instance",
  "Parameters" : {
  },
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "SecurityGroupIds" : [ "sg-16021f35" ],
        "ImageId" : "ami-ec33cc96",
        "UserData" : {
          "Fn::Base64" : {
            "Fn::Join" : [
              "\n",
              [
                { "Fn::Join" : [ "", [ "Content-Type: multipart/mixed; boundary=", "==BOUNDARY==" ] ] },
                "MIME-Version: 1.0",
                "--==BOUNDARY==",
                { "Fn::Join" : [ "", [ "Content-Type: text/upstart-job; charset=", "us-ascii" ] ] },
                "#!/bin/bash",
                "# Specify the cluster that the container instance should register into",
                "echo ECS_CLUSTER=Demo >> /etc/ecs/ecs.config",
                "# Install the AWS CLI and the jq JSON parser",
                "yum install -y aws-cli jq",
                "#upstart-job",
                { "Fn::Join" : [ " ", [ "description", "Amazon EC2 Container Service (start task on instance boot)" ] ] },
                { "Fn::Join" : [ " ", [ "author", "Amazon Web Services" ] ] },
                "start on started ecs",
                "script",
                "exec 2>>/var/log/ecs/ecs-start-task.log",
                "set -x",
                "until curl -s http://localhost:51678/v1/metadata",
                "do",
                "sleep 1",
                "done",
                "# Grab the container instance ARN and AWS region from instance metadata",
                "instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )",
                "cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )",
                "region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')",
                "# Specify the task definition to run at launch",
                "task_definition=ASG-Task",
                "# Run the AWS CLI start-task command to start your task on this container instance",
                "aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn",
                "end script",
                "--==BOUNDARY==--"
              ]
            ]
          }
        },
        "IamInstanceProfile" : "ecsInstanceRole",
        "InstanceType" : "t2.micro",
        "SubnetId" : "subnet-841103e1"
      }
    }
  },
  "Outputs" : {
  }
}
MIME multi-part User-Data:
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
You need to specify the key pair (the KeyName property) and the security group in your CloudFormation template, as jarmod suggested above.
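A minimal sketch of the missing piece ("my-key-pair" is a placeholder for an existing EC2 key pair in the region, and the referenced security group must allow inbound TCP port 22 for SSH to work):
"Properties" : {
  "KeyName" : "my-key-pair",
  "SecurityGroupIds" : [ "sg-16021f35" ],
  "ImageId" : "ami-ec33cc96",
  ...
}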

'VBoxManage guestcontrol' to run shell script on guest

I have a VirtualBox VM that runs a server that can be accessed via localhost and forwarded ports.
I need to run some shell scripts and implement some business logic based on the results.
I tried the following command as an example:
VBoxManage guestcontrol <UUID> exec --image /bin/sh --username <su username> --password <su password> --wait-exit --wait-stdout --wait-stderr -- "[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed""
but I'm getting the error:
/bin/sh: [ -d <server_folder> ] && echo : No such file or directory
What is wrong with the syntax above?
First, make sure that VBoxManage.exe is in your PATH!
Secondly, you have to be careful with your quoting. You used:
"[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed""
You have to use single quotes for the outermost quotation:
'[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"'
Finally, you have to add a -c in front of your arguments (so that the guest runs /bin/sh -c '...').
The complete command:
VBoxManage guestcontrol <UUID> exec --image /bin/sh --username <su username> --password <su password> --wait-exit --wait-stdout --wait-stderr -- -c '[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"'
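To build business logic on top of that from the host side, here is a sketch using the same old-style exec syntax as above; the angle-bracket placeholders are the same as in the question, and the surrounding logic is mine:
#!/bin/bash
# Capture the guest-side check and branch on the result on the host.
OUTPUT=$(VBoxManage guestcontrol <UUID> exec --image /bin/sh \
  --username <su username> --password <su password> \
  --wait-exit --wait-stdout -- \
  -c '[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"')

# VBoxManage may print status lines of its own, so match rather than compare.
if echo "$OUTPUT" | grep -q "OK"; then
  echo "Server folder found, continuing..."
else
  echo "Server check failed: $OUTPUT" >&2
  exit 1
fi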

User data script fails without giving a reason

I am starting an Amazon Linux instance (ami-fb8e9292) using the web console, pasting a script into the user data box to run at startup. If I use the example given by Amazon to start a web server, it works. But when I run my own script (also a #!/bin/bash script), it does not get run.
If I look in /var/log/cloud-init.log, it gives no useful information on the topic:
May 22 21:06:12 cloud-init[1286]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/part-001'] with allowed return codes [0] (shell=True, capture=False)
May 22 21:06:16 cloud-init[1286]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [2]
May 22 21:06:16 cloud-init[1286]: util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [2]
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 637, in runparts
subp([exe_path], capture=False, shell=True)
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 1528, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 2
Reason: -
Stdout: ''
Stderr: ''
If I ssh into the instance, become root with sudo su, and execute the shell script directly:
/var/lib/cloud/instance/scripts/part-001
then it runs fine. Also, it works if I emulate the way cloud-init runs it:
python
>>> import cloudinit.util
>>> cloudinit.util.runparts("/var/lib/cloud/instance/scripts/")
Using either of those methods, if I intentionally introduce errors into the script then it produces error messages. How can I debug the selective absence of useful debugging output?
Instead of /var/log/cloud-init.log, consider searching for keywords like "Failed", "ERROR", "WARNING" or "/var/lib/cloud/instance/scripts/" inside /var/log/cloud-init-output.log, which in most cases contains very clear error messages.
For example - running a bad command will produce the following error in /var/log/cloud-init-output.log:
/var/lib/cloud/instance/scripts/part-001: line 10: vncpasswd: command not found
cp: cannot stat '/lib/systemd/system/vncserver#.service': No such file or directory
sed: can't read /etc/systemd/system/vncserver#.service: No such file or directory
Failed to execute operation: No such file or directory
Failed to start vncserver#:1.service: Unit not found.
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Cleaning repos: amzn2-core amzn2extra-docker amzn2extra-epel
And at the end of /var/log/cloud-init.log you'll find only a quite general error message:
Aug 31 15:14:00 cloud-init[3532]: util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 910, in runparts
subp(prefix + [exe_path], capture=False, shell=True)
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2105, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 1
Reason: -
Stdout: -
Stderr: -
cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
(*) You can grep just the relevant error message, with some context, using:
grep -C 10 '<search-keyword>' cloud-init-output.log
I'm not sure if this is going to be the case for everyone, but I was having this issue and was able to fix it by changing my first line from this:
#!/bin/bash -e -v
to just this:
#!/bin/bash
Of course, now my script is failing and I have no idea how far it's getting, but at least I got past it not running at all. :)
Hope it will reduce the debugging time for someone.
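A sketch of how to keep visibility after dropping those flags (my own addition, not from the answer above): redirect everything the script prints into a log file you control and turn tracing on inside the script instead of on the shebang line:
#!/bin/bash
# Capture all stdout/stderr in a known file, then trace each command.
exec >> /var/log/user-data-debug.log 2>&1
set -x
# ... original script continues here ...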
I didn't have any explicit error messages in my /var/log/cloud-init-output.log, just this:
2021-04-07 10:36:57,748 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2021-04-07 10:36:57,748 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) failed
After some investigation, I've realized that the cause was a typo in the shebang string: #!?bin/bash instead of #!/bin/bash.
I had a similar issue and was able to get around it. I realized that the EC2_HOME environment variable was not set up for sudo. I was doing a bunch of stuff in my configsets that used the AWS CLI, and for those to work EC2_HOME needs to be set up. So I went in and removed sudo everywhere in my configsets and UserData.
Earlier, when I was hitting the issue, my UserData looked like:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"sudo yum update -y aws-cfn-bootstrap\n",
"# Install the files and packages and run the commands from the metadata\n",
"sudo /opt/aws/bin/cfn-init -v --access-key ", { "Ref" : "IAMUserAccessKey" }, " --secret-key ", { "Ref" : "SecretAccessKey" },
" --stack ", { "Ref" : "AWS::StackName" },
" --resource NAT2 ",
" --configsets config ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
My UserData after the changes looked like:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum update -y aws-cfn-bootstrap\n",
"# Install the files and packages and run the commands from the metadata\n",
"/opt/aws/bin/cfn-init -v --access-key ", { "Ref" : "IAMUserAccessKey" }, " --secret-key ", { "Ref" : "SecretAccessKey" },
" --stack ", { "Ref" : "AWS::StackName" },
" --resource NAT2 ",
" --configsets config ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
Similarly, I removed all the sudo calls I was doing in my configsets.
In my case cloud-init could not start the script because the user data must start with
#!/bin/bash
with no empty spaces in front of it!
Nice AWS gotcha; a lot of time went into troubleshooting that :)
I've been through this, and in my case it was also an issue with spaces before the shebang #!/bin/bash.
I spun up an instance through python code, using boto3.
ec2 = boto3.resource('ec2', region_name='eu-south-1')
instance = ec2.create_instances(
    ImageId=AMI_IMAGE_ID,
    InstanceType=INSTANCE_TYPE,
    ...
    UserData=USER_DATA_SCRIPT
    ...
)
where the definition of USER_DATA_SCRIPT was:
USER_DATA_SCRIPT = """
#!/bin/bash
apt update -y
apt upgrade -y
...
"""
This put a leading newline in front of the shebang, which caused the script to fail without further details in /var/log/cloud-init-output.log.
Changing it into:
USER_DATA_SCRIPT = """#!/bin/bash
apt update -y
apt upgrade -y
...
"""
solved the issue.
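A defensive variant (my own sketch, not from the original post) that keeps the source readable and indented while still guaranteeing the shebang is the very first byte of the user data:
import textwrap

# dedent() strips the common indentation; the backslash after the opening
# quotes avoids a leading newline, and strip() is a final safety net.
USER_DATA_SCRIPT = textwrap.dedent("""\
    #!/bin/bash
    apt update -y
    apt upgrade -y
""").strip()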