I am using CDK to provision some EC2 instances and configure them using user-data.
My user data consists of two parts: a cloud-config and a shell script.
What I have noticed is that the shell script executes before my cloud-config finishes, so the script fails because its dependencies have not finished downloading.
Is there a way to control the run order? The reason I did not do all the configuration in the cloud-config is that I need to pass some arguments to the script, which was easy to do using ec2.UserData.forLinux().addExecuteFileCommand:
const multipartUserData = new ec2.MultipartUserData();
multipartUserData.addUserDataPart(
this.createBootstrapConfig(),
'text/cloud-config; charset="utf8"'
);
multipartUserData.addUserDataPart(
this.runInstallationScript(),
'text/x-shellscript; charset="utf8"'
);
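For context, a minimal sketch of what the two helper methods might look like; the method names come from the snippet above, while the package list, script path, and arguments are hypothetical placeholders:

// Sketch only: assumed shapes for the helpers referenced above.
private createBootstrapConfig(): ec2.UserData {
  // Render a cloud-config part that installs dependencies (packages are placeholders).
  return ec2.UserData.custom(
    ['#cloud-config', 'package_update: true', 'packages:', '  - jq'].join('\n')
  );
}

private runInstallationScript(): ec2.UserData {
  const script = ec2.UserData.forLinux();
  // addExecuteFileCommand makes it easy to pass arguments to the script.
  script.addExecuteFileCommand({
    filePath: '/opt/install.sh',   // placeholder path
    arguments: '--env prod',       // placeholder arguments
  });
  return script;
}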
We are trying to run batch scripts at launch on an AWS EC2 instance using user data (which I understand is based on cloud-init). Since the code runs in a conda environment, we are trying to activate it prior to running the Python/Pandas code. We noticed that the PATH variable isn't getting set correctly (even though it was set correctly prior to making the image, and is set correctly for all users after SSH'ing into the instance).
We've tried:
#!/bin/bash
source activate path/to/conda_env
bash path/to/script.sh
and
#!/bin/bash
conda run -n path/to/conda_env bash path/to/script.sh
Neither appears to work. These commands run the script when executed over SSH, but not from EC2 cloud-init user data at launch. I've verified that user data does run at launch by creating a simple text file from it, so user data itself is executing when the instance starts...
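One likely cause, and this is an assumption since the full setup isn't shown, is that user data runs as root in a non-login shell, so conda's shell hooks are never loaded and activate cannot adjust PATH. Two possible workarounds, sketched below; the conda install path and environment name are placeholders:

#!/bin/bash
# Option 1: load conda's shell functions before activating (paths are placeholders).
source /home/ec2-user/miniconda3/etc/profile.d/conda.sh
conda activate conda_env
bash path/to/script.sh

# Option 2: skip activation entirely and call the environment's interpreter directly.
/home/ec2-user/miniconda3/envs/conda_env/bin/python path/to/code.py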
I need to run a .sh script on startup of an EC2 instance created from CloudFormation. I am copying the script from S3 and then trying to run it. The script is copied from the S3 bucket to the EC2 root successfully, but it does not run when we try . setupec2.sh. The script has no issues when run manually (it's a bit long, as it performs a couple of installations), and I can find it after logging into the EC2 instance, but I wanted to run it at startup from CloudFormation, so I provided it as user data.
The error it gives is:
/var/lib/cloud/instance/scripts/part-001: line 33: setupec2.sh: No such file or directory
You need to specify a full path when you call setupec2.sh, e.g. /setupec2.sh if it is in the root folder.
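A sketch of the relevant user-data lines, assuming the script really does sit in the filesystem root (the bucket path is a placeholder):

#!/bin/bash
# Copy the script from S3, then invoke it by its full path (bucket name is a placeholder).
aws s3 cp s3://my-bucket/setupec2.sh /setupec2.sh
chmod +x /setupec2.sh
/setupec2.sh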
I have a machine learning project, and I have to get data from a website every 15 minutes. I cannot use my own computer, so I will use Google Cloud. I am trying to use Google Compute Engine, and I have a script for getting the data (here is the link: https://github.com/BurkayKirnik/Automatic-Crypto-Currency-Data-Getter/blob/master/code.py). This script gets data every 15 minutes and writes it to CSV files. I can run this code by opening an SSH terminal and executing it from there, but it stops working when I close the terminal. I tried running it from a startup script, but that doesn't work either. How can I run this and save the CSV files? By the way, I have to install an API to run the code, and I am doing that in the startup script; there is no problem with that part.
Instances running on Google Cloud Platform can be configured with the same tools available in the operating system they run. If your instance is a Linux instance, the best method would be to use a cron job to execute your script repeatedly at your chosen interval.
Once you have accessed the instance via SSH, you can open the crontab configuration file by running the following command:
$ crontab -e
The above command will provide access to your personal crontab configuration (for the user you are logged in as). If you want to run the script as root you can use this instead:
$ sudo crontab -e
You can now edit the crontab configuration and add an entry that tells cron to execute your script at your required interval (in your case every 15 minutes).
Your crontab entry should therefore look something like this:
*/15 * * * * /path/to/your/script.sh
Notice that the first field is for minutes, so by using */15 you are telling the cron daemon to execute the script once every 15 minutes.
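For reference, the five fields are minute, hour, day of month, month, and day of week. Appending a log redirect (the log path below is just a suggestion) makes unattended failures much easier to debug:

# minute hour day-of-month month day-of-week command
*/15 * * * * /path/to/your/script.sh >> /var/log/data-getter.log 2>&1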
Once you have edited the crontab configuration file, it is a good idea to restart the cron daemon to ensure the change takes effect. To do this, you can run:
$ sudo service cron restart
If you would like to check that the cron service is running, you can run:
$ sudo service cron status
Your script will now execute every 15 minutes.
In terms of storing the CSV files, you could either program your script to store them on the instance, or alternatively use a Google Cloud Storage bucket. Files can be copied to buckets easily by making use of the gsutil command (part of the Cloud SDK) as described here. It's also possible to mount buckets as a file system as described here.
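For example, the end of the script could upload each new CSV with a single gsutil call (the bucket name is a placeholder):

# Copy the latest CSV into a Cloud Storage bucket (bucket name is a placeholder).
gsutil cp /path/to/output.csv gs://my-data-bucket/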
I’m setting up a patch process for EC2 servers running a web application.
I need to build an automated process that installs system updates but reverts to the last working EC2 instance if the web application fails a status check.
I’ve been trying to do this using an Automation Document in EC2 Systems Manager that performs the following steps:
1. Stop the EC2 instance
2. Create an AMI from the instance
3. Launch a new instance from the newly created AMI
4. Run updates
5. Run a status check on the web application
6. If the check fails, stop the new instance and restart the original instance
The Automation Document runs the first five steps successfully, but I can't work out how to trigger step 6. Can I do this within the Automation Document? What output would I be able to consume from step 5? If it uses aws:runCommand, should the run command trigger a new Automation Document or another AWS tool?
I tried the following to solve this, which more or less worked:
Included an aws:runCommand action in the automation document
This ran the DocumentName "AWS-RunShellScript" with the following parameters:
Downloaded the script from s3:
sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
Set the file to executable:
chmod +x /tmp/script.sh
Executed the script using variables set in, or generated by, the automation document:
bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}
The script included the following getopts command to read the passed-in variables:
while getopts "o:n:i:l:w:" option; do
  case "${option}" in
    # Flag letters match the invocation above (-o fills VAR1, -n fills VAR2, ...).
    o) VAR1=${OPTARG};;
    n) VAR2=${OPTARG};;
    i) VAR3=${OPTARG};;
    l) VAR4=${OPTARG};;
    w) VAR5=${OPTARG};;
  esac
done
The bash script used the variables to run the status check and roll back to the last working instance if it failed.
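The script itself wasn't shared beyond the getopts block, but a minimal sketch of the check-and-rollback step, assuming a simple HTTP health check, could look like this (the URL and instance IDs are hypothetical placeholders for values delivered through the flags above):

#!/bin/bash
# $HEALTH_URL, $NEW_INSTANCE_ID, and $ORIG_INSTANCE_ID are hypothetical placeholders.
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL")
if [ "$STATUS" -ne 200 ]; then
  # Status check failed: stop the patched instance and bring the original back.
  aws ec2 stop-instances --instance-ids "$NEW_INSTANCE_ID"
  aws ec2 start-instances --instance-ids "$ORIG_INSTANCE_ID"
fi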
I am trying to run a Play Framework application on AWS EC2 Container Service (ECS). I am using sbt-ecr to build and upload the image.
Now I would like to pass different command-line parameters to Play, for instance -Dconfig.resource=production.conf.
Usually when I run it locally my command looks like this:
docker run -p 80:9000 myimage -Dconfig.resource=production.conf
The port settings can be configured separately in AWS. How can I set Play's command line parameter for AWS EC2 containers?
Apparently my problem was of a completely different nature and didn't have anything to do with the entrypoint or cmd arguments.
The task didn't start because the log group configured for the container didn't exist.
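If you hit the same failure, the missing log group can be created up front with the AWS CLI (the group name is a placeholder):

# Create the log group the container's awslogs configuration points at (name is a placeholder).
aws logs create-log-group --log-group-name /ecs/play-app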
Here is how to pass parameters to an image on ECS, just like on the command line or with the Docker CMD instruction. Put them in the "Command" field in the "Environment" section of the container configuration, like so:
-Dconfig.resource=production.conf,-Dhttps.port=9443
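If you register the task definition as JSON rather than through the console, the same comma-separated values map onto the command array in containerDefinitions (a sketch; the name and image are placeholders):

{
  "name": "play-app",
  "image": "myimage",
  "command": ["-Dconfig.resource=production.conf", "-Dhttps.port=9443"]
}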