I've just tried the Debian 7.1 AMI on AWS (from the Marketplace), and I'm having a problem with my user-data script.
It is not executed at boot time. My script works fine with the Amazon AMI but not with Debian (I also tried a trivial script: echo "toto" > /tmp/test.log, but nothing happens).
Any idea?
Thanks
Matt
P.S: I start my script with #!/bin/bash
In fact, the user-data script is executed only once. If you create an AMI based on the Debian Marketplace AMI, then when you launch your "custom" AMI, the user-data has already been executed (it ran when you first started the Debian base AMI).
If you want the user-data to be executed again on a custom AMI, you must re-enable the ec2-run-user-data init script with insserv before creating the image:
sudo -i
insserv -d ec2-run-user-data
And now you can create an AMI.
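For reference, a sketch of that last step, assuming the AWS CLI is configured somewhere with permission to create images (the instance ID and image name below are placeholders):
# From a machine with the AWS CLI configured, create the image of the fixed instance:
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "debian-custom-ami"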
Matt
I have an Ubuntu 18.04.6 LTS EC2 instance with the Graviton2 arm64 architecture. I have also enabled encryption on the EBS volume.
I configured a backup bash script as a cron job.
I can run the script manually with ./backup-script.sh.
But when I configure it as the cron job below:
0 4 * * * /bin/sh /path/to/script/backup-script.sh
it does not execute.
I have other EC2 instances where the cron job runs successfully, but those are neither Graviton2-based instances nor do they have EBS encryption enabled.
Have you checked the logs with grep CRON /var/log/syslog? It should work; I verified this by running a simple cron job on a Graviton2 instance with Ubuntu 18.
I found the solution; the cron job script is now working on Graviton.
But data from the /var/ directory is not getting copied to S3.
It gives the error below:
"The user-provided path /var/www/ does not exist."
I usually work on Amazon Linux EC2 instances, and I check /var/log/cloud-init-output.log to see whether my CloudFormation user data script is working. I can't find cloud-init-output.log on a Red Hat EC2 instance, and I'm not sure where to check the logs and how to make sure my user data script is running properly.
Josh answered this here: https://stackoverflow.com/a/50258755/5775568
TL;DR: Run this command from your EC2 instance to see your logs:
sudo grep cloud-init /var/log/messages
For posterity: I also needed to take this approach to see user-data logs on my CentOS 7 EC2 instance.
I have a Python script to run on an Amazon AMI Spot Instance.
I'm wondering whether I can deploy, via a Python/remote script:
1) the AMI Spot Instance itself;
2) Lubuntu, Anaconda + additional Python conda packages, dynamically on the Spot Instance through the script.
Do I need to use Docker to have everything packaged in advance?
There is a StarCluster package in Python; I'm not sure whether it can be used to launch Spot Instances.
The easiest way to get started with bootstrapping EC2 instances at launch is to add a custom user data script. If you start the instance user data with #! and the path to the interpreter you want to use, it gets executed at boot time and can perform any customization you want.
Example:
#!/bin/bash
yum update -y
Further documentation: User Data and Shell Scripts
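For the Anaconda/conda part of the question above, a rough sketch of a user data script that pulls Miniconda and a couple of conda packages at boot might look like the following; the download URL, install path, and package names are only illustrative, and you would swap yum for apt-get on a Debian/Ubuntu AMI:
#!/bin/bash
# Sketch only: install Miniconda and some conda packages at first boot.
yum update -y
curl -o /tmp/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash /tmp/miniconda.sh -b -p /opt/miniconda
/opt/miniconda/bin/conda install -y numpy scipy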
I have a user-data script that runs when launching an EC2 instance from an AMI image.
The script uses the AWS CLI, but I get "aws: command not found".
The AWS CLI is installed as part of the AMI (I can use it once the instance is up), but for some reason the script cannot find it.
Am I missing something? Any chance that the user-data script runs before the image is loaded (I find that hard to believe)?
Maybe the PATH environment variable is not set at this point?
Thanks,
any chance that the user-data script runs before the image is loaded
No, certainly not. It is a service on that image that runs the script.
Maybe the PATH environment variable is not set at this point
This is most likely the issue. The scripts run as root, not as ec2-user, and don't have access to the PATH you may have configured in your ec2-user account. What happens if you try specifying /usr/bin/aws instead of just aws?
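A sketch of both approaches in a user data script (the bucket and key are placeholders; the CLI's location varies by AMI, so check it with which aws):
#!/bin/bash
# Option 1: call the CLI by its full path.
/usr/bin/aws s3 cp s3://my-bucket/bootstrap.tar.gz /tmp/
# Option 2: extend PATH first, e.g. if the CLI was pip-installed into /usr/local/bin.
export PATH=$PATH:/usr/local/bin
aws s3 cp s3://my-bucket/bootstrap.tar.gz /tmp/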
You can install the AWS CLI and set up environment variables with your credentials. For example, in the user data script, you can write something like:
#!/bin/bash
apt-get install -y awscli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/
If you are using a CentOS-based AMI, you have to change the apt-get line to yum, and the package is called aws-cli instead of awscli.
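For example, a yum-based version of the same sketch (with the same placeholder credentials and bucket):
#!/bin/bash
yum install -y aws-cli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/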
What's the best way to send logs from Auto Scaling groups (of EC2 instances) to Logentries?
I previously used the EC2 platform to set up log monitoring for all of my EC2 instances created by an Auto Scaling group. However, per the Auto Scaling rules, a new instance will spin up whenever a current one is destroyed.
Now, how do I automate Logentries so it creates a new host and starts getting logs? I've read https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent, but I'm stuck at override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Auto Scaling group, you need to get this baked into an AMI or scripted to run on startup. You can get an EC2 instance to run commands on startup once you've figured out which script to run.
The Logentries Linux Agent installation docs have setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal:
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
yum install logentries
le register
yum install logentries-daemon
I recommend running that script once by hand to see that it works properly for you; then you could include it in the user data for your Auto Scaling launch configuration.
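If you go the user data route, a rough sketch of what the launch configuration's user data could look like is below; it just wraps the same commands. Note that le register normally prompts for your Logentries credentials, so a fully unattended setup needs your account key passed to it (see the agent docs):
#!/bin/bash
# Sketch: install and register the Logentries agent at boot on an Amazon AMI.
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update -y
yum install -y logentries
le register
yum install -y logentries-daemon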