We are trying to run batch scripts at launch on an AWS EC2 instance using user data (which I understand is based on cloud-init). Since the code runs in a conda environment, we are trying to activate it before running the Python/Pandas code. We noticed that the PATH variable isn't getting set correctly (even though it was set correctly before the image was made, and is set correctly for all users after SSH'ing into the instance).
We've tried:
#!/bin/bash
source activate path/to/conda_env
bash path/to/script.sh
and
#!/bin/bash
conda run -n path/to/conda_env bash path/to/script.sh
Nothing appears to work. Both snippets run the script fine from an SSH session on the instance, but not when run through EC2 cloud-init user data (a script executed at launch). I've verified that user data does execute at launch by having it create a simple text file, so user data itself is working when the instance starts...
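For reference, this is a minimal sketch of a user data script that sources conda's own activation script before calling the code; the /opt/conda install location and the environment path are assumptions that need to be adjusted to the AMI:

#!/bin/bash
# User data runs as root with a minimal PATH, so use absolute paths throughout.
# /opt/conda is an assumed install location; adjust to where conda lives on the AMI.
source /opt/conda/etc/profile.d/conda.sh
# Activate the environment by its full prefix path (placeholder path).
conda activate /path/to/conda_env
# Run the batch script inside the activated environment.
bash /path/to/script.sh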
Related
The setup I'm going to describe works OK on a stock Windows Server 2012 AMI from Amazon; I'm only facing issues with a custom AMI.
I created a custom AMI for Windows Server 2012 by creating an image from an EC2 machine.
Just before creating the custom AMI, I used the Ec2ConfigServiceSetting.exe to make sure:
The instance receives a new machine name based on its IP.
The password of the user is changed on boot.
The instance is provisioned using the script I have in place in UserData.
I also shut down the instance using Sysprep from the Ec2ConfigServiceSetting before creating the image for the custom AMI.
However, when I run a remote PowerShell command (from C# code, if it matters), it doesn't work. From C#-land, the command gets executed OK, but nothing happens on the machine.
Let's say my remote PS command launches a program on the remote machine (agent.exe). My script looks a little bit like this:
Set-Location C:\path\in\disk
$env:Path = "C:\some\thing;" + $env:Path
C:\path\to\agent.exe --daemon
Once I log into the EC2 instance, agent.exe --daemon is NOT running. However, if I first log into the instance and then run the remote PowerShell command, agent.exe --daemon DOES run.
This works perfectly with a stock AMI from Amazon, so I can only assume there's some configuration I'm missing for this to work (and why does it work if I first log in using RDesktop?).
In the past we found some issues with SSL initialization when no user profile is loaded, so in our provisioning script (UserData) we do some things someone might consider shenanigans:
# Reset the Administrator password and create an ec2-user account
net user Administrator hardcoded-password
net user ec2-user hardcoded-password /add
# Spawn a process as Administrator with -LoadUserProfile so that the user
# profile gets loaded (working around the SSL initialization issue noted above)
$pwd = (ConvertTo-SecureString 'hardcoded-password' -AsPlainText -Force)
$cred = New-Object System.Management.Automation.PSCredential('Administrator', $pwd)
Start-Process cmd -LoadUserProfile -Credential $cred
All of a sudden no Linux command (ls, vi, etc.) is working on my AWS EC2 instance, and I get a message saying "command not found".
I had launched an EC2 instance and all Linux commands were working fine.
I then uploaded some files to the instance and extracted them (setting up my environment).
I made the following changes to the ~/.bashrc file:
export M2_HOME=/home/ec2-user/apache-maven-3.6.0
export JAVA_HOME=/home/ec2-user/jdk1.8.0_151
export ANT_HOME=/home/ec2-user/apache-ant-1.9.13
export PATH=/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.6.0/bin
export JBOSS_HOME=target/wildfly-run/wildfly-11.0.0.Final
and then executed the command below on my AWS EC2 instance:
source ~/.bashrc
After this, Linux commands (ls, vi, cat, etc.) are no longer working; however, the "which" and "pwd" commands still work.
Can someone help me correct the PATH settings so that my commands start executing normally?
You should append the original PATH to the additions you made (using the $PATH variable), like below:
export PATH=/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.6.0/bin:$PATH
Changing the value of PATH as below sorted out all the issues:
export PATH=/usr/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/local/bin:/opt/aws/bin:/root/bin:/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.5.2/bin:/home/ec2-user/apache-ant-1.9.14/bin
Below is the system default PATH:
PATH=/usr/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/local/bin:/opt/aws/bin:/root/bin
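If the current shell is already broken because PATH no longer contains the system directories, one way to recover without logging out is to restore a sane PATH first and then fix ~/.bashrc. A rough sketch (the default PATH is the one quoted above; the tool directories are examples to adjust):

# Restore the system default PATH for the current session
export PATH=/usr/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/local/bin:/opt/aws/bin:/root/bin
# Append the tool directories instead of replacing PATH with them
export PATH=$PATH:/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.6.0/bin:/home/ec2-user/apache-ant-1.9.13/bin
# After editing ~/.bashrc so that its PATH line ends with :$PATH, reload it
source ~/.bashrc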
I have an EC2 instance that uses a cron @reboot entry to run a Python script every time the instance starts up. The Python script uses conn.stop_instances(instance_ids=[my_id]) to stop the instance after the script has finished (more details here). Unfortunately, I can no longer SSH into my instance because the Python script stops the instance immediately. Is there anything I can do to reset the instance or change the settings manually?
If not, is there any way to grab files from an instance without having to SSH in?
Create a shell script that deletes your reboot script.
#! /bin/bash
rm -f /path/to/my/python_script.py
Add this script as User Data to your EC2 instance.
Reboot the instance. The script will run, deleting your Python reboot script.
Notice the -f flag: it forces removal, so the file is deleted even if it is set to read-only.
Go back and remove this script from User Data once you can control / access your instance.
Running Commands on Your Linux Instance at Launch
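One caveat to verify against the cloud-init version in use (this is an assumption, not part of the original answer): user data scripts normally run only at the very first boot, so for an already-launched instance the cleanup script may need to be supplied as a multipart payload that tells cloud-init to run user scripts on every boot, along these lines:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
rm -f /path/to/my/python_script.py
--//--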
I’m setting up a patch process for EC2 servers running a web application.
I need to build an automated process that installs system updates but reverts to the last working EC2 instance if the web application fails a status check.
I’ve been trying to do this using an Automation Document in EC2 Systems Manager that performs the following steps:
Stop EC2 instance
Create AMI from instance
Launch new instance from newly created AMI
Run updates
Run status check on web application
If check fails, stop new instance and restart original instance
The Automation Document runs the first 5 steps successfully, but I can't figure out how to trigger step 6. Can I do this within the Automation Document? What output would I be able to call from step 5? If it uses aws:runCommand, should the runCommand trigger a new automation document or another AWS tool?
I tried the following to solve this, which more or less worked:
Included an aws:runCommand action in the automation document
This ran the DocumentName "AWS-RunShellScript" with the following parameters:
Downloaded the script from S3:
sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
Set the file to executable:
chmod +x /tmp/script.sh
Executed the script using variables set in, or generated by, the automation document:
bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}
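Put together, the aws:runCommand step might look roughly like this inside the automation document (the step name and the instance ID parameter are placeholders, not taken from the original):

- name: runStatusCheckScript
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunShellScript
    InstanceIds:
      - "{{ NewInstanceId }}"
    Parameters:
      commands:
        - sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
        - chmod +x /tmp/script.sh
        - bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}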
The script included the following getopts loop to read the options passed in:
while getopts o:n:i:l:w: option
do
    case "${option}" in
        n) VAR1=${OPTARG};;
        o) VAR2=${OPTARG};;
        i) VAR3=${OPTARG};;
        l) VAR4=${OPTARG};;
        w) VAR5=${OPTARG};;
    esac
done
The bash script used these variables to run the status check and to roll back to the last working instance if the check failed.
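As an illustration only (the health check URL, the variable-to-instance mapping, and the polling values are assumptions, not details from the original script), the check-and-rollback part could look something like this:

#!/bin/bash
# Hypothetical rollback logic: poll the web application on the patched
# instance and, if it never becomes healthy, stop it and restart the original.
NEW_ID="$VAR1"                 # assumed: ID of the newly launched instance
ORIG_ID="$VAR2"                # assumed: ID of the original instance
URL="http://localhost/health"  # assumed status check endpoint

healthy=false
for attempt in $(seq 1 10); do
    if curl -sf "$URL" > /dev/null; then
        healthy=true
        break
    fi
    sleep 30
done

if [ "$healthy" = false ]; then
    aws ec2 stop-instances --instance-ids "$NEW_ID"
    aws ec2 start-instances --instance-ids "$ORIG_ID"
fi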
I am trying to package my software into an AWS AMI.
I would like to disable, in my AMI, the logic that executes the user data when it detects that it contains a bash script, without disabling the whole user data system (I still access the user data through the metadata URL, 169.254.169.254).
My AMI is based on the Amazon Linux AMI x86_64 PV EBS (ami-5256b825) which uses cloud-init 0.7.2-7.20.
I have already tried commenting out the following lines in /etc/cloud/cloud.cfg.d/defaults.cfg, but the AWS AMI creation process seems to overwrite this file with the default values:
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
Note: On the old AWS AMIs (for instance ami-5256b825), I had been doing this with the following sed command:
sed -i 's/once-per-instance/never/g' /etc/init.d/cloud-init-user-scripts
You could create the cloud-init lock file (e.g. "/var/lib/cloud/sem/user-scripts.i-7f3f1d11") before cloud-init runs. Since cloud-init runs last and only executes the user data when that lock file is absent, the user data would not run. You can make this conditional on the user data starting with "#!/bin/bash".
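A rough sketch of that idea, assuming the lock file path format quoted above and that the snippet runs early enough in the boot sequence (both assumptions should be checked against the cloud-init version on the AMI):

#!/bin/bash
# Fetch the instance ID and the raw user data from the metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)

# If the user data is a bash script, pre-create the per-instance lock file so
# cloud-init considers the user scripts already run and skips them.
if [ "${USER_DATA:0:11}" = "#!/bin/bash" ]; then
    mkdir -p /var/lib/cloud/sem
    touch "/var/lib/cloud/sem/user-scripts.${INSTANCE_ID}"
fi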