I have a Python script to run on an Amazon EC2 Spot Instance.
I'm wondering whether I can deploy, via a Python/remote script:
1) The Spot Instance itself.
2) Lubuntu, Anaconda and additional conda packages dynamically
on the Spot Instance.
Do I need to use Docker to have everything packaged in advance?
There is the StarCluster package in Python; I'm not sure whether it can be used
to launch Spot Instances.
The easiest way to get started with bootstrapping EC2 instances at launch is to add a custom user data script. If you start the instance user data with #! and the path to the interpreter you want to use, it gets executed at boot time and can perform any customization you want.
Example:
#!/bin/bash
yum update -y
Further documentation: User Data and Shell Scripts
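Tying this back to the question above: user data can install Anaconda (here via Miniconda, its minimal installer) and extra conda packages at first boot. This is only a sketch under stated assumptions: the installer URL, package list, and script path are placeholders to adjust to your setup.

```shell
#!/bin/bash
# Hypothetical bootstrap: install Miniconda plus extra conda packages at boot.
# Installer URL and package list are assumptions - adjust to your needs.
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh
bash /tmp/miniconda.sh -b -p /opt/miniconda
/opt/miniconda/bin/conda install -y numpy scipy
# Run your own script (path is a placeholder).
/opt/miniconda/bin/python /opt/myscript.py
```

The same script can be attached to a Spot request from the CLI, e.g. `aws ec2 run-instances --image-id ami-xxxxxxxx --instance-market-options 'MarketType=spot' --user-data file://bootstrap.sh` (the AMI ID is a placeholder).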
Related
I cannot get my newly launched ec2 instances to run the User Data script specified in Advanced Settings -> User Data.
If I manually SSH into my instance after it finishes initializing and type "cd DIRECTORY" and "npm run SCRIPT", the instance runs the NodeJS app perfectly. However, I cannot get the User Data script to do this automatically on EC2 instance initialization.
I saw in similar articles to include "#!/bin/bash" at the beginning of my script but this has not made a difference.
When launching a new instance, in Advanced Settings -> User Data script I have:
#!/bin/bash <- tried with and without this
cd DIRECTORY
npm run SCRIPT
That is the exact command I use to run my NodeJS app when manually SSH-ing into the instance. Is there something I am missing?
I am launching an Amazon Linux T3.micro instance from my AMI.
Any ideas on how to troubleshoot or fix this?
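One relevant detail: user data runs as root from the root directory, with a minimal PATH, so relative paths and per-user PATH entries do not carry over from your SSH session. A sketch with those assumptions made explicit (the app location /home/ec2-user/DIRECTORY and the npm path are placeholders; cloud-init's output log is a good first place to look when troubleshooting):

```shell
#!/bin/bash
# Capture output for troubleshooting (also see /var/log/cloud-init-output.log).
exec > /var/log/user-data.log 2>&1
# User data runs as root from /, so use absolute paths.
# Assumption: the app lives at /home/ec2-user/DIRECTORY - adjust.
cd /home/ec2-user/DIRECTORY || exit 1
# npm may not be on root's PATH; find its location with `which npm` over SSH.
sudo -u ec2-user /usr/bin/npm run SCRIPT
```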
How to run Jupyter notebook on AWS instance, chmod 400 error
I want to run my jupyter notebooks in the cloud, ec2 AWS instance.
--
I'm following this tutorial:
https://www.codingforentrepreneurs.com/blog/jupyter-notebook-server-aws-ec2-aws-vpc
--
I have the Instance ec2 all set up as well as nginx.
--
Problem is..
Typing chmod 400 JupyterKey.pem only works on macOS, not in Windows PowerShell:
cd path/to/my/dev/folder/
chmod 400 JupyterKey.pem
ssh ubuntu@34.235.154.196 -i JupyterKey.pem
Error: The term 'chmod' is not recognized as the name of a cmdlet, function, script file, or operable program
CategoryInfo: ObjectNotFound
FullyQualifiedErrorId: CommandNotFoundException
AWS has a managed Jupyter Notebook service as part of Amazon SageMaker.
SageMaker hosted notebook instances let you spin up a Jupyter Notebook with one click, with pay-per-hour pricing (similar to EC2 billing), and let you upload an existing notebook directly onto the managed instance, all through the notebook URL and the AWS console.
Check out this tutorial for a guide on getting started!
I had the same permission problem and fixed it by running the following command on the Amazon Linux instance:
sudo chown user:user ~/certs/mycert.pem
I pass a user-data script file when launching an EC2 instance from an AMI image.
The script uses the AWS CLI, but I get "aws: command not found".
The AWS CLI is installed as part of the AMI (I can use it once the instance is up), but for some reason the script cannot find it.
Am I missing something? any chance that the user-data script runs before the image is loaded (I find it hard to believe)?
Maybe the path env variable is not set at this point?
Thanks,
any chance that the user-data script runs before the image is loaded
No, certainly not. A service on the image runs the script.
Maybe the path env variable is not set at this point
This is most likely the issue. The scripts run as root not ec2-user, and don't have access to the path you may have configured in your ec2-user account. What happens if you try specifying /usr/bin/aws instead of just aws?
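A quick stand-in demonstration of why this happens, using `ls` in place of `aws`: with a bogus PATH, the bare command name fails to resolve, while the absolute path still runs.

```shell
#!/bin/sh
# Stand-in for the root-PATH issue: with a bogus PATH, `ls` cannot be
# resolved by name, but its absolute path still runs. On the instance,
# swap in `aws` and `/usr/bin/aws` (find the path with `which aws`).
ABS_LS="$(command -v ls)"                     # e.g. /bin/ls
if env PATH=/nonexistent ls / >/dev/null 2>&1; then
  BY_NAME=found
else
  BY_NAME=missing
fi
if env PATH=/nonexistent "$ABS_LS" / >/dev/null 2>&1; then
  BY_PATH=works
else
  BY_PATH=broken
fi
echo "by name: $BY_NAME, by absolute path: $BY_PATH"
```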
You can install aws cli and set up environment variables with your credentials. For example, in the user data script, you can write something like:
#!/bin/bash
apt-get update && apt-get install -y awscli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/
If you are using a CentOS-based AMI, replace the apt-get line with yum; the package is called aws-cli instead of awscli.
What's the best way to send logs from Auto Scaling groups (of EC2) to Logentries?
I previously set up Logentries log monitoring for all of the EC2 instances created by an Auto Scaling group. However, per Auto Scaling rules, a new instance spins up whenever a current one is terminated.
Now, how do I automate Logentries creating new hosts and collecting their logs? I've read https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent but I'm stuck at override['le']['pull-server-side-config'] = false, since I don't know anything about Chef (I just took the training on their site).
For an Autoscaling group, you need to get this baked into an AMI, or scripted to run on startup. You can get an EC2 instance to run commands on startup, after you've figured out which script to run.
The Logentries Linux Agent installation docs has setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal:
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
yum install logentries
le register
yum install logentries-daemon
I recommend trying that script once and seeing if it works properly for you, then you could include it in the user data for your Autoscaling launch configuration.
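Folded into a launch-configuration user-data script, that setup might look like the following sketch. The --account-key and --name flags for non-interactive registration are assumptions here, so verify them against `le register --help` for your agent version.

```shell
#!/bin/bash
# Same Logentries repo setup as above, run automatically at first boot.
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update -y
yum install -y logentries
# Non-interactive registration: these flags are an assumption - verify
# against `le register --help` on your agent version.
le register --account-key=YOUR_ACCOUNT_KEY --name="$(hostname)"
yum install -y logentries-daemon
```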
I've just tried the Debian 7.1 AMI on AWS (from the Marketplace), and I have a problem with my user-data script.
It is not executed at boot time. My script works fine with the Amazon AMI but not with Debian (I also tried a simple script: echo "toto" > /tmp/test.log, but nothing).
Any idea?
Thanks
Matt
P.S: I start my script with #!/bin/bash
In fact, the user-data script is executed only once. If you create an AMI based on the Debian Marketplace AMI, then by the time you launch your "custom" AMI, the user data has already been executed, back when you first started the Debian base AMI.
If you want user data to be executed on a custom AMI, you must re-enable the ec2-run-user-data init.d script with insserv:
sudo -i
insserv -d ec2-run-user-data
And now you can create an AMI.
Matt