AWS SageMaker Lifecycle Configuration - AutoStop

I have created 4 instances in the Notebooks tab of AWS SageMaker.
I want to create a lifecycle configuration that stops each instance every day at 9:00 PM.
The examples I have found are based on IDLE time rather than a specific time of day. This is the example I found:
#!/bin/bash
set -e
# PARAMETERS
IDLE_TIME=3600
echo "Fetching the autostop script"
wget -O autostop.py https://raw.githubusercontent.com/mariokostelac/sagemaker-setup/master/scripts/auto-stop-idle/autostop.py
echo "Starting the SageMaker autostop script in cron"
(crontab -l 2>/dev/null; echo "*/5 * * * * /bin/bash -c '/usr/bin/python3 $DIR/autostop.py --time ${IDLE_TIME} | tee -a /home/ec2-user/SageMaker/auto-stop-idle.log'") | crontab -
echo "Changing cloudwatch configuration"
curl https://raw.githubusercontent.com/mariokostelac/sagemaker-setup/master/scripts/publish-logs-to-cloudwatch/on-start.sh | sudo bash -s auto-stop-idle /home/ec2-user/SageMaker/auto-stop-idle.log
Can anyone help me out on this one?

Change the crontab schedule to 0 21 * * * and have it run a shutdown.py script.
Then create shutdown.py, which is a reduced version of autostop.py and mainly contains:
...
print('Closing notebook')
client = boto3.client('sagemaker')
client.stop_notebook_instance(NotebookInstanceName=get_notebook_name())
BTW: triggering "shutdown now" directly from the crontab entry didn't work for me, so I call the SageMaker API instead.
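Putting that together, the whole thing can go into the lifecycle configuration's on-start script. The sketch below is untested and makes a few assumptions: the notebook's execution role is allowed to call sagemaker:StopNotebookInstance, the metadata file /opt/ml/metadata/resource-metadata.json exposes ResourceName and ResourceArn (as far as I know this is the same file the auto-stop samples read the notebook name from), and 21:00 is interpreted on the instance clock, which is UTC by default, so shift the hour for your time zone.
#!/bin/bash
set -e

# Write a small shutdown script that stops this notebook instance via the SageMaker API.
cat > /home/ec2-user/shutdown.py <<'EOF'
import json
import boto3

# Assumption: the notebook's name and ARN (which includes the region) are in
# the instance metadata file, as in the auto-stop sample scripts.
with open('/opt/ml/metadata/resource-metadata.json') as f:
    metadata = json.load(f)

region = metadata['ResourceArn'].split(':')[3]

print('Closing notebook')
client = boto3.client('sagemaker', region_name=region)
client.stop_notebook_instance(NotebookInstanceName=metadata['ResourceName'])
EOF

# Schedule the shutdown for 21:00 every day (instance time, UTC by default).
(crontab -l 2>/dev/null; echo "0 21 * * * /usr/bin/python3 /home/ec2-user/shutdown.py >> /home/ec2-user/SageMaker/auto-stop.log 2>&1") | crontab -
Note that on-start runs every time the instance starts, so repeated starts will append duplicate cron entries; filtering the existing crontab for the shutdown line before appending (or writing a file under /etc/cron.d instead) avoids that.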

Related

Elastic Beanstalk Linux 2 Cron Job running but not executing

I have pared my configuration files down to as simple an example as I can think of, but I still can't get the cron to execute. This is a Django project, and though it looks like the cron jobs are trying to run, they are not actually executing.
.ebextensions/cron-log.config
files:
  "/etc/cron.d/test_cron":
    mode: "000644"
    owner: root
    group: root
    content: |
      */1 * * * * root . /opt/elasticbeanstalk/deployment/env && echo "TESTING" >> /var/log/test_log.log 2>&1

commands:
  rm_old_cron:
    command: "rm -fr /etc/cron.d/*.bak"
    ignoreErrors: true
When downloading the logs from AWS, test_log.log does not exist.
In the cron file returned in the logs, it shows:
Oct 29 09:03:01 ip-172-31-8-91 CROND[10212]: (root) CMD (. /opt/elasticbeanstalk/deployment/env && echo "TESTING" >> /var/log/test_log.log 2>&1)
I have tried many variations of this, including having the command that is run make changes in our database, but it never seems to actually execute.
I ran into this same issue recently (PHP ElasticBeanstalk, migrating to AL2), and found this solution.
Basically since Amazon Linux 2 no longer has "/opt/elasticbeanstalk/support/envvars", and "/opt/elasticbeanstalk/deployment/env" requires sudo to access, you'll need to effectively create your own environment variables file for your cron jobs to reference. Create a new .ebextensions config script containing the following:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /etc/profile.d/sh.local
Then, reference the "/etc/profile.d/sh.local" file instead of "/opt/elasticbeanstalk/deployment/env". Your cron entry should now look like:
*/1 * * * * root . /etc/profile.d/sh.local && echo "TESTING" >> /var/log/test_log.log 2>&1
...and you should be good to go! :)
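One quick sanity check after deploying this is to confirm on the instance (over SSH or Session Manager) that the generated file really contains the expected exports, since an empty or missing file will make the cron command fail silently:
sudo cat /etc/profile.d/sh.local
# expect one "export KEY=..." line per Elastic Beanstalk environment property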

Elastic Beanstalk Extensions: When does a command complete?

I have an AWS Elastic Beanstalk setup with some .ebextensions files containing container_commands. One of those commands runs a script. The script completes, but the next command doesn't run.
$ pstree -p | grep cfn-
|-cfn-hup(2833)-+-command-process(10161)---command-process(10162)-+-cfn-init(10317)---bash(10428)
$ ps 10317
PID TTY STAT TIME COMMAND
10317 ? S 0:00 /usr/bin/python2.7 /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:278460835609:stack/awseb-e-4qwsypzv7u-stack/f8ab55f0-393c-11e9-8907-0ae8cc519968 -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPostBuild
$ ps 10428
PID TTY STAT TIME COMMAND
10428 ? Z 0:00 [bash] <defunct>
As you can see, my script is a defunct zombie, but cfn-init isn't making a wait(2) syscall for it.
When I run the script from the command line, it terminates properly.
I have to assume cfn-init is getting SIGCHLD. Why isn't it wait(2)ing and moving on?
Also, is there a better way to investigate this? I've been looking at running processes and reading the completely unhelpful /var/log/eb-* logs.
FWIW, the script is very simple:
#!/usr/bin/env bash
mkfifo ~ec2-user/fifo
nohup ~ec2-user/holdlock.sh &
read < ~ec2-user/fifo
And the thing it nohups is pretty simple:
#!/usr/bin/env bash
(echo 'select pg_advisory_lock(43110);'; sleep 10m) |
  PGPASSWORD=$RDS_PASSWORD psql -h $RDS_HOSTNAME -d $RDS_DB_NAME -U $RDS_USERNAME |
  tee ~ec2-user/nhlog > ~ec2-user/fifo
A workaround for this is to move the series of commands into a single shell script and invoke that as a single command. This still doesn't explain what ebextensions actually does, but it lets me move forward.
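For illustration, the workaround just collapses the previously separate container_commands into one wrapper script, so cfn-init only ever has a single child to wait on. The step names below are placeholders, not the actual commands from my .ebextensions:
#!/usr/bin/env bash
# wrapper invoked by a single container_command; each step used to be its own entry
set -e
/opt/deploy/take_migration_lock.sh   # the fifo/advisory-lock script shown above
/opt/deploy/run_migrations.sh
/opt/deploy/release_migration_lock.sh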

EC2 user data is not executed

I am setting up a web app through CodePipeline. My CloudFormation template creates an EC2 instance. In that instance's user data, I have written logic to fetch the code from S3, copy it onto the instance, and start the server. The web app uses the Python Pyramid framework.
CodePipeline is connected to GitHub. It creates a zip file and uploads it to the S3 bucket. (That is all in a buildspec.yml file.)
When I change the user data script and run the pipeline, it works fine.
But when I change a file in the web app (my code base) and re-run the pipeline, that change is not reflected.
This is for an Ubuntu EC2 instance.
#cloud-boothook
#!/bin/bash -xe
echo "hello "
exec > /etc/setup_log.txt 2> /etc/setup_err.txt
sleep 5s
echo "User_Data starts"
rm -rf /home/ubuntu/c
mkdir /home/ubuntu/c
key=`aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}'`
aws s3 cp s3://bucket-name/$key /home/ubuntu/c/
cd /home/ubuntu/c
zipname="$(cut -d'/' -f3 <<<"$key")"
echo $zipname
mv /home/ubuntu/c/$zipname /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
echo $?
python3 -m venv venv
venv/bin/pip3 install -e .
rm -rf cc.zip
aws configure set default.region us-east-1
venv/bin/pserve development.ini http_port=5000 &
The expected result is that the user data script executes every time I run the pipeline.
Can anyone give me a suggestion or another approach?
The user data script gets executed exactly once, upon instance creation. If you want to periodically synchronize your code changes to the instance, you should think about implementing a cron job in your user data script, or use a service like AWS CodeDeploy to deploy new versions (this is the preferred approach).
CodePipeline uses a different S3 object for each pipeline execution artifact, so you can't hardcode a reference to it. You could publish the artifact to a fixed location. You might want to consider using CodeDeploy to deploy the latest version of your application.
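If you go the cron route, a rough sketch appended to the user data could look like the following. It reuses the bucket/prefix and paths from the question; the script location, the 15-minute schedule, and the pkill-based restart are placeholders to adapt:
# Write a re-deploy script that pulls the newest pipeline artifact and restarts the app
cat > /home/ubuntu/redeploy.sh <<'EOF'
#!/bin/bash
set -e
key=$(aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://bucket-name/$key" /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c
cd /home/ubuntu/c
venv/bin/pip3 install -e .
pkill -f pserve || true
venv/bin/pserve development.ini http_port=5000 &
EOF
chmod +x /home/ubuntu/redeploy.sh

# Run it every 15 minutes as the ubuntu user
echo "*/15 * * * * ubuntu /home/ubuntu/redeploy.sh >> /var/log/redeploy.log 2>&1" > /etc/cron.d/redeploy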

How to set environment variables in Amazon Elastic Beanstalk when running a cron job (Node.js)?

I have been working on configuring a cron job while deploying an environment on Elastic Beanstalk. The purpose of the job is to run every 30 minutes and execute a Node.js script that processes SMS schedules in our system; when a schedule is ready, an SMS message is sent via the Twilio api.
Unfortunately, Node.js environments don't have the /opt/elasticbeanstalk/support/envvars file that contains the defined environment variables (maybe I am missing something?).
To work around this issue, I am loading the environment variables from /opt/elasticbeanstalk/bin/get-config in Python and then executing my Node.js script.
Everything is working as expected so hopefully this can help someone in the same situation; however, I am wondering if there is a better way to accomplish this... Open to suggestions.
In my .ebextensions folder I have the config file for the cron:
files:
  "/etc/cron.d/process-sms-schedules":
    mode: "000644"
    owner: root
    group: root
    content: |
      */30 * * * * root /usr/local/bin/process-sms-schedules.sh

  "/usr/local/bin/process-sms-schedules.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Execute script to process schedules
      python /var/app/current/process-sms-schedules.py > /var/log/process-sms-schedules.log
      exit 0

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
Here is the Python script that gets executed as part of the cron job:
#!/usr/bin/env python
import os
import subprocess
from subprocess import Popen, PIPE, call
import simplejson as json

# Load the Elastic Beanstalk environment properties into this process's environment
envData = json.loads(Popen(['/opt/elasticbeanstalk/bin/get-config', 'environment'], stdout=PIPE).communicate()[0])
for k, v in envData.iteritems():
    os.environ[k] = v

# Run the Node.js script with those variables available
call(["babel-node", "/var/app/current/process-sms-schedules.js"])
Thanks in advance for any feedback.
References
Cron Job Elastic Beanstalk
How to set environment variable in Amazon Elastic Beanstalk (Python)
I had this issue while trying to execute a PHP file inside an Elastic Beanstalk environment. In particular, I was trying to execute the wp-cron.php file.
Basically you should write a cron job like this:
/etc/cron.d/wp-cronjob:
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin
*/5 * * * * ec2-user . /opt/elasticbeanstalk/support/envvars; /usr/bin/php /var/www/html/wp-cron.php > /dev/null 2>&1
Explained:
Every 5 minutes executes commands as user "ec2-user": '*/5 * * * * ec2-user'
Loads elasticbeanstalk environment variables: '. /opt/elasticbeanstalk/support/envvars;'.
Do not print any output: '> /dev/null 2>&1'.
Also:
.ebextensions/wp-cronjob.txt
# Load paths
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin
# Every 5 minutes executes commands as user "ec2-user": '*/5 * * * * ec2-user'.
# Loads elasticbeanstalk environment variables: '. /opt/elasticbeanstalk/support/envvars;'.
# Do not print any output: '> /dev/null 2>&1'.
*/5 * * * * ec2-user . /opt/elasticbeanstalk/support/envvars; /usr/bin/php /var/www/html/wp-cron.php > /dev/null 2>&1
.ebextensions/cronjob.config
container_commands:
  add_wp_cronjob:
    command: "cat .ebextensions/wp-cronjob.txt > /etc/cron.d/wp-cronjob && chmod 644 /etc/cron.d/wp-cronjob"
    leader_only: true

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
Maybe it is not exactly the same for a Node.js environment, but I am pretty sure they are similar.
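For a Node.js environment specifically, where /opt/elasticbeanstalk/support/envvars doesn't exist, the same idea should work by pulling the environment properties from get-config instead. This is an untested sketch; it assumes jq is installed and that babel-node is resolvable from cron's PATH:
#!/bin/bash
# Export the Elastic Beanstalk environment properties, then run the Node.js script
eval "$(/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""')"
babel-node /var/app/current/process-sms-schedules.js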

Scheduling an ec2-create-image cron job

I originally posted this on the AWS forums and didn't get much response.
I'm trying to schedule a twice-daily image of a server. I'm using this entry in my crontab under the root user:
01 12,00 * * * /opt/aws/bin/ec2-create-image i-InstanceNameHere --region eu-west-1 --name `date +%s` --description "testing-imaging" --no-reboot -O ABCDEFGHIJKLMNOPQRS -W ABCDEFGHIJKLMNOPQRS
Running the command manually (with correct key information and instance name, of course) successfully creates an image (though without the description); however, when run from cron, nothing happens.
I've had this command directly in crontab and have also dropped it into a bash script that writes a date-stamped entry to a file each time it runs, so I'm certain this isn't an issue with cron itself.
Does anyone have any thoughts what could cause this to not work when scheduled?
Thanks in advance for any advice!
Mike
Cron runs on a 24-hour clock, with 0 as midnight and 23 as 11 PM. As such, you'd simply replace 12,00 with 0,12; also, just use a single "0".
The problem, as suspected, wasn't specifically to do with cron.
Off the back of error2007s's request for logs, I put everything into a bash script and pushed all output into a log file.
The issue was that when running through cron, most of the environment variables needed weren't set; in the end I'm left with the definitions below. These might be a bit overzealous, but the process works.
#!/bin/sh
# Set the environment variables required for ec2-create-image to run
export AWS_PATH=/opt/aws
export PATH=$PATH:$AWS_PATH/bin
export AWS_ACCESS_KEY=000000000000
export AWS_SECRET_KEY=000000000000000000000
export AWS_HOME=/opt/aws/apitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export JAVA_HOME=/usr/lib/jvm/jre
/opt/aws/bin/ec2-create-image i-123123123123123 --region eu-west-1 --name `date +%s` --description "testing-imaging" --no-reboot &>> backuplog.txt
echo "backup operation ran `date`" >> backuplog.txt