I want to schedule my droplets so that they are created during peak-traffic hours and destroyed during low-traffic hours. However, I failed to run this script from cron: doctl was never executed from the cron job, but it does run when I execute the script directly with ./script.
#!/bin/bash
# set environment variables
database_id=$(<./config/database_ID)
region=sgp1
num_nodes=3
size=db-s-2vcpu-4gb

# create read-only replicas
echo "create readonly on ${database_id}"
for i in {1..10}
do
  echo "create db-ro-${i}"
  doctl database replica create "$database_id" "db-ro-$i" --size db-s-4vcpu-8gb --region "$region"
done

# resize the database cluster
doctl databases resize "$database_id" --num-nodes "$num_nodes" --size "$size" -v
Can anyone suggest a workaround that uses only native doctl and no additional libraries?
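For context, the crontab entry looks roughly like this (the schedule and script path are placeholders rather than my exact values):

# crontab entry under which doctl never runs, although the same script works when invoked directly
0 8 * * * /bin/bash /home/user/scale-db.sh >> /var/log/db-scale.log 2>&1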
I am working on Glue in AWS and trying to test and debug it in a local dev environment. I followed the instructions here https://aws.amazon.com/blogs/big-data/developing-aws-glue-etl-jobs-locally-using-a-container/ to develop a Glue job locally. That post uses the Glue 1.0 image for testing, and it works as it should. However, when I try to develop with the Glue 3.0 image, following the same steps, I can't open the Jupyter notebook on :8888 as the post describes, even though every step seems correct.
Here is my command to start a Jupyter notebook in the Glue 3.0 container:
docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue3_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 /home/jupyter/jupyter_start.sh
Nothing shows up on http://localhost:8888.
I still have no idea why. I understand the differences between Glue versions; I just want to develop and test on the latest one. Has anybody run into the same issue?
Thanks.
It seems that the Glue 3.0 image has some issues with SSL. A workaround for working locally is to disable SSL (you also have to change the script paths, as the documentation has not been updated).
$ docker run -it -p 8888:8888 -p 4040:4040 -e DISABLE_SSL="true" \
-e AWS_ACCESS_KEY_ID=$(aws --profile default configure get aws_access_key_id) \
-e AWS_SECRET_ACCESS_KEY=$(aws --profile default configure get aws_secret_access_key) \
-e AWS_DEFAULT_REGION=$(aws --profile default configure get region) \
--name glue_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 \
/home/glue_user/jupyter/jupyter_start.sh
After a few seconds you should have a working Jupyter notebook instance running on http://127.0.0.1:8888.
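If the page still does not come up, the container logs usually show whether Jupyter actually started and whether SSL is the culprit; a quick check, assuming the container name used above:

# follow the container output to confirm Jupyter started
docker logs -f glue_jupyter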
I am setting up a web app through CodePipeline. My CloudFormation script creates an EC2 instance. In that instance's user data, I have written logic that fetches the code from S3, copies it onto the EC2 instance, and starts the server. The web app uses the Python Pyramid framework.
CodePipeline is connected to GitHub. It creates a zip file and uploads it to the S3 bucket (that is all in a buildspec.yml file).
When I change the user data script and run the pipeline, it works fine.
But when I change a web app file (my code base) and re-run the pipeline, that change is not reflected.
This is for an Ubuntu EC2 instance.
#cloud-boothook
#!/bin/bash -xe
echo "hello "
exec > /etc/setup_log.txt 2> /etc/setup_err.txt
sleep 5s
echo "User_Data starts"
# recreate the deployment directory
rm -rf /home/ubuntu/c
mkdir /home/ubuntu/c
# find the most recent pipeline artifact in the bucket
key=$(aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://bucket-name/$key" /home/ubuntu/c/
cd /home/ubuntu/c
zipname="$(cut -d'/' -f3 <<<"$key")"
echo "$zipname"
mv "/home/ubuntu/c/$zipname" /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
echo $?
# set up the virtualenv and install the app
python3 -m venv venv
venv/bin/pip3 install -e .
rm -f /home/ubuntu/c/c.zip
aws configure set default.region us-east-1
# start the Pyramid server
venv/bin/pserve development.ini http_port=5000 &
The expected result is that every time I run the pipeline, the user data script executes.
Can you give me a suggestion, or any other approach I could take?
The User-Data script gets executed exactly once, upon instance creation. If you want to periodically synchronize your code changes to the instance, you should think about setting up a cron job from your User-Data script, or use a service like AWS CodeDeploy to deploy new versions (this is the preferred approach).
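As a rough illustration of the cron-based option (the bucket name, fixed key, paths, and schedule are assumptions, not your actual setup), the User-Data script could drop in an entry like this so the instance keeps pulling the latest artifact:

# written once from User-Data; cron then re-pulls and unpacks the artifact every 10 minutes
cat <<'EOF' > /etc/cron.d/refresh-app
PATH=/usr/local/bin:/usr/bin:/bin
*/10 * * * * root aws s3 cp s3://bucket-name/latest/c.zip /home/ubuntu/c/c.zip && unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
EOF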
CodePipeline uses a different S3 object for each pipeline execution artifact, so you can't hardcode a reference to it. You could publish the artifact to a fixed location instead. You might also want to consider using CodeDeploy to deploy the latest version of your application.
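To sketch the fixed-location idea (the artifact and key names are assumptions), the build could add one extra copy in the post_build phase of buildspec.yml:

# extra post_build command: publish the freshly built artifact to a stable, well-known key
aws s3 cp MyApp.zip s3://bucket-name/latest/c.zip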
I have Elasticsearch 5.5 running on a server with some data indexed in it. I want to migrate this ES data to an AWS Elasticsearch cluster. How can I perform this migration? I learned that one way is to create a snapshot of the ES cluster, but I am not able to find any proper documentation for it.
The best way to migrate is by using snapshots. You will need to snapshot your data to Amazon S3 and then perform a restore from there. Documentation for snapshots to S3 can be found here. Alternatively, you can also re-index your data, though this is a longer process and there are limitations depending on the version of AWS ES.
I also recommend looking at Elastic Cloud, the official hosted offering on AWS that includes the additional X-Pack monitoring, management, and security features. The migration guide for moving to Elastic Cloud also goes over snapshots and re-indexing.
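To make the snapshot route more concrete, registering an S3 repository and taking a snapshot on a self-managed 5.x cluster looks roughly like this (it assumes the repository-s3 plugin is installed, and the endpoint, repository, bucket, and region names are placeholders; on the AWS ES side the equivalent calls must be signed with IAM credentials and use a registered IAM role):

# register an S3 snapshot repository on the source cluster
curl -XPUT "http://localhost:9200/_snapshot/my_s3_repo" -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": { "bucket": "my-es-migration-bucket", "region": "us-east-1" }
}'

# take a snapshot of all indices and wait for it to finish
curl -XPUT "http://localhost:9200/_snapshot/my_s3_repo/snapshot_1?wait_for_completion=true"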
I put together a shell script for this a while back -
Github - https://github.com/vivekyad4v/aws-elasticsearch-domain-migration/blob/master/migrate.sh
#!/bin/bash
#### Make sure you have Docker engine installed on the host ####
###### TODO - Support parameters ######
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxx
export AWS_DEFAULT_REGION=ap-south-1
export AWS_DEFAULT_OUTPUT=json
export S3_BUCKET_NAME=my-es-migration-bucket
export DATE=$(date +%d-%b-%H_%M)
old_instance="https://vpc-my-es-ykp2tlrxonk23dblqkseidmllu.ap-southeast-1.es.amazonaws.com"
new_instance="https://vpc-my-es-mg5td7bqwp4zuiddwgx2n474sm.ap-south-1.es.amazonaws.com"
delete=".kibana"
es_indexes=$(curl -s "${old_instance}/_cat/indices" | awk '{ print $3 }')
es_indexes=${es_indexes//$delete/}
es_indexes=$(echo $es_indexes | tr -d '\n')
echo "indexes to be copied are - $es_indexes"

for index in $es_indexes; do
  # Export ES data to S3 (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input="${old_instance}/${index}" \
    --output "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json"

  # Import data from S3 into ES (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json" \
    --output="${new_instance}/${index}"

  # list the indexes that now exist on the new cluster
  new_indexes=$(curl -s "${new_instance}/_cat/indices" | awk '{ print $3 }')
  echo $new_indexes
  curl -s "${new_instance}/_cat/indices"
done
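A quick sanity check after the copy, reusing the old_instance and new_instance variables from the script above (the index name here is just a placeholder):

# compare document counts for one index on the source and destination clusters
index="my-index"
curl -s "${old_instance}/_cat/count/${index}"
curl -s "${new_instance}/_cat/count/${index}"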
I originally posted this on the AWS forums and didn't get much response.
I'm trying to schedule a twice-daily image of a server. I'm using this entry in my crontab under the root user:
01 12,00 * * * /opt/aws/bin/ec2-create-image i-InstanceNameHere --region eu-west-1 --name `date +%s` --description "testing-imaging" --no-reboot -O ABCDEFGHIJKLMNOPQRS -W ABCDEFGHIJKLMNOPQRS
Running the command manually (with the correct key information and instance name, of course) successfully creates an image (but without the description); however, when run from cron, nothing happens.
I've tried both putting this command directly in the crontab and dropping it into a bash script that also writes a date-stamped entry to a file each time it runs, and those entries do appear, so I'm certain this isn't a cron issue.
Does anyone have any thoughts what could cause this to not work when scheduled?
Thanks in advance for any advice!
Mike
Cron uses a 24-hour clock, with 0 as midnight and 23 as 11 pm. As such, you'd simply have to replace 12,00 with 0,12. Also, just use a single 0.
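Applied to the entry from the question, that suggestion would look like this (instance ID and keys left as the placeholders used in the question):

01 0,12 * * * /opt/aws/bin/ec2-create-image i-InstanceNameHere --region eu-west-1 --name `date +%s` --description "testing-imaging" --no-reboot -O ABCDEFGHIJKLMNOPQRS -W ABCDEFGHIJKLMNOPQRS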
The problem, as suspected, wasn't specifically to do with cron.
Off the back of error2007s's query for logs, I put everything into a bash script and pushed all output into a log file.
The issue was that when running through cron, most of the environment variables needed weren't set. In the end I'm left with the definitions below. These might be a bit overzealous, but the process works.
#!/bin/sh
#Set required environment variables for ec2-create-image to run
export AWS_PATH=/opt/aws
export PATH=$PATH:$AWS_PATH/bin
export AWS_ACCESS_KEY=000000000000
export AWS_SECRET_KEY=000000000000000000000
export AWS_HOME=/opt/aws/apitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export JAVA_HOME=/usr/lib/jvm/jre
/opt/aws/bin/ec2-create-image i-123123123123123 --region eu-west-1 --name `date +%s` --description "testing-imaging" --no-reboot &>> backuplog.txt
echo "backup operation ran `date`" >> backuplog.txt
I created a tag on the AWS console for one of my EC2 instances.
However, when I look on the server, no such environment variable is set.
The same thing works with Elastic Beanstalk: env shows the tags I created in the console.
$ env
[...]
DB_PORT=5432
How can I set environment variables in Amazon EC2?
You can retrieve this information from the instance metadata and then run your own commands to set the environment variables.
You can get the instance-id from the metadata (see here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval):
curl http://169.254.169.254/latest/meta-data/instance-id
Then you can call describe-tags using the pre-installed AWS CLI (or install it on your AMI):
aws ec2 describe-tags --filters "Name=resource-id,Values=i-5f4e3d2a" "Name=key,Values=DB_PORT"
Then you can use the OS command to set the environment variable:
export DB_PORT=/what/you/got/from/the/previous/call
You can run all that in your user-data script. See here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
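Putting those steps together, a minimal user-data sketch that turns the DB_PORT tag from the question into an environment variable (the region and the JMESPath query are assumptions; the tag key matches the question):

#!/bin/bash
# look up this instance's ID, read its DB_PORT tag, and export it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
DB_PORT=$(aws ec2 describe-tags --region us-east-1 \
  --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=DB_PORT" \
  --query "Tags[0].Value" --output text)
export DB_PORT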
Lately, it seems AWS Parameter Store is a better solution.
Now there is even Secrets Manager, which manages sensitive configuration such as database keys and credentials for you.
See this script using SSM Parameter Store, based on the previous solutions by Guy and PJ Bergeron:
https://github.com/lezavala/ec2-ssm-env
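For the Parameter Store route, reading a value comes down to a single CLI call (the parameter name below is an assumption):

# read a (possibly encrypted) parameter and expose it as an environment variable
export DB_PORT=$(aws ssm get-parameter --name /myapp/DB_PORT --with-decryption --query "Parameter.Value" --output text)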
I used a combination of the following tools:
Install jq library (sudo apt-get install -y jq)
Install the EC2 Instance Metadata Query Tool
Here's the gist of the code below in case I update it in the future: https://gist.github.com/marcellodesales/a890b8ca240403187269
######
# Author: Marcello de Sales (marcello.desales#gmail.com)
# Description: Create Environment Variables in EC2 Hosts from EC2 Host Tags
#
### Requirements:
# * Install jq library (sudo apt-get install -y jq)
# * Install the EC2 Instance Metadata Query Tool (http://aws.amazon.com/code/1825)
#
### Installation:
# * Add the Policy EC2:DescribeTags to a User
# * aws configure
# * Source it from the ~/.profile of the user that has permissions
####
# Reboot and verify the result of $(env).
# Loads the Tags from the current instance
getInstanceTags () {
# http://aws.amazon.com/code/1825 EC2 Instance Metadata Query Tool
INSTANCE_ID=$(./ec2-metadata | grep instance-id | awk '{print $2}')
# Describe the tags of this instance
aws ec2 describe-tags --region sa-east-1 --filters "Name=resource-id,Values=$INSTANCE_ID"
}
# Convert the tags to environment variables.
# Based on https://github.com/berpj/ec2-tags-env/pull/1
tags_to_env () {
tags=$1
for key in $(echo $tags | /usr/bin/jq -r ".[][].Key"); do
value=$(echo $tags | /usr/bin/jq -r ".[][] | select(.Key==\"$key\") | .Value")
key=$(echo $key | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
echo "Exporting $key=$value"
export $key="$value"
done
}
# Execute the commands
instanceTags=$(getInstanceTags)
tags_to_env "$instanceTags"
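Per the installation notes at the top of the script, it is meant to be sourced from the user's ~/.profile so the exported variables show up in login shells; the script path below is hypothetical:

# run the tag export on every login (script location is an assumption)
echo "source /home/ubuntu/ec2-tags-to-env.sh" >> ~/.profile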
If you are using Linux or macOS for your EC2 instance, then:
Go to your home directory and run the command:
vim .bash_profile
You will see your .bash_profile file; press 'i' to insert a line, then add
export DB_PORT="5432"
After adding this line you need to save the file: press 'Esc', then press ':' and type 'w'; this saves the file without exiting.
To exit, press ':' again and type 'quit', and you will leave the file. To check whether your environment variable is set, run the commands below:
python
>>> import os
>>> os.environ.get('DB_PORT')
'5432'
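Note that .bash_profile is only read by new login shells; to pick the variable up in your current session, source it first:

# apply the change to the current shell without logging out again
source ~/.bash_profile
echo "$DB_PORT"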
Following the instructions given by Guy, I wrote a small shell script. This script uses AWS CLI and jq. It lets you import your AWS instance and AMI tags as shell environment variables.
I hope it can help a few people.
https://github.com/12moons/ec2-tags-env