How to add aws-cli v2 in production?

I have developed an application in Node.js/Vue.js and I want to dockerize the whole project before pushing it to production.
My API executes an AWS CLI command at specific times, so I need to install and configure AWS CLI v2 in production.
// Runs at 08:30 and 12:30 every day: clear old CSVs, then copy the most recent
// object from the bucket into src/data (bucketName must be visible to the shell
// as an environment variable, since this string is not a JS template literal).
crontab.scheduleJob('30 8,12 * * *', () => {
  shelljs.exec("rm -rf src/data/*.csv && aws s3 cp s3://${bucketName}/`aws s3 ls s3://${bucketName} | tail -n 1 | awk '{print $4}'` src/data");
});
For development, I installed (from the command line) and configured the AWS CLI locally following https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html
Would it be possible to install AWS CLI v2 via the API's Dockerfile? What structure should I adopt?
Otherwise, please suggest alternative solutions.
Thank you

Maybe you can try using the AWS SDK to interact with the AWS API (in this case the S3 bucket), but if you prefer the aws-cli binary, you can install it in your Dockerfile the same way this Docker image does, or base your own image on that image.
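For reference, here is a minimal Dockerfile sketch of that approach for a Debian-based Node image (the base image, working directory, and start command below are assumptions, not your actual project layout):

# Debian-based image assumed; the AWS CLI v2 installer needs glibc,
# so this will not work as-is on Alpine.
FROM node:14

# Tools needed by the AWS CLI v2 installer
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
    && rm -rf /var/lib/apt/lists/*

# Same steps as the Linux install guide linked in the question
RUN curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip \
    && unzip -q awscliv2.zip \
    && ./aws/install \
    && rm -rf aws awscliv2.zip

WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "src/index.js"]

Rather than baking credentials in with aws configure, you can pass AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION to the container at runtime (or attach an IAM role if it runs on AWS) and the CLI will pick them up.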

Related

Userdata ec2 is not executed

I am setting up a web app through CodePipeline. My CloudFormation script creates an EC2 instance. In that instance's user data I have written logic to get the code from S3, copy it onto the instance, and start the server. The web app uses the Python Pyramid framework.
CodePipeline is connected to GitHub. It creates a zip file and uploads it to the S3 bucket. (That is all in a buildspec.yml file.)
When I change the user data script and run the pipeline, it works fine.
But when I change a web app (code base) file and re-run the pipeline, that change is not reflected.
This is for an Ubuntu EC2 instance.
#cloud-boothook
#!/bin/bash -xe
echo "hello "
exec > /etc/setup_log.txt 2> /etc/setup_err.txt
sleep 5s
echo "User_Data starts"
rm -rf /home/ubuntu/c
mkdir /home/ubuntu/c
# Find the key of the most recent artifact uploaded by the pipeline
key=`aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}'`
aws s3 cp s3://bucket-name/$key /home/ubuntu/c/
cd /home/ubuntu/c
# Extract the file name from the key and normalize it to c.zip
zipname="$(cut -d'/' -f3 <<<"$key")"
echo $zipname
mv /home/ubuntu/c/$zipname /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
echo $?
# Set up the virtualenv and start the Pyramid server
python3 -m venv venv
venv/bin/pip3 install -e .
rm -rf cc.zip
aws configure set default.region us-east-1
venv/bin/pserve development.ini http_port=5000 &
The expected result is that the user data script executes every time I run the code pipeline.
Any other suggestions are also welcome.
The user data script gets executed exactly once, upon instance creation. If you want to periodically synchronize your code changes to the instance, you should think about setting up a cron job in your user data script, or use a service like AWS CodeDeploy to deploy new versions (this is the preferred approach).
CodePipeline uses a different S3 object key for each pipeline execution artifact, so you can't hardcode a reference to it. You could publish the artifact to a fixed location instead, as sketched below. You might also want to consider using CodeDeploy to deploy the latest version of your application.
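One way to do the fixed-location idea (a sketch only; the bucket and key names are the placeholders from your script, and the phase layout is an assumption about your buildspec.yml): add a post_build command that copies the build output to a stable key, and have the instance always pull that key.

version: 0.2
phases:
  build:
    commands:
      - zip -r app.zip .
  post_build:
    commands:
      # Publish the artifact under a fixed key so instances always fetch the same path
      - aws s3 cp app.zip s3://bucket-name/pipeline-name/MyApp/latest.zip

On the instance, a cron entry (or a CodeDeploy hook) can then run aws s3 cp s3://bucket-name/pipeline-name/MyApp/latest.zip /home/ubuntu/c/c.zip and restart the server.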

How do I download files within a Sagemaker notebook instance programmatically?

We have a notebook instance within Sagemaker which contains many Jupyter Python scripts. I'd like to write a program which downloads these various scripts each day (i.e. so that I could back them up). Unfortunately I don't see any reference to this in the AWS CLI API.
Is this achievable?
It's not exactly what you want, but it looks like a VCS can fit your needs. You can use GitHub (if you already use it) or CodeCommit (free private repos). Details, plus additional approaches such as syncing a target directory with an S3 bucket, are here: https://aws.amazon.com/blogs/machine-learning/how-to-use-common-workflows-on-amazon-sagemaker-notebook-instances/
Semi-automatic way:
conda install -y -c conda-forge zip
!zip -r -X folder.zip folder-to-zip
Then download that zipfile.
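If you want this scripted rather than manual, one option (a sketch; the backup bucket name is a placeholder, and the notebook instance's role needs write access to it) is to sync the notebook directory to S3 from a terminal on the instance, for example from a daily cron entry:

# /home/ec2-user/SageMaker is where notebook instances keep your notebooks
aws s3 sync /home/ec2-user/SageMaker s3://my-notebook-backups/$(date +%F)/ --exclude "*.ipynb_checkpoints*"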

Cron job with aws eb and laravel task scheduling

I would like to know how to create a cron job with AWS Elastic Beanstalk and Laravel task scheduling.
Currently AWS Elastic Beanstalk proposes creating a cron.yaml file, but that file only takes a URL as a parameter, whereas Laravel needs to execute a command (php artisan schedule:run). I don't know how to do it.
Can you help me please?
Having done lots of googling, I don't think AWS EB supports executing the schedule:run command directly from the app. Instead, the command has to be triggered from an endpoint, as explained in the docs here.
I found a package here which helped me set up the endpoint easily.
Hope it helps...
Getting Laravel Scheduled tasks working is a lot simpler if you just do this:
sudo vi /etc/crontab
and add the line to the bottom of the file:
* * * * * webapp cd /var/www/html/<yourAppFolder>/ && php artisan schedule:run >> /dev/null 2>&1
Done!
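If you would rather not edit the crontab by hand on each box, here is a sketch of the same cron entry as an .ebextensions config (the application path is an assumption; adjust it to where EB deploys your app). Because it is applied on every deploy, instances created later by autoscaling pick it up too:

# .ebextensions/cron.config (sketch)
files:
  "/etc/cron.d/laravel-schedule":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * webapp cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1

Files under /etc/cron.d need the user column (webapp here, the same user as in the crontab line above).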

Reading revision string in post-deploy hook

I thought this would be easy, but I cannot manage to find a way to get the revision string from a post-deploy hook on Elastic Beanstalk. The use case is straightforward: I want to notify Rollbar of a deploy.
Here is the current script:
# Rollbar deploy notifier
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/90_notify_rollbar.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/support/envvars
      LOCAL_USERNAME=`whoami`
      REVISION=`date +%Y-%m-%d:%H:%M:%S`
      curl https://api.rollbar.com/api/1/deploy/ \
        -F access_token=$ROLLBAR_KEY \
        -F environment=$RAILS_ENV \
        -F revision=$REVISION \
        -F local_username=$LOCAL_USERNAME
So far I'm using the current date as the revision number, but that isn't really helpful. I tried using /opt/elasticbeanstalk/bin/get-config but I couldn't find anything relevant in the environment and container sections, and couldn't read anything from meta. Plus, I found no documentation about those, so...
Ideally, I would also like the username of the deployer, not the one on the local machine, but that would be the cherry on the cake.
Thanks for your time!
You can update your elastic beanstalk instance profile role (aws-elasticbeanstalk-ec2-role) to allow it to call Elastic Beanstalk APIs. In the post deploy hook you can call DescribeEnvironments with the current environment name using the aws cli or any of the AWS SDKs.
Let me know if you have any more questions about this or if this does not work for you.
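For example, a sketch of what the hook could call (the environment name is a placeholder, and the instance profile needs permission for elasticbeanstalk:DescribeEnvironments):

# Read the currently deployed version label from the Elastic Beanstalk API
REVISION=$(aws elasticbeanstalk describe-environments \
  --environment-names "my-env-name" \
  --query "Environments[0].VersionLabel" \
  --output text)

If you deploy with the EB CLI from a Git repository, the version label usually includes the commit SHA, which is what Rollbar expects as the revision.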
I'm also looking for an easy alternative to the API. For now I use bash:
eb deploy && curl https://api.rollbar.com/api/1/deploy/ -F access_token=xxx -F environment=production -F revision=`git rev-parse --verify HEAD` -F rollbar_username=xxx
Replace xxx with your token and username

How can I deploy an ember cli index to s3 without the sha?

I'm using ember-cli-deploy and ember-deploy-s3-index.
Following this article, I managed to deploy the index to a bucket with static web hosting and another bucket holding the assets.
I want to automate the deploy process (CI), but there are two problems:
Each deploy adds an index file with a new name (test:b2907fa.html for example), and I need to manually change the index document in my S3 configuration to match the latest deploy.
I need to add permissions to the file on each deploy.
I would like a fixed name for my index file (overwriting the existing one on deploy), and for the file to have view permissions by default.
Is this possible?
Thanks.
Turns out you don't need to change the index document.
After deploying, you need to run ember deploy:activate --revision test:b2907fa --environment production and it will change it in the S3 bucket.
A simpler alternative with no add-ons/dependencies:
Deploying an Ember CLI app is as simple as syncing the contents of the dist/ folder to your server (after building with the --production flag). These files can then be served statically.
Here is a script I wrote to automate my deploy process:
printf "** Depoying application**\n"
cd ~/Desktop/Project/ember_test/censored
printf "\n** Building static files **\n"
ember build --environment=production
printf "\n** Synchronizing distribution folder to frontend.censored.co.za **\n"
rsync -rv ~/Desktop/Project/ember_test/censored/dist frontend@frontend.censored.co.za:/var/www/html/censored --exclude ".*/" --exclude ".*" --delete
printf "\n** Removing production build from local repository **\n"
rm -rv ~/Desktop/Project/ember_test/censored/dist/*
printf "\n** Deployment done. **\n"
This script deploys to a Linux server. Since you want to deploy to S3 instead, you would replace my 3rd command (rsync) with something that puts the dist folder into S3 (probably an s3cmd put command), as sketched below.
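A sketch of what that step could look like (the bucket name is a placeholder; both commands use the dist/ folder from the script above):

printf "\n** Synchronizing distribution folder to S3 **\n"
# With s3cmd, making the files publicly readable for static hosting
s3cmd put --recursive --acl-public ~/Desktop/Project/ember_test/censored/dist/ s3://my-ember-bucket/
# Or, with the AWS CLI
aws s3 sync ~/Desktop/Project/ember_test/censored/dist/ s3://my-ember-bucket/ --delete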