This may be a silly question, but I'm trying to deploy a static site with Codeship and I can't make sense of the documentation:
https://codeship.com/documentation/continuous-deployment/deployment-to-aws-codedeploy/
The setup flow is a little different now, and I don't know what to write in the "Local Path" input.
You should interpret "Local Path" as a reference to the working directory on the build virtual machine.
It took me a while to figure it out. You can see this in the cloning step, which prints something like this:
Cloning into '/home/rof/src/bitbucket.org/<your_user>/<your_repository>'
The path /home/rof/src/bitbucket.org/<your_user>/<your_repository> is what you are looking for.
If you want to upload something inside that directory, just append the rest of the path, e.g. /home/rof/src/bitbucket.org/<your_user>/<your_repository>/internal/path
For example:
You can compile your Node.js app and compress the dist directory into an artifact, then upload it to S3.
It would look something like this in your setup commands:
nvm use 5.6.0
npm install
npm run deploy
tar -zcvf artifact.tar.gz dist/
mkdir upload/
mv artifact.tar.gz upload/
Finally your "Local Path" is:/home/rof/src/bitbucket.org/<your_user>/<you_repository>/upload
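If you're unsure about the exact path on your build machine, you can print it from a setup command first (just a sanity check; the ls assumes the upload/ directory from the commands above):

pwd          # prints the working directory, e.g. /home/rof/src/bitbucket.org/<your_user>/<your_repository>
ls upload/   # confirms the artifact landed where you expect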
Hope this helps!
We have a notebook instance within Sagemaker which contains many Jupyter Python scripts. I'd like to write a program which downloads these various scripts each day (i.e. so that I could back them up). Unfortunately I don't see any reference to this in the AWS CLI API.
Is this achievable?
It's not exactly what you want, but it looks like a VCS could fit your needs. You can use GitHub (if you already use it) or CodeCommit (free private repos). Details, plus additional approaches such as syncing the target directory with an S3 bucket, are here: https://aws.amazon.com/blogs/machine-learning/how-to-use-common-workflows-on-amazon-sagemaker-notebook-instances/
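For example, something like this on the notebook instance (a rough sketch; the CodeCommit region and repo name are placeholders to replace with your own):

# Version the notebook directory with CodeCommit
cd /home/ec2-user/SageMaker
git init
git add .
git commit -m "Notebook backup"
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-notebooks
git push -u origin master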
A semi-automatic way:
conda install -y -c conda-forge zip
!zip -r -X folder.zip folder-to-zip
Then download that zip file.
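If you want it fully automated instead, a scheduled job that syncs the notebook directory to S3 would cover the daily-backup case (a sketch; the bucket name is a placeholder, and /home/ec2-user/SageMaker is where notebook files live on a notebook instance):

# Copy all notebook files to S3; run daily, e.g. from cron
aws s3 sync /home/ec2-user/SageMaker s3://my-backup-bucket/notebooks/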
My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files, etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and its sub-folders. I think it's supposed to be done using .ebextensions, but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions to files/folders within your deployment directory.
If you look at the event hooks for an elastic beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
You'll find that commands run before the EC2 app and web server are set up, and
container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful:
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory and a file, modifies the directory's permissions, and adds some content to the file:
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    # Named commands run in alphabetical order, so the directory is created first
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my sample file created from ebextensions
ebextensions config files go into the root of the application source code, in a directory called .ebextensions. For more information on how to use ebextensions, please go through the documentation linked above.
Place a file 01_fix_permissions.config inside your .ebextensions folder:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions commands and container_commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS GitHub, it does not seem to be explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use commands or container_commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks on Linux or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ on Windows) so it runs at your desired stage of deployment.
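On the older Amazon Linux platform, that could look something like this (a sketch only; the script name and the bundled scripts/ directory are made up for illustration):

# Run as a container command from the application bundle: copy a script
# into the pre-deploy hook directory and make it executable
cp scripts/50_set_permissions.sh /opt/elasticbeanstalk/hooks/appdeploy/pre/
chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/pre/50_set_permissions.sh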
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the files .ebextensions directive when trying to place scripts directly into the platform hook directories during repeated deployments. Specifically, the files directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a container command to copy, with overwriting, from another directory into the desired hook directory to overcome this issue.
I'm using Hexo for my GitHub Pages site. I mistakenly deleted the local files on my local machine. I tried to make a new local copy by running git clone https://github.com/aaayumi/aaayumi.github.io.git, then installed Hexo with npm install hexo-cli -g.
I could install all the necessary files, but when I typed hexo deploy, it showed:
hexo deploy
Usage: hexo <command>
Commands:
help Get help on a command.
init Create a new Hexo folder.
version Display version information.
Global Options:
--config Specify config file instead of using _config.yml
--cwd Specify the CWD
--debug Display all verbose messages in the terminal
--draft Display draft posts
--safe Disable all plugins and scripts
--silent Hide output on console
For more help, you can use 'hexo help [command]' for the detailed information
or you can check the docs: http://hexo.io/docs/
Is there a way to use the hexo blog locally again?
The code in https://github.com/aaayumi/aaayumi.github.io is not the source code of your blog, it is just the generated content. What you need are the original markdown files that were inside your source folder.
You will have to recreate the blog with hexo init and rewrite your blog posts... Sorry for that.
Of course, you can look at your website directly (http://ayumi-saito.com/) and rewrite the posts, copy-pasting from there, which should not take that long.
Also, to make sure this does not happen again, you can publish your blog source files in a different repository so that there is always a copy somewhere.
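For example (a sketch; the source-repo name is a placeholder):

# Keep the Hexo source (markdown posts, _config.yml, themes) in its own repo,
# separate from the generated aaayumi.github.io output
cd my-blog-source
git init
git add .
git commit -m "Back up Hexo source files"
git remote add origin https://github.com/<your_user>/blog-source.git
git push -u origin master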
PS: Thanks for using my theme ;)
I'm trying to download a postgres driver to each node of my cluster. I wrote the following bootstrap action, but it doesn't seem to have worked:
#!/bin/bash
aws s3 cp s3://path/to/driver/jars/postgresql-9.4.1210.jre7.jar .
I know this must be an easy thing to do, but I can't seem to find an obvious example.
The bootstrap action you have looks fine and is probably working. You are probably just assuming that it downloads the file to the same directory where you land when you ssh into the cluster, which is /home/hadoop, but that is not the case. The working directory of bootstrap actions is somewhere under /var/lib/bootstrap-actions, if I remember correctly.
It would be easier to find the downloaded file if you changed "." to something like "/home/hadoop". You could also create some other new directory to which to download the file as part of this script (using "sudo mkdir" and "sudo chown" if necessary).
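For example, the bootstrap action could be rewritten like this (same S3 path as yours; /home/hadoop/jars is just an arbitrary, easy-to-find choice):

#!/bin/bash
# Download the driver to a predictable location instead of the
# bootstrap action's working directory
sudo mkdir -p /home/hadoop/jars
sudo chown hadoop:hadoop /home/hadoop/jars
aws s3 cp s3://path/to/driver/jars/postgresql-9.4.1210.jre7.jar /home/hadoop/jars/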
I am stuck using the Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folders into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after running cd /Users/Invictus/EC2.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer unified AWS CLI is much, much easier to set up. All you need is Python, which comes built in on every Mac.
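For example (assuming pip is available; aws ec2 describe-images is the unified-CLI equivalent of the command you were trying to run):

# Install and configure the unified CLI, then list Amazon-owned images
pip install awscli
aws configure    # prompts for your access key, secret key, and default region
aws ec2 describe-images --owners amazon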
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory and log out and back in (or restart your machine) to see if it picks up the right path.
Instead of ec2-describe-images, can you run it as ./ec2-describe-images from the bin folder - does that work? If not, check the permissions on that script. A quick sanity check is sketched below.
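Something like this, once the profile is in your home directory (the variable names match the .bash_profile from the question):

source ~/.bash_profile
echo $EC2_HOME              # should print the directory that holds bin/ and lib/
ls $EC2_HOME/bin | head     # the ec2-* scripts should show up here
which ec2-describe-images   # should resolve once PATH is set correctly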