Bamboo SCP plug-in: how to find directory - amazon-web-services

I am trying to upload a file to a remote server using the SCP task. I have OpenSSH configured on the remote server in question, and I am using an Amazon EC2 instance running Windows Server 2008 R2 with Cygwin to run the Bamboo build server.
My question is regarding finding the directory I wish to use. I want to upload the entire contents of C:\doc using SCP. The documentation notes that I must use the local path relative to the Bamboo working directory rather than an absolute directory name.
I found by running pwd during the build plan that the working directory is /cygdrive/c/build-dir/CDP-DOC-JOB1. So to get to doc, I can run cd ../../doc. However, when I set the local path under the SCP configuration to ../../doc/** (using this pattern matching guide), I get the message "There were no files to upload." in the log.
C:\doc contains subfolders as well as a text file in its root.
My SCP task configuration and a Cygwin view of the directory are shown in screenshots (not reproduced here).

You may add a first "script" task running a Windows shell that copies everything from C:\doc into a directory under the Bamboo working directory, and then run the SCP task to copy the contents of this new directory to your remote server:
mkdir doc
xcopy c:\doc .\doc /E /F
The pattern for the SCP copy should then be /doc/**

Related

Elastic BeanStalk app deploy post hook not executing my command

I recently got my Laravel app deployed using CodePipeline on Elastic Beanstalk, but ran into a problem. I noticed that my routes were failing because of the Nginx php.conf configuration. I had to add a few lines to EB's nginx php.conf file to get it to work.
My problem was that after every deployment, the instance on which I had modified the php.conf file was destroyed and recreated fresh. I wanted a way to update the file automatically after every successful deployment. I keep a version of the file I want in my application's repository, so I wanted to create a symlink to that file after deployment.
After loads of research, I stumbled on appdeploy hooks on Elastic Beanstalk, which run scripts after deployment, so I did this:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
And for some reason this does not work. The symlink is not created, so my routes are still not working.
I even added some mkdir commands so I could be sure the script runs; none of those commands ran, because none of those directories were created.
Please note that if I SSH into the EC2 instance and run the commands there, it works. That bash script also exists in the post directory, and if I run it manually on the server it works too.
Any pointers to how I could fix this would be helpful. Maybe I am doing something wrong too.
Update: I have now set up my scripts to run by following this. However, the script is still not running. I am getting an error:
2020/06/28 08:22:13.653339 [INFO] Following platform hooks will be executed in order: [01_myconf.config]
2020/06/28 08:22:13.653344 [INFO] Running platform hook: .platform/hooks/postdeploy/01_myconf.config
2020/06/28 08:22:13.653516 [ERROR] An error occurred during execution of command [app-deploy] - [RunPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/01_myconf.config failed with error fork/exec .platform/hooks/postdeploy/01_myconf.config: permission denied
I tried to follow this forum post here to make my file executable by adding a new command to my container commands, like so:
01_chmod1:
  command: "chmod +x .platform/hooks/postdeploy/91_post_deploy_script.sh"
I am still running into the same issue: permission denied.
Sadly, the hooks you are describing (i.e. /opt/elasticbeanstalk/hooks/appdeploy) are for Amazon Linux 1.
Since you are using Amazon Linux 2, as clarified in the comments, the hooks you are trying to use do not apply. Thus they are not being executed.
In Amazon Linux 2, there are new hooks, as described here; they are (a minimal layout sketch follows the list):
prebuild – Files here run after the Elastic Beanstalk platform engine downloads and extracts the application source bundle, and before it sets up and configures the application and web server.
predeploy – Files here run after the Elastic Beanstalk platform engine sets up and configures the application and web server, and before it deploys them to their final runtime location.
postdeploy – Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server.
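For orientation, here is a minimal sketch of what that layout could look like for this use case; the script name is taken from the question, everything else is an assumed structure rather than a verified setup:
.platform/
  hooks/
    postdeploy/
      91_post_deploy_script.sh
The hook script itself, kept essentially as in the question (sudo is retained from the original script; the file must be committed with the executable bit set):
#!/usr/bin/env bash
# .platform/hooks/postdeploy/91_post_deploy_script.sh
sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
sudo nginx -s reload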
The use of these new hooks is different from Amazon Linux 1, so you have to either move back to Amazon Linux 1 or migrate your application to Amazon Linux 2.
General migration steps from Amazon Linux 1 to Amazon Linux 2 in EB are described here.
Create a folder called .platform in your project root folder and create a file named 00_myconf.config inside the .platform folder.
.platform/
  00_myconf.config
Open 00_myconf.config and add the scripts:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
Commit your changes or re-upload the project. This .platform folder is picked up on each new instance creation, so your application will deploy properly on all the new instances Amazon Elastic Beanstalk creates.
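One more thing worth checking, given the permission denied error in the question: files under .platform/hooks/ must be executable. A minimal sketch of one way to handle that, assuming the project is tracked in git (the file name is the one from the question's error log):
chmod +x .platform/hooks/postdeploy/01_myconf.config
# tell git to preserve the executable bit, so freshly deployed bundles keep it
git update-index --chmod=+x .platform/hooks/postdeploy/01_myconf.config
git commit -m "Mark platform hook as executable"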
If you access the documentation here and scroll to the section titled "Application example with extensions", you can see an example of the .platform folder structure that adds your custom configuration to the NGINX conf on every deploy.
You can either replace the entire nginx.conf file with your own file, or add additional configuration files to the conf.d directory.
Replace the conf file with your own on app deploy:
.platform/nginx/nginx.conf
Add configuration files that get included into nginx.conf:
.platform/nginx/conf.d/custom.conf
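As a concrete illustration, a drop-in like the one below gets included into the generated nginx.conf on every deploy; the directive shown is only an example, not something from the question:
# .platform/nginx/conf.d/custom.conf  (hypothetical example)
client_max_body_size 50M;
For the question's case, shipping the customized file as .platform/nginx/conf.d/elasticbeanstalk/php.conf may replace the platform's default php.conf on each deploy, which would remove the need for the post-deploy symlink entirely.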

How to create folder on Elastic Beanstalk server to install LetsEncrypt SSL certificate with AcmePHP

I have a site running on an Elastic Beanstalk single instance server and want to add automated SSL certificate generation from LetsEncrypt using the AcmePHP library.
The library tries to store the certificates in ~/.acmephp, which the server rejects with an error:
Failed to create "/home/webapp/.acmephp": mkdir(): Permission denied.
The acmephp library doesn't have an option to change the path built in, and rather than fork and recompile the script, I'd like to be able to store the files in the default directory.
Does anyone know how I can give the app permission to create this directory, outside of the web root, or how I can make the server create it automatically and have it be available to the app?
It looks like, since it's being run by the webapp user, AcmePHP fails when trying to store the certificate under that user's home directory, because the directory doesn't exist (AFAIK the webapp user only runs httpd and definitely doesn't have a home directory).
A very dirty workaround could be to create that folder manually via a config file in the .ebextensions folder of your project. The file would be .ebextensions/create_home.config and it would contain something like this:
files:
  "/tmp/create-home.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /home/webapp
      chown webapp:webapp -R /home/webapp

commands:
  01_create:
    command: "/tmp/create-home.sh"
That script is run by the root user, and it then changes ownership of the /home/webapp folder to the webapp user and group, respectively. Hope it helps.

AWS Elastic Beanstalk - .ebextensions

My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions on files/folders within your deployment directory.
If you look at the event hooks for an elastic beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
You'll find that commands run before the EC2 app and web server are set up, and
container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory and a file, modifies the permissions, and adds some content to the file:
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  modify_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W

files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
ebextensions config files go into a directory called .ebextensions in the root of the application source code. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside the .ebextensions folder:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS GitHub, it does not seem to be explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform Commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\) to run at your desired stage of deployment.
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the files .ebextensions directive when trying to place scripts directly into the platform hook dirs during repeated deployments. Specifically, the files directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a Container Command to copy (with overwriting) from another directory into the desired hook directory to overcome this issue, as sketched below.
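A minimal sketch of that container-command approach on a Linux platform (the scripts/ folder and the file name are made up for illustration; on Windows the destination would be under the C:\Program Files\Amazon\ElasticBeanstalk\hooks\ directory mentioned above):
container_commands:
  01_install_hook:
    # copy with overwrite so repeated deployments do not trip over stale backups
    command: cp -f scripts/99_my_post_deploy.sh /opt/elasticbeanstalk/hooks/appdeploy/post/
  02_make_hook_executable:
    command: chmod +x /opt/elasticbeanstalk/hooks/appdeploy/post/99_my_post_deploy.sh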

How do I set the beanstalk .ebextensions .config "sources" key "target directory" to the current bundle directory

I'm working in a python 2.7 elastic beanstalk environment.
I'm trying to use the sources key in an .ebextensions .config file to copy a tgz archive to a directory in my application root -- /opt/python/current/app/utility. I'm doing this because the files in this folder are too big to include in my github repository.
However, it looks like the sources key is executed before the ondeck symbolic link to the current bundle directory is created, so I can't reference /opt/python/ondeck/app in the sources key: it creates the folder, and then Beanstalk errors out when trying to create the ondeck symbolic link.
Here are copies of the .ebextensions/utility.config files I have tried:
sources:
  /opt/python/ondeck/app/utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above successfully copies to /opt/python/ondeck/app/utility, but then Beanstalk errors out because it can't create the symbolic link from /opt/python/bundle/x --> /opt/python/ondeck.
sources:
  utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above copies the folder to /utility, right off the filesystem root, alongside /etc.
You can use container_commands instead of sources, as they run after the application has been set up.
With container_commands you won't be able to use sources to automatically fetch and extract your files, so you will have to use commands such as wget or curl to get your files and untar them afterwards.
Example: curl http://[bucket].s3.amazonaws.com/utility.tgz | tar xz
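Put together in a .config file, that might look something like this (the command name and the relative target folder are placeholders):
container_commands:
  01_get_utility:
    # container commands run from the staging directory where the source bundle
    # is extracted, so the utility folder ends up inside the deployed application
    command: "mkdir -p utility && curl -s http://[bucket].s3.amazonaws.com/utility.tgz | tar xz -C utility"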
In my environment (PHP) there is no transient ondeck directory, and the current directory where my app is eventually deployed is recreated after commands are run.
Therefore, I needed to run a script post-deploy. Searching revealed that I can put a script in /opt/elasticbeanstalk/hooks/appdeploy/post/ and it will run after deploy.
So I download and extract my files from S3 to a temporary directory in the simplest way, by using sources. Then I create a file that will copy my files over after the deploy and put it in the post-deploy hook directory.
sources:
  /some/existing/directory: https://s3-us-west-2.amazonaws.com/my-bucket/vendor.zip

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_move_my_files_on_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      mv /some/existing/directory /var/app/current/where/the/files/belong

How to set up and use EC2 CLI on Mac?

I am stuck on using the Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folder into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after changing to /Users/Invictus/EC2 with cd.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS Unified CLI Tools are much, much easier to set up. All you need is Python, which comes built in on every Mac.
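Getting started with it is roughly this (a sketch; it assumes pip is available and you have your access keys at hand):
pip install awscli            # or: brew install awscli
aws configure                 # prompts for access key, secret key, and default region
aws ec2 describe-images --owners amazon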
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as "./ec2-describe-images" - does that work? If not, can you check the permissions on that script?
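For the second point, a quick way to check and fix the permissions on the tool scripts (paths assume the layout described in the question):
ls -l /Users/Invictus/EC2/bin/ec2-describe-images   # look for the executable (x) bit
chmod +x /Users/Invictus/EC2/bin/*                  # make the scripts executable if it is missing
cd /Users/Invictus/EC2/bin && ./ec2-describe-images -o amazon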