Currently on deployment I get:
Hook /opt/elasticbeanstalk/hooks/preinit/30directories.sh failed
I want to remove the hook entirely using .ebextensions. I am currently using:
/.ebextensions/01-remove-unused.config
commands:
  removeunused:
    command: "rm -f /opt/elasticbeanstalk/hooks/preinit/30directories.sh"
    ignoreErrors: true

files:
  "/opt/elasticbeanstalk/hooks/preinit/30directories.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      ls
I'm not sure how relevant this is, but which Elastic Beanstalk platform are you using? For 64bit Amazon Linux 2016.09 v2.3.0 running Docker 1.11.2 specifically (and maybe other platforms), I don't believe there is any way to do this the way you are describing.
Unfortunately, the preinit scripts are executed well before Elastic Beanstalk injects .ebextensions into your environment, and they are only run when a fresh instance is started. To confirm this, you can inspect /var/log/eb-activity.log on a freshly deployed Elastic Beanstalk instance, which shows everything AWS logs about the bootstrapping process. Search for Initialization/PreInitStage0/PreInitHook in that log file, and then also search for .ebextensions; you will see that the preinit scripts indeed get executed before almost everything else, and the .ebextensions files come much later. (For what it's worth, this blog post might further help you understand which hooks run at which times.)
What you could potentially do is configure an .ebextensions command that executes before all other non-preinit hook scripts and re-executes (and potentially undoes the changes from) all of the preinit scripts. However, I would guess that this would be more trouble than it is worth, as there are likely unintended side effects that could come from it.
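If you wanted to experiment with that idea anyway, a minimal, untested sketch might look like the following; the command name is made up, and blindly re-running hooks is exactly the kind of thing that can cause the side effects mentioned above:

commands:
  01_rerun_preinit_hooks:
    command: |
      for hook in /opt/elasticbeanstalk/hooks/preinit/*.sh; do
        bash "$hook" || true
      done
    ignoreErrors: true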
At any rate, these are my findings trying to do something similar. Hopefully, this helps (despite the fact that I haven't technically solved your problem)!
Related
On Elastic Beanstalk, with an AWS Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on AWS Linux 1 based environments. The difference appears to be that AWS Linux 2 based environments replace the /var/app/current folder during environment variable changes, where AWS Linux 1 based environments did not do this.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions: scripts in hooks run when you deploy the app, whereas scripts in confighooks run when you change the configuration of the app.
Make sure both of these files are marked executable in git, or else you will run into a "permission denied" error when you try to deploy. You can check whether they are executable via git ls-files -s .platform; you should see 100755 before any shell files in the output of this command. If you see 100644 before any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to make them executable.
My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions to files/folders within your deployment directory.
If you look at the event hooks for an elastic beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
You'll find that commands run before the EC2 app and web server are set up, and container_commands run after the app and web server are set up, but before your application version is deployed.
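For illustration only, here is a minimal sketch of that timing difference (the command names and echoed text are made up); note that neither stage runs after your application version has actually been deployed, which is why neither can fix permissions on the deployed files:

commands:
  01_runs_early:
    command: echo runs before the app and web server are set up
container_commands:
  01_runs_later:
    command: echo runs after setup, but before the application version is deployed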
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Below is a sample .ebextensions config file that creates a directory, modifies its permissions, and adds some content to a file.
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
.ebextensions files go into a directory called .ebextensions in the root of the application source code. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside the .ebextensions folder:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS github, it seems to be not explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\) to run at your desired stage of deployment.
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the File .ebextensions directive when trying to place scripts directly into the platform hook dirs during repeated deployments. Specifically, the File directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a Container Command to copy (with overwriting) from another directory into the desired hook directory to overcome this issue.
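As a rough, untested sketch of that copy-into-hooks scheme for the Windows platform in this question (the script name, the hook-stage subfolder, and the assumption that a relative .ebextensions path resolves from the container command's working directory are all mine):

container_commands:
  01_install_post_hook:
    command: copy /Y .ebextensions\set_acls.bat "C:\Program Files\Amazon\ElasticBeanstalk\hooks\appdeploy\post\49_set_acls.bat"

The idea is that set_acls.bat ships inside your source bundle and only gets promoted into the hooks directory at deploy time, so repeated deployments simply overwrite it instead of relying on the File directive's backup behaviour.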
I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress : I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I logged into the failing container in debug mode and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details, any S2I builder image will gladly use your custom supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification as it did for me, you have to use: git update-index --chmod=+x app.sh for it to work.
I implemented a REST API in Django with django-rest-framework and used OAuth2 for authentication.
I tested with:
curl -X POST -d "client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&grant_type=password&username=YOUR_USERNAME&password=YOUR_PASSWORD" http://localhost:8000/oauth2/access_token/
and
curl -H "Authorization: Bearer <your-access-token>" http://localhost:8000/api/
on localhost with successful results consistent with the documentation.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{ "detail" : "Authentication credentials were not provided." }
I like the idea of just having some extra configuration in the standard place. In your .ebextensions directory, create a wsgi_custom.config file with:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
As posted here: https://forums.aws.amazon.com/message.jspa?messageID=376244
I thought the problem was with my Django configuration or some other error, instead of focusing on the differences between localhost and EB. The issue is with EB's Apache settings.
WSGIPassAuthorization is set to Off by default, so it must be turned On. This can be done in a *.config file in your .ebextensions folder with the following command added:
container_commands:
  01_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
Please let me know if I missed something or if there is a better way I should be looking at the problem. I could not find anything specifically about this anywhere on the web and thought this might save somebody hours of troubleshooting then feeling foolish.
I use a slightly different approach now. sahutchi's solution worked as long as env variables were not changed as Tom dickin pointed out. I dug a bit deeper inside EB and found out where the wsgi.conf template is located and added the "WSGIPassAuthorization On" option there.
commands:
  WSGIPassAuthorization:
    command: sed -i.bak '/WSGIScriptAlias/ a WSGIPassAuthorization On' config.py
    cwd: /opt/elasticbeanstalk/hooks
That will always work, even when changing environment variables. I hope you find it useful.
Edit: Seems like lots of people are still hitting this response. I haven't used Elastic Beanstalk in a while, but I would look into using Manel Clos' solution below. I haven't tried it personally, but it seems a much cleaner solution. This one is literally a hack on EB's scripts and could potentially break in the future if EB updates them, especially if they move them to a different location.
Though the above solution is interesting, there is another way: keep the wsgi.conf VirtualHost configuration file you want to use in .ebextensions, and overwrite the generated one in a post-deploy hook (you can't do this pre-deploy, because the file gets re-generated; yes, I found this out the hard way). If you do this, make sure to restart using the supervisorctl program so that all your environment variables get set properly (I found this out the hard way as well). The post-deploy hook script:
cp /tmp/wsgi.conf /etc/httpd/conf.d/wsgi.conf
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
exit 0
In 01_python.config:

container_commands:
  05_fixwsgiauth:
    command: "cp .ebextensions/wsgi.conf /tmp"
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error is caused because SSHing manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried SSHing as root, but it seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not reliably run (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround, but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase before cloud-init (which is what aws.user_data passes information to) has prepared the workaround for #72 on the machine. If Vagrant is faster you will see the same error; if cloud-init is faster, it works.
What will work (but requires more effort on your side)
What definitely works is to run the commands below on an instance started from a stock Amazon Linux AMI image, and then save the modified instance (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs, bootcmd is run "very early" in the startup cycle of an EC2 instance -- the idea being that the bootcmd instructions would run earlier than Vagrant tries its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).