Add custom directives to wsgi.conf on AWS Elastic Beanstalk

I need to add a ProxyPass directive to the default wsgi.conf. I tried running a sed command in a container_commands script, but it seems to be called before wsgi.conf is created by the deploy scripts. I found that I can drop custom hooks into the /opt/elasticbeanstalk/hooks/appdeploy/post directory, but this method is not officially supported.

I wish I could find something more official, but it seems people are putting a wsgi.conf into their project, and using a container_commands script to move it to the appropriate location (which is not /etc/httpd/conf.d/wsgi.conf, though it does end up replacing /etc/httpd/conf.d/wsgi.conf in the end!):
container_commands:
  04_wsgireplace:
    command: "cp wsgi.conf ../wsgi.conf"
or
container_commands:
  04_wsgireplace:
    command: "cp .ebextensions/wsgi.conf ../wsgi.conf"
It depends on where in your project you've stored wsgi.conf, I assume; the script appears to be run from the app directory. I'm about to try it myself (for a Flask project), and I'll report back!
There's a very related question here.
Update: I tried it out (with wsgi.conf in .ebextensions), and it worked (for me).

I'm looking at another solution that addresses the case where the default wsgi.conf needs to be extended rather than replaced. The concept comes from a blog post about deploy hooks:
commands:
  create_post_dir:
    command: mkdir /opt/elasticbeanstalk/hooks/appdeploy/post
    ignoreErrors: true
  mv_post_appdeploy_script:
    command: mv /tmp/99_wsgi_conf.sh /opt/elasticbeanstalk/hooks/appdeploy/post
files:
  "/tmp/99_wsgi_conf.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      service httpd stop
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      service httpd start
I think this is a more elegant solution: extend the default rather than replace it. It can be made more sophisticated when required.
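For the ProxyPass case from the original question, the same hook pattern applies; here is a minimal sketch (the backend URL and path are illustrative, not from the original):
#!/usr/bin/env bash
# Hypothetical variant of 99_wsgi_conf.sh: append a ProxyPass directive
service httpd stop
echo "ProxyPass /api/ http://127.0.0.1:5000/api/" >> /etc/httpd/conf.d/wsgi.conf
service httpd start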


How can I include files from outside of Docker's build context using the "ADD" command in the Docker file?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD ../something/something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
I often find myself using the --build-arg option for this purpose. For example, after putting the following in the Dockerfile:
ARG SSH_KEY
# Ensure the target directory exists before writing the key
RUN mkdir -p /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
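If you are on BuildKit, a safer pattern (my addition, not from the original answer) is a build secret, which is never baked into the image history. A hedged sketch, with the secret id and paths being illustrative:
# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted only for this RUN step; it is not stored in any layer
RUN --mount=type=secret,id=ssh_key \
    mkdir -p /root/.ssh && cp /run/secrets/ssh_key /root/.ssh/id_rsa
Build it with:
docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa -t some-app .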
I spent a good amount of time trying to figure out a good pattern and how to better explain what's going on with this feature support. I realized that the best way to explain it was as follows:
Dockerfile: only sees files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of a Dockerfile that needs to reuse a file called start.sh
Dockerfile
It always loads from its relative path, using its own current directory as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Considering this idea, we can have multiple Dockerfiles, each building something specific, but they all need access to start.sh.
./all-services/
    start.sh
    service-X/Dockerfile
    service-Y/Dockerfile
    service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's the docker-compose.yaml.
In this example, your shared context directory is the all-services directory.
Same mental model here: think of all the files under this directory as being moved over to the so-called context.
Similarly, you specify the Dockerfile you want for each build using the dockerfile key, relative to that same directory.
The directory where your main content is located is the actual context to be set; the docker-compose.yaml is as follows:
version: "3.3"
services:
  service-X:
    build:
      context: ./all-services
      dockerfile: ./service-X/Dockerfile
  service-Y:
    build:
      context: ./all-services
      dockerfile: ./service-Y/Dockerfile
  service-Z:
    build:
      context: ./all-services
      dockerfile: ./service-Z/Dockerfile
all-services is set as the context; the shared file start.sh is copied there, as is the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start file!
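To make the example concrete, here is a minimal sketch of what e.g. all-services/service-X/Dockerfile could look like (the base image and CMD are illustrative assumptions):
FROM alpine
# start.sh resolves against the shared context (./all-services), not this folder
COPY start.sh /runtime/start.sh
CMD ["/runtime/start.sh"]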
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. It seems to be a design philosophy that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable from well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - i.e. docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and Docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use Docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context, let's say you wanted the parent directory as the context, just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
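A minimal sketch of the tarball approach (paths illustrative; the tarball must contain a Dockerfile at its root):
# Stage exactly what the image needs into a tarball, then build from it
tar -czf context.tar.gz -C /path/with/needed/files .
docker build -t myimage - < context.tar.gz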
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory you want to expose to the daemon, e.g.:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user ID that triggered the build must have recursive read permissions to every single directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files that aren't in the same directory into the container build context.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with a container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
Some known use cases for changing the context dir are when using a container as a toolchain for building your source code, e.g.:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time with the global paths omitted from the Dockerfile:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global paths for COPY and ADD are omitted in the Dockerfile command layers.
In this case the context dir must be the one containing the files: if both externalFile and AnotherProject are in the /opt directory, then the context dir for building must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permissions from the context dir. This can be especially costly when building through the API. With podman, by contrast, the build starts immediately, without needing recursive permissions, because podman does not enumerate the entire context dir up front and does not use a client/server architecture.
For such cases it can be much more practical to use podman instead of docker when you face these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    # alpine has no bash, so use sh here
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file then calls docker build then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
Have a wrapper script called docker_build.sh.
Have it create tarballs and copy large files into the current working directory.
Call docker build.
Clean up the tarballs, large files, etc.
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) another solution uses sudo bind, which has its own security hole because doing a bind requires root permission.
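A minimal sketch of such a wrapper, assuming a single large file outside the context (all names are illustrative):
#!/usr/bin/env bash
# docker_build.sh: stage the outside file, build, then clean up
set -euo pipefail
cp ../shared-data/big-file.bin ./big-file.bin   # hypothetical outside file
trap 'rm -f ./big-file.bin' EXIT                # clean up even if the build fails
docker build -t my-app .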
I think as of earlier this year a feature was added in buildx to do just this.
If you have Dockerfile 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to reference the named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
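A minimal end-to-end sketch under those version assumptions (the context name shared and the file names are illustrative):
# syntax=docker/dockerfile:1.4
FROM alpine
# "shared" is the extra named context passed via --build-context
COPY --from=shared config.json /app/config.json
built with:
docker buildx build --build-context shared=../shared-assets -t my-app .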
Workaround with hard links (unlike a symlink, a hard link looks like a regular file inside the context):
ln path/to/file/outside/context/file_to_copy ./file_to_copy
Then in the Dockerfile, simply:
COPY file_to_copy /path/to/file
I was personally confused by some answers, so I decided to explain it simply.
You should pass the context that you have assumed in your Dockerfile to docker when you want to create the image.
I always select the root of the project as the context.
So, for example, if you use a COPY command like COPY . ., the first dot (.) is the source relative to the context and the second dot (.) is the container's working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
  docker/
    Dockerfile
If you want to build the image and your path (the path where you run the docker build command) is /full-path/sample-project/, you should do this:
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/, you should do this:
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders which I'm replacing with real value using my configuration file.
So I couldn't specify this file directly, but piped it into docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
A plain docker build - (Dockerfile from stdin, no context) would break the COPY command, because no build context is sent. The invocation above solves that with -f - (the Dockerfile comes from stdin) while still passing . as the context. Using only - without the -f flag means neither the context nor a separate Dockerfile is provided, which is the caveat.
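For illustration, the corresponding line in the Dockerfile template might look like this (the LABEL is a hypothetical example; %email_address% is the placeholder the sed command fills in):
# Hypothetical templated line; sed replaces %email_address% before the build
LABEL maintainer="%email_address%"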
How to share TypeScript code between two Dockerfiles
I had this same problem, but for sharing files between two TypeScript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
  Dockerfile
  src/
    models/
      index.ts
frontend/
  Dockerfile
  src/
    models/
      index.ts
shared/
  model1.ts
  model2.ts
  index.ts
.dockerignore
Note: After extracting the shared code into that top folder, I avoided needing to update the import paths, because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
# Prefix with api/
COPY api/package*.json ./
RUN npm ci
# Prefix with api/
COPY api/src api/ts*.json ./
# Added: copy the shared code as well
COPY shared usr/src/shared
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1, and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course).
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
# Refresh the local copy of the schema from Repo1 before building
rm -rf ./db/schema
mkdir -p ./db/schema
cp -r ../Repo1/db/schema/. ./db/schema/
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.
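For that commit risk, one simple option (my suggestion, not from the original answer) is to ignore the copied directory in version control:
# Run once in Repo2: keep the copied schema out of source control
echo "db/schema/" >> .gitignore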

Accessing environment variables in AWS Beanstalk ebextensions

I am trying to access an environment variable that I have defined in the AWS Beanstalk configuration. I need to access it within a config file in .ebextensions, or in a file that is copied into place by a config file. I have tried the following:
container_commands:
  update_nginx_config:
    command: "cp .ebextensions/files/nginx/nginx.conf /etc/nginx/nginx.conf"
And in my nginx.conf file, I have tried to access $MYVAR, ${MYVAR} and {$MYVAR}, some of which was suggested here and here (the latter being directly within a config file).
files:
  "/etc/nginx/nginx.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      $MYVAR ${MYVAR} {$MYVAR}
This does not work either. In all cases, the literal variable names are just output, such as $MYVAR, so Beanstalk does not substitute my variables. I found the following in the AWS documentation about container_commands:
They also have access to environment variables such as your AWS security credentials.
This is great, but it does not say how.
How can I access an environment variable with ebextensions, be it within a config file itself or in a separate file that is copied in place?
Thank you in advance!
I reached out to Amazon technical support for an answer to this question, and here is their reply:
Unfortunately the variables are not available in ebextensions directly. The best option is to create a script that is then run from container commands, like this:
files:
  "/home/ec2-user/setup.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Commands that will be run from container_commands.
      # Here the container variables will be visible as environment variables.

container_commands:
  set_up:
    command: /home/ec2-user/setup.sh
So, if you create a shell script and invoke it via a container command, then you will have access to environment variables within your shell script as follows: $ENVIRONMENT_VARIABLE. I have tested this, and it works.
If you're having issues running a script as root and not being able to read the configured environment variables, try adding the following to the top of your script.
. /opt/elasticbeanstalk/support/envvars
Depending on your use case, you might have to change your approach a bit (at least I did), but it is a working solution. I hope this helps someone!
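Tying this back to the question's nginx case, a hedged sketch of what setup.sh could do (the %MYVAR% placeholder and paths are illustrative assumptions):
#!/bin/bash
# Render the environment variable into the nginx config template shipped with
# the app; container commands run from the staging directory of the app bundle.
sed "s|%MYVAR%|${MYVAR}|g" .ebextensions/files/nginx/nginx.conf > /etc/nginx/nginx.conf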
From this answer: https://stackoverflow.com/a/47817647/2246559
You can use the GetOptionSetting function described here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions-functions.html
For instance, if you were setting the worker_processes variable, it could look like:
files:
  "/etc/nginx/nginx.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      worker_processes `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "MYVAR"}}`;
Note the backticks `` in the function call.
In case you're using the value directly in a container command, the get-config script that comes with the instance can help.
Example:
container_commands:
  20_install_certs:
    command: |
      MY_VAR=$(/opt/elasticbeanstalk/bin/get-config environment -k MY_VAR)
This is a slightly different use case, and only for debugging purposes.
You can access the environment variables via
$ /opt/elasticbeanstalk/bin/get-config environment
Doc: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html
Only works for Linux environments I think!

Amazon Elastic Beanstalk - Change Timezone

I'm running an EC2 instance through AWS Elastic Beanstalk. Unfortunately it has the incorrect timezone - it's 2 hours earlier than it should be, because the timezone is set to UTC. What I need is GMT+1.
Is there a way to set up the .ebextensions configuration, in order to force the EC2 instance to use the right timezone?
Yes, you can.
Just create a file .ebextensions/00-set-timezone.config with the following content:
commands:
  set_time_zone:
    command: ln -f -s /usr/share/zoneinfo/Australia/Sydney /etc/localtime
This assumes you are using the default Amazon Linux AMI image. If you use some other Linux distribution, just change the command to whatever that distribution requires to set the timezone.
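For example, on systemd-based images such as Amazon Linux 2, a hypothetical equivalent (untested sketch) could use timedatectl instead:
commands:
  set_time_zone:
    command: timedatectl set-timezone Australia/Sydney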
This is a response from AWS Business Support, and it works!
---- Original message ----
How can I change the timezone of an environment, or rather of the instances of the environment, in Elastic Beanstalk to UTC/GMT -3 hours (Buenos Aires, Argentina)?
I'm currently using Amazon Linux 2016.03. Thanks in advance for your help.
Regards.
---------- Response ----------
Hello, thank you for contacting AWS Support regarding modifying your Elastic Beanstalk instances' time zone to UTC/GMT -3 hours (Buenos Aires, Argentina). Please see the steps below on how to perform this modification.
The below example shows how to modify the timezone for an Elastic Beanstalk environment using .ebextensions on Amazon Linux:
Create a .ebextensions folder in the root of your application.
Create a .config file, for example 00-set-timezone.config, and add the below content in YAML formatting:
container_commands:
  01changePHP:
    command: sed -i '/PHP_DATE_TIMEZONE/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.d/environment.ini
  01achangePHP:
    command: sed -i '/aws.php_date_timezone/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.d/environment.ini
  02change_AWS_PHP:
    command: sed -i '/PHP_DATE_TIMEZONE/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/httpd/conf.d/aws_env.conf
  03php_ini_set:
    command: sed -i '/date.timezone/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.ini
commands:
  01remove_local:
    command: "rm -rf /etc/localtime"
  02link_Buenos_Aires:
    command: "ln -s /usr/share/zoneinfo/America/Argentina/Buenos_Aires /etc/localtime"
  03restart_http:
    command: sudo service httpd restart
Deploy the application to Elastic Beanstalk, including the .ebextensions, and the timezone will change as per the above.
I hope that helps
Regards!
If you are running Windows in your EB environment:
Create a folder named .ebextensions in the root of your project.
Inside that folder, create a file named timezone.config.
In that file add the following:
commands:
  set_time_zone:
    command: tzutil /s "Central Standard Time"
Set the time zone as needed.
I'm using a custom .ini file in the php.d folder, along with the regular recommendations from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#change_time_zone:
The sed command rewrites only the first line of /etc/sysconfig/clock, since the second line (UTC=true) should be left alone, per the above AWS documentation.
# .ebextensions/02-timezone.config
files:
  /etc/php.d/webapp.ini:
    mode: "000644"
    owner: root
    group: root
    content: |
      date.timezone="Europe/Amsterdam"

commands:
  01_set_ams_timezone:
    command: sed -i '1 s/UTC/Europe\/Amsterdam/g' /etc/sysconfig/clock
  02_set_ams_localtime:
    command: ln -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
Changing the time zone of EC2 with Elastic Beanstalk is simple:
Create a .ebextensions folder in the root.
Add a file with a filename ending in .config (e.g. timezone.config).
Inside the file:
container_commands:
  time_zone:
    command: ln -f -s /usr/share/zoneinfo/America/Argentina/Buenos_Aires /etc/localtime
Then you're done.
Note that container_commands is different from commands; the documentation states:
commands run before the application and web server are set up and the application version file is extracted.
That's the reason your time zone command doesn't work: the server hasn't started yet.
container_commands run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed.
If you are running a Java/Tomcat container, just put the JVM option in the configuration:
-Duser.timezone=America/Sao_Paulo
Possible values: the tz database time zones.
Moving to AWS Linux 2 was challenging. It took me a while to work out how to do this easily in .ebextensions.
I wrote up the simple solution in another Stack Overflow question; for anyone needing instant gratification, add the following commands into the file .ebextensions/xxyyzz.config:
container_commands:
  01_set_bne:
    command: "sudo timedatectl set-timezone Australia/Brisbane"
  02_restart_cron:
    command: "sudo systemctl restart crond.service"
These workarounds only fix the timezone for applications. But when you have system services like cron running, they look at /etc/sysconfig/clock, and that is always UTC. If you tail the cron logs or aws-sqsd logs, you will notice the timestamps are still 2 hrs behind - in my case. And a change to the clock setting needs a reboot in order to take effect - which is not an option should you have autoscaling in place or should you want to use ebextensions to change the system clock's config.
Amazon is aware of this issue, and I don't think they have resolved it yet.
If your EB application is using the Java/Tomcat container, you can add the JVM timezone Option to the Procfile configuration. Example:
web: java -Duser.timezone=Europe/Berlin -jar application.jar
Make sure to add all configuration options before the -jar option, otherwise they are ignored.
In .ebextensions I added the below for PHP:
container_commands:
  00_changePHP:
    command: sed -i '/;date.timezone =/c\date.timezone = \"Australia/Sydney\"' /etc/php.ini
  01_changePHP:
    command: sed -i '/date.timezone = UTC/c\date.timezone = \"Australia/Sydney\"' /etc/php.d/aws.ini
  02_set_tz_AEST:
    command: "sudo timedatectl set-timezone Australia/Sydney"
  03_restart_cron:
    command: "sudo systemctl restart crond.service"
commands:
  01remove_local:
    command: "rm -rf /etc/localtime"
  02change_clock:
    command: sed -i 's/\"UTC\"/\"Australia\/Sydney\"/g' /etc/sysconfig/clock
  03link_Australia_Sydney:
    command: "ln -f -s /usr/share/zoneinfo/Australia/Sydney /etc/localtime"
    cwd: /etc
Connect to the AMI (Amazon Linux instance) via PuTTY or SSH and execute the commands below:
sudo rm /etc/localtime
sudo ln -sf /usr/share/zoneinfo/Europe/Istanbul /etc/localtime
sudo reboot
The procedure above is simply:
remove localtime,
update the timezone,
reboot.
Please note that I've changed my timezone to Turkey's local time; you can find your timezone by listing the zoneinfo directory with the command below:
ls /usr/share/zoneinfo
or just check the timezone abbreviations on Wikipedia:
http://en.wikipedia.org/wiki/Category:Tz_database
You can also check out the related Amazon AWS documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
Note: I'm not sure if this is the best practice (probably not), but I've applied the procedure written above and it's working for me.

Authorization Credentials Stripped --- django, elastic beanstalk, oauth

I implemented a REST API in Django with django-rest-framework and used OAuth2 for authentication.
I tested with:
curl -X POST -d "client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&grant_type=password&username=YOUR_USERNAME&password=YOUR_PASSWORD" http://localhost:8000/oauth2/access_token/
and
curl -H "Authorization: Bearer <your-access-token>" http://localhost:8000/api/
on localhost with successful results consistent with the documentation.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{ "detail" : "Authentication credentials were not provided." }
I like the idea of just having some extra configuration in the standard place. In your .ebextensions directory, create a wsgi_custom.config file with:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
As posted here: https://forums.aws.amazon.com/message.jspa?messageID=376244
I thought the problem was with my configuration in Django or some other error of mine, instead of focusing on the differences between localhost and EB. The issue is with EB's Apache settings.
WSGIPassAuthorization is set to Off by default, so it must be turned On. This can be done in a *.config file in your .ebextensions folder with the following command added:
container_commands:
  01_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
Please let me know if I missed something or if there is a better way I should be looking at the problem. I could not find anything specifically about this anywhere on the web and thought this might save somebody hours of troubleshooting then feeling foolish.
I use a slightly different approach now. sahutchi's solution worked as long as env variables were not changed, as Tom Dickin pointed out. I dug a bit deeper inside EB, found where the wsgi.conf template is located, and added the "WSGIPassAuthorization On" option there.
commands:
  WSGIPassAuthorization:
    command: sed -i.bak '/WSGIScriptAlias/ a WSGIPassAuthorization On' config.py
    cwd: /opt/elasticbeanstalk/hooks
That will always work, even when changing environment variables. I hope you find it useful.
Edit: It seems lots of people are still hitting this response. I haven't used Elastic Beanstalk in a while, but I would look into using Manel Clos' solution below. I haven't tried it personally, but it seems a much cleaner solution. This one is literally a hack on EB's scripts and could potentially break in the future if EB updates them, especially if they move them to a different location.
Though the above solution is interesting, there is another way. Keep the wsgi.conf VirtualHost configuration file you want to use in .ebextensions, and overwrite the deployed one in a post-deploy hook (you can't do this pre-deploy, because the file gets re-generated - yes, I found this out the hard way). When restarting, make sure to use the supervisorctl program so that all your environment variables are set properly. (I found this out the hard way as well.)
#!/usr/bin/env bash
cp /tmp/wsgi.conf /etc/httpd/conf.d/wsgi.conf
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
exit 0
And in .ebextensions/01_python.config:
container_commands:
  05_fixwsgiauth:
    command: "cp .ebextensions/wsgi.conf /tmp"

How to use conditional in .ebextensions config (AWS Elastic Beanstalk)

I wish I could use conditionals in my .ebextensions configuration, but I don't know how to. My current case is this:
Part of my .ebextensions configuration creates a folder, but the folder must be created only once; if I deploy the app a second time or more, I get an error that says "the folder already exists".
So I need a conditional: if the folder already exists, the command that creates it should not run again.
If anyone has any insight or direction on how this can be achieved, I would greatly appreciate it. Thank you!
The .ebextensions config files allow conditional command execution via the test: directive on a command. The command then only runs if the test is true (returns 0).
Example .ebextensions/create_dir.config file:
commands:
  01_create_dir:
    test: test ! -d "${DIR}"
    command: mkdir "${DIR}"
Another example (actually tested on EB) that conditionally runs a script if a directory is not there:
commands:
  01_install_foo:
    test: test ! -d /home/ec2-user/foo
    command: "/home/ec2-user/install-foo.sh"
    cwd: "/home/ec2-user/"
The sparse documentation from AWS is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands
PS. If you just need to conditionally create the directory, you can do it without conditionals, using the -p option of mkdir, which creates the directory only if it does not already exist:
commands:
  01_create_dir:
    command: mkdir -p "${DIR}"
I think the only way to do it is with shell conditionals:
commands:
  make-directory:
    command: |
      if [ ! -d "${DIR}" ]; then
        mkdir "${DIR}"
      fi
See a bigger example in jcabi-beanstalk-maven-plugin.