Install package globally with Elastic Beanstalk - amazon-web-services

I'm deploying an app using Elastic Beanstalk, and part of the app has a grunt task that runs "sass". I have sass being installed, but it is installed locally and thus isn't on the PATH, so the grunt task fails.
I just attempted adding a command to the Beanstalk config that does sudo gem install sass, but that fails with: Command failed on instance. Return code: 1 Output: sudo: sorry, you must have a tty to run sudo.
What would be the best way to get sass onto the PATH? There doesn't seem to be an easy way to update the PATH or set .bashrc with Elastic Beanstalk.

Using ebextensions commands is the way to go. You don't need sudo, as the commands run with the necessary privileges.
It also looks like you are using the Node.js solution stack (since you mentioned grunt). There may be multiple versions of Ruby on your instance, so you want to be sure to use the right gem binary so your dependencies are installed in the right location.
There is a Ruby installed in /usr/bin and another one in /opt/elasticbeanstalk/lib. The latter is used by Elastic Beanstalk itself, which is not what you want; you want to run the gem binary under /usr/bin.
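For example, a minimal .ebextensions config along these lines would install sass globally with the system Ruby (the file name here is just an illustration):
# .ebextensions/01_sass.config
commands:
  01_install_sass:
    # Use the system Ruby's gem, not the one under /opt/elasticbeanstalk/lib,
    # so the sass executable ends up on the instance PATH.
    command: /usr/bin/gem install sass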

How do I access beanstalk application venv?

This last week I have been trying to upload a Flask app using AWS Elastic Beanstalk.
The main problem for me was including a very heavy library as part of the bundle (there is a 500 MB limit on the uploaded bundle code).
Instead, I tried to use a requirements.txt file so the library would be downloaded directly on the server.
Unfortunately, every time I tried to include the library name in the requirements file, it failed to install it (the torch library).
On the PythonAnywhere server there is a console which lets you access the virtual environment and simply type
pip install torch
which was very useful and convenient.
I am looking for something similar in AWS Elastic Beanstalk, so that I could install the library directly instead of relying on the requirements.txt file.
I have been at it for a few days now and can't make any progress.
Your help would be much appreciated.
Another question:
Is it possible to upload the venv to Amazon S3 and then access the folder from the Beanstalk environment?
It's not good practice to "manually" install your dependencies or configure your EB environment from inside the instance. This is only useful for testing and debugging purposes, so keep that in mind.
To get to your venv, you have to SSH into your EB instance, using regular ssh or the web-based clients available in the AWS EC2 console when you locate your EB EC2 instance. Session Manager should work out of the box to let you log in to the instance.
Once you are logged in to the instance, activate your venv with:
# start bash
bash
# source venv
source /var/app/venv/staging-*/bin/activate

How to fix Error: pg_config executable not found on Elastic Beanstalk permanently

I have a Django project that works with PostgreSQL on Elastic Beanstalk.
I get the following error when deploying:
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source.
Please add the directory containing pg_config to the $PATH or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
I followed psycopg2 on elastic beanstalk - can't deploy app to solve it and it worked! BUT after a while Amazon seems to update my virtual env and the error returns, so I have to go back and do the same thing over and over again.
I have also tried configuring the database instance directly from the Elastic Beanstalk panel, but nothing changes.
The database is on RDS, and I think it's important to say that when I manually install psycopg2 on my instance and re-deploy, everything works fine. I have even made a couple of deploys after that without the problem appearing, but it comes back after a while.
I really want to know how to solve it once and for all. Thanks in advance.
What it is trying to do is build the Postgres drivers from source.
You can deal with this in several different ways. For example, you can choose to install the drivers from a binary package instead of building them from source.
In your requirements.txt, replace
psycopg2==2.8.5
with
psycopg2-binary==2.8.5
If you do insist on building it from source when you deploy to Beanstalk, you will need to use platform hooks to pre-install the dependencies that are needed to compile it. The AL2 instances are pretty bare-bones, so you will need to do roughly the following (see the sketch below):
add the directory structure .platform/hooks/prebuild
in prebuild, create a script, something like '10_install_dependencies.sh'
at the top of the script you will need #!/usr/bin/sh
use this script to add the needed dependencies. For example, for the Postgres development libs you will want
sudo yum install -y <yum-package-that-you-need>
You may also end up having to install other development libs needed to build psycopg...
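A minimal sketch of such a hook, saved as .platform/hooks/prebuild/10_install_dependencies.sh and made executable; the exact yum package names used here (gcc, postgresql-devel, python3-devel) are assumptions and can vary by platform version:
#!/usr/bin/sh
# Install the compiler and Postgres client headers needed to build psycopg2 from source.
sudo yum install -y gcc postgresql-devel python3-devel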

How do I auto load packages (such as libjpeg-dev) to my Elastic Beanstalk App?

I have an auto-scaling Elastic Beanstalk app running Python where I want to use PIL. When I do, it says that my JPEG decoder is missing and that I need to install libjpeg.
So I followed the official AWS guides for "configuration files" here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format
But I can't get it working.
If I understand the guides correctly, I should set up a directory called .ebextensions in my application folder. Inside .ebextensions I should set up a foo.config file; in my case I name it python.config.
In this file I'm supposed to execute commands. The content of my .config file is:
packages:
  yum:
    libjpeg-devel: '6b'
I deploy my app and I can see in my Log Snapshots that it's inflating and creating the file like this:
-------------------------------------
/var/log/eb-tools.log
-------------------------------------
creating: /opt/python/ondeck/app/.ebextensions/
inflating: /opt/python/ondeck/app/.ebextensions/python.config
inflating: /opt/python/ondeck/app/application.py
...
However, I can't find anything about the commands actually being executed. I've searched my log for "yum", "python.config", "jpeg", "libjpeg" and so on without finding any relevant traces. I restart the application server but still get the same message saying libjpeg is missing.
I have seen other people asking similar questions about config files not working, but I have yet to see an answer.
I ran into the same issue. Instead of setting up a completely new Elastic Beanstalk app, I connected to the EC2 instance via SSH and re-installed PIL (or Pillow).
On the EC2 instance, I ran the following commands:
source /opt/python/run/venv/bin/activate
pip uninstall PIL
pip install PIL
Now PIL supports jpeg encoding =)
I "fixed" this by setting up a completely new Elastic Beanstalk app and deploying the exact same application there. It then successfully installed the libjpeg package.
I was never able to find out the answer to why it didnt work on the first Elastic Beanstalk App. But maybe it had something to do with PIL was first installed and then it couldnt install libjpeg after.
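Putting the two answers together, a hedged sketch of an .ebextensions config that installs libjpeg-devel and then reinstalls the imaging library against it might look like this (the pip path is the virtualenv pip from the answer above; whether you use PIL or Pillow, and the exact pip flags, are assumptions to adapt):
# .ebextensions/python.config -- illustrative sketch only
packages:
  yum:
    libjpeg-devel: []

container_commands:
  01_reinstall_imaging:
    # Force a reinstall so the imaging library is rebuilt with JPEG support
    # now that libjpeg-devel is available on the instance.
    command: /opt/python/run/venv/bin/pip install --force-reinstall --no-cache-dir Pillow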

Trouble installing AWS Elastic Beanstalk Command Line tool packages

I'm having trouble installing the AWS Elastic Beanstalk command line tool and I don't understand why. I've downloaded the package from AWS and followed the instructions carefully. Here are the installation instructions:
== Installation
Once you have downloaded the CLI package:
1) Unzip this archive to a location of your choosing.
Eb is located in the "eb" directory. The complete CLI reference
for more advanced scenarios can be found in the "api" directory.
To add eb files to your path:
Linux/Mac OS X (Bash shell):
export PATH=$PATH:<path to eb>
Windows:
set PATH=<path to eb>;%PATH%
I'm using Mac OS X, so I've used export PATH=$PATH:<path to eb>. For the path to eb, I just copied the file into the terminal, which resulted in export PATH=$PATH:/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/eb. I'm not sure what I'm missing, and I can't deploy without getting the eb command line tool set up first.
Remove the eb at the end so it's just
/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/
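In other words, using the path from the question, the line becomes:
export PATH=$PATH:/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/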
This worked for me, although I can only get it to work if I export the CLI in the specific website folder I am working on; see my question here: https://askubuntu.com/questions/428417/aws-elastic-beanstalk-command-line-tool-setup
A fix that worked for me (if you installed Python using brew) is to remove Python via
brew uninstall --force python
and then install it again from https://www.python.org/downloads/.
Then just follow the instructions from AWS.
You only add directories to your $PATH. Is ~/Downloads/ElasticBeanstalk/eb/macosx/python2.7/eb a directory? Or is it the actual command?

Deploying a new VM with Vagrant and AWS user-data not working

I have a provisioning setup with Vagrant and Puppet that works well locally, and I'm now trying to move it to AWS using vagrant-aws.
As I understand it, I can make use of the AWS user-data field in Vagrant to run commands on the first boot of a new VM, like so:
aws.user_data = File.read("user_data.txt")
Where user_data.txt contains:
#!/bin/bash
sudo apt-get install -y puppet-common
Then my existing Puppet provisioning scripts should be able to run. However, this errors out during vagrant up with:
[aws] Running provisioner: puppet...
The `puppet` binary appears to not be in the PATH of the guest. This
could be because the PATH is not properly setup or perhaps Puppet is not
installed on this guest. Puppet provisioning can not continue without
Puppet properly installed.
But when I SSH into the machine I see that the user data did get parsed and Puppet was installed successfully. Is the Puppet provisioner maybe running before the user data installs Puppet? Or is there some better way to install Puppet on a VM before trying to provision?
It is broken, but there's a workaround if you're using Ubuntu that is far simpler than building your own AMI.
Add the following line to your config:
aws.user_data = "#cloud-config\nbootcmd:\n - echo 'manual' > /etc/init/ssh.override\npackages:\n - puppet\nruncmd:\n - [ 'rm', '/etc/init/ssh.override' ]\n - [ 'service', 'ssh', 'start' ]\n"
This tells cloud-init to disable SSH startup early in the boot process and re-enable it once your packages are installed. That way Vagrant can only SSH in to run Puppet once the packages are fully installed.
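For readability, the same user-data string expanded into cloud-config form is:
#cloud-config
bootcmd:
 - echo 'manual' > /etc/init/ssh.override
packages:
 - puppet
runcmd:
 - [ 'rm', '/etc/init/ssh.override' ]
 - [ 'service', 'ssh', 'start' ]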
This will probably work for other distros that use cloud-init besides Ubuntu, although it is Upstart-specific, so the commands may need tweaking.
Well, I worked around this by building my own AMI with Puppet and the other things I need installed. It still seems like vagrant-aws is broken, or I'm misunderstanding something else here.