This last week I have been trying to deploy a Flask app using AWS Elastic Beanstalk.
The main problem for me was including a very heavy library in the bundle (there is a 500 MB limit on the uploaded source bundle).
Instead, I tried to list the library in a requirements.txt file so it would be downloaded directly on the server.
Unfortunately, every time I included the library in the requirements file, the deployment failed to install it (the torch library).
On a PythonAnywhere server there is a console that lets you access the virtual environment and simply type
pip install torch
which was very useful and convenient.
I am looking for something similar in AWS Elastic Beanstalk, so that I could install the library directly instead of relying on the requirements.txt file.
I have been at it for a few days now and can't make any progress.
Your help would be much appreciated.
Another question:
is it possible to upload the venv to Amazon S3 and then access the folder from the Beanstalk environment?
It's not good practice to "manually" install your dependencies or configure your EB environment from the inside; that is only useful for testing and debugging purposes, so keep that in mind.
To get to your venv, you have to SSH into your EB instance using regular ssh or the web-based clients available in the AWS EC2 console once you locate your EB EC2 instance. Session Manager should work out of the box to let you log in to the instance.
Once you are logged in to the instance, activate your venv like this:
# start bash
bash
# source venv
source /var/app/venv/staging-*/bin/activate
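Once the venv is active you can install the package by hand for debugging. A minimal sketch, assuming the instance has enough disk space for the wheel (torch is large; the CPU-only index URL below is the one PyTorch documents, but verify it for your version):
# inside the activated venv on the instance
pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
The more durable fix is to pin the same CPU-only wheel in requirements.txt so that each deployment installs it for you, instead of installing by hand on every new instance.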
Our company is building up a suite of common internal Spark functions and jobs, and I'd like to make sure that our data scientists have access to all of these when they prototype in Zeppelin.
Ideally, I'd like a way for them to start up a Zeppelin notebook on AWS EMR and have the dependency jar we build loaded onto it automatically, without having to type in the Maven information by hand every time (private repo location/credentials, package info, etc.).
Right now we have the dependency jar stored on S3, and with some work we could get a private Maven repository to host it.
I see that ZEPPELIN_INTERPRETER_DIR saves off interpreter settings, but I don't think it can load from a common default location (like S3, or something).
Is there a way to tell Zeppelin on an EMR cluster to load its interpreter settings from a common location? I can't be the first person to want this.
Other thoughts I've had but have not tried yet:
Have a script that uses the aws command line to start an EMR cluster with all the necessary settings pre-made for you. (It could also upload the .jar dependency if we can't get Maven to work.)
Use an infrastructure-as-code framework to start up the clusters with the required settings.
I don't believe it's possible to tell EMR to load settings from a common location. The first thought you included is the way to go, in my opinion: you would run aws emr create-cluster ..., and that create would include a step running a shell script that replaces /etc/zeppelin/conf.dist/interpreter.json by downloading the interpreter.json of interest from S3, and then hard-restarts Zeppelin (sudo stop zeppelin; sudo start zeppelin).
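A minimal sketch of such a step script, assuming the shared settings live at s3://my-bucket/zeppelin/interpreter.json (bucket and key are placeholders) and the upstart-style service commands EMR used at the time:
#!/bin/bash
# replace the default interpreter settings with the shared copy from S3
sudo aws s3 cp s3://my-bucket/zeppelin/interpreter.json /etc/zeppelin/conf.dist/interpreter.json
# hard restart so Zeppelin picks up the new settings
sudo stop zeppelin
sudo start zeppelin
You would run this as a step after cluster creation (not as a bootstrap action, since Zeppelin's config directory may not exist yet when bootstrap actions run).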
I have deployed my Django app on AWS Elastic Beanstalk, using an SQLite database. The site has been running for a while.
Now I have made some changes to the website and want to deploy again, while keeping the changes the database has gone through in the meantime (from form inputs).
But I am not able to find a way to download the updated database or any other files. The only version of the app I am able to download is the one I uploaded when I first deployed.
How can I download the currently running app version and deploy again with the same database?
You may use scp to connect to the remote server and download all your files.
As commented, you had better use dumpdata and then transfer that file (instead of the db or the full site).
However, if your remote system's Python version is not the same as your Django site's, then dumpdata may not work (this was my case), so I managed to download the whole site folder back to local and then used dumpdata there.
An AWS Elastic Beanstalk Python app's files live in /opt/python/current/app.
Just use scp -r to download the whole site to local, as in the example below.
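A minimal example, assuming the default ec2-user login, your EC2 key pair at ~/.ssh/my-eb-key.pem, and the instance's public DNS name (all placeholders for your own values):
scp -i ~/.ssh/my-eb-key.pem -r ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/opt/python/current/app ./app-backup
The db.sqlite3 file will then be inside ./app-backup, next to the rest of the deployed code.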
I believe you have the SQLite db file being packaged from your local machine during eb deploy. If that is the case, then all you need to do is stop including the db.sqlite file in your deployments.
Move the db.sqlite file to S3 or an EBS persistent volume so that it persists across deployments.
Use .ebextensions to run the db migration in AWS on top of that persistent db file, just the way Django recommends (see the sketch below).
To be on the safe side, you can SSH into your EB environment (which is an EC2 instance) and download the db.sqlite file first, just in case.
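A minimal .ebextensions sketch for the migration step, assuming a file such as .ebextensions/django.config in your source bundle and the older Python platform's venv path (both assumptions you should verify for your platform version):
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true
container_commands run after the new version is unpacked but before it goes live, so the migration is applied to the persistent database on every deployment.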
I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using the Python buildpack.
One of my requirements is to install Blender.
Everything else works very well, except for the Blender installation.
What I've tried so far:
I tried accessing my app over an SSH connection, but of course I don't have the root access needed to apt-get install blender!
I also tried to include blender in a packages.json file and push that file using cf push my-app.
But nothing worked for me.
As a shorter version of the same question: what is the standard approach in Cloud Foundry apps to install packages the way we would use apt-get install on Ubuntu/Debian?
Please correct me if I did anything wrong, or give me some pointers to solve this problem!
I see a couple of options for you to install packages if they cannot be installed using the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it. The libraries will be uploaded. That is how I would do it (see the sketch after this list).
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected.
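A minimal sketch of the first option for Blender, assuming a Linux tarball release of Blender (the URL and version below are placeholders) and that your start command puts the vendored folder on PATH:
# vendor the binary inside the app folder before cf push
mkdir -p vendor
curl -L https://download.blender.org/release/.../blender-x.y-linux-x86_64.tar.xz | tar -xJ -C vendor
# then, in the Procfile start command, something like:
# web: PATH=$PWD/vendor/blender-x.y-linux-x86_64:$PATH gunicorn myproject.wsgi
Keep in mind that Blender also needs its shared-library dependencies present in the container, so vendoring the binary alone may not be enough.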
Regarding Blender, the question is what you need in addition to having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Maybe packaging everything as one or more containers and running it on Kubernetes, or a combination of Cloud Foundry and Kubernetes, is a better way.
I'm migrating a Play! application from Heroku to AWS Elastic Beanstalk.
Heroku is really straightforward when it comes to deploying: just push changes to a remote git repository on Heroku and the build happens on the server side.
This is very convenient because it is not necessary to upload the whole project for each tiny change (including all libraries!).
On Beanstalk, by contrast, for each change we are generating a huge 140 MB zipped Docker bundle that takes at least 10 minutes to upload.
Surely there must be a better way, but a long search on Google only returned options for automating the file generation with scripts and tools like Jenkins. That does not solve the problem; it just automates it.
Does anyone have a better solution?
You can set up an AWS CodeCommit repository and use it as a remote for your local git repository. Then you can set up AWS CodePipeline to build your application and deploy it to Elastic Beanstalk whenever there is a new commit to the CodeCommit repository.
This way you don't have to upload everything every time. Whenever you git push, only the changed files are uploaded to the CodeCommit repository, and AWS CodePipeline takes care of building your application and deploying it to Elastic Beanstalk.
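A minimal sketch of the git side, assuming a CodeCommit repository named my-play-app in us-east-1 (name and region are placeholders) and AWS credentials already configured for the CodeCommit credential helper:
# add CodeCommit as a second remote and push to trigger the pipeline
git remote add aws https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-play-app
git push aws master
The heavy build and packaging then happen on the AWS side, so only your diffs travel over the connection.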
So I got curious about this question too and had a conversation with an AWS specialist about the different options here. Each option has its downsides, though.
The first option is to bake your application code into an AMI and carry out deployments using that baked AMI.
You have to test this approach before adopting it. The downside is that you would have to maintain the AMI regularly. You might also miss out on critical patches from Beanstalk, since the AMI has been locked down.
The next approach would be to move off Beanstalk and use CloudFormation, where you can just upload your application folder to S3. Your CloudFormation template has to take care of spinning up all the required resources, and using AWS::CloudFormation::Init and cfn-signal it is possible to install and set up software. Changes within a resource's Metadata can be detected by making use of the proper CloudFormation signal, and we can also run user-specified actions when a change is detected in the template specification.
Reference for AWS::CloudFormation::Init and the related helper scripts:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
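A minimal sketch of the helper-script flow inside an instance's UserData, assuming a resource with the logical ID WebServer carrying an AWS::CloudFormation::Init metadata block (stack name, resource ID, and region are placeholders):
# apply the packages/files/commands declared under AWS::CloudFormation::Init
/opt/aws/bin/cfn-init -s my-stack -r WebServer --region us-east-1
# signal success or failure so the stack can proceed or roll back
/opt/aws/bin/cfn-signal -e $? --stack my-stack --resource WebServer --region us-east-1
Running cfn-hup on the instance additionally polls for metadata changes and reruns cfn-init, which is what enables the "run actions when a change is detected" behavior mentioned above.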
Although these are not exactly a solution to what you asked for, they can be a good alternative. At least I have made sure that you are not missing any available options around Beanstalk.
One more piece of advice I got from them was to consider splitting the application up into multiple components and sub-components. This would reduce your application size considerably.
Hope this helped.
Short answer: No.
Long Answer: I ended up packaging the app with activator and not using Docker.
Create a folder named "dist" in the root of the project.
Include a file named Procfile with the following line:
web: ./bin/YOUR_APP_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
Make sure to replace YOUR_APP_NAME with the name of your app as configured in build.sbt.
Package the Play app with the following command:
activator clean dist
That will generate a zip file inside the target/universal/ folder of the project.
Deploy that zip file to AWS Elastic Beanstalk.
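If you use the EB CLI, one way to deploy that zip directly is to point the deploy artifact at it in .elasticbeanstalk/config.yml. A minimal sketch, assuming an already-initialized EB application and the version number below as a placeholder:
deploy:
  artifact: target/universal/YOUR_APP_NAME-1.0.zip
With that in place, eb deploy uploads the zip as the source bundle instead of zipping the whole project directory.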
I want to make a Django project compatible with AWS Elastic Beanstalk.
I don't want to do this the way the AWS tutorial does, since they use git and require setting the whole project up the way they describe.
I just want to know if there is a way of converting an already created Python/Django project to be AWS Beanstalk compatible. I mean, isn't there a standard project layout to download, or a plugin or command-line tool that creates the .ebextensions folder for me? I want to convert my project and upload it through the AWS web GUI; I don't need all the git stuff.
You can do this without going the git route. You just need to zip your source bundle and upload it to the Beanstalk web console. The code structure can be kept the way you want.
Key configurations are:
1. WSGIPath: this should point to the .py file that starts your app (the WSGI app).
2. static: this should point to the path containing the static files.
You can add these configurations in the .ebextensions folder, which should be at the root of your app zip (see the sketch below). You can read more details here: Customizing and Configuring a Python Container - AWS Elastic Beanstalk
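A minimal sketch of such a config file, assuming the pre-Amazon Linux 2 Python platform, a file at .ebextensions/python.config, and a project module named mysite (all placeholders for your own names):
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite/wsgi.py
  aws:elasticbeanstalk:container:python:staticfiles:
    /static/: static/
Zip the project with this folder at its root and upload the zip through the web console; Beanstalk reads these settings during deployment.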