I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using python build-pack.
One of my requirements is to install blender.
Everything else works fine, except for the Blender installation.
What I've tried so far:
I tried accessing my app over an SSH connection, but of course I don't have root access to run apt-get install blender!
I also tried including blender in a packages.json file and pushing that file with cf push my-app.
But nothing worked for me.
To put it as a shorter question: what is the standard approach in Cloud Foundry apps for installing packages, the way we would use apt-get install on Ubuntu/Debian?
Please correct me if I've done anything wrong, or give me some pointers toward solving this problem!
I see a couple of options for installing packages when they cannot be installed through the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it; the libraries will be uploaded along with the app. That is how I would do it.
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected.
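For the scp route, Cloud Foundry's SSH access uses a one-time passcode rather than your regular credentials. A minimal sketch, assuming your platform's SSH endpoint is ssh.my-platform.example.com on port 2222 (check the app_ssh_endpoint field in cf curl /v2/info for the real value) and that your app is named my-app:
cf ssh-code   # prints a one-time passcode; use it as the scp password when prompted
scp -P 2222 -o User=cf:$(cf app my-app --guid)/0 blender.tar.gz ssh.my-platform.example.com:app/vendor/
Keep in mind that the container filesystem is ephemeral, so anything copied this way disappears on the next restart or restage; files you want to keep should be pushed with the app instead.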
Regarding Blender, the question is what you need in addition to having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Packaging everything as one or more containers and running it on Kubernetes, or on a combination of Cloud Foundry and Kubernetes, may be a better way.
Related
I’m looking to learn about Cloud Foundry and I’m trying to get a development instance of it set up on my local Windows 10 PC. But I’m not having any luck.
I'm finding a lot of information about PCF Dev, which was deprecated a while ago. I also looked at the replacement for PCF Dev, CF Dev (https://github.com/cloudfoundry-attic/cfdev). Its GitHub page mentions that the repository is no longer receiving updates. I still went ahead and tried installing it using the instructions in the README:
cf install-plugin -r CF-Community cfdev
But the link it uses to download the plugin is broken:
Starting download of plugin binary from repository CF-Community...
Get "https://d3p1cc0zb2wjno.cloudfront.net/cfdev/cfdev-v0.0.18-rc.36-windows.exe": dial tcp: lookup d3p1cc0zb2wjno.cloudfront.net: no such host
Can anyone recommend a way to get a development instance of Cloud Foundry set up on my local machine so I can play around with it?
Thanks
Yes, steer clear of PCF Dev and CF Dev. They may still work, but they are definitely not getting updates, so they will be way out of date by now.
My understanding, although I haven't tried this process in a while, is that the way to run Cloud Foundry locally is in a VirtualBox VM, using bosh-deployment and cf-deployment.
For instructions on installing BOSH into VirtualBox using bosh-deployment, see the Install section of the BOSH docs.
With BOSH installed, follow the cf-deployment guide to get CF installed. You can skip to step 4, since you're installing into VirtualBox. Be sure to read the entire document before you begin, and pay specific attention to the section that has specific instructions for running locally.
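For reference, the VirtualBox flavor of the BOSH install looked roughly like the following at the time of writing. The flags and addresses come from the bosh-deployment docs and may have changed since, so treat this as a sketch, not a recipe:
git clone https://github.com/cloudfoundry/bosh-deployment.git
bosh create-env bosh-deployment/bosh.yml \
  --state vbox/state.json \
  -o bosh-deployment/virtualbox/cpi.yml \
  -o bosh-deployment/virtualbox/outbound-network.yml \
  -o bosh-deployment/bosh-lite.yml \
  -o bosh-deployment/bosh-lite-runc.yml \
  -o bosh-deployment/jumpbox-user.yml \
  --vars-store vbox/creds.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork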
This past week I have been trying to deploy a Flask app using AWS Elastic Beanstalk.
The main problem for me was including a very heavy library as part of the bundle (there is a 500 MB limit on the uploaded bundle).
Instead, I tried to use a requirements.txt file so the library would be downloaded directly on the server.
Unfortunately, every time I tried to include the library's name in the requirements file, the deployment failed to install it (the torch library).
On a PythonAnywhere server there is a console that lets you access the virtual environment and simply type
pip install torch
which was very useful and convenient.
I am looking for something similar in AWS beanstalk, so that I could install the library directly instead of relying on the requirements.txt file.
I have been at it for a few days now and can't make any progress.
Your help would be much appreciated.
One more question: is it possible to upload the venv to Amazon S3 and then access that folder from the Beanstalk environment?
It's not good practice to "manually" install your dependencies or configure your EB environment from the inside; this is only useful for testing and debugging purposes, so keep that in mind.
To get to your venv, you have to SSH into your EB instance, either with regular ssh or with the web-based clients available in the AWS EC2 console once you locate your EB EC2 instance. Session Manager should work out of the box to let you log in to the instance.
When you login to the instance, then to activate your venv, you do:
# start bash
bash
# source venv
source /var/app/venv/staging-*/bin/activate
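From there, pip installs go straight into the app's virtual environment, so you can test a heavy dependency interactively. Per the caveat above, anything installed this way is lost when the instance is replaced or the environment rebuilds, so treat it as a debugging aid only:
pip install torch   # installs into the live EB venv, not your source bundle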
I want to deploy my Wagtail project (Wagtail is a CMS based on Django) to an AWS Lambda function. The best option seems to be using Zappa.
Wagtail needs OpenCV installed to support all of its features.
As you might know, just running pip install opencv-python is not enough, because OpenCV needs some OS-level packages. So before running pip install opencv-python, one has to install some packages (yum install ...) on the Amazon Linux image that the Lambda environment runs on.
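For illustration, the OS-level piece on Amazon Linux is often just the GL runtime that opencv-python links against; a hypothetical sequence (the package name is illustrative and your image may need more):
sudo yum install -y mesa-libGL   # illustrative; covers the common libGL.so.1 import error
pip install opencv-python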
The only solution that came to mind is using Lambda layers to properly install OpenCV.
But I'm not sure whether it's possible to use Lambda layers with projects deployed by Zappa.
Any kind of help and sharing experiences would be really appreciated!
There is an open pull request that is ready to merge but needs additional user testing.
The older project has a pull request that claims layer support has been merged.
Feel free to try it out and let the maintainers know, so the documentation can be updated.
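If layer support works the way those pull requests describe, the configuration would presumably be a list of layer ARNs in zappa_settings.json, along these lines (the stage name, settings module, and ARN are placeholders, and the exact key may differ until the docs are updated):
{
    "production": {
        "django_settings": "myproject.settings",
        "layers": [
            "arn:aws:lambda:us-east-1:123456789012:layer:opencv:1"
        ]
    }
}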
I'm using AWS Lambda, which involves creating an archive of my node.js script, including the node_modules folder and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (using node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically, node-sqlite3) so that it runs when I upload it to a server with a different architecture?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, or others that use yum as their package manager; see the Wiki)
Use a simple configuration script that, when run by Vagrant on machine startup, will run npm install (optionally it might also remove the node_modules folder first to ensure a clean installation)
For extra comfort, the script can also create the zip file for deployment
Once the installation finishes, the script will shutdown the VM to avoid unnecessary consumption of system resources
Deploy!
It might require some tuning if the linked libraries are not in the same place on the target machine, but generally this seems to me like the best and quickest solution.
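A rough sketch of that workflow, assuming Node.js, npm, and zip are installed in the box by the provisioning script (the box name and paths are illustrative):
# pick a yum-based box that resembles Amazon Linux
vagrant init centos/7
vagrant up
# build inside the VM; /vagrant is the project folder Vagrant shares by default
vagrant ssh -c 'cd /vagrant && rm -rf node_modules && npm install --production && zip -r lambda-bundle.zip . -x ".git/*"'
# shut the VM down to free resources
vagrant halt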
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on a Linux that is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn a new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review the Lambda environment details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was the Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as on AWS Lambda. At the time of writing, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
You will probably need to install git and the g++ compiler on the EC2 instance. You can do this by running:
sudo yum install git gcc-c++
Finally, clone your app to your new EC2 instance and install your app's dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules folder using scp or similar.
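For example, something along these lines (the key file and hostname are placeholders for your own):
scp -r -i ~/.ssh/my-key.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com:my-app/node_modules ./node_modules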
Along the same lines as Robert's answer: when I had to work in a different OS on my Mac, I used a virtualizer like Oracle's free VirtualBox to get Linux running on my Mac at no cost. Alternatively, sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/
Can I use apt-get or other package managers in Cloud Foundry buildpacks or in the .profile scripts that ship with apps, and if so, how? I expected it to work the same way as in a Dockerfile, but in my case it doesn't work, with or without sudo.
Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it?
No. Running apt-get or another package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference compared with Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installs into the vcap user's home directory and doesn't need root, you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
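A minimal sketch of the .deb route (the package name and library paths are illustrative; dpkg -x unpacks an archive without needing root):
# on any machine with dpkg available
apt-get download libexample0               # or fetch the matching Trusty .deb manually
dpkg -x libexample0_*.deb extracted/       # unpack without installing
mkdir -p myapp/vendor/lib
cp extracted/usr/lib/x86_64-linux-gnu/*.so* myapp/vendor/lib/
# have the app's .profile point the loader at the vendored libs at runtime
echo 'export LD_LIBRARY_PATH="$HOME/vendor/lib:$LD_LIBRARY_PATH"' >> myapp/.profile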
Hope that helps!