We use the PHP buildpack to run our app on a Cloud Foundry service.
To back up the database we want to use the mysqldump command, so we need a way to install mysql-client in the buildpack.
Do we have to create our own PHP buildpack every time we need a custom dependency, or is there an easier way to install additional dependencies in the buildpack?
After some testing with the apt-buildpack (thanks to @FyodorGlebov) I have found a working solution.
Add an apt.yml file in the project root (documentation):
---
packages:
- mysql-client
Add a multi-buildpack.yml file in the project root (documentation):
buildpacks:
- https://github.com/cloudfoundry/apt-buildpack
- https://github.com/cloudfoundry/php-buildpack
Push your app with this command (documentation):
cf push APP_NAME -b https://github.com/cloudfoundry/multi-buildpack
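Once the app is staged with both buildpacks, the backup itself can be run from inside the app container. A minimal sketch, assuming cf ssh access is enabled; the host, credentials, and database name below are placeholders (in practice they come from VCAP_SERVICES):

```shell
# Placeholders: DB_HOST, DB_USER, DB_PASSWORD, DB_NAME must be replaced
# with the values from your bound MySQL service's credentials.
cf ssh APP_NAME -c 'mysqldump -h DB_HOST -u DB_USER -pDB_PASSWORD DB_NAME > /tmp/backup.sql'
```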
I'm working on a project generated by cookiecutter-django, running locally with Docker, and I want to add new packages. What is the best approach?
Currently I copy the package name and version into the base requirements file and rebuild the local.yml containers, but this seems to rebuild every container in the project instead of only the one whose dependencies changed. I don't know if my approach is the best one, so please help me achieve this.
Given how you tagged the question, I assume you want to add a new Python package to a project that was generated using cookiecutter-django.
I think that the way you're doing it is correct. To be 100% clear, you need to:
Edit the requirement file where you want it installed:
local.txt for local only
production.txt for production only
base.txt for both
Rebuild your containers: docker-compose -f local.yml build
Restart your containers: docker-compose -f local.yml up -d
The 2nd step may feel a bit heavy, as it reinstalls all the Python packages, not just the new one, but AFAIU that's how Docker works.
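If the full rebuild gets too slow, docker-compose also accepts service names, so you can limit the rebuild to the container whose dependencies changed. A sketch, assuming the cookiecutter-django default service name django:

```shell
# Rebuild and restart only the django service;
# other services keep running on their old images.
docker-compose -f local.yml build django
docker-compose -f local.yml up -d django
```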
Hope that helps!
I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance, and a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'migrate', '--noinput']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"
The deployment goes alright and the app can connect to the database. But when I try to load a page I get the following error:
...import psycopg2 as Database
  File "/srv/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import ( # noqa
ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
Another interesting thing is that if I deploy my app with 'gcloud app deploy' (not through Cloud Build), everything is alright: I don't get the error above, and my app can communicate with the database.
I am pretty new with gcloud, so maybe I missed some basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Do I pip install my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build based deployment and the manual one?
From what I see, you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically handles the installation of the dependencies specified in the requirements.txt file and executes the entrypoint you defined in your app.yaml. This has the benefit of not having to trigger the installation of dependencies manually. The first two steps of your cloudbuild.yaml do not affect App Engine's runtime, since that runtime is configured by the aforementioned files once they're deployed.
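For reference, the entrypoint mentioned above is declared in app.yaml. A minimal sketch for a Django project on the python37 runtime; the module path mysite.wsgi and the static directory are placeholders for your own project, and gunicorn must be listed in requirements.txt:

```yaml
runtime: python37
entrypoint: gunicorn -b :$PORT mysite.wsgi
handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto
```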
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It can be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is aimed at running gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required steps for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an AppEngine "standard" instance.
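The Quickstart's approach amounts to a cloudbuild.yaml that contains only the deploy step and leaves dependency installation to App Engine. A sketch (the timeout value is arbitrary, and the Cloud Build service account also needs permission to deploy to App Engine, e.g. the App Engine Admin role):

```yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"
```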
When I deploy the app, it runs fine on the first install. But any subsequent eb deploy fails with an error: go.mod was found, but not expected.
Is there a specific configuration I have to set for deploying with Go modules?
I switched to Dockerizing the app and deploying that way, which works fine. But it seems a bit cumbersome to me, as AWS Elastic Beanstalk provides dedicated Go environments.
You can work with go modules.
build.sh
#!/usr/bin/env bash
set -xe
# get all of the dependencies needed
go get
# create the application binary that EB uses
go build -o bin/application application.go
and override GOPATH, which defaults to /var/app/current (as shown in the EB configuration management dashboard), to point to $HOME instead.
.ebextensions/go.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    GOPATH: /home/ec2-user
I had the same problem; I was finally able to fix it by adding this line to my build.sh script file:
sudo rm /var/app/current/go.*
So in my case it looks like this:
#!/usr/bin/env bash
# Stops the process if something fails
set -xe
sudo rm /var/app/current/go.*
# get all of the dependencies needed
go get "github.com/gin-gonic/gin"
go get "github.com/jinzhu/gorm"
go get "github.com/jinzhu/gorm/dialects/postgres"
go get "github.com/appleboy/gin-jwt"
# create the application binary that eb uses
GOOS=linux GOARCH=amd64 go build -o bin/application -ldflags="-s -w"
I have a Django project that I deploy on a server using CircleCI. The server is a basic cloud server, and I can SSH into it.
I set up the deployment section of my circle.yml file, and everything is working fine. I would like to automatically perform some actions on the server after the deployment (such as migrating the database or reloading gunicorn).
Is there a way to do that with CircleCI? I looked in the docs but couldn't find anything related to this particular problem. I also tried to put ssh user@my_server_ip after my deployment step, but then I get stuck and cannot perform any action. I can successfully SSH in, but the rest of the commands are not called.
Here is what my ideal circle.yml file would look like:
deployment:
  staging:
    branch: develop
    commands:
      - rsync --update ./requirements.txt user@server:/home/user/requirements.txt
      - rsync -r --update ./myapp/ user@server:/home/user/myapp/
      - ssh user@server
      - workon myapp_venv
      - cd /home/user/
      - pip install -r requirements.txt
I solved the problem by putting a post_deploy.sh file on the server, and putting this line on the circle.yml:
ssh -i ~/.ssh/id_myhost user@server 'post_deploy.sh'
It executes the instructions in the post_deploy.sh file, which is exactly what I wanted.
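With the script on the server, the deployment section then reduces to the rsync steps plus a single ssh invocation. A sketch using the same placeholder host and paths as the question (the script's absolute path is an assumption; adjust to wherever post_deploy.sh actually lives):

```yaml
deployment:
  staging:
    branch: develop
    commands:
      - rsync --update ./requirements.txt user@server:/home/user/requirements.txt
      - rsync -r --update ./myapp/ user@server:/home/user/myapp/
      - ssh -i ~/.ssh/id_myhost user@server '/home/user/post_deploy.sh'
```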
I have created a Ruby environment on Amazon Elastic Beanstalk, but when I try to deploy my Rails app from the command line using eb deploy I get this error:
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
You need to install git to be able to use gems from git repositories. For help
installing git, please refer to GitHub's tutorial at
https://help.github.com/articles/set-up-git (Executor::NonZeroExitStatus)
[2015-08-09T15:50:38.513Z] INFO [4217] - [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/10_bundle_install.sh] : Activity failed.
[2015-08-09T15:50:38.513Z] INFO [4217] - [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook] : Activity failed.
[2015-08-09T15:50:38.513Z] INFO [4217] - [CMD-AppDeploy/AppDeployStage0] : Activity failed.
[2015-08-09T15:50:38.514Z] INFO [4217] - [CMD-AppDeploy] : Completed activity. Result:
Command CMD-AppDeploy failed.
So, shall I install git on the Amazon instance directly from bash? Will this affect autoscaling?
I don't know if you fixed this, but you need to tell Elastic Beanstalk to install git.
In the root directory of your project, add a folder called .ebextensions.
Create a file inside that folder called (something like) install_git.config (the .config is important).
Add the following lines to that file:
packages:
  yum:
    git: []
Then redeploy your application, and you shouldn't see that error anymore.