How to create a single Dockerfile with Node, Yarn, Python and Maven - dockerfile

I'm trying to create a single Dockerfile that includes npm, yarn, python, and mvn. Could anyone suggest the best practice for creating a Dockerfile with these packages?
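There is no official image that bundles all four tools, so the usual approach is to pick the heaviest toolchain as the base and install the rest on top. A minimal sketch, assuming a Debian-based Maven image and the NodeSource setup script (the tags and versions are placeholders to adjust):

FROM maven:3.9-eclipse-temurin-17
# Python 3 and pip from the distro package manager (Debian/Ubuntu base assumed)
RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip curl \
 && rm -rf /var/lib/apt/lists/*
# Node.js via the NodeSource repository, then Yarn via npm
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
 && apt-get install -y --no-install-recommends nodejs \
 && npm install -g yarn \
 && rm -rf /var/lib/apt/lists/*
# Sanity check that everything is on PATH
RUN mvn --version && node --version && yarn --version && python3 --version

If the services don't actually have to share one image, separate per-tool images (or multi-stage builds) usually stay smaller and easier to maintain.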

Related

How to use PNPM with Google Cloud Build?

I'd like to migrate to PNPM; however, I can't find a way to use its lockfile on Google Cloud. My current cloudbuild config is the following:
steps:
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:latest"
  entrypoint: 'gcloud'
  args: ["app", "deploy"]
timeout: "1600s"
Afaik these official images only support Yarn and NPM. Is there an easy way to replace Yarn with PNPM here?
I looked on the Cloud Builders GitHub repo, but there's no PNPM there either.
IIUC the App Engine standard Node.js runtimes require that you use npm or yarn, so PNPM is not an option when using the standard environment.
https://cloud.google.com/appengine/docs/standard/nodejs/specifying-dependencies
If you want to use App Engine with a different package manager, you could use the flexible environment and define a custom runtime. This essentially lets you define a container image to deploy to App Engine, and it can be anything that exposes an HTTP server on port 8080.
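As an illustration of that custom-runtime route, here is a sketch of a Dockerfile that installs dependencies with PNPM and listens on port 8080 (the base image tag and the pnpm start script are assumptions; the app.yaml would declare runtime: custom and env: flex):

FROM node:20-slim
WORKDIR /app
# Install pnpm and resolve dependencies from its own lockfile
RUN npm install -g pnpm
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
# App Engine flexible routes traffic to port 8080
ENV PORT=8080
EXPOSE 8080
CMD ["pnpm", "start"]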
You might be able to use pnpm install followed by npm shrinkwrap. I think gcloud deploy will ignore what's in node_modules in favor of package-lock.json but you could delete it.
npm i -g pnpm && pnpm i && npm shrinkwrap
That's npm shrinkwrap. There is pnpm shrinkwrap but that generates a pnpm-style lockfile.

Confusion in deploying modules with Django

Good day.
I'm a newbie to Django and I'm slightly confused:
When deploying my Django app, do I need to deploy it with all the Python 'come-with' modules, or do the hosts already have them installed?
Also, I installed PIL for image manipulation. Would they also have it installed, or do I have to find a way to install it on their servers? Thanks in advance.
do I need to deploy it with all the Python 'come-with' modules
Never do that. It might conflict with the dependencies on the server. Instead, issue the following command to create a dependency file (requirements.txt):
pip freeze > requirements.txt (issue this command where manage.py is located)
On the server, create a new virtual environment. Now copy the Django project to the server (you can do this using git clone or just plain old FileZilla). Activate the virtual environment, then change your current working directory to where manage.py is located. To install all the dependencies, issue the following command:
pip install -r requirements.txt
This will install the required dependencies on the server.
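If you later deploy with Docker instead of a host virtualenv, the same requirements.txt drives the image build. A minimal sketch, with an assumed base image and the development server as the command (swap in a real WSGI server such as gunicorn for production):

FROM python:3.11-slim
WORKDIR /app
# Install exactly what requirements.txt lists, mirroring pip install -r on the server
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]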

Docker compose install requirements in a shared directory

I have several containers, and each of them has its own Dockerfile. Every time I build with docker-compose build, each container installs its own requirements, either from a requirements.txt file (RUN pip install -r requirements.txt) or directly in the Dockerfile (RUN pip install Django celery ...). Many of the requirements are common to most of the containers (almost all).
It works perfectly, but there is a problem with build time: it takes almost 45 minutes to build every container from scratch (let's say after I have deleted all images and containers).
Is there a way to install all the requirements in a common place for all containers, so that the common requirements aren't installed again every time a new container is built?
The docker-compose version I am using is 2.
You can define your own base image. Let's say, for instance, that all your containers need django and boto; you can create your own Dockerfile:
FROM python:3
RUN pip install django boto
# more docker commands
Then you can build this image as arrt_dtu/envbase and publish it somewhere (Docker Hub, or your company's internal Docker registry). Now you can create your specialized images using this one:
FROM arrt_dtu/envbase
RUN pip install ...
That's exactly the same principle as with the official ruby image, for instance: the ruby image builds on a Linux one, and if you want a rails image you can build it on top of ruby. Docker images are totally reusable!

Install composer dependencies while deploying

I'm using Elastic Beanstalk to deploy my application as a Single Docker Application.
My Dockerfile runs composer install while deploying, but I get a "Could not authenticate against github.com" error.
I use these lines in my Dockerfile to install my dependencies:
WORKDIR /www
RUN ["composer", "install", "-o"]
How would I solve this issue?
I think you need to configure Composer inside your container with your key or something like that; remember that inside your container you're basically on another OS and you don't have your public keys, etc.
I'd try to install it from source rather than from git (as you don't have keys).
try this:
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
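The authentication error typically means Composer is hitting GitHub's anonymous API rate limit (or trying SSH without keys) while fetching packages. A hedged sketch of one workaround, passing a GitHub OAuth token as a build argument (the base image, argument name, and flags are assumptions, not taken from the original setup):

FROM composer:2
WORKDIR /www
# Supply the token with: docker build --build-arg GITHUB_TOKEN=... .
# Note: build args are visible in the image history, so prefer BuildKit secrets for anything sensitive.
ARG GITHUB_TOKEN
COPY composer.json composer.lock ./
RUN composer config -g github-oauth.github.com "$GITHUB_TOKEN" \
 && composer install -o --prefer-dist --no-interaction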

Django Buildout to Virtualenv

Beginner Django developer here.
I started a personal project with buildout, and now I wanted to try deploying it. I decided to go with Heroku, but I immediately noticed that Heroku works with virtualenv and a requirements.txt file. My question is: is there a way to deploy a buildout project to Heroku, or to convert the project to use virtualenv? If yes, how can I achieve this?
Thanks
Buildout and virtualenv can work just fine together.
Upload your buildout.cfg and your bootstrap.py, and use the virtualenv Python to run the bootstrap.py script.
Kenneth Reitz (of requests fame, and Heroku's Python guy) has created a buildpack that does just that.