How to update code in a docker container? - django

I've set up a Docker Django container and built its image following the tutorial here. The tutorial shows how to make a basic Django application and mounts the application to /code, which, as I understand it, lives in a data volume.
However, I want to understand how I will be able to update and develop this code, and be able to ship/deploy it. When I make a commit, it doesn't take into account any changes in the code, since the code is part of the data volume.
Is there any way I can make the Django code a part of the image, or update the image with the updated code?

In my experience Docker serves two purposes:
To be able to develop code in a containerized environment. This is very useful, as I am now able to get new developers on my team ready to work in about 5 minutes. Previously, this could have taken anywhere from an hour to several hours for miscellaneous issues, especially on older projects.
To be able to package an application in a containerized environment. This is also a great time saver as the only requirement for the environment is to have Docker installed.
When you are developing your code, you should mount the source as a volume so that your changes are always reflected inside the container. When you want to package an app for deployment, you should COPY the source into the image and package it appropriately.
Here is a docker-compose file I use to (1) build an image to develop against, (2) develop my code, and (3) ship it (I'm using spring boot):
version: '3.7'
services:
  dev:
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
    volumes:
      - './src:/build/src'
      - './db:/build/db'
      - './target:/build/target'
      - './logs:/build/logs'
    command: 'mvn spring-boot:run -Drun.jvmArguments="-Xmx512m" -Dmaven.test.skip=true'
  deploy:
    build:
      context: .
      dockerfile: Dockerfile-Deploy
      args:
        MVN_BUILDER: '${MVN_BUILDER}'
    image: '${DEPLOYMENT_IMAGE}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
  maven:
    build:
      context: .
      dockerfile: Dockerfile
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
I would run docker-compose build maven to build my base image. This is needed so that when I run my code in a container all the dependencies are already installed in the image. The Dockerfile for this essentially copies the pom.xml into the image and downloads the dependencies needed for the app. Note that this needs to be repeated any time the dependencies change. Here is the Dockerfile to build the image that is referenced in the maven service:
### BUILD a maven builder. This will contain all mvn dependencies and act as an abstraction for all mvn goals
FROM maven:3.5.4-jdk-8-alpine as builder
#Copy Custom Maven settings
#COPY settings.xml /root/.m2/
# create app folder for sources
RUN mkdir -p /build
RUN mkdir -p /build/logs
# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /build
COPY pom.xml /build
#Download all required dependencies into one layer
RUN mvn -B dependency:go-offline dependency:resolve-plugins
RUN mvn clean install
Next, I would run docker-compose up dev to start my dev service and begin developing my application. This service mounts my code into the container and uses Maven to start a Spring Boot application. Any time I change the code, Spring Boot restarts the server and my changes are reflected.
Finally, once I am happy with my application, I build an image that has my application packaged for deployment using docker-compose build deploy. I use a two-stage build: the first stage copies the source into a container and packages it as a Jar; that Jar is then copied into the second stage, where I can simply run java -jar build/app.jar (in the container) to start my application, and the first stage is discarded. That's it! Now you can deploy this latest image anywhere Docker is installed.
Here is what that last Dockerfile (Dockerfile-Deploy) looks like:
ARG MVN_BUILDER
### Stage 1 - BUILD image
FROM $MVN_BUILDER as builder
COPY src /build/src
RUN mvn clean package -PLOCAL
### Stage 2 - Deploy Jar
FROM openjdk:8
RUN mkdir -p /build
COPY --from=builder /build/target/*.jar /build/app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","build/app.jar"]
Here is the .env file, which sits in the same directory as the docker-compose file. I use it to abstract image/container names and simply bump the version number in one place when a new image is needed.
MVN_BUILDER=some/maven/builder:0.1
DEPLOYMENT_IMAGE=some/deployment/spring:0.1
CONTAINER_NAME=spring-container
CONTAINER_NAME_DEBUG=spring-container-debug
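To tie the pieces together, the day-to-day workflow with this setup is roughly the following (a sketch; the image name in the push step comes from the .env file above and assumes you have a registry to push to):
# Build the Maven base image (repeat whenever dependencies in pom.xml change)
docker-compose build maven

# Develop against the live-mounted source; Spring Boot restarts on changes
docker-compose up dev

# Package the application into a deployable image
docker-compose build deploy

# Optionally push the deployment image so other hosts can pull it
docker push some/deployment/spring:0.1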

I think it might be too late to answer your question; however, it might be beneficial for others who come across it.
The tutorial you mentioned is a bit tricky for first-timers, so I change the structure a little bit. I assume that you have a Docker registry account (like Docker Hub) to publish the images to. This is required if you want to access the image on a remote host (you could copy the actual image file instead, but that is not recommended).
creating a project
Assume that you are going to create a website with Django and dockerize it. First, you do:
django-admin startproject samplesite
It creates a directory samplesite that includes the following (I added requirements.txt):
db.sqlite3 manage.py requirements.txt samplesite
adding Dockerfile and docker-compose.yml
For the Dockerfile, as you can see, nothing is changed compared to the Dockerfile in the tutorial.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
However for the docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    image: yourUserNameOnDockerHub/mywebsite:0.1 # this line is added
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
docker-compose.yml is also almost identical to the one mentioned in the tutorial, with the volume commented out and one line added: image: yourUserNameOnDockerHub/mywebsite:0.1. This line lets us name and track the built image and deploy it whenever we want. The volume mounting is not related to the code you write; it was put there in the tutorial to keep the dynamic content changed by Django (SQLite database, uploaded files, etc.) outside the image.
build and run
If you run docker-compose up, the first time everything works fine. However, because of the new line added, when you change the code after that first run, the changes will not be reflected in the container. This is because on each docker-compose up, Compose looks for mywebsite:0.1, finds that it already exists, does not build a new image, and creates a container based on the old one. Since we need that image name and tag to publish/deploy our image, we instead need to use:
docker-compose up --build
It will re-build the image with the changes reflected. Every time you make some changes, run it and a fresh image is created, which can be seen with the following (note that although the name and tag remain unchanged, the change in the image ID shows that this is a new image):
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
yourUserNameOnDockerHub/mywebsite 0.1 033c9d2bfac0 7 seconds ago 974MB
publishing and deployment
If you have set up an account on Dockerhub (or any other registry) you can publish the image for later use or deployment on a remote server:
docker push yourUserNameOnDockerHub/mywebsite:0.1
If you want to deploy it on a remote host and want to use docker-compose again, just change the docker-compose.yml to:
version: '3'
services:
  db:
    image: postgres
  web:
    image: yourUserNameOnDockerHub/mywebsite:0.1
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Note that the build: . line is removed (as we are only going to run the image there). When developing locally, whenever you run docker-compose up --build a new image is created and tagged, and a container based on it runs in the Compose stack. Once you are happy with the changes, you follow the publishing step to make it live on the server.
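On the remote host, assuming the trimmed docker-compose.yml above has been copied there and you are logged in to the registry, the update cycle is roughly:
docker-compose pull web   # fetch the newly pushed yourUserNameOnDockerHub/mywebsite:0.1
docker-compose up -d      # recreate the containers from the new image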

When you want to update an image, let's say due to changes in your application code, you use COPY during your image build, so in the Dockerfile you do something like
COPY ./your/code/in/the/build/context /var/www
Also see my answer about "volumes" and image-building https://stackoverflow.com/a/39314602/3625317 to clarify why your code is missing in the build

In step 9 of the tutorial you set up a volume. This volume links your current directory and your container's /code directory. In other words, they will be the same.
So any updates to your local files will change the files in your container as well. Remember that you will need to restart your app so the changes can take effect.
Before you deploy your image, you will need to create a second Docker Compose file. This file removes the volume so the code stays inside the container and won't change from outside. You can follow the steps provided in the Docker Compose documentation.
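As an illustration only (service names follow the tutorial's compose file; adjust to your project), the deployment compose file could look something like this, with the volume removed so the code baked in by the Dockerfile's COPY is what actually runs:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db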

Related

How to retrieve the docker image of a deployment on heroku via circleci

I have a Django application running locally and I've set up the project on CircleCI with Python and Postgres images.
If I understand correctly what is happening, CircleCI uses the images to build a container to test my application code against the database.
Then I'm using the job heroku/deploy-via-git to deploy it to Heroku when the tests pass.
Now I think Heroku is using some images too to run the application.
I would like to get the image used by Heroku so I can run my site locally on another machine.
So: pull the image, push it to Docker Hub, and finally download it back to my computer so that I only have to run docker compose up.
Here is my CircleCI configuration file:
version: 2.1
docker-auth: &docker-auth
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
orbs:
  python: circleci/python@1.5.0
  heroku: circleci/heroku@0.0.10
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.10.2
      - image: cimg/postgres:14.1
        environment:
          POSTGRES_USER: theophile
    steps:
      - checkout
      - run:
          command: pip install -r requirements.txt
          name: Install Deps
      - run:
          name: Run MIGRATE
          command: python manage.py migrate
      - run:
          name: Run loaddata from Json
          command: python manage.py loaddata datadump.json
      - run:
          name: Run tests
          command: pytest
workflows:
  heroku_deploy:
    jobs:
      - build-and-test
      - heroku/deploy-via-git:
          requires:
            - build-and-test
I don't know if this is possible; if not, what would be the best way to proceed? (I assume there are a lot of possibilities.)
I was considering building an image from my local directory with docker compose up and then using this image directly on CircleCI, so that I would be able to use the same image on another computer. But building images inside images with CircleCI seems really messy and I'm not sure how I should proceed.
I've tried to pull images from Heroku, but it seems I can only pull the code or get/modify the database; I can't get the image builds themselves.
I hope this question is relevant and clear, as the CircleCI and Heroku documentation did not seem clear to me and it's my first post on Stack Overflow!
Thanks in advance
Heroku's platform is proprietary, so we can't be sure how it works internally.
We know that their stacks are based on Ubuntu LTS releases, and we know that they use open-source buildpacks to compile application slugs from source code, but details about the underlying infrastructure are murky. They certainly don't provide base images like heroku/python:3.11.0 for you to download.
If you want to use the same image locally, on CircleCI, and Heroku, a better option would be to start deploying with Heroku's Container Registry instead of Git. This allows you to build an image locally, push it into the container registry, and release it as the next version of your application.
I suggest you read the entire documentation page linked above, but the short version is:
Log into the container registry using the Heroku CLI:
heroku container:login
Assuming you already have a Dockerfile for your application, build and push an image:
heroku container:push web
In this case we are building from Dockerfile and pushing the resulting image to be used as a web process.
Release your application:
heroku container:release web
That's a basic Docker deployment from your local machine, and even if that's not your final plan I suggest you start by getting that working.
From there, you have options. One option would be to move this flow to CircleCI—continue to build images there, but have CircleCI push the resulting container to Heroku's Container Registry.
Another option might be as you suggest in your question: to build images locally and use them with both CircleCI and Heroku.
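For the first option, a rough sketch of the CircleCI side is below. Treat it as an assumption-laden starting point rather than a recipe: the job name is made up, HEROKU_API_KEY and HEROKU_APP_NAME are environment variables you would set in the project settings, and it assumes a Dockerfile at the repository root.
  build-and-push:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - heroku/install   # Heroku CLI from the orb you already use
      - run:
          name: Build, push and release the image
          command: |
            docker build -t registry.heroku.com/$HEROKU_APP_NAME/web .
            echo "$HEROKU_API_KEY" | docker login --username=_ --password-stdin registry.heroku.com
            docker push registry.heroku.com/$HEROKU_APP_NAME/web
            heroku container:release web -a $HEROKU_APP_NAME
You would add this job to the workflow after build-and-test, the same way heroku/deploy-via-git is wired up now.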

Multiple container app: execute container from another container

I have a multi-container Django app. One container is the database, another one is the main web app with Django installed, handling the front end and back end. I want to add a third container which provides the main functionality/tool we want to offer via the web app. It has some complex dependencies, which is why I would like to have it as a separate container as well. Its functionality is wrapped in a CLI tool, and currently we build the image and run it as needed, passing the arguments for the CLI tool.
Currently, this is the docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:8.0.30
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - TZ=${TZ}
    volumes:
      - db:/var/lib/mysql
      - db-logs:/var/log/mysql
    networks:
      - net
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
  app:
    build:
      context: .
      dockerfile: ./Dockerfile.webapp
    environment:
      - MYSQL_NAME=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - net
    volumes:
      - ./app/webapp:/app
      - data:/data
    depends_on:
      - db
    restart: unless-stopped
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
  tool:
    build:
      context: .
      dockerfile: ./Dockerfile.tool
    volumes:
      - data:/data
networks:
  net:
    driver: bridge
volumes:
  db:
  db-logs:
  data:
In the end, the user should be able to set the parameters via the web UI and run the tool container. Multiple processes should be managed by a job scheduler. I had hoped that running one container from within a multi-container app would be straightforward, but as far as I know now it is only possible by mounting the Docker socket, which should be avoided because of security issues.
So my question is: what are the possibilities to achieve my desired goal?
Things I considered:
Multi-stage build: its main purpose is to reduce image size, but is there a trick to use the CLI tool along with its build environment in the final image of a multi-stage build?
API: build an API for the tool. Other containers can communicate with it via the Docker network. Seems cumbersome.
The "app" service (the main Django app) is built on top of the official Python image, which I would like to keep this way. Nevertheless, there is the possibility of building one large image based on Ubuntu which includes the tool along with its dependencies and the main Django app. This will probably increase the image size heavily and may turn into dependency issues.
Has anybody run into similar issues? Which direction would you point me to? I'm also looking for some buzzwords that would speed up my research.
You should build both parts into a single unified image, and then you can use the Python subprocess module as normal to invoke the tool.
The standard Docker Hub python image is already built on Debian, which is very closely related to Ubuntu. So you should be able to do something like
FROM python:3.10
# Install OS-level dependencies for both the main application and
# the support tool
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
another-dependency \
some-dependency \
third-dependency
# Install the support tool
ADD http://repository.example.com/the-tool/the-tool /usr/local/bin/the-tool
RUN chmod +x /usr/local/bin/the-tool
# Copy and install Python-level dependencies
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Copy in the main application
COPY ./ ./
# Metadata on how to run the application
EXPOSE 8000
# USER someuser
CMD ["./the-app.py"]
You've already noted the key challenges in having the tool in a separate container. You can't normally "run commands" in a container; a container is a wrapper around some single process, and it requires unrestricted root-level access to the host to be able to manipulate the container in any way (including using the docker exec debugging tool). You'd also need unrestricted root-level access to the host to be able to launch a temporary container per request.
Putting some sort of API or job queue around the tool would be the "most Dockery" way to do it, but that can also be significant development effort. In this setup as you've described it, the support tool is mostly an implementation detail of the main process, so you're not really breaking the "one container does one thing" rule by making it available for a normal Unix subprocess invocation inside the same container.
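To make the "normal Unix subprocess invocation" concrete, here is a minimal sketch of how the Django side might call the tool once both are in one image. The tool path matches the ADD line above; the --in/--out flags are hypothetical placeholders for whatever arguments your CLI actually takes.
import subprocess

def run_tool(input_path: str, output_path: str) -> str:
    """Invoke the bundled CLI tool and return its stdout."""
    result = subprocess.run(
        ["/usr/local/bin/the-tool", "--in", input_path, "--out", output_path],
        capture_output=True,  # collect stdout/stderr instead of inheriting them
        text=True,            # decode bytes to str
        check=True,           # raise CalledProcessError on a non-zero exit code
        timeout=600,          # guard against a hung tool
    )
    return result.stdout
In practice you would probably call this from a background worker (Celery, RQ, or similar) rather than inside a request handler, which also addresses the job-scheduler requirement from the question.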

Connecting to local docker-compose container Windows 10

Very similar to this question, I cannot connect to my local docker-compose container from my browser (Firefox) on Windows 10 and have been troubleshooting for some time, but I cannot seem to find the issue.
Here is my docker-compose.yml:
version: "3"
services:
frontend:
container_name: frontend
build: ./frontend
ports:
- "3000:3000"
working_dir: /home/node/app/
environment:
DEVELOPMENT: "yes"
stdin_open: true
volumes:
- ./frontend:/home/node/app/
command: bash -c "npm start & npm run build"
my_app_django:
container_name: my_app_django
build: ./backend/
environment:
SECRET_KEY: "... not included ..."
command: ["./rundjango.sh"]
volumes:
- ./backend:/code
- media_volume:/code/media
- static_volume:/code/static
expose:
- "443"
my_app_nginx:
container_name: my_app_nginx
image: nginx:1.17.2-alpine
volumes:
- ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
- static_volume:/home/app/web/staticfiles
- media_volume:/home/app/web/mediafiles
- ./frontend:/home/app/frontend/
ports:
- "80:80"
depends_on:
- my_app_django
volumes:
static_volume:
media_volume:
I can start the containers with docker-compose -f docker-compose.yml up -d and there are no errors when I check the logs with docker logs my_app_django or docker logs my_app_nginx. Additionally, doing docker ps shows all the containers running as they should.
The odd part about this issue is that on Linux, everything runs without issue and I can find my app on localhost at port 80. The only thing I do differently when I am on Windows is that I run a dos2unix on my .sh files to ensure that they run properly. If I omit this step, then I get many errors which leads me to believe that I have to do this.
If anyone could give guidance/advice as to what may I be doing incorrectly or missing altogether, I would be truly grateful. I am also happy to provide more details, just let me know. Thank you!
EDIT #1: As timur suggested, I did a docker run -p 80:80 -d nginx and here was the output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
19b56a66955145e4f59eefff57340b4affe5f7e0d82ad013742a60b479687c40
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint naughty_hoover (8c7b2fa4aef964899c366e1897e38727bb7e4c38431875c5cb8456567005f368): Bind for 0.0.0.0:80 failed: port is already allocated.
This might be the cause of the error but I don't really understand what needs to be done at this point.
EDIT #2: As requested, here are my Dockerfiles (one for backend, one for frontend)
Backend Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y imagemagick libxmlsec1-dev pkg-config
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code
Frontend Dockerfile:
FROM node
WORKDIR /home/node/app/
COPY . /home/node/app/
RUN npm install -g react-scripts
RUN npm install
EDIT #3: When I do docker ps, this is what I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0da02ad8d746 nginx:1.17.2-alpine "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp my_app_nginx
070291de8362 my_app_frontend "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3000->3000/tcp frontend
2fcf551ce3fa my_app_django "./rundjango.sh" 12 days ago Up About an hour 443/tcp my_app_django
As we established, you use Docker Toolbox, which is backed by VirtualBox rather than the default Hyper-V-based Docker for Windows. In this case you might think of it as a VirtualBox VM that actually runs Docker, so all volume mounts and port mappings apply to the docker-machine VM, not your host. The management tools (i.e. the Docker terminal and docker-compose) actually run on your host OS through MinGW.
Because of this, you don't get ports bound on localhost by default (you can achieve this by editing the VM properties in VirtualBox manually if you so desire; I just googled the second link for some picture tutorials). Surprisingly, the official documentation on this particular topic is pretty scarce; you can get a hint by looking at their examples though.
So in your case, the correct URL should be http://192.168.99.100
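If you are not sure which IP the Toolbox VM got, docker-machine can tell you (default is the machine name Docker Toolbox usually creates; check docker-machine ls if yours differs):
docker-machine ls          # lists the VMs and their state/URL
docker-machine ip default  # prints something like 192.168.99.100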
Another thing that differs between these two solutions is volume mounts. And again, the documentation sort of hints at what it should be, but I can't point you to a more explicit source. As you have probably noticed, the terminal you use for all your Docker interactions encodes paths a bit differently (I presume because of that MinGW layer), and the converted paths get sent off to docker-machine, because it's Linux and would not handle Windows-style paths anyway.
From here I see a couple of avenues for you to explore:
Run your project from C:\Users\...\MyProject
As the documentation states, you get C:\Users mounted into /c/Users by default. So theoretically, if you run your docker-compose from your user home folder, paths should automagically align; but since you are having this issue, you are probably running it from somewhere else.
Create another share
You can also create your own shared folder in VirtualBox. Run pwd in your terminal and note where the project root is. Then use the VirtualBox UI to create a share that aligns with your directory tree (for example, D:\MyProject\ should become /d/MyProject).
Hopefully this will not require you to change your docker-compose.yml either
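If you prefer the command line over the VirtualBox UI, something along these lines should create the share. This is a sketch with assumptions: the VM name default, the share name, and the fact that the share still has to end up mounted as /d/MyProject inside the VM (boot2docker automounts shares named like this, but verify on your setup):
docker-machine stop default
VBoxManage sharedfolder add default --name "d/MyProject" --hostpath "D:\MyProject" --automount
docker-machine start default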
Alternatively, switch to Hyper-V Docker Desktop - and these particular issues will go away.
Bear in mind, that Hyper-V will not coexist with VirtualBox. So this option might not be available to you if you need VBox for something else.

Combining Unit Tests from Multiple Projects into One Docker Image for Azure DevOps Pipeline

I have followed the following article on getting my unit tests working in a Docker image and publishing via Azure DevOps pipeline.
Running Your Unit Tests With Visual Studio Team Services and Docker Compose
Each one of my unit tests projects have a very basic Dockerfile:
First Dockerfile for Application Tests
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
COPY . /app
WORKDIR /app/Application.Tests
RUN dotnet restore
Second Dockerfile for Infrastructure tests.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
COPY . /app
WORKDIR /app/Infrastructure.Tests
RUN dotnet restore
My docker-compose for both images:
application.tests:
  image: ${DOCKER_REGISTRY-}applicationtests
  build:
    context: .
    dockerfile: Application.Tests/Dockerfile
  entrypoint: >
    dotnet test --results-directory /testresults --logger trx
    /p:CollectCoverage=true
    /p:CoverletOutputFormat=cobertura
    /p:CoverletOutput=/testresults/Application.Tests.cobertura.xml
    /p:Exclude="[xunit.*]"
  volumes:
    - /opt/vsts/work/_temp:/testresults
infrastructure.tests:
  image: ${DOCKER_REGISTRY-}infrastructuretests
  build:
    context: .
    dockerfile: Infrastructure.Tests/Dockerfile
  entrypoint: >
    dotnet test --results-directory /testresults --logger trx
    /p:CollectCoverage=true
    /p:CoverletOutputFormat=cobertura
    /p:CoverletOutput=/testresults/Infrastructure.Tests.cobertura.xml
    /p:Exclude="[xunit.*]"
  volumes:
    - /opt/vsts/work/_temp:/testresults
Is there a way to combine the definition for each image in docker-compose? I understand that I can combine the tests into a single project, but want to maintain a 1:1 relationship between project and test project.
I should also add that both projects are in the same solution (.NET Core).
Is there a way to run the tests for all projects in a single azure-pipeline task?
It's not available with the DockerCompose task. Given your scenario, you should avoid combining multiple containers into one image. As the answer in this thread points out, images should be kept light, and you should run one service per container.
The Docker Compose task is just the way to manage these services, with multiple containers running in your application.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration
If you insist on combining everything together, Docker Compose is not the fit for your case.
You could try a multi-stage Dockerfile to combine these Dockerfiles into one. This will end up with a single image when you run docker build.
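Purely as an illustration of that multi-stage idea (project names taken from your compose file, and it assumes a solution file at the repository root so dotnet test picks up both projects), a combined Dockerfile could look roughly like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
COPY . /app
WORKDIR /app

# Restore both test projects in one image
RUN dotnet restore Application.Tests \
 && dotnet restore Infrastructure.Tests

# Running dotnet test from the solution root executes every test project it finds
ENTRYPOINT ["dotnet", "test", "--results-directory", "/testresults", "--logger", "trx"]
You would then keep a single service in docker-compose pointing at this Dockerfile, with the same /testresults volume mount as before. Whether that trade-off is worth losing the 1:1 image-per-test-project layout is up to you.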

docker-compose same config for dev and production but enable code sharing between host and container only in development

As the most important benefit of using Docker is to keep the dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Let's say we have a Django application, and we use gunicorn to serve it in production, and we have a dedicated Apache2 as a reverse proxy (this Apache2 is outside Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (MySQL). There's nothing wrong with the db part.
For the Django part, the dev routine without Docker would be using a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when Docker comes in, since all source code should be packed into the image, which gives our dev workflow the big overhead of recreating the image and container again and again. One might come up with the following solutions (which I find not elegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source code folder are automatically reflected in the container; gunicorn will then pick up the change. --- This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as this introduces a dependency on the source code being present on the host server.
I know there is a command-line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to Docker; please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only one single docker-compose.yml for both dev and prod
I'm able to dev with live changes easily without recreating docker container
Thanks a lot!
Define your django service in docker-compose.yml as:
services:
  backend:
    image: backend
Then add a file for dev, docker-compose.dev.yml:
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path
To launch for prod, just docker-compose up
To launch for dev
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot reload the dev Django app, just reload gunicorn: ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
I have also tried to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod or docker-compose run dev. There is some copying here but usually not a lot (see the sketch after this list).
Use multiple docker-compose.yml files and merge them. eg: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
I usually just comment out my volumes section, but that's probably not the best solution.
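For the first strategy, here is a minimal sketch of one file with separate dev and prod services (the image and module names are made up; the duplication between the two web services is the "some copying" mentioned above):
version: '3'
services:
  db:
    image: mysql
  web-prod:
    image: myapp:0.1
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
  web-dev:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code   # live code reload only in the dev service
    ports:
      - "8000:8000"
    depends_on:
      - db
Then docker-compose up db web-dev during development and docker-compose up -d db web-prod on the server; only the dev service depends on the source tree being present on the host.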