gcloud docker-compose deploy main container (solution with sub projects) - google-cloud-platform

I developed a solution that consists of three projects. (Some documents call them services, some call them containers, but for docker-compose they are the containers and sub-containers that make up the application.)
The application itself
Db - SQL Server
Redis
When I build this solution with docker-compose, I can see the three images under a common container group, and in this way it works in my local environment with docker-compose build and docker-compose up.
I want to deploy the solution as it is (the main container with its sub-containers) to Google Cloud. While reading the documentation I tried six or seven different ways to get the project onto the Google side, but I couldn't work out which one is the ideal approach. There are documents everywhere, but none of them describes a sound, simple method.
With one or two of the methods the deployment reports no errors and appears to be running, but when I open the URL I get a 404, so that didn't work either.
Can you briefly explain, step by step, which method is meant for small projects and which for big ones? My only goal is to get the container with these three services running there. Please point me at a straightforward approach for this specific (but not that unusual) case.
Bonus: real definitions of, and the differences between, all of these very complex options (GCP, the Engine products, Cloud Run) would be welcome.
docker-compose.yaml
version: '3.4'
services:
  cms:
    image: ${DOCKER_REGISTRY-}cms
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ich_app
    ports:
      - "80:8080"
    depends_on:
      - db
  db:
    image: "mcr.microsoft.com/mssql/server"
    container_name: ich_db
    ports:
      - "${DOCKER_SQL_PORT:-1433}:1433"
    expose:
      - 1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_PID=Express
      - SA_PASSWORD=PassWORDD!
    volumes:
      - C:\db_backups\ichte\:/usr/share/
    depends_on:
      - redis
  redis:
    container_name: ich_redis
    image: redis

At least one of your defined containers (db) requires a Windows runtime, which is going to limit your options. You can deploy Windows to GKE, see https://cloud.google.com/kubernetes-engine/docs/concepts/windows-server-gke
You are also deploying three containers, two of which are offered as hosted services: Cloud SQL for SQL Server and Memorystore for Redis.
You may have more luck deploying the managed services and then connecting your application container to them.
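A minimal sketch of that managed-services route, assuming Cloud Run for the application container; the project ID, region, instance names, and password below are placeholders, not a definitive recipe:
# Managed SQL Server and Redis instead of the db and redis containers
gcloud sql instances create ich-db --database-version=SQLSERVER_2019_EXPRESS \
    --region=europe-west1 --cpu=2 --memory=4GiB --root-password=<password>
gcloud redis instances create ich-redis --size=1 --region=europe-west1
# Build and deploy only the application image
gcloud builds submit --tag gcr.io/<project-id>/cms .
gcloud run deploy cms --image=gcr.io/<project-id>/cms --port=8080 --region=europe-west1
Note that reaching Memorystore from Cloud Run typically also requires a Serverless VPC Access connector, and the database/Redis connection details have to be passed to the app as environment variables rather than compose service names.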

Related

How can I add folders and files to a docker container running on ECS?

Assuming I have the following docker-compose file:
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- /opt/docker/skyrimserver/config:/home/server/config
- /opt/docker/skyrimserver/logs:/home/server/logs
- /opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped
I would like the folders specified under volumes to be created and filled with some files before running it. It seems like the Docker ECS integration automatically creates an EFS file system, which is empty. How can I hook into that to add files upon creation?
EDIT: A nice solution would be the ability to change the config on the fly and reload it within the game, rather than having to restart the whole server because a new Docker image includes the new configuration files.
docker compose is a tool to define and run Docker containers. When you define volumes in the YAML file it is similar to running docker run with the --mount argument, and on ECS there is no host directory in that instance for you to mount.
What you need with ECS is a fully functioning image with all source files baked into it; building the image with a Dockerfile like this might work:
FROM tiltedphoques/st-reborn-server
# COPY sources are resolved relative to the build context
COPY opt/docker/skyrimserver/config /home/server/config
COPY opt/docker/skyrimserver/logs /home/server/logs
COPY opt/docker/skyrimserver/Data /home/server/Data
EXPOSE 10578
Edit: In case you still want to use ECS, you can try to SSH into that container, or use Systems Manager to connect to the container and put your files in /home/server/. I strongly advise AGAINST this method because EFS is ridiculously slow for bulk data transfer.
You could instead switch to EBS-backed EC2 instances; they are easy to scale with an Auto Scaling group and would suit your use case.
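If you do go the connect-to-the-container route, a hedged sketch using ECS Exec (which rides on Systems Manager); the cluster, service, and task identifiers are placeholders, and ECS Exec must be enabled on the service first:
# Enable ECS Exec on the service, then open a shell in the running container
aws ecs update-service --cluster my-cluster --service skyrimserver \
    --enable-execute-command --force-new-deployment
aws ecs execute-command --cluster my-cluster --task <task-id> \
    --container skyrimserver --interactive --command "/bin/sh"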
Hi, hope this helps: just put a dot in front of your locations:
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- ./opt/docker/skyrimserver/config:/home/server/config
- ./opt/docker/skyrimserver/logs:/home/server/logs
- ./opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped

Multiple container app: execute container from another container

I have a multi-container Django app. One container is the database, another the main webapp with Django installed, handling the front end and back end. I want to add a third container which provides the main functionality/tool we want to offer via the webapp. It has some complex dependencies, which is why I would like to have it as a separate container as well. Its functionality is wrapped in a CLI tool, and currently we build the image and run it as needed, passing the arguments to the CLI tool.
Currently, this is the docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:8.0.30
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - TZ=${TZ}
    volumes:
      - db:/var/lib/mysql
      - db-logs:/var/log/mysql
    networks:
      - net
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
  app:
    build:
      context: .
      dockerfile: ./Dockerfile.webapp
    environment:
      - MYSQL_NAME=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - net
    volumes:
      - ./app/webapp:/app
      - data:/data
    depends_on:
      - db
    restart: unless-stopped
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
  tool:
    build:
      context: .
      dockerfile: ./Dockerfile.tool
    volumes:
      - data:/data
networks:
  net:
    driver: bridge
volumes:
  db:
  db-logs:
  data:
In the end, the user should be able to set the parameters via the web UI and run the tool container. Multiple processes should be managed by a job scheduler. I had hoped that running a container from within a multi-container app would be straightforward, but as far as I know now it is only possible by mounting the Docker socket, which should be avoided for security reasons.
So my question is: what are the possibilities to achieve my desired goal?
Things I considered:
Multi-stage build: its main purpose is to reduce image size, but is there a hack to use the CLI tool along with its build environment in the final image of a multi-stage build?
API: build an API for the tool so that other containers can communicate with it via the Docker network. Seems cumbersome.
The service "app" (the main Django app) is built on top of the official Python image, which I would like to keep. Nevertheless, there is the possibility to build one large image based on Ubuntu that includes the tool along with its dependencies and the main Django app. This will probably increase the image size considerably and may lead to dependency issues.
Has anybody run into similar issues? Which direction would you point me to? I'm also looking for some buzzwords that speed up my research.
You should build both parts into a single unified image, and then you can use the Python subprocess module as normal to invoke the tool.
The standard Docker Hub python image is already built on Debian, which is very closely related to Ubuntu. So you should be able to do something like
FROM python:3.10
# Install OS-level dependencies for both the main application and
# the support tool
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      another-dependency \
      some-dependency \
      third-dependency
# Install the support tool
ADD http://repository.example.com/the-tool/the-tool /usr/local/bin/the-tool
RUN chmod +x /usr/local/bin/the-tool
# Copy and install Python-level dependencies
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Copy in the main application
COPY ./ ./
# Metadata on how to run the application
EXPOSE 8000
# USER someuser
CMD ["./the-app.py"]
You've already noted the key challenges in having the tool in a separate container. You can't normally "run commands" in a container; a container is a wrapper around some single process, and it requires unrestricted root-level access to the host to be able to manipulate the container in any way (including using the docker exec debugging tool). You'd also need unrestricted root-level access to the host to be able to launch a temporary container per request.
Putting some sort of API or job queue around the tool would be the "most Dockery" way to do it, but that can also be significant development effort. In this setup as you've described it, the support tool is mostly an implementation detail of the main process, so you're not really breaking the "one container does one thing" rule by making it available for a normal Unix subprocess invocation inside the same container.
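As a quick sanity check of such a unified image, something like the following might work; the image tag is arbitrary and the tool's --help flag is an assumption:
# Build the combined image and confirm the support tool is available inside it
docker build -t app-with-tool .
docker run --rm app-with-tool the-tool --help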

ECS with Docker Compose environment variables

I'm deploying to ECS with the Docker Compose integration; however, I'm somewhat confused about environment variables.
Right now my docker-compose.yml looks like this:
version: "3.8"
services:
simple-http:
image: "${IMAGE}"
secrets:
- message
secrets:
message:
name: "arn:aws:ssm:<AWS_REGION>:<AWS_ACCOUNT_ID>:parameter/test-env"
external: true
Now in my Container Definitions, I get a Simplehttp_Secrets_InitContainer that references this environment variable as message and with the correct ARN, but there is no variable named message inside my running container.
I'm a little confused, as I thought this was the correct way of passing environment variables such as DB passwords, AWS credentials, and so forth.
In the docs we see:
services:
  test:
    image: "image"
    environment:
      - "FOO=BAR"
But is this the right and secure way of doing this? Am I missing something?
I haven't played much with secrets in this ECS/Docker integration, but there are a couple of things that don't add up between your understanding and the docs. First, the integration seems to work with Secrets Manager, not SSM. Second, according to the docs the content won't be available as an environment variable but rather as a flat file at runtime, at /run/secrets/message in your example.
Check out this page for the fine details: https://docs.docker.com/cloud/ecs-integration/#secrets
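If your application really does need it as an environment variable, one common workaround is a small entrypoint wrapper that reads the mounted file. This is just a sketch, assuming the secret is named message as in your compose file and that your image lets you override the entrypoint:
#!/bin/sh
# entrypoint.sh: expose the file-based secret as an environment variable,
# then hand control to the container's real command
export MESSAGE="$(cat /run/secrets/message)"
exec "$@"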

How do you containerize a Django app for the purpose of running multiple instances?

The idea is to create a Django app that would serve as the backend for an Android application and provide a web admin interface for managing the mobile application's data.
Different sites of the company sometimes need different backends for the same Android app (the data has to be manageable completely separately). The application will be hosted on Windows server(s).
How can I containerize the app so that I can run multiple instances of it (listening on different ports of the same IP), and so that I can move it to a different server if needed and set up a new instance there?
I'm familiar with the Django development part, but I have never used Docker (or other) containers before.
What I need:
Either a tutorial or documentation that deals with this specific topic
OR
Ordered points with some articles or tips on how to get this done.
Is this the kind of thing you wanted?
https://atrisaxena.github.io/projects/docker-basic-django-application-deployment/
The secret to having multiple instances is to map the ports when you run the container.
When you run
docker run -d -p 8000:8000 djangosite
you can change the port mapping by changing the 8000:8000 setting to any <host_port>:<container_port> you want.
e.g. if you follow the example above, you end up exposing port 8000 on the container (EXPOSE 8000 in the Dockerfile). The above command maps port 8000 on the host to 8000 on the container. If you want to then run a second instance of the container on port 8001, you simply run
docker run -d -p 8001:8000 djangosite
The final step is to use a proxy such as nginx to map the ports on the Docker host machine to URLs that are accessible via a browser (i.e. via ports 80 for HTTP and 443 for HTTPS).
Regarding moving the container, you simply need to import the docker image that you built onto whichever docker host machine you want, no need to move the source code.
Does this answer your question?
P.S. It is worth noting that the tutorial above recommends running the Django server using manage.py runserver which is NOT the standard way of deploying a Django site. The proper way to do it is to use WSGI or similar (via apache, nginx, gunicorn, etc.) within the container to properly interface with the container boundaries. See https://docs.djangoproject.com/en/3.2/howto/deployment/ for more info on how to properly deploy the site. All of the methods detailed in the documentation can be done within the container (but take care not to make your container too bulky or it will weigh down your host machines).
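For example, a minimal gunicorn-based setup inside the container might look like this, where mysite is a placeholder for your Django project package:
# Install a production WSGI server and run the project with it instead of runserver
pip install gunicorn
gunicorn mysite.wsgi:application --bind 0.0.0.0:8000 --workers 3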
P.P.S It is also not strictly necessary to tag your docker container to a remote repository as suggested in the linked article. You can build the container locally with docker build (see https://docs.docker.com/engine/reference/commandline/build/) and save the image as a file using docker save (see https://docs.docker.com/engine/reference/commandline/save/). You can then import the image to new hosts using docker load (https://docs.docker.com/engine/reference/commandline/load/).
N.B. Don't confuse docker save and docker load with docker export and docker import because they serve different functions. Read the docs for more info there. docker save and docker load work with images whereas docker export and docker import work directly with containers (i.e. specific instances of an image).
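Put together, the move-to-another-host workflow might look roughly like this, using the djangosite image name from above (the tar file name is arbitrary):
# On the build machine
docker build -t djangosite .
docker save -o djangosite.tar djangosite
# Copy djangosite.tar to the target host (scp, shared drive, etc.), then on that host:
docker load -i djangosite.tar
docker run -d -p 8000:8000 djangosite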
I would recommend having a docker-compose file with two services, named differently and running on different ports; that's it:
version: '2'
services:
  backend:
    ports:
      # host_port:container_port, for example:
      - 8080:8000
    build:
      context: ./directory_containing_docker_file
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - your-network
  backend-two:
    ports:
      # host_port:container_port
      - 8090:8000
    build:
      context: ./directory_containing_docker_file
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - your-network
networks:
  your-network:
    driver: bridge

Connecting to local docker-compose container Windows 10

Very similar to this question, I cannot connect to my local docker-compose container from my browser (Firefox) on Windows 10 and have been troubleshooting for some time, but I cannot seem to find the issue.
Here is my docker-compose.yml:
version: "3"
services:
frontend:
container_name: frontend
build: ./frontend
ports:
- "3000:3000"
working_dir: /home/node/app/
environment:
DEVELOPMENT: "yes"
stdin_open: true
volumes:
- ./frontend:/home/node/app/
command: bash -c "npm start & npm run build"
my_app_django:
container_name: my_app_django
build: ./backend/
environment:
SECRET_KEY: "... not included ..."
command: ["./rundjango.sh"]
volumes:
- ./backend:/code
- media_volume:/code/media
- static_volume:/code/static
expose:
- "443"
my_app_nginx:
container_name: my_app_nginx
image: nginx:1.17.2-alpine
volumes:
- ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
- static_volume:/home/app/web/staticfiles
- media_volume:/home/app/web/mediafiles
- ./frontend:/home/app/frontend/
ports:
- "80:80"
depends_on:
- my_app_django
volumes:
static_volume:
media_volume:
I can start the containers with docker-compose -f docker-compose.yml up -d and there are no errors when I check the logs with docker logs my_app_django or docker logs my_app_nginx. Additionally, doing docker ps shows all the containers running as they should.
The odd part about this issue is that on Linux, everything runs without issue and I can find my app on localhost at port 80. The only thing I do differently when I am on Windows is that I run a dos2unix on my .sh files to ensure that they run properly. If I omit this step, then I get many errors which leads me to believe that I have to do this.
If anyone could give guidance/advice as to what may I be doing incorrectly or missing altogether, I would be truly grateful. I am also happy to provide more details, just let me know. Thank you!
EDIT #1: As timur suggested, I did a docker run -p 80:80 -d nginx and here was the output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
19b56a66955145e4f59eefff57340b4affe5f7e0d82ad013742a60b479687c40
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint naughty_hoover (8c7b2fa4aef964899c366e1897e38727bb7e4c38431875c5cb8456567005f368): Bind for 0.0.0.0:80 failed: port is already allocated.
This might be the cause of the error but I don't really understand what needs to be done at this point.
EDIT #2: As requested, here are my Dockerfiles (one for backend, one for frontend)
Backend Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y imagemagick libxmlsec1-dev pkg-config
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code
Frontend Dockerfile:
FROM node
WORKDIR /home/node/app/
COPY . /home/node/app/
RUN npm install -g react-scripts
RUN npm install
EDIT #3: When I do docker ps, this is what I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0da02ad8d746 nginx:1.17.2-alpine "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp my_app_nginx
070291de8362 my_app_frontend "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3000->3000/tcp frontend
2fcf551ce3fa my_app_django "./rundjango.sh" 12 days ago Up About an hour 443/tcp my_app_django
As we established, you use Docker Toolbox, which is backed by VirtualBox rather than the default Hyper-V based Docker for Windows. In this case you might think of it as a VirtualBox VM that actually runs Docker, so all volume mounts and port mappings apply to the docker-machine VM, not your host. The management tools (i.e. the Docker terminal and docker-compose) actually run on your host OS through MinGW.
Because of this, you don't get ports bound on localhost by default (but you can achieve this by editing the VM properties in VirtualBox manually if you so desire; I just googled the second link for some picture tutorials). Surprisingly, the official documentation on this particular topic is pretty scarce; you can get a hint by looking at their examples, though.
So in your case, the correct URL should be http://192.168.99.100
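To double-check that address, or to forward a host port so that localhost also works, something along these lines should do; "default" is the usual Docker Toolbox machine name and is an assumption here:
# Print the IP of the Docker Toolbox VM (typically 192.168.99.100)
docker-machine ip default
# Forward host port 80 to the VM's port 80 while it is running
VBoxManage controlvm "default" natpf1 "http,tcp,127.0.0.1,80,,80"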
Another thing that differs between these two solutions is volume mounts. And again, the documentation sort of hints at what it should be, but I can't point you to a more explicit source. As you have probably noticed, the terminal you use for all your Docker interactions encodes paths a bit differently (I presume because of that MinGW layer), and the converted paths get sent off to docker-machine, because it's Linux and would not handle Windows-style paths anyway.
From here I see a couple of avenues for you to explore:
Run your project from C:\Users\...\MyProject
As the documentation states, you get C:\Users mounted into /c/Users by default. So theoretically, if you run docker-compose from your user home folder, paths should automagically align; since you are having this issue, you are probably running it from somewhere else.
Create another share
You can also create your own mount in VirtualBox. Run pwd in your terminal and note where the project root is. Then use the VirtualBox UI to create a share that aligns with your directory tree (for example, D:\MyProject\ should become /d/MyProject).
Hopefully this will not require you to change your docker-compose.yml either
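If you prefer the command line over the VirtualBox UI, a rough sketch (the VM name "default" and the D:\MyProject path are assumptions; permanent shares require the VM to be stopped first):
docker-machine stop default
# Share D:\MyProject into the VM; depending on the boot2docker image it may
# still need to be mounted manually inside the VM
VBoxManage sharedfolder add "default" --name "d/MyProject" --hostpath "D:\MyProject" --automount
docker-machine start default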
Alternatively, switch to the Hyper-V based Docker Desktop, and these particular issues will go away.
Bear in mind that Hyper-V will not coexist with VirtualBox, so this option might not be available to you if you need VBox for something else.