How to connect to spanner emulator from a docker container - google-cloud-platform

I am able to connect to the spanner emulator directly from my local machine but I am having trouble when trying to connect to it from a docker container.
I have the following two services in my compose file:
version: '3.7'
services:
  serviceA:
    image: "test"
    depends_on:
      - spanner-emulator
    environment:
      SPANNER_EMULATOR_HOST: localhost:9010
  spanner-emulator:
    image: spanner_image
    ports:
      - 9010:9010
      - 9020:9020
      - 9515:9515
When I spin up serviceA, I am able to use gcloud to run queries against my local spanner emulator. But when I try to run commands from within the serviceA container I get "last exception: 503 failed to connect to all addresses".
The commands I am trying to run (which work directly on my machine, outside the container):
from google.cloud import spanner

spanner_client = spanner.Client(project="my-proj")
instance = spanner_client.instance("Emulator")
database = instance.database("my-db")
with database.snapshot() as snapshot:
    results = snapshot.execute_sql("SELECT Name, Version FROM test1")
    for row in results:
        print(u"Name: {}, Version: {}".format(*row))
Help appreciated!

Dumb mistake. I was able to connect to the emulator from the docker container once I changed SPANNER_EMULATOR_HOST to spanner-emulator:9010.
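For reference, a minimal sketch of the working setup, with the emulator host set in code rather than in the compose file (the client also picks the variable up automatically when Compose sets it in the environment):
import os

from google.cloud import spanner

# Inside serviceA, "localhost" refers to serviceA's own network namespace,
# so the emulator must be addressed by its Compose service name.
os.environ["SPANNER_EMULATOR_HOST"] = "spanner-emulator:9010"

spanner_client = spanner.Client(project="my-proj")
instance = spanner_client.instance("Emulator")
database = instance.database("my-db")

with database.snapshot() as snapshot:
    results = snapshot.execute_sql("SELECT Name, Version FROM test1")
    for row in results:
        print("Name: {}, Version: {}".format(*row))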

Related

Docker-compose - Cannot connect to Postgres

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
Actually, I tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
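For context, these variables are typically consumed in a settings.py DATABASES block along these lines; this is only a sketch of the common pattern and may not match the actual project settings:
import os

# Hypothetical DATABASES block that reads the variables from the .env file above.
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", "db.sqlite3"),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),  # "db" when set from .env
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}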
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP addresses with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I am writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure docker perspective: I was right.
SOLUTION: My only remaining suspect was the execution inside a virtual machine, and indeed executing the same docker image on the host worked like a charm!
The networking issue was related to the VM (VirtualBox Ubuntu 20.04)
I do not know if there is a way to work with docker-compose inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
There's no reason to start a container with an interactive shell and then start your application within that (any more than you would normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
./manage.py migrate
# For debugging: run an interactive shell
docker-compose run web bash
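Once a shell is started with docker-compose run (so the container joins the Compose network, e.g. backend_default), a quick way to confirm that the db service name resolves and accepts connections is a short Python check; this is only a debugging sketch and assumes psycopg2 is installed in the image:
import socket

import psycopg2

# Service-name DNS only works on the user-defined network Compose creates,
# not on the default bridge used by a plain `docker run`.
print("db resolves to:", socket.gethostbyname("db"))

# Open and close a connection using the same values as the .env file above.
conn = psycopg2.connect(
    host="db",
    port=5432,
    dbname="mydb",
    user="myuser",
    password="mypass",
)
print("connected to host:", conn.get_dsn_parameters()["host"])
conn.close()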

Unable to connect Django docker image to GCP instance using GCloud Proxy

I am using cloud_sql_proxy to connect to a Google Cloud Postgres instance. I followed the steps on the GCP website https://cloud.google.com/sql/docs/postgres/connect-admin-proxy. When I run it locally using python manage.py runserver, with the db host as 127.0.0.1 and the port as 5432, the program works fine.
When I try to dockerize the application and run it, I face the error
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
Docker compose file
services:
  web:
    build: .
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - 8000:8000
So I tried to dockerize the application following the Stack Overflow answer Is there a way to access google cloud SQL via proxy inside docker container and modified the host in the settings.py file too.
Now facing the error
gcloud is not in the path and -instances and -projects are empty
services:
  web:
    build: .
    command: python manage.py runserver
    depends_on:
      - cloud-sql-proxy
    volumes:
      - .:/code
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: /cloud_sql_proxy --dir=/cloudsql instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - credentials.json:/secrets/cloudsql/credentials.json
    restart: always
Could you please help me with this issue? My requirement is to create a Docker image with the Django application so that it can be deployed to GCP.
I think you are missing a - before instances. It should be:
command: /cloud_sql_proxy --dir=/cloudsql -instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
I recommend you follow this documentation:
Connecting psql client using the Cloud SQL Proxy docker Image
That page describes how to connect a psql client to your Cloud SQL instance from a client machine running Linux or a Compute Engine Linux instance, using the Cloud SQL Proxy Docker image; I think this guide could meet your needs.
The guide shows how to start the proxy in step 9.
Unix sockets:
docker run -d -v /cloudsql:/cloudsql \
-v <PATH_TO_KEY_FILE>:/config \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy -dir=/cloudsql \
-instances=<INSTANCE_CONNECTION_NAME> -credential_file=/config
If you are using the credentials provided by your Compute Engine instance, do not include the credential_file parameter and the -v <PATH_TO_KEY_FILE>:/config line.
If you are using a container optimized image, use a writeable directory in place of /cloudsql, for example:
-v /mnt/stateful_partition/cloudsql:/cloudsql
Additionally, if you want to know more about Cloud SQL Proxy parameters and flags, I recommend taking a look at this page.
I hope this information is useful to you.
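With the proxy publishing TCP port 5432 under the cloud-sql-proxy service name, the web container can reach it by that name instead of localhost. A minimal connectivity check, only a sketch with placeholder credentials that should come from settings.py / .env.dev, and assuming psycopg2 is installed, might look like:
import psycopg2

# "cloud-sql-proxy" is the Compose service name; inside the web container it
# resolves to the proxy, which forwards the connection to the Cloud SQL instance.
conn = psycopg2.connect(
    host="cloud-sql-proxy",
    port=5432,
    dbname="your-db-name",      # placeholder values, not from the question
    user="your-db-user",
    password="your-db-password",
)
print("server version:", conn.server_version)
conn.close()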

Vagrant to Docker communication via Docker IP address

Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbies. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with another Vagrant VM running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container. So now we need the test code that runs in a Vagrant VM to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant VM are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose YAML file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests running in the Vagrant VM to still be able to communicate with the Django application that's now running in Docker, similar to the way they could communicate with the API when it was running in a Vagrant VM. I have tried several different types of network configurations within the docker compose file, but alas networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
Any help/tips/guidance here would be greatly appreciated!
In your Vagrantfile, make sure you have a private host-only network. I usually use one with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP on the host-only network. When you run docker-compose up -d in your Django VM, the VM will map port 8080 to the container's port 8080, so you can use 192.168.33.100:8080 from the other VM for testing the APIs.
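For example, from the test VM a request against that IP and the published port should reach the API; this is only a sketch, assuming the requests library is available and using a hypothetical /api/health/ endpoint:
import requests

# 192.168.33.100 is the host-only IP from the Vagrantfile above; 8080 is the
# port published by the django-api service in docker-compose.yml.
# /api/health/ is a hypothetical endpoint, not one from the question.
response = requests.get("http://192.168.33.100:8080/api/health/", timeout=5)
print(response.status_code, response.text)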
I would like the tests that are running on the vagrant to still be able to communicate with the django application that's now running on docker, similar to the way it could communicate with the api when it was running in a vagrant.
As you say, you are using docker-compose, so publishing ports will do what you are looking for. In the yml file, where the django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
It is. If the containers are on the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: In the yml file in the question, django-api can reach db at db:xxxx, where xxxx can be any port the database listens on inside the container. xxxx does not need to be mapped to the host or exposed.

Docker for AWS and Selenium Grid - Connection refused / No route to host (Host unreachable)

What I am trying to achieve is a scalable and on-demand test infrastructure using Selenium Grid.
I can get everything up and running, but what I end up with is connection errors: Connection refused / No route to host (Host unreachable).
Here are all the pieces:
Docker for AWS (CloudFormation Stack)
docker-selenium
Docker compose file (below)
The "implied" software used are:
Docker swarm
Stacks
Here is what I can accomplish:
Create, log into, and ping all hosts & nodes within the stack, following the guidelines here: deploy Docker for AWS
Deploy using the compose file at the end of this inquiry by running:
docker stack deploy -c docker-compose.yml grid
View Selenium Grid console using the public facing DNS name automatically provided by AWS (upon successful creation of the stack). Here is a helpful entry on the subject: Docker Swarm Mode.
Here are the contents of the compose file I am using:
version: '3'
services:
  hub:
    image: selenium/hub:3.4.0-chromium
    ports:
      - 4444:4444
    networks:
      - selenium
    environment:
      - JAVA_OPTS=-Xmx1024m
    deploy:
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints: [node.role == manager]
  chrome:
    image: selenium/node-chrome:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
  firefox:
    image: selenium/node-firefox:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  selenium:
Any guidance on this issue will be greatly appreciated. Thank you.
I have also tried opening up ports across the swarm:
swarm-exec docker service update --publish-add 5555:5555 gird
A quick Google brought up https://github.com/SeleniumHQ/docker-selenium/issues/255. You need to add the following to the Chrome and Firefox nodes:
entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
This is because the containers have two IP addresses in Swarm Mode and the nodes are picking up the wrong address and advertising that to the hub. This change will have the nodes advertise their hostname so the hub can find the nodes by DNS instead.
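Once the nodes register correctly, tests can target the hub through the stack's published port 4444. The following is only a sketch, assuming the Selenium 3.x Python bindings and using <STACK_DNS_NAME> as a placeholder for the public DNS name provided by AWS:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# <STACK_DNS_NAME> is a placeholder for the stack's public load-balancer DNS name.
driver = webdriver.Remote(
    command_executor="http://<STACK_DNS_NAME>:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME.copy(),
)
driver.get("https://www.example.com")
print(driver.title)
driver.quit()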

Symfony Application on AWS ECS with a data-only container - Is this the right direction?

I have a dockerized Symfony2 Application consisting of four containers:
php-fpm
nginx
mysql
code (data container with a volume)
On my local machine this setup runs without problem with docker-compose:
code:
  image: ebc9f7b635b3
nginx:
  build: docker/nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - code
php:
  build: docker/php
  volumes_from:
    - code
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "5000:3306"
  command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  environment:
    - MYSQL_ROOT_PASSWORD=xyz
    - MYSQL_DATABASE=xyz
    - MYSQL_USER=xyz
    - MYSQL_PASSWORD=xyz
I wanted to deploy my application on AWS ECS, so I prebuilt all the images and pushed them to the AWS Container Registry, created a new cluster with a new service, and translated my local docker-compose.yml into a Task Definition.
Since yesterday I have been trying to get it running, but after following the official documentation
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
and searching for hours I can't find a way to get it working.
Either the service gets stuck in a PENDING state without bringing up a container (except for the mysql container), or, if I attach a volume to the task definition, the containers come up but the data is not mapped.
Do I have to reference the data-only container with a special syntax in the volumesFrom section of the Task Definition?
Is there a solution for this by now other than using EFS?
Thanks!