Why does "kompose up" yield "connection refused"? - google-cloud-platform

Running docker-compose up on a simple web app works as expected. Running kompose up on the same app while attempting to connect to a Google Cloud cluster doesn't work:
...
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
FATA Error while deploying application: Get http://127.0.0.1:6443/api: dial tcp 127.0.0.1:6443: connect: connection refused
What are possible causes of this problem?
Other posts here on SO do not help, and I can't find any other relevant result on the web.
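One likely cause, judging by the address in the error: kompose deploys through whatever Kubernetes API server your kubeconfig points at, and 127.0.0.1:6443 is the local default it falls back to when no cluster context is configured. If that is what's happening here, fetch the GKE cluster credentials first with 'gcloud container clusters get-credentials <cluster-name> --zone <zone>' (substitute your actual cluster name and zone), confirm the active context with 'kubectl config current-context', and then retry 'kompose up'.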

Related

Google Cloud run failed to start

I'm trying to deploy a container to cloud run, but my deploy fails because of this error:
Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Locally my container is able to start and I can see this log (phoenix app):
19:54:51.487 [info] Running ProjectWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:8080 (http)
When I add -p 8080:8080 to my docker run invocation, I can see that curl localhost:8080/health returns a 200 response:
curl localhost:8080/health
[{"error":null,"healthy":true,"name":"NOOP","time":12}]
What's strange is that in Cloud Run and Cloud Logging I don't see any of my container's logs, even though I see them locally and I know I have logs that should be going to stdout and stderr on startup, so debugging is very hard.
What could be causing the logging issue? Why isn't Cloud Run able to talk to my container's server?
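For reference, Cloud Run requires the container to listen on 0.0.0.0 at the port given by the PORT environment variable; binding only to 127.0.0.1 or to a hard-coded port produces exactly this "failed to start and then listen" error. A minimal sketch of the expected behavior, written in Python as a stand-in for the asker's Phoenix app:

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Cloud Run injects the port to listen on via the PORT environment variable.
# The server must bind 0.0.0.0, not 127.0.0.1, or the platform cannot reach
# it and the revision fails to start.
port = int(os.environ.get("PORT", "8080"))

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"healthy": true}')

HTTPServer(("0.0.0.0", port), Health).serve_forever()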

Error 99 connecting to localhost:6379. Cannot assign requested address

Setup:
I have a virtual machine running three containers (an nginx proxy, a very minimalistic Flask app, and Redis). Flask should be serving on port 5000 and Redis on 6379.
Each of these containers runs just fine as a standalone service, and they are also available together via docker-compose.
In the flask app, my aim is to connect to redis and query for some keys.
The nginx container exposes port 80, flask port 5000 and redis port 6379.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When I run the Flask app, I get an error that the port cannot be used:
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I'm at a loss as to what could be causing this problem; any ideas would be appreciated.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When your flask process runs in a container, localhost refers to the network interface of the container itself. It does not resolve to the network interface of your docker host.
So you need to replace localhost with the IP address of the container running redis.
In the context of a docker-compose.yml file, this is easy as docker-compose will make service names resolve to the correct container IP address:
version: "3"
services:
my_flask_service:
image: ...
my_redis_service:
image: ...
then in your flask app, use:
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
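As a quick sanity check, a minimal sketch (assuming the my_redis_service name from the compose file above):

import redis

# Inside the Flask container, the Compose service name resolves to the
# Redis container's IP via Compose's built-in DNS.
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
db.ping()  # raises redis.exceptions.ConnectionError if Redis is unreachable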
I had this same problem, except the service I wanted my container to access was remote and mapped via ssh tunnel to my Docker host. In other words, there was no docker-compose service for my code to find. I solved the problem by explicitly telling redis to look for my local host as a string:
pyredis.Redis(host='docker.for.mac.localhost', port=6379)
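On current versions of Docker Desktop (both Mac and Windows), the equivalent special hostname is host.docker.internal, so the same call would be pyredis.Redis(host='host.docker.internal', port=6379).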
If you are using plain docker to run a container, you can add --network=host to the docker run command to make the container use the host's network.
You can also use the host network for a swarm service, by passing --network host to the docker service create command.
Make sure you don't publish any ports (e.g. -p 80:8000) while doing this.
I am not sure if Docker Compose supports this.
N.B. host networking is only supported on Linux.
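(For what it's worth, Compose does support it: set network_mode: "host" on the service in docker-compose.yml, and drop any ports: mappings, which do not apply in that mode.)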

Cannot reach containers from codebuild

I've been having issues reaching containers from within CodeBuild. I have an exposed GraphQL service with a downstream auth service and a PostgreSQL database, all started through Docker Compose. Running and testing them works fine locally, however I cannot get the right combination of host names in CodeBuild.
It looks like my test is able to run if I hit the GraphQL endpoint at 0.0.0.0:8000, however once my GraphQL container attempts to reach the downstream service I get a connection refused. I've tried reaching the auth service from inside the GraphQL service at auth:8001, at 0.0.0.0:8001, with port 8001 exposed, and by setting up a bridged network. I always get a connection refused error.
I've attached part of my codebuild logs.
Any ideas what I might be missing?
Container 2018/08/28 05:37:17 Running command docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS                  PORTS                    NAMES
6c4ab1fdc980   docker-compose_graphql   "app"                    1 second ago    Up Less than a second   0.0.0.0:8000->8000/tcp   docker-compose_graphql_1
5c665f5f812d   docker-compose_auth      "/bin/sh -c app"         2 seconds ago   Up Less than a second   0.0.0.0:8001->8001/tcp   docker-compose_auth_1
b28148784c04   postgres:10.4            "docker-entrypoint..."   2 seconds ago   Up 1 second             0.0.0.0:5432->5432/tcp   docker-compose_psql_1

Container 2018/08/28 05:37:17 Running command go test ; cd ../..
Register panic: [{"message":"rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 0.0.0.0:8001: connect: connection refused\"","path":
From the "host" machine my exposed GraphQL service could only be reached using the IP address 0.0.0.0. The internal networking was set up correctly and each service could be reached at <NAME>:<PORT> as expected, however, upon error the IP address would be shown (172.27.0.1) instead of the host name.
My problem was that all internal connections were not yet ready, leading to the "connection refused" error. The command sleep 5 after docker-compose up gave my services time to fully initialize before testing.
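A fixed sleep works, but it is timing-dependent. An alternative is to poll until the port actually accepts connections; a minimal sketch (the wait_for_port helper is hypothetical, and 'auth', 8001 is the downstream address from the question):

import socket
import time

def wait_for_port(host, port, timeout=30.0):
    # Poll until a TCP port accepts connections, instead of sleeping blindly.
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError('%s:%d not ready after %ss' % (host, port, timeout))
            time.sleep(0.5)

# After 'docker-compose up -d', before running the tests:
wait_for_port('auth', 8001)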

Error in tunneling service on VMC

I've pushed an app to my CF cloud_controller paas.azure4j.us with the URI yamashowcase.azure4j.us.
When I tried to create a tunnel to the service bound to it, an error occurred:
toriq@meruvian354:~$ sudo vmc tunnel yamashowcase-db
Stopping Application 'caldecott': OK
Redeploying tunnel application 'caldecott'.
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (1K): OK
Push Status: OK
Binding Service [yamashowcase-db]: OK
Staging Application 'caldecott': OK
Starting Application 'caldecott': OK
Getting tunnel connection info: ..........Error: Expected remote tunnel to know about
yamashowcase-db, but it doesn't
I'm using vmc version 0.3.23.
Any solution for this?
I think this error is a little misleading. I would check the logs on the instance, particularly the cloud controller log and the log for the service node used by yamashowcase-db (MySQL, Postgres, etc.).

Problems getting RabbitMQ and Django-Celery Running: Target Machine actively refused connection

I am trying to get django-celery running in my Django app. I cannot get the worker server to run; when I try, I get the message: No connection could be made because the target machine actively refused it.
Here is what I have done so far. First, I installed the django-celery package: http://pypi.python.org/pypi/django-celery
I can load it into python without problems. I also installed the RabbitMQ server per the windows install instructions: http://www.rabbitmq.com/install.html#windows
Starting the tutorials in Python on the RabbitMQ site, I saw the need to install pika: http://pypi.python.org/pypi/pika. It imports without any problems.
From there I start the RabbitMQ server by running this at the command line: rabbitmq-service start
I get the message back that Service RabbitMQ started
Here is where I start to have problems.
I attempted the first steps in django-celery: http://packages.python.org/django-celery/getting-started/first-steps-with-django.html and the "hello world" example on the RabbitMQ site: http://www.rabbitmq.com/tutorials/tutorial-one-python.html
In both cases I get the message: No Connection could be made because the target machine actively refused it
My first thought was that this sounded like a firewall problem, so I went into the Windows 7 firewall and added inbound and outbound rules to open local and remote ports 5672 and 5673 for TCP, but I still get the same error message.
When I run rabbitmqctl status I get the message:
Error: unable to connect to node 'rabbit@hostname': nodedown
diagnostics:
- nodes and their ports on hostname: [{rabbitmqctl18856, 505031}]
Does that mean it is trying to operate on those ports? What about the default 5672?
Any suggestions?
UPDATE: This was actually a problem resulting from several failed RabbitMQ installs conflicting with the latest installation. If you have to remove RabbitMQ, use the 'rabbitmq-service remove' command and not SC DELETE, which caused a lot of problems for me; I had to go in and clean up my Windows registry.
The nodedown error reported by rabbitmqctl suggests that the server isn't running on that machine.
Try going through the steps in RabbitMQ's troubleshooting guide. In particular, pay close attention to the logs. Has the server crashed for some reason? Could you post the logs somewhere?
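Once the broker is genuinely up, a quick way to confirm that something is listening on the default port 5672 is a minimal pika connection, along the lines of the tutorial linked in the question (a sketch assuming the broker's default settings on localhost):

import pika

# Minimal connectivity check against a local RabbitMQ broker.
# Assumes the default AMQP port 5672 on localhost.
try:
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost', port=5672))
    print('Connected to RabbitMQ')
    connection.close()
except pika.exceptions.AMQPConnectionError as exc:
    # "Actively refused" / "connection refused" here means nothing is
    # listening on the port, i.e. the broker is not actually running.
    print('Could not reach broker:', exc)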