access WSDL in a docker container - web-services

I built a SOAP web service and it is working fine on my local machine. The WSDL is generated by a Maven plugin from an XSD file and can be found on my localhost: http://localhost:8080/ws/test.wsdl.
I managed to build a Docker image of this web service, and it exposes port 310.
Now my problem is: the client can't import the WSDL because it is located inside the web service container.
My question is: what URL can the client import the WSDL from?
Thanks in advance.

You most likely have to set a port binding for your container. If your container exposes port 310, you can access it by setting a port-binding rule with Docker like this:
docker run -it -p 127.0.0.1:8080:310 <image>
This forwards any access to 127.0.0.1:8080 to the process in the container listening on port 310. The client can then import the WSDL from http://localhost:8080/ws/test.wsdl.
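As a quick sanity check from the client side, here is a minimal sketch using the Python zeep library (the choice of zeep is an assumption; any SOAP client that accepts a WSDL URL will do):
from zeep import Client

# The WSDL is fetched through the host-side port binding (8080 -> 310).
client = Client("http://localhost:8080/ws/test.wsdl")

# Print the services and operations the WSDL describes.
client.wsdl.dump()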

Related

How do I run a Django application without a port number

How do I run a Django application without a port number: I tried Django: Run django app on server without port? but it didn't work.
Web services must bind to a port on an interface of the system, so you have to specify a port number to run your Django application. The default port for HTTP is 80 and for HTTPS 443, but you can use any custom port in the range 1-65535. For example:
python manage.py runserver 7000
You may try the following:
python manage.py runserver 80
or if you don't have permissions (assuming you are using Linux):
sudo python manage.py runserver 80
Then you can access your application at http://localhost/
In general, web services need a port to run on. If the port is the default HTTP (80) or HTTPS (443) port, modern web browsers hide it in the address bar.
On a development server, you can hide the port (because you don't want to see it anymore) by assigning the app to port 80, as long as no other web service on the system is using it (otherwise Django will complain):
python manage.py runserver 80
On a production server, you need an application server like Gunicorn to run your Django app in the backend and a web server like Nginx or Apache to serve that backend to the external world. In that case, since the web server uses the HTTP/HTTPS ports, no port will be visible in the browser.
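For example, a minimal sketch of that setup (the project name myproject and the internal port 8000 are assumptions):
# Run the Django app with Gunicorn on an internal port.
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
And a matching Nginx server block (inside the http context of your Nginx config) proxying port 80 to Gunicorn:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}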

Frontend and backend in one single docker container. Which IP address to use?

I have a docker container running a frontend (React) on port 3000 and a backend (Django) on port 8000. From inside the container I can run
wget localhost:8000/
and I get back what the server has to give me. This also works if I forward port 8000 and call wget from outside the container.
But what about the frontend? Since it resides in the same container as the backend, I assumed it lives on the same localhost, so it should be able to retrieve the information from the backend using
wget localhost:8000/
But this is not what happens (I get ERR_CONNECTION_REFUSED).
Is it because when I run the frontend, the request actually comes from the browser on my local machine, which lives outside the container (and also outside the server)?
Or am I getting something wrong, and wget localhost:8000/ should also work from my browser?
The frontend is running in your browser, so your thought that the request comes from the browser is correct. In this case you have to publish a port for the Django backend so your browser can reach it from the "public" IP space.
It sounds like only port 8000 is mapped.
When you start the container, make sure to also add -p 3000:3000.
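For example, publishing both ports in one command (my_image is a placeholder):
docker run -p 3000:3000 -p 8000:8000 my_image
The React code running in the browser should then call the backend via http://localhost:8000/, which goes through the host's published port rather than the container's internal loopback.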

Unable to connect to Docker container: Connection Refused

I have a WAR file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSH-ing into the Linux instance and running curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
I tried 127.0.0.1:8080 as well, with the same response.
Next, I listed the Docker containers with docker ps and got:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Then I connected to the container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application WAR is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me an error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and unable to debug further. The result I am expecting is to access the app through the URL above.
What should I do? Is there something I am doing wrong?
Also, to mention: the entire infrastructure is managed through Terraform. I first create the base image, copy the WAR to webapps using the Dockerfile, push the registry image, and finally run terraform apply to apply any changes.
Make sure that Tomcat is listening on all IP addresses inside the Docker container, not just localhost; the bind address should be 0.0.0.0.
If a service running inside Docker listens only on localhost, it can only be accessed from inside that container, not from the host.
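One way to verify which address the service is bound to inside the container (assuming net-tools is available in the image):
docker exec -it <name> netstat -tlnp
A service bound to 127.0.0.1:8080 is reachable only from inside the container; one bound to 0.0.0.0:8080 is reachable through a published port.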
You can also start the server on port 8080 and bind the container's port 8080 to the host's port 8080:
docker run -p 8080:8080 <image>
Currently your app is running on a random host port, i.e. 32772 (see the docker ps output). You should be able to access your app at http://ec2-ip:32772 once you allow port 32772 in the security groups.
To make it work on host port 8080 instead, you need to bind the host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and a target group with your service.
However, if you are not using an ALB, you can try setting a static host port in the task definition, "hostPort": 8080 (I haven't tried this). If that works, make sure to change the deployment strategy to "minimum healthy percentage = 0", or you might face port-conflict issues during deployments.
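For reference, the relevant fragment of the container definition would look roughly like this (a sketch, not a complete task definition):
"portMappings": [
  {
    "containerPort": 8080,
    "hostPort": 8080,
    "protocol": "tcp"
  }
]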
If the application needs a network port, you must EXPOSE it in the Dockerfile:
EXPOSE <port> [<port>/<protocol>...]
If you need that port mapped to a specific port on the host network, you must define that when you spin up the new container:
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time. If you don't want to do this every time, you can use docker-compose and add a ports directive:
ports:
  - "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app
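Then a single command builds and starts everything with the ports already bound:
docker-compose up --build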

Error 99 connecting to localhost:6379. Cannot assign requested address

Setup:
I have a virtual machine, and in it three containers are running (an Nginx proxy, a very minimalistic Flask app, and Redis). Flask should be serving on port 5000 and Redis on 6379.
Each of these containers runs just fine as a standalone service, and they are also available as services via docker-compose.
In the Flask app, my aim is to connect to Redis and query for some keys.
The Nginx container exposes port 80, Flask port 5000, and Redis port 6379.
In the Flask app I have a function that tries to create a Redis client:
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
Running the Flask app, I get an error that the port cannot be used:
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I am at a loss as to what could be causing this problem, and any ideas would be appreciated.
In the Flask app I have a function that tries to create a Redis client:
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When your Flask process runs in a container, localhost refers to the network interface of the container itself. It does not resolve to the network interface of your Docker host.
So you need to replace localhost with the name or address of the container running Redis.
In the context of a docker-compose.yml file this is easy, as docker-compose makes service names resolve to the correct container IP addresses:
version: "3"
services:
  my_flask_service:
    image: ...
  my_redis_service:
    image: ...
Then in your Flask app, use:
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
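As a quick connectivity check, here is a minimal sketch of the Flask side (the /ping route is an illustration, not part of the original app):
import redis
from flask import Flask

app = Flask(__name__)

# "my_redis_service" resolves to the Redis container on the
# docker-compose network; 6379 is the default Redis port.
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)

@app.route('/ping')
def ping():
    # PING returns True when the connection to Redis is healthy.
    return str(db.ping())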
I had this same problem, except the service I wanted my container to access was remote and mapped to my Docker host via an SSH tunnel. In other words, there was no docker-compose service name for my code to resolve. I solved the problem by explicitly telling Redis to look for my local host by name:
pyredis.Redis(host='docker.for.mac.localhost', port=6379)
For anyone running a container with plain docker: you can add --network=host to the command, as in docker run --network=host, to make Docker use the host's network while running the container.
You can also use the host network for a swarm service by passing --network host to the docker service create command.
Make sure you don't publish any ports (like -p 80:8000) while doing this.
I am not sure if Docker Compose supports this.
N.B.: this is only supported on Linux.
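For example, running the Redis container from this question on the host network (using the official redis image as an illustration):
docker run --network=host redis
Redis then listens on the host's own port 6379, so if the Flask container is started the same way, the original host='localhost', port=6379 connection code works unchanged.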

Strange behavior in a Flask app with Docker on AWS doing a POST

I have a Flask app running with Docker and uWSGI on AWS. I have some endpoints, and when I do a POST to one of them using Postman or curl, I see response status code 412 in the logs, but Postman or curl shows 502.
I tried running the Flask app locally with uWSGI but without Docker, and it runs as expected.
I need the client to receive the 412 response so it knows how to handle this status code.
If the Flask app works as expected on your local machine, it probably has something to do with how the port routing is set up for your container.
In addition to the port your Flask application receives requests on, the Docker container it lives inside has its own ports: container ports that must be exposed to receive requests, and host ports that can be bound to container ports so that outside traffic reaches your application.
The long explanation is available in this answer here, but the TL;DR is:
Run your container with docker run -it --expose 8008 -p 8008:8008 myContainer
This exposes container port 8008 with --expose CONTAINERPORT and binds the host port to the container port with -p HOSTPORT:CONTAINERPORT.
Lastly, when running your Flask service, make sure its port lines up with the container port you exposed. An example using the same port as before:
flask run --host=0.0.0.0 --port=8008
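To verify the status code survives the port mapping end to end, here is a minimal sketch (the /check route is an illustration, not the original endpoint):
from flask import Flask

app = Flask(__name__)

@app.route('/check', methods=['POST'])
def check():
    # Deliberately return 412 so the caller can confirm the code is
    # not being swallowed between uWSGI and the client.
    return 'Precondition Failed', 412

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the container's published port can reach it.
    app.run(host='0.0.0.0', port=8008)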