Cloud Foundry: How to remap an exposed port in Docker image? - cloud-foundry

I would like to run RabbitMQ service using my organization's Cloud Foundry Service. I checked the RabbitMQ docker image and saw that the following ports are exposed:
"ExposedPorts": {
"25672/tcp": {},
"4369/tcp": {},
"5671/tcp": {},
"5672/tcp": {}
},
I start the app by pushing it to Cloud Foundry as follows: cf push -o rabbitmq RabbitMQ -u process.
The app gets installed and gets started. However, it is listening on port 5672. The CF service only allows me to have ports between 10000 and 10999. So I go into the CF portal, remove the HTTP route, and create a new TCP route on port 10123 for the rabbitmq app.
How do I go about mapping the port 10123 (external facing) to the port 5672 (RabbitMQ, internal facing) using the CF CLI?

There is functionality to map a route with specific external ports to specific internal app ports. It is described in the docs here.
https://docs.cloudfoundry.org/devguide/custom-ports.html#procedure
At the moment, the functionality isn't directly supported by the cf cli, so you need to use cf curl to manually send a few requests.
The general flow is this:
1. Get your app's guid: cf app my-app --guid.
2. Configure the list of ports for your app: cf curl /v2/apps/APP-GUID -X PUT -d '{"ports": [25672, 4369, 5671, 5672]}'
3. Map a TCP route to your app with cf map-route my-app example.com --port 10123.
4. Get the route guid of your TCP route: run cf curl /v2/routes?q=host:example.com.
5. Create the route mapping with cf curl /v2/route_mappings -X POST -d '{"app_guid": "APP-GUID from step 1", "route_guid": "ROUTE-GUID from step 4", "app_port": 5672}'
6. Optionally repeat steps 3-5 for additional ports.
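Collected into one sketch, the flow looks like this. APP-GUID and ROUTE-GUID are placeholders for the guids your own foundation returns, and the domain is an example; each command is echoed so it can be reviewed before actually running it:

```shell
# Sketch of the flow above; every command is echoed rather than executed.
# APP-GUID and ROUTE-GUID are placeholders for values cf returns.
APP_NAME="rabbitmq"
TCP_DOMAIN="example.com"
EXTERNAL_PORT=10123
INTERNAL_PORT=5672

echo "cf app $APP_NAME --guid"                                                        # step 1
echo "cf curl /v2/apps/APP-GUID -X PUT -d '{\"ports\": [25672, 4369, 5671, 5672]}'"   # step 2
echo "cf map-route $APP_NAME $TCP_DOMAIN --port $EXTERNAL_PORT"                       # step 3
echo "cf curl /v2/routes?q=host:$TCP_DOMAIN"                                          # step 4
echo "cf curl /v2/route_mappings -X POST -d '{\"app_guid\": \"APP-GUID\", \"route_guid\": \"ROUTE-GUID\", \"app_port\": $INTERNAL_PORT}'"  # step 5
```

Drop the echo prefixes once the guids are filled in.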

Related

Docker - springboot on AWS EC2

Just spun up an EC2 Ubuntu instance on AWS. Installed Docker. Pulled my test Spring Boot image and ran it on the host. Can't access the app via browser. When I curl on the host, it does respond with a valid HTTP response. Is there a network or firewall setting that I should be looking at?
ubuntu@ip-172-31-4-157:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea9879c1b38c parikshit123/docker-spring-boot:firsttry "java -jar docker-sp…" 20 minutes ago Up 20 minutes 0.0.0.0:8085->8085/tcp frosty_sammet
ubuntu@ip-172-31-4-157:~$ curl localhost:8085/test/hello
Hello from Mitalubuntu@ip-172-31-4-157:~$
Just figured it out.
By default, an AWS EC2 instance's security group blocks all inbound TCP traffic. I learned that the port has to be opened explicitly. I added a security group rule and it worked. Now I can access the endpoint via browser. Bingo!
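For reference, the same rule can be added from the AWS CLI. A sketch, where the security-group id is a hypothetical placeholder and the command is echoed for review rather than executed:

```shell
# Hypothetical security-group id; replace with the group attached to your instance.
SG_ID="sg-0123456789abcdef0"
APP_PORT=8085

# Allow inbound TCP on the Spring Boot port from anywhere.
# Tighten the CIDR for production. Echoed so the rule can be reviewed first.
echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$APP_PORT" \
    --cidr 0.0.0.0/0
```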

What are the ports to be opened for Google cloud SDK?

I am supposed to install Google cloud SDK on a secured windows server where even port for http(80) and https(443) is not enabled.
What are the ports to be opened to work with gcloud, gsutil and bq commands?
I tested the behaviour on my machine; I expected to need only port 443, because the Google Cloud SDK is based on HTTPS REST API calls.
For example, you can check what is going on behind the scenes with the --log-http flag:
gcloud compute instances list --log-http
Therefore you need a rule allowing TCP:443 egress traffic.
With respect to ingress traffic: if your firewall is stateful, it recognises that you opened the connection and lets the return traffic pass (the most common scenario), so you do not need any rule for incoming traffic.
Otherwise you will also need to allow TCP:443 incoming traffic.
Update
Therefore you will need to be able to open connections toward:
accounts.google.com:443
*.googleapis.com:443
*:9000 for serialport in case you need this feature
The error below shows it is port 443:
app> gcloud storage cp C:\Test-file6.txt gs://dl-bugcket-dev/
ERROR: (gcloud.storage.cp) There was a problem refreshing your current auth tokens: HTTPSConnectionPool(host='sts.googleapis.com', port=443): Max retries exceeded with url: /v1/token (Caused by NewConnectionError...
If you run netstat -anb at the same time as any gcloud command that needs a remote connection, you will also see the entry below for the app you are using (in my case, PowerShell):
[PowerShell.exe]
TCP 142.174.184.157:63546 40.126.29.14:443 SYN_SENT
Do not use a proxy when checking the entry above; otherwise gcloud connects to the proxy and you cannot see the actual port. You can avoid this by creating a new config:
gcloud config configurations create no-proxy-config
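A sketch of setting up such a proxy-free configuration; the property names are the standard gcloud proxy settings, and the commands are echoed so they can be reviewed before running:

```shell
# Create a fresh configuration and clear any proxy properties so that
# netstat shows the real destination host and port.
CONFIG_NAME="no-proxy-config"
echo gcloud config configurations create "$CONFIG_NAME"
for prop in proxy/type proxy/address proxy/port; do
    echo gcloud config unset "$prop"
done
```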

Unable to access REST service deployed in docker swarm in AWS

I used the cloud formation template provided by Docker for AWS setup & prerequisites to set up a docker swarm.
I created a REST service using Tibco BusinessWorks Container Edition and deployed it into the swarm by creating a docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully but the CloudWatch logs show the below exception:
I have tried passing the JVM environment variable in the Dockerfile as :
ENV JAVA_OPTS= "-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using curl:
curl -X GET --header 'Accept: application/json' 'URL of AWS load balancer : port which I exposed while creating the service/resource URI'
But I am getting the below message:
The REST service works fine when I do docker run.
I have checked the Security Groups of the manager and load-balancer. The load-balancer has inbound open to all traffic and for the manager I opened HTTP connections.
I am not able to figure out if anything I have missed. Can anyone please help ?
As mentioned in Deploy services to swarm, if you read along, you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service's ports directly on the swarm node using mode=host and also set published=<PORT>, this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was: the image does not expose the port after running service create --publish, and hence the swarm routing layer cannot reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
Which eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer to allow communication over the desired protocols, so your applications deployed inside the container can be reached.
In my case it was the HTTP protocol; enabling port 8087 on the load balancer served the purpose.
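With mode=host the published port is only open on the node actually running the task, so it helps to find that node before curling. A sketch, assuming the service name and port from the example above; NODE-ADDRESS is a placeholder, and the commands are echoed for review:

```shell
SERVICE_NAME="tuesday"
PUBLISHED_PORT=8087

# mode=host binds the port only on the node running the task,
# so find that node first, then curl it directly.
echo "docker service ps $SERVICE_NAME --format '{{.Node}}'"
echo "curl -s http://NODE-ADDRESS:$PUBLISHED_PORT/your-resource-uri"
```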

Airflow integration with AWS development machine to access admin UI

I am trying to use Airflow for workflow management on my AWS development machine. I have multiple virtual environments set up and have installed Airflow.
I am listening to port 8080 in my nginx conf as:
listen private.ip:8080;
I have allowed inbound connection to port 8080 on my AWS machine.
I am unable to access my airflow console as well as admin page from my public ip / website address.
You can just create a tunnel for viewing UI locally.
ssh -N -L 8080:ec2-machineip-compute-x.amazonaws.com:8080 YOUR_USERNAME_FOR_MACHINE@ec2-machineip-compute-x.amazonaws.com
Then open localhost:8080 locally to view the Airflow UI.

CloudFoundry application opening two ports

I have a CF application that opens two ports. AFAIK, CF can create a route to only one of them: the one located in VCAP_APP_PORT or PORT. How can I create a route to the second port? I don't mind having a separate name directing to the other port.
As stated in some other comments it is now possible in CF to use multiple ports for your application. There is a chapter in the CF documentation which describes how to do it.
I followed the instructions and still had some trouble to fully understand it, that's why I provide a step by step guide here with some explanations (replace all variables in [] with the actual values):
Configure your application to listen on multiple ports. In my case I configured a spring boot app to listen on port 8080 for HTTPS requests and on port 8081 for HTTP requests (used for calls to actuator endpoints like health/prometheus as described here). This means that I have configured one TCP route and one HTTP route in CF and mapped those routes to the CF app.
Get the [APP_GUID] of the CF app which should be reachable on multiple ports:
cf app [APP_NAME] --guid
Add the ports (e.g. 8080, 8081) to the CF app: cf curl /v2/apps/[APP_GUID] -X PUT -d '{"ports": [8080, 8081]}'
Now the route (e.g. in this case the HTTP route) which points to the CF app must also be adjusted so that it points to the correct CF app port. First you need to get the route information, you can do it with
cf curl /v2/routes?q=host:[HOST_NAME] or with cf curl /v2/apps/[APP_GUID]/routes and save the guid of the route that points to your app ([ROUTE_GUID]).
For this particular route you have to adjust the route mappings. Each CF route can have multiple route mappings. You can show current route mappings for a route with this command: cf curl /v2/routes/[ROUTE_GUID]/route_mappings. With cf curl /v2/route_mappings -X POST -d '{"app_guid": "[APP_GUID]", "route_guid": "[ROUTE_GUID]", "app_port": 8081}' you add a mapping to a route (e.g. here to 8081).
The route has now two mappings, one pointing to 8080 and one pointing to 8081. If you want the route to only point to one of the ports (e.g. 8081) you have to delete the mapping with the port you do not want to have. Run cf curl /v2/routes/[ROUTE_GUID]/route_mappings to show all route mappings. Then extract the guid of the route mapping that should be deleted (e.g. the one to port 8080). Finally, run cf curl /v2/route_mappings/[GUID_ROUTE_MAPPING] -X DELETE to delete the route mapping you do not need.
Now your CF app should be reachable on another port than 8080 when the newly configured route is used.
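The steps above, collected into one sketch for the 8080/8081 example. All guids are placeholders for values returned by the preceding command in the sequence, and each command is echoed so it can be reviewed before running:

```shell
APP_NAME="my-app"
echo "cf app $APP_NAME --guid"                                        # -> APP_GUID
echo "cf curl /v2/apps/APP-GUID -X PUT -d '{\"ports\": [8080, 8081]}'"
echo "cf curl /v2/apps/APP-GUID/routes"                               # -> ROUTE_GUID
echo "cf curl /v2/route_mappings -X POST -d '{\"app_guid\": \"APP-GUID\", \"route_guid\": \"ROUTE-GUID\", \"app_port\": 8081}'"
echo "cf curl /v2/routes/ROUTE-GUID/route_mappings"                   # find the guid of the 8080 mapping
echo "cf curl /v2/route_mappings/MAPPING-GUID -X DELETE"              # drop the mapping you do not need
```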
Currently an application on Cloud Foundry is not able to have two ports mapped into its container environment. As part of the new Diego runtime, multiple port mappings has been exposed, but is not currently available through the API.
Depending on what you need, you could take a look at Lattice, which uses the Diego runtime. Some documentation can be found here.
Cloud Foundry will route TCP/WebSocket traffic coming from 80/443 to the one assigned port. Your application cannot listen on any other port.
https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html#ports
You can either create multiple url mappings, or have two applications that communicate with each other using a messaging or database service.
Resurrecting an old question, but this is supported in Cloud Foundry now. Support was added around April 2019. Check your version to see if it supports this.
The general process is:
Use cf cli to update your app to list all the ports it listens on
Update each route to the app with the specific port that the route should use. If you have two ports, you'll need two or more routes, one port per route.
Restart the app
Right now you have to use cf curl to manually update these records. Instructions can be found here: https://docs.cloudfoundry.org/devguide/custom-ports.html. Hopefully future cf cli versions make this easier.