How do I enable HTTPS in Docker Container - amazon-web-services

I have deployed my Linux container in AWS ECS, and I can access it over port 80 through an AWS load balancer.
How do I make it reachable on 443 by adding a certificate? I have tried just exposing the port, but I also need to add the certificate. I can't use docker-compose, as AWS ECS doesn't support that, and I need to add everything to the single application Dockerfile.
What are the steps for adding the certificate in a Dockerfile?
My Dockerfile currently looks like this:
FROM node:10
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD [ "node", "env.js"]
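For illustration, is something like the following the right direction? The certs/ directory and the server.crt / server.key names are just placeholders, and env.js would need to be changed to start an HTTPS listener on 443 with those files:
FROM node:10
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
# copy the certificate and private key into the image (placeholder paths)
COPY certs/server.crt certs/server.key /usr/app/certs/
# the app itself has to read these files and serve HTTPS on this port
EXPOSE 443
CMD [ "node", "env.js" ]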

Related

Jenkins installed via docker cannot run on AWS EC2

I'm new to DevOps. I want to install Jenkins on AWS EC2 with Docker.
I installed Jenkins with this command:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
In the AWS security group, I have enabled ports 8080 and 50000. I also enabled port 22 for SSH, 27017 for Mongo and 3000 for Node.
I can see the Jenkins container when I run docker ps. However, when I open https://xxxx.us-east-2.compute.amazonaws.com:8080, the Jenkins setup page does not appear and the browser shows the error ERR_SSL_PROTOCOL_ERROR.
Does someone know what's wrong here? Should I install Nginx as well? I haven't installed it yet.
The error is due to the fact that you are using https:
https://xxxx.us-east-2.compute.amazonaws.com:8080
From your description it does not seem that you've set up any kind of SSL connection to your instance, so you should connect using http only:
http://xxxx.us-east-2.compute.amazonaws.com:8080
But this is not good practice, as you communicate in plain text. A common solution is to access the Jenkins web UI through an SSH tunnel. This way the connection is encrypted and you don't have to expose any Jenkins port in your security groups.
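A rough sketch of such a tunnel, assuming the default ec2-user account and a key pair at ~/.ssh/my-key.pem (both placeholders for your own values):
ssh -i ~/.ssh/my-key.pem -N -L 8080:localhost:8080 ec2-user@xxxx.us-east-2.compute.amazonaws.com
While the tunnel is open, Jenkins is reachable at http://localhost:8080 on your own machine, and port 8080 can stay closed in the security group.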

I cannot reach an EC2 instance in AWS from a browser

I wrote a very simple Spring Boot application and packaged it with Docker.
The Dockerfile is:
FROM openjdk:13
ADD target/HelloWorld-1.0-SNAPSHOT.jar HelloWorld.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "HelloWorld.jar"]
I pushed it to docker hub.
I created a new EC2 instance on aws. Then I connected to it and typed the following commands:
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo docker run -p 80:8085 ****/docker-hello-world
The last command printed many messages saying that the Spring Boot application was running.
Looks great. However, when I opened my browser and typed "http://ec2-54-86-87-68.compute-1.amazonaws.com/" (the public DNS of the EC2 machine),
I got "This site can’t be reached".
Do you know what I did wrong?
Edit: the security groups attached to this machine are "default" and the following group that I defined:
Inside the EC2 machine, I typed "curl localhost:8085" and got:
"curl: (52) Empty reply from server"
Ensure that inbound traffic on your port is allowed from your local IP address in your EC2 instance's security group configuration:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#adding-security-group-rule
Have you allowed inbound traffic for port 8085 in your security group configuration? That should be the first thing to check.
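For reference, such a rule can also be added with the AWS CLI; the security group ID below is a placeholder, and opening the port to 0.0.0.0/0 instead of your own IP is only reasonable for a quick test:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8085 --cidr 0.0.0.0/0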
I found the solution.
It was a port issue.
Instead of running
sudo docker run -p 80:8085 ****/docker-hello-world
I had to run:
sudo docker run -p 8085:8080 ****/docker-hello-world
This command says: "map port 8080, where the application actually listens inside the container, to port 8085 on the host".
I opened the browser and browsed to: "http://ec2-18-207-188-57.compute-1.amazonaws.com:8085/hello" and got the response I expected.
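A quick way to double-check a mapping like this on the instance is to list the published ports and hit the mapped host port locally (placeholder output in the comments):
sudo docker ps --format '{{.Names}} {{.Ports}}'   # should show something like 0.0.0.0:8085->8080/tcp
curl http://localhost:8085/hello                  # should now return the controller's response
Port 8085 also has to be open in the instance's security group, as the earlier answers point out.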

Error connecting to RDS PostgreSQL DB from inside Docker container

I've got an app running Flask_sqlalchemy in a Docker container.
The container wasn't running properly, so I dived in, tried running the application, and got the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
Is the server running on host "DBNAME.XXXXXXXXXX.eu-west-1.rds.amazonaws.com" (000.000.000.000) and accepting
TCP/IP connections on port 5432?
The application works fine outside the container, and I can't work out what's going on.
Could it be something to do with the AWS-RDS security groups? They're currently configured to only accept inbound connections from our office where development takes place.
EDIT:
This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential libpq-dev python-shapely
COPY . /src
WORKDIR /src
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "application.py"]
And this is the Docker Run command I'm doing:
docker run -d -p 5000:5000 container_name
Thanks
I had exactly the same problem ;) but solved it by ensuring the following:
The AWS Elastic Beanstalk environment is configured as Generic Docker (not the preconfigured Python platform).
The environment is created inside a VPC that contains the RDS instance.
The RDS instance is listening on the right port (PostgreSQL: 5432) and the VPC security group allows it (this should already be the case).
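One way to check the network path from inside the running container, sketched with a placeholder container name and using only the Python standard library that is already in the image:
docker exec -it <container_name> python -c "import socket; socket.create_connection(('DBNAME.XXXXXXXXXX.eu-west-1.rds.amazonaws.com', 5432), timeout=5); print('reachable')"
If this prints "reachable", the security group and VPC routing are fine and the problem is elsewhere; a timeout or "Connection refused" points back at the RDS security group rules.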

How to Access AWS EC2 docker tomcat instance running inside jenkins docker instance from my local browser

I have a jenkins instance running inside a docker container that's listening on port 8181.
Example URL of the jenkins instance:
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com/
I have a tomcat docker instance that's listening on port 8383 running inside the jenkins docker container.
I can access jenkins instance from my local browser. Is there any possible way that I can access my docker tomcat instance from my local browser?
Here is my docker run command:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker -p 8181:8080 jenkins-dsl
Please provide your suggestions.
It sounds like your docker run command simply needs to expose the port that your nested tomcat server is running on.
To do this, you need to pass in -p argument into your command. The -p argument is for binding a host port to the docker container's port:
-p <host_port>:<container_port>
You can pass in as many -p arguments as you want to bind multiple ports.
So if the Docker Tomcat server should be reachable on port 8383 of the host while Tomcat listens on its default port 8080 inside the container, you can do something like this:
-p 8383:8080
Full command example:
docker run -d -it -p 8383:8080 --name tomcatServer docker-tomcat
I would assume that this would allow you to access tomcat server using the example URL provided like so:
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com:8383
However, you'd have to ensure your AWS Security Group will allow traffic to port 8383.
EDIT: Updated answer to reflect the resolution we discussed in the comments.
Edited:
I was able to launch Tomcat by specifying the port in the URL and opening the port on the EC2 instance.
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com:8383
The latest Docker installation guide for Tomcat clearly says you will get this error when you launch it for the first time:
You can then go to http://localhost:8888 or http://host-ip:8888 in a browser (noting that it will return a 404 since there are no webapps loaded by default).
It's because you do not have any apps in the default webapps folder of Tomcat. The latest Tomcat Docker image keeps the default apps in the "webapps.dist" folder; you have to copy them to the "webapps" folder. Run the following commands:
# docker exec -it tomcat-container /bin/bash
# cd webapps.dist
# cp -R * ../webapps
"tomcat-container" is your container name.
Now refresh your browser and you should see it. If not, let me know.

How to bind/publish a port in a Dockerfile on AWS Beanstalk

I am trying to deploy an ASP.NET Docker container on AWS.
This is my Dockerfile (from a Microsoft Docker example):
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]
EXPOSE 80:80
ENTRYPOINT ["k", "kestrel"]
The problem is: I can run the container properly from the command line with [ docker run -p 80:80 image ].
But when I use AWS Beanstalk to deploy my Dockerfile, it cannot map the public port to the Docker port automatically, and I need to run docker run -p again to make it usable.
At the start there is an image that works and a container that runs, but it seems it can't map the port, so it dies; I then have to run it again with -p to make it work.
I have no clue how AWS will run Docker with -p, or on which port. Please help me.
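If it helps, two things that may be relevant here, offered as a sketch rather than a confirmed fix: EXPOSE in a Dockerfile only takes a container port (EXPOSE 80), not a host:container pair, and, as far as I understand the Beanstalk docs, single-container Elastic Beanstalk picks the container port to proxy from the port exposed by the image or from a Dockerrun.aws.json placed next to the Dockerfile, for example:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
Beanstalk's proxy should then forward the public port 80 to that container port without a manual docker run -p.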