Docker CE installation on Ubuntu 16.04 crashes network - amazon-web-services

I am trying to install docker-ce 17.09 on an Ubuntu 16.04 instance on AWS. The instance is behind the company VPC, and the assigned security group allows all TCP & UDP traffic.
However, the whole network crashed and I lost my SSH connection to the instance when the Docker installation reached:
Setting up docker-ce (17.09.0~ce-0~ubuntu) ...
Connection reset by ... port 22
Is that because of the VPC settings? Or any other reason?
Updated
As I'm not able to change the existing VPC, I decided not to use an Ubuntu instance but the Amazon Linux AMI instead.

It sounds like your VPC and Docker subnets are conflicting, which means you can either redo your VPC to use a different subnet, or change the Docker bridge subnet: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/
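The bridge change from that link can be done through the Docker daemon configuration; a minimal sketch, written to a local file here (on the instance the file belongs at /etc/docker/daemon.json, and the CIDR below is only an example that must be chosen to avoid the VPC's range):

```shell
# "bip" moves the docker0 bridge to the given address/CIDR;
# 192.168.100.1/24 is an example picked to avoid a typical VPC range.
cat > daemon.json <<'EOF'
{
  "bip": "192.168.100.1/24"
}
EOF
cat daemon.json
# On the instance, follow with: sudo systemctl restart docker
```

Restarting the daemon after this change recreates docker0 on the new subnet, so the install no longer claims an address range the VPC is using.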

Related

How to connect to local virtual box instance from amazon ec2 instance

I have created a CentOS 7 VM on my Windows machine using Oracle VirtualBox.
I also created a Red Hat instance in Amazon AWS.
Now I want to connect from the AWS EC2 instance to the CentOS 7 instance on my Windows laptop using SSH.
I have set PasswordAuthentication to yes in the /etc/ssh/sshd_config file on both machines/instances.
Now, on the AWS EC2 instance, I'm trying the command below and it does not show anything.
ssh user1#122.175.101.188
Please guide me. Have I missed something here?
Make sure that both instances are connected to the internet and have a valid IP address assigned to them.
Verify that the SSH service is running on your CentOS 7 instance by running sudo systemctl status sshd.
Run ifconfig on your CentOS 7 instance and note the IP address assigned to the network interface you are using.
On your AWS EC2 instance, open a terminal and run ssh user1@<ip-address>, replacing <ip-address> with the actual IP address of your CentOS 7 instance (note the separator is @, not the # used in your command).
If this is your first time connecting to the CentOS 7 instance, you may be prompted to confirm the SSH host key fingerprint. Type "yes" to continue.
This typically requires that your local network is configured to allow incoming connections from the Internet, and that your Amazon EC2 instance has the necessary firewall rules and security groups configured to allow outbound connections to your local network. Additionally, you will need to configure any necessary port forwarding or NAT rules on your router or firewall to allow incoming connections to your VirtualBox instance.
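The PasswordAuthentication change mentioned in the question can also be scripted; a minimal sketch that operates on a local copy rather than the real /etc/ssh/sshd_config:

```shell
# Work on a local copy for illustration; on the real host this file is
# /etc/ssh/sshd_config, and the edit is followed by:
#   sudo systemctl restart sshd
cat > sshd_config <<'EOF'
#PasswordAuthentication no
EOF

# Uncomment the directive (if commented) and force it to "yes".
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' sshd_config
grep PasswordAuthentication sshd_config
```

Without restarting sshd the running daemon keeps its old configuration, which is an easy step to miss when password logins still fail after the edit.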

AWS EC2 expose port 5000

I am trying to install KafkaMagic on ec2 to manage our kafka cluster. I created an EC2 instance on our VPC and added the following inbound rules to the associated security group:
I then made sure the Network ACL had inbound rules to allow traffic
There I saw that * is a catch-all rule, so rule 100 should take precedence over it. I then connected to my EC2 instance using EC2 Instance Connect, downloaded KafkaMagic, and got it running on localhost:5000 of my EC2 instance. Using the public DNS for the EC2 instance, I tried to connect to {publicIp}:5000, where publicIp was copy-pasted. I was unable to connect.
I'm assuming there is a gap in my understanding of what happened. Where did I go wrong along the way setting this up? I'm very new to AWS and I might be missing an important concept.
I needed to run the application on a non-localhost URL. I updated the Kafka Magic URL following the configuration docs at https://www.kafkamagic.com/download/#configuration to bind to 0.0.0.0:5000, and then I was able to use the public IP associated with my instance to reach the application on port 5000.
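Whether an app is bound to loopback or to all interfaces can be checked from the instance itself. A small sketch, using python3's built-in HTTP server as a stand-in for KafkaMagic (port 5000 and the ss utility are assumptions about the host):

```shell
# Start a throwaway server bound to all interfaces (stand-in for the app).
python3 -m http.server 5000 --bind 0.0.0.0 >/dev/null 2>&1 &
SRV_PID=$!
sleep 1

# In ss output, a local address of 0.0.0.0:5000 means "all interfaces"
# (reachable via the public IP, security groups permitting), while
# 127.0.0.1:5000 would mean loopback only.
LISTEN_LINE=$(ss -tln | grep ':5000' || true)
echo "$LISTEN_LINE"

kill "$SRV_PID"
```

If this check shows 127.0.0.1:5000, no security-group or ACL change will help: the app simply isn't listening on an externally reachable address.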

Cannot access ecs ec2 instance public ip

I am connecting 2 Docker containers, one Python app container and one Redis container, with each other using ECS. When I SSH into the EC2 instance and curl localhost:5000 it gives the output, which means the containers are running and Python is connected with Redis. I am trying to achieve the same result with ECS on EC2.
But when I visit the public IP of the EC2 instance it does not show anything. My task configuration settings:
And in the Python container I have this setting: pulling the image from ECR, mapping port 5000:5000, and in links giving the name of redis so that it can connect to the Redis container. What setting am I missing so that I can hit the Python app container without SSHing into the EC2 instance and doing curl localhost:5000?
If the application is accessible on localhost, then possible reasons are:
The instance security group does not allow port 5000, so check the security group and allow HTTP traffic on port 5000.
The instance might be in a private subnet; but then how do you SSH, from a bastion or directly? If directly, the issue should be resolved by step 1.
Use the public IP that you used for ssh and it should work
http://ec2Public_Ip:5000
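Step 1 above can be done from the AWS CLI instead of the console; a sketch (the security group ID is a placeholder, and 0.0.0.0/0 opens the port to the whole internet, so a narrower CIDR is usually wiser):

```shell
# Add an inbound rule allowing TCP 5000 to the instance's security group.
# sg-0123456789abcdef0 is a placeholder; substitute your own group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0
```

Security group rules take effect immediately, so no instance restart is needed after running this.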

AWS - custom VPC - IGW - EC2 instance not accessible through HTTP

I created a VPC. I did not create a NAT gateway, but I created an IGW for my public subnet and then launched an EC2 instance in that subnet. When I try to hit the public DNS (IPv4) in the browser I am not able to access the instance, though I can SSH in. I have configured the security group to allow inbound SSH and HTTP, and all outbound traffic. The route table is also updated with the IGW entry. What could be wrong?
Since no httpd service was found on your server, the reason you cannot connect is that nothing is running to serve traffic over HTTP (port 80).
Try running the user data script manually, and ensure you're on a RHEL-based instance (such as Red Hat, CentOS, or Amazon Linux), where the package is named httpd.
If you're running a Debian-based distribution (such as Ubuntu or Debian), you would instead install Apache by running the commands below.
apt-get update
apt-get install apache2
systemctl start apache2
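For the RHEL family mentioned above, the equivalent can be given as EC2 user data; a sketch (package and service names are the standard httpd ones from the default repos):

```shell
#!/bin/bash
# Example EC2 user-data for RHEL-family distributions (Red Hat, CentOS,
# Amazon Linux): install Apache, start it now, and enable it at boot so
# the instance serves HTTP on port 80 immediately after launch.
yum install -y httpd
systemctl start httpd
systemctl enable httpd
```

User data runs as root on first boot only, which is why re-running it manually, as suggested above, is the quickest way to check whether the script itself is at fault.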

Docker Service in Swarm cannot connect to AWS RDS when a port is published

The company I'm working for recently decided to deploy a new application with docker swarm on AWS using EC2 instances. We set up a cluster of three EC2 instances as nodes (one manager, two workers) and we use stack to deploy the services.
The problem is that one of the services, a django app, runs into a timeout when trying to connect to the postgres database that runs on RDS in the same VPC. But ONLY when the service publishes a port.
A service that doesn't publish any port can connect to the DB just fine.
The RDS endpoint gets resolved to the proper IP, so it shouldn't be a DNS issue and the containers can connect to the internet. The services are also able to talk to each other on the different nodes.
There also shouldn't be a problem with the security group definition of the db, because the EC2 instances themselves can get a connection to the DB.
Further, the services can connect to things that are running on other instances within the VPC.
It seems that it has something to do with swarm (and overlay networks) as running the app inside a normal container with a bridge network doesn't cause any problems.
Stack doesn't seem to be the problem, because even when creating the services manually, the issue still persists.
We are using Docker CE version 19.03.8 on Ubuntu 18.04.4 LTS, with compose file format version 3.
The problem comes when your swarm's subnets conflict with the subnets in your VPC. You must change the swarm subnets to a different, non-overlapping CIDR.
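One way to move the swarm's address pool is at init time; a sketch, assuming the swarm can be torn down and re-created (the pool below is an example and must be chosen to avoid your VPC's CIDR):

```shell
# WARNING: this destroys the existing swarm state on the node.
docker swarm leave --force

# Re-initialize with a non-conflicting address pool; overlay networks
# created afterwards draw /24 subnets from this pool.
docker swarm init \
    --default-addr-pool 192.168.0.0/16 \
    --default-addr-pool-mask-length 24
```

Alternatively, a single overlay network can be created with an explicit `--subnet` via `docker network create`, which avoids re-initializing the whole swarm when only one network conflicts.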