I am running Odoo on an AWS EC2 instance.
The Odoo code is pulled from Docker Hub and runs inside Docker containers on the EC2 instance.
The problem is that the EC2 instance doesn't have a static IP, and every time it's restarted the connection to Odoo is lost.
That is at least the theory I'm working with.
I would appreciate other solutions, or pointers to what the actual problem might be.
You need to associate an Elastic IP with your EC2 instance. This will give your instance a fixed public IP address.
You can follow the AWS documentation:
Associate an Elastic IP address with an instance or network interface.
Take into account that there are associated costs: pricing
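If you prefer the CLI over the console, the same thing can be sketched with the AWS CLI (the instance ID and allocation ID below are placeholders; substitute your own):

```shell
# Allocate a new Elastic IP in the VPC scope; note the AllocationId in the output
aws ec2 allocate-address --domain vpc

# Associate the Elastic IP with your instance (placeholder IDs)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```

After this, the instance keeps the same public IP across stop/start cycles.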
Related
I want to update the Docker image in an existing, running AWS ECS Fargate task.
I could have gone with a new revision, but when I run that task inside the cluster, it creates a new public IP address.
I can't change my existing public IP. I only want to update the Docker image of the running task.
What could be the possible solution?
Unfortunately, if you're running the container as a publicly routable container, it will always get a new public IP address whenever you update the task definition.
There is currently no support for Elastic IP addresses in Fargate, which would be the solution you're looking for.
If keeping the IP address is required, I would suggest re-architecting your solution as follows:
A public-facing Network Load Balancer with a static IP address.
Fargate containers register with a target group of the Network Load Balancer.
Also keep in mind that, as things stand, any kind of failure would likewise cause your container to lose its IP address.
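The NLB setup above could be sketched with the AWS CLI roughly as follows (all IDs are placeholders, and the listener/service wiring is omitted for brevity):

```shell
# Allocate a static Elastic IP for the load balancer
aws ec2 allocate-address --domain vpc

# Create an internet-facing Network Load Balancer pinned to that EIP in one subnet
aws elbv2 create-load-balancer --name fargate-nlb --type network \
  --subnet-mappings SubnetId=subnet-0123456789abcdef0,AllocationId=eipalloc-0123456789abcdef0

# Create a target group with target-type "ip" — Fargate tasks register by IP
aws elbv2 create-target-group --name fargate-tg --protocol TCP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip
```

You would then point your ECS service at this target group, and clients always reach the tasks through the load balancer's static Elastic IP, regardless of how the task IPs churn.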
The company I'm working for recently decided to deploy a new application with docker swarm on AWS using EC2 instances. We set up a cluster of three EC2 instances as nodes (one manager, two workers) and we use stack to deploy the services.
The problem is that one of the services, a django app, runs into a timeout when trying to connect to the postgres database that runs on RDS in the same VPC. But ONLY when the service publishes a port.
A service that doesn't publish any port can connect to the DB just fine.
The RDS endpoint gets resolved to the proper IP, so it shouldn't be a DNS issue and the containers can connect to the internet. The services are also able to talk to each other on the different nodes.
There also shouldn't be a problem with the security group definition of the db, because the EC2 instances themselves can get a connection to the DB.
Further, the services can connect to things that are running on other instances within the VPC.
It seems that it has something to do with swarm (and overlay networks) as running the app inside a normal container with a bridge network doesn't cause any problems.
Stack doesn't seem to be the problem, because even when creating the services manually, the issue still persists.
We are using Docker CE version 19.03.8 on Ubuntu 18.04.4 LTS, with Compose file format version 3.
The problem occurs when your swarm's overlay network subnet conflicts with one of the subnets in your VPC. You must change the swarm subnet to a different, non-overlapping CIDR.
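To illustrate, the swarm's address pool can be set at init time, or an individual overlay network can be given its own subnet. The CIDRs below assume the VPC uses 10.0.0.0/16; pick any range that does not overlap yours:

```shell
# Re-initialize the swarm with an address pool outside the VPC's CIDR
docker swarm init --default-addr-pool 172.20.0.0/16 --default-addr-pool-mask-length 24

# Or give a specific overlay network its own non-conflicting subnet
docker network create --driver overlay --subnet 172.21.0.0/24 my-overlay
```

Note that `--default-addr-pool` only applies at `swarm init`, so an existing swarm has to be torn down and re-created for the first option to take effect.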
The EC2 machines are running behind the ELB with the same AMI Image.
My requirement: currently there are 5 EC2 instances running behind the ELB (this is the minimum count in my Auto Scaling group), and I have associated Elastic IPs with them, so deploying code to them via Ansible is easy. But when traffic goes up, Auto Scaling adds more machines behind the same ELB, and it is a headache to manually add each newly added machine's public IP to the Ansible hosts file.
How can I get all the machines IP to my Ansible host?
That's the classic use case for dynamic inventory. The Ansible docs even call out this specific use case :)
They also provide a working example. Check this link
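As a minimal sketch, the `amazon.aws.aws_ec2` inventory plugin can discover instances by tag instead of a static hosts file. The region and tag name below are assumptions; Auto Scaling tags instances with `aws:autoscaling:groupName` automatically:

```yaml
# aws_ec2.yml — dynamic inventory via the amazon.aws.aws_ec2 plugin
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1            # placeholder region
filters:
  # only instances launched by this Auto Scaling group (group name is a placeholder)
  tag:aws:autoscaling:groupName: my-asg
hostnames:
  - private-ip-address   # connect over the VPC rather than public IPs
```

Running `ansible-inventory -i aws_ec2.yml --graph` then lists whatever instances the group currently contains, so newly scaled-out machines show up without any manual edits.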
So, I am using Parse Server hosted in an Elastic Beanstalk environment, and I was able to upload it successfully, since the health status says 'OK'. My database is hosted on an EC2 instance, and I'm usually able to access it via MongoDB Compass. The problem is that my Elastic Beanstalk app cannot seem to reach the database on the EC2 instance.
I know that apps built with Parse Server require one to set up the environment variables shown in the screenshot. So my question is: which URL should I use for the DATABASE_URI? I have tried using the Public DNS (IPv4) and the private IPs from the EC2 instance, but none of them have worked. I believe that knowing this answer will successfully connect the EC2 instance to the app. I appreciate the help in advance.
I assume the EC2 instance with MongoDB and the Elastic Beanstalk instance are both in the same VPC. If that is so, then you need to use the private IP of the EC2 MongoDB instance. You will also need to open the security group rules assigned to the MongoDB instance appropriately.
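Concretely, the DATABASE_URI would follow the standard MongoDB connection-string format, using the instance's private IP. Everything in this fragment (credentials, IP, port, database name) is a placeholder:

```shell
# DATABASE_URI environment variable for the Elastic Beanstalk configuration
# (user, password, private IP, and database name below are placeholders)
DATABASE_URI=mongodb://dbuser:dbpassword@10.0.1.25:27017/parsedb
```

The matching security group rule would allow inbound TCP on port 27017 from the Beanstalk instances' security group, rather than from 0.0.0.0/0.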
For AWS EC2 machines, is there a way to start up an instance and tell it to use an Elastic IP?
Perhaps using user data or something?
Assigning an IP to an AWS EC2 machine after launch is pretty simple, following the Amazon docs, but I would like to launch a machine with the Elastic IP already attached.
i.e., while the machine is launching, it should be assigned the IP.
Someone asked this in the AWS support forum 10 years ago, but the thread has no solution.
https://forums.aws.amazon.com/thread.jspa?threadID=20927
It appears that this can be done:
Create a Network Interface
Attach the Elastic IP to the Network Interface
When launching the Amazon EC2 instance, specify the Network Interface to use (that is, the one created earlier)
The instance will then be launched with the Elastic IP address.
There does not appear to be a way to do this entirely within the RunInstances command itself. Only a public IP address can be requested there, which is temporary and is not an Elastic IP address.
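The three steps above can be sketched with the AWS CLI (all IDs are placeholders; each command's output supplies the ID used by the next):

```shell
# 1. Create a network interface in the target subnet
aws ec2 create-network-interface --subnet-id subnet-0123456789abcdef0

# 2. Associate the Elastic IP with the new interface
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --network-interface-id eni-0123456789abcdef0

# 3. Launch the instance with that interface attached as device index 0
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro \
  --network-interfaces DeviceIndex=0,NetworkInterfaceId=eni-0123456789abcdef0
```

Because the Elastic IP is bound to the network interface before launch, the instance comes up already reachable at that address.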