Recently somebody manually deleted all Elastic Load Balancers on an AWS account I am working with. All the load balancers had been provisioned from Elastic Beanstalk configs.
I rebuilt all the Elastic Beanstalk instances from previous configs to restore the deleted load balancers. The various applications are now running correctly apart from two, which are failing to send traffic to each other. I will call them App A and App B.
App A is sending traffic to App B using its Elastic Beanstalk URL, however the messages are failing to send. If I SSH into App A, I can manually send JSON messages to App B using curl and the EC2 private IP. When I ping the EB URL for App B, it returns an IP I do not recognise, which is not allocated to any EC2 instance running on the account.
App B is in a private subnet with a network load balancer.
How can I get the Elastic Beanstalk URL to point at the correct IP?
I have recently inherited this environment and did not configure the original setup, so perhaps I am missing a step or aspect of how AWS Elastic Beanstalk is intended to work in this regard.
Additionally, I am certain this is not a programmatic error (the code has not changed since the instances were rebuilt) or a firewall setting, as I am able to manually send traffic and get a response with a curl script.
It is the Beanstalk URL which appears to be incorrect.
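One way to check where the Beanstalk CNAME is actually pointing (the hostname below is a placeholder, not your real environment URL):

```shell
# Placeholder hostname -- substitute App B's actual Elastic Beanstalk CNAME.
# Follow the CNAME chain down to the IP(s) it currently resolves to.
dig +short app-b.eba-example.us-east-1.elasticbeanstalk.com

# List the DNS names of the load balancers that exist right now;
# dig each of these too and compare the resulting IPs with the ones above.
aws elbv2 describe-load-balancers --query 'LoadBalancers[].DNSName' --output text
```

If the IPs from the first command don't match any current load balancer, the environment's CNAME is still resolving to the old, deleted load balancer.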
So I have a website that was still being served even though I did not have an EC2 instance running on the us-east-1 dashboard.
I did have a load balancer running. When I terminated the load balancer, the website stopped being served.
My question is this...
Even though I had a load balancer there were no EC2 instances running. Where is the website being loaded from?
Doesn't an EC2 instance need to be running?
Not really. First of all, check if you have an instance running in a different AWS region. If not, your site could be running on several other AWS services such as ECS or EKS, or could be deployed serverless, for example from S3 behind CloudFront (if the website is a single-page application: React, Angular, Vue).
So to answer your question: No, you don't need an EC2 instance running on AWS to host a website. And load balancers can be deployed in front of many other services that are not running on EC2.
I have a simple Java application listening on port 8443. I've deployed it as a Docker image to Fargate; it has a public IP address, and I can access it through that IP address just fine.
The problem is every time I redeploy the image, it gets a new IP address.
I would like to have a static hostname. For example, when I use Elastic Beanstalk and deploy a website, it will get a hostname. How do I get the same thing?
I've been following the documentation for one whole day and didn't make any progress. I've created load balancers, targets, listeners, accelerators, nothing seems to work. For example, when creating a load balancer, it doesn't tell me what the hostname is.
I'm pretty sure this is supposed to be something really easy, but I just cannot figure it out. What am I doing wrong?
You may want to create an Application Load Balancer and register your Fargate services into a Target Group for the load balancer. You have to register your services only once; if you redeploy newer versions afterwards, they will automatically be added to the Target Group.
The Application Load Balancer will provide a publicly accessible hostname of the form name-1234567890.region.elb.amazonaws.com.
For your load balancer to be reachable, it needs to be in a public subnet. It also needs a security group which allows traffic from the public internet and also allows traffic to the registered targets.
Steps to create an ALB for your ECS cluster: AWS docs
Registering ECS services into a Target Group: AWS docs
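As a rough sketch of the same steps with the AWS CLI (all IDs and names below are placeholders for your own subnets, security group, VPC, cluster and task definition):

```shell
# Placeholder IDs throughout -- substitute your own resources.
# 1. Create the ALB in public subnets
aws elbv2 create-load-balancer --name app-alb \
  --subnets subnet-0aaa subnet-0bbb --security-groups sg-0ccc

# 2. Create a target group; Fargate tasks register by IP, so use --target-type ip
aws elbv2 create-target-group --name app-tg --protocol HTTP --port 8443 \
  --vpc-id vpc-0example --target-type ip

# 3. Forward port 80 on the ALB to the target group
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>

# 4. Create the ECS service attached to the target group
aws ecs create-service --cluster my-cluster --service-name my-api \
  --task-definition my-task --desired-count 1 --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0aaa],securityGroups=[sg-0ccc],assignPublicIp=ENABLED}' \
  --load-balancers targetGroupArn=<tg-arn>,containerName=app,containerPort=8443
```

The DNSName field in the create-load-balancer output is the stable public hostname you can use instead of the changing task IP.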
Update:
"The problem is that when I create a Target Group I cannot associate it with the service."
When you create the cluster, the AWS console asks whether you want to deploy your containers in a VPC. You have to select yes and create a VPC.
Afterwards, get the ID of the VPC (for example, in my case: vpc-0e6...) and go into your EC2 console and create a new Application Load Balancer, placing it into that VPC.
Now, when you create a new Fargate service, you should see the Application Load Balancer available for selection.
I was trying to host my API (.NET Core Web API) on Elastic Kubernetes Service in AWS. I followed some tutorials and got it onto EKS. Every time I run kubectl get pods I can see my service's pods there.
Then, in order to expose the service to API Gateway, I was told I needed to create a Load Balancer. So I created a Load Balancer using the kubectl expose command, and it succeeded. Now I can see my Load Balancer hosted on EC2 with a specific DNS address, and I can see it via the kubectl get svc command.
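For reference, a minimal sketch of the expose step described above (the deployment and service names are hypothetical, as are the ports):

```shell
# Hypothetical names and ports -- adjust to your own deployment.
# Expose the deployment through a cloud LoadBalancer-type service
kubectl expose deployment my-api --type=LoadBalancer --name=my-api-svc \
  --port=8000 --target-port=8443

# The EXTERNAL-IP column shows the ELB DNS name once it is provisioned
kubectl get svc my-api-svc
```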
Here is the problem: according to the tutorials, when I access the DNS name with the port, for example *****.ap-southeast-1.elb.amazonaws.com:8000, I should be able to reach it.
But no, all I get is an empty response error from the browser. When I go to the EC2 pages to check my Load Balancer, I find that all the instances under the ELB are Out Of Service.
The Status of the ELB is: 0 of 6 instances in service
When I switch over to the Instances tab, all 6 instances show Out Of Service.
Is this why I cannot access the DNS address? And how can I make the instances In Service?
FYI: what I want to do eventually is use API Gateway to connect to the API on EKS.
Thank you very much if anyone knows how to solve this.
I have an Elastic Beanstalk instance that is running a Flask app. I want to know if there is any way through AWS to automatically block IP addresses that are doing unusual activity on my site.
This could be a range of things, for example:
Sending several GET requests over and over
Trying to POST without a CSRF token
And more. Any ideas? Thanks.
Generally, for that you would front your EB environment with an Application Load Balancer and AWS Web Application Firewall (WAF).
This setup is documented in a recent AWS blog post and other sources:
How do I protect my Elastic Beanstalk environment against attacks from known unwanted hosts?
Setting up AWS Web Application Firewall (WAF) with Elastic Beanstalk
Guidelines for Implementing AWS WAF
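As a hedged sketch of what this can look like with the AWS CLI, a rate-based WAF rule attached to the ALB would block any single IP exceeding a request threshold (all names, ARNs and the limit below are placeholders you would tune):

```shell
# Placeholder names/ARNs -- substitute your own. Scope REGIONAL is required for ALBs.
# 1. Create a Web ACL with a rate-based rule: block IPs over 1000 requests / 5 min
aws wafv2 create-web-acl \
  --name eb-protect --scope REGIONAL --region us-east-1 \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=ebProtect \
  --rules '[{"Name":"rate-limit","Priority":0,
    "Statement":{"RateBasedStatement":{"Limit":1000,"AggregateKeyType":"IP"}},
    "Action":{"Block":{}},
    "VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"rateLimit"}}]'

# 2. Attach the Web ACL to the ALB fronting the Elastic Beanstalk environment
aws wafv2 associate-web-acl --region us-east-1 \
  --web-acl-arn <web-acl-arn> --resource-arn <alb-arn>
```

For application-specific signals such as missing CSRF tokens, you would still return an error from Flask; WAF handles volumetric and known-bad-host patterns in front of the app.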
I have a devops automation environment. Each successful build (web app) in Jenkins triggers the creation of an EC2 (Linux) instance in AWS, which is set to receive a public IP, and the app gets deployed on that instance. I'm calling the web application using the instance's public IP. I need to mask the IP and call the app by a custom name. I have created a subdomain on Route 53: subdomain.abc.com. I have three sets of web apps and want to call them like one.subdomain.abc.com, two.subdomain.abc.com, etc.
Since each time we have a different VM, I'm not sure if an EIP is an option.
Can someone please suggest a solution?
Many thanks in advance.
If you are using just one Amazon EC2 instance for each app, then for each app you can:
Create an Elastic IP address that will be permanently used with the app
Create an A record in Amazon Route 53 to point to that Elastic IP address (e.g. app1.example.com)
When a new instance of the app is launched, re-associate the Elastic IP address with the new instance (assuming your old instance is then terminated)
If you wish to serve traffic from app1.example.com to several Amazon EC2 instances, then create an ALIAS record in Route 53 to point to an Elastic Load Balancer and register the EC2 instances with the load balancer.
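A minimal CLI sketch of the single-instance approach above (the hosted zone ID, record name, IP, instance ID and allocation ID are all placeholders):

```shell
# Placeholder IDs throughout -- substitute your own.
# One-time: allocate an Elastic IP for the app
aws ec2 allocate-address --domain vpc

# One-time: point one.subdomain.abc.com at the Elastic IP with an A record
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
    "Name":"one.subdomain.abc.com","Type":"A","TTL":300,
    "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

# After each Jenkins build: move the EIP to the newly launched instance
aws ec2 associate-address --instance-id i-0newEXAMPLE --allocation-id eipalloc-0EXAMPLE
```

The re-association step could run as the final stage of the Jenkins job, so the DNS name never needs to change even though the instance does.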