The frontend and backend services each seem to work fine on their own, but when the frontend tries to talk to the backend I keep getting an ERR_NAME_NOT_RESOLVED error.
Service discovery entries are all connected.
All security groups are open.
I think our architecture is very similar to this, if that helps:
(https://mohamedwaelbenismail.medium.com/microservices-architecture-deployed-on-ecs-fargate-based-cluster-using-cloudformation-878cb6f90571)
It only works if we change the internal load balancer to a public load balancer and allow internet traffic from 0.0.0.0/0.
All health checks report 'healthy'.
Based on your schematic illustration, your React web application frontend will never be able to reach your backend. Your frontend executes client-side, in the users' browsers/mobiles. This means the only way for it to reach the backend is over the internet, so your backend can't sit in a private subnet behind an internal load balancer.
You have to re-architect your application: both the frontend and the backend must be accessible from the internet for your frontend to be able to query the backend.
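For illustration, here is a minimal CloudFormation sketch of what "backend reachable from the browser" means in practice; every resource name, subnet, and VPC ID below is a hypothetical placeholder, not taken from the question:

Resources:
  BackendAlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing        # was 'internal'; browsers must be able to reach it
      Type: application
      Subnets:                       # public subnets
        - subnet-aaaa1111
        - subnet-bbbb2222
      SecurityGroups:
        - !Ref BackendAlbSg
  BackendAlbSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS from browsers to the backend ALB
      VpcId: vpc-cccc3333
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0          # open to clients; protect with auth/WAF at the app layer

Since the ALB is then open to the internet, access control has to move up to the application layer (authentication, a WAF, etc.).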
I am trying to deploy a Kafka cluster on AWS (using CloudFormation). My advertised listeners are (using a private DNS namespace to resolve the internal IP):
INTERNAL://kafka-${id}.local:9092
EXTERNAL://<public-ip>:9092
However, Kafka complains that two listeners cannot share the same port. The problem is I'm using a load balancer for external traffic, and I'm not sure if there's a way to redirect that traffic to a different port.
My desired configuration would be:
INTERNAL://kafka-${id}:9092
EXTERNAL://<public-ip>:19092
But the load balancer takes the incoming request and passes it to the internal IP at the same port. Ultimately I'd like to have the load balancer take connections on port 19092 and pass them to 9092, but I don't see any way to configure that.
If there are any recommendations on alternative ways to do this, I'm happy to hear them. Currently, I need services that are on other VPCs to be able to communicate with these brokers, and I'd prefer to use a load balancer to handle these requests.
Based on the comments.
The NLB does not support redirection rules in its listeners; it only has forwarding rules. However, a listener can use a different port than its targets, which are defined by a target group. So a possible setup could be:
Client ---> Listener on port 19092 ---> NLB ---> Target group with port 9092
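Since the question already uses CloudFormation, a minimal sketch of that forwarding setup could look like this (resource names and the VPC ID are placeholders, and KafkaNlb is assumed to be defined elsewhere in the template):

Resources:
  BrokerTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Protocol: TCP
      Port: 9092                  # the port the broker actually listens on
      VpcId: vpc-aaaa1111
      TargetType: ip
  ExternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref KafkaNlb
      Protocol: TCP
      Port: 19092                 # the port external clients connect to
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref BrokerTargetGroup

With port remapping available on the NLB, the two broker listeners no longer need to share a port: each can bind its own local port, and the advertised listener only has to match the port clients actually dial.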
@Marcin answered this for me. See the comments for details.
I am looking for a way to restrict the public URL endpoint of a Google Cloud Function.
Basically, we want to make sure that the endpoints we expose can only be invoked by specific third-party servers (a list of IP address ranges). What authorization mechanism can we put at our endpoint to filter out any request that does not originate from those third-party servers?
We do have the list of IP address ranges for the third-party service providers.
You can configure your function's ingress settings to only allow internal traffic and traffic originating from a Google Cloud Load Balancer (ALLOW_INTERNAL_AND_GCLB in the API), and then attach a Cloud Armor policy to that load balancer that allows only the third party's IP ranges.
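A hedged sketch of the moving parts with the gcloud CLI; the function name, policy name, backend service, and IP ranges are all placeholders:

# 1. Let the function accept only internal traffic and traffic via a GCLB.
gcloud functions deploy my-function \
  --trigger-http --runtime=nodejs18 --region=us-central1 \
  --ingress-settings=internal-and-gclb

# 2. Cloud Armor policy: deny by default ...
gcloud compute security-policies create third-party-only \
  --description="Allow only known third-party ranges"
gcloud compute security-policies rules update 2147483647 \
  --security-policy=third-party-only --action=deny-403

# 3. ... allow the third party's published ranges ...
gcloud compute security-policies rules create 1000 \
  --security-policy=third-party-only \
  --src-ip-ranges=203.0.113.0/24,198.51.100.0/24 \
  --action=allow

# 4. ... and attach the policy to the load balancer's backend service.
gcloud compute backend-services update my-backend-service \
  --security-policy=third-party-only --global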
I need some serious help here; thanks a lot in advance!
I posted a similar question the other day but didn't get the answer I was looking for, so I have reworked the entire question.
I need to deploy a scalable 3-tier web application on AWS, and I am having some trouble understanding the best practice for designing the architecture.
NOTE: As I understand it, all backend requests are made from the browser, after the frontend server has served the HTML/CSS/JS to the user.
This is the solution I found online:
Question
Doesn't it break the premise that 'all requests to the backend API are made from the client's web browser (since the frontend servers only serve HTML/JS code to the user's browser)'? That would mean the request should go from the browser --> external load balancer --> backend API.
Considering this, how would the routing work? We cannot use the frontend for routing, can we?
The right solution, in my opinion (but it doesn't restrict access to the backend API from the outside world):
This definitely does not break any logic/concept, but it gives the whole world access to the backend API at <domain_name>/api.
I have been stuck on this design for days, and I need to take the web app to production. I would really appreciate the help.
I think you should consider using AWS API Gateway for access to private EC2 endpoints, and running React from S3 and CloudFront. I don't see those services called out in your architecture.
Here is a description of how the API Gateway supports private EC2 backends.
At re:Invent 2017, we announced endpoint integrations inside a private VPC. With this capability, you can now have your backend running on EC2 be private inside your VPC without the need for a publicly accessible IP address or load balancer.
See https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/
See also https://aws.amazon.com/api-gateway/
Unless you need React rendering on the server, you can just run it as a static website from S3 and call all your application functions through API Gateway. This is a common way to architect React apps on AWS.
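For the private-backend piece, a minimal CloudFormation sketch (names are hypothetical; the REST API, proxy resource, and internal NLB are assumed to be defined elsewhere in the template):

Resources:
  PrivateVpcLink:
    Type: AWS::ApiGateway::VpcLink
    Properties:
      Name: private-backend-link
      TargetArns:
        - !Ref InternalNlb            # Ref on an ELBv2 load balancer yields its ARN
  ProxyMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref Api
      ResourceId: !Ref ProxyResource
      HttpMethod: ANY
      AuthorizationType: NONE
      Integration:
        Type: HTTP_PROXY
        IntegrationHttpMethod: ANY
        ConnectionType: VPC_LINK      # route through the VPC link; no public backend IP
        ConnectionId: !Ref PrivateVpcLink
        Uri: http://internal-backend.example.local/

The browser then only ever talks to the API Gateway URL (or a custom domain in front of it), while the EC2 instances stay in private subnets.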
A service running on my backend instance has to accept many connections per second, but the TCP LB does not seem to allow more than one connection at a time.
Please help me raise the LB connection limit to the maximum.
Where did you get the information that it only allows one connection at a time?
The Network Load Balancer (also known as the TCP load balancer) lets you balance the load of your systems based on incoming IP protocol data, such as address, port, and protocol type. It does not impose a one-connection limit: as long as your instances have the resources to handle the requests, the load balancer will forward the traffic.
You can read more about it in this official document from Google.
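For reference, a sketch of a basic network (TCP) load balancer setup with gcloud; every name, port, and zone is a placeholder. Each new client connection is distributed across the healthy instances in the pool, so concurrent connections are expected and supported:

gcloud compute target-pools create backend-pool --region=us-central1
gcloud compute target-pools add-instances backend-pool \
  --instances=backend-1,backend-2 --instances-zone=us-central1-a
gcloud compute forwarding-rules create tcp-lb-rule \
  --region=us-central1 --ports=9000 --target-pool=backend-pool

If connections are being refused, the limit is far more likely to be on the instances themselves (service backlog, file descriptors) than on the load balancer.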
I have an application deployed on Tomcat servers on machines A, B, C, and D.
I want to load balance with Nginx using two load balancer nodes, LB1 and LB2.
All the configurations I have found use only one node as the load balancer. Is this possible with Nginx?
We have a critical application running on these servers that requires zero downtime. If we go with one LB and the LB itself fails for some reason, there will be an outage.
We initially had this set up using an AWS load balancer, but we recently started using WebSockets, and they are not working correctly through the EC2 load balancer.
If someone has a better option, please suggest it.
Use Amazon ELB and forward TCP:80/443 instead of HTTP:80/443.
The only downside of balancing raw TCP is that your app servers have to serve the SSL certificates themselves if you use HTTPS.
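A sketch of that listener setup in CloudFormation terms (names and subnets are placeholders); TCP pass-through also keeps the WebSocket upgrade intact end to end, which is what broke behind the HTTP listener:

Resources:
  TcpElb:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Subnets:
        - subnet-aaaa1111
      Listeners:
        - LoadBalancerPort: '80'
          InstancePort: '80'
          Protocol: TCP
        - LoadBalancerPort: '443'
          InstancePort: '443'
          Protocol: TCP             # SSL terminates on the app servers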
If you want to run the load balancer yourself without a single point of failure, you can use HAProxy and fail over to a standby machine when the primary balancer dies.
http://www.loadbalancer.org/blog/transparent-load-balancing-with-haproxy-on-amazon-ec2