I was following the WSO2 clustering reference documentation.
In the section dealing with load balancer configuration I read an example describing the configuration of 2 ELBs. More precisely, it describes the configuration of each node by specifying the "sibling" member tag in the axis2.xml file: ELB1 points to ELB2 and vice versa.
The question is: how can I specify a unique public cluster host name which external clients have to point to?
Do I have to put another load balancer in front of the 2 clustered ELBs, so that it becomes the single access point of the cluster below it? Or is this not necessary?
You could try two ELBs to get a high-availability, fail-proof ELB deployment. In this case, according to the documentation, you should have Keepalived (http://www.keepalived.org) installed on both hosts to check their availability and front them with a virtual IP address.
So, the short answers are: 1) a virtual IP, 2) not necessarily.
Have a look at this link, where you can find more information on how to implement the scenario described above.
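For reference, here is a minimal Keepalived sketch of the virtual-IP setup described above; the interface name, virtual router ID, and the example address 192.168.1.100 are assumptions you would adapt to your own network.

```
# keepalived.conf on the first ELB host -- minimal sketch, example values only
vrrp_instance ELB_VIP {
    state MASTER              # use BACKUP on the second ELB host
    interface eth0            # NIC that should carry the virtual IP
    virtual_router_id 51
    priority 100              # give the BACKUP host a lower value, e.g. 50
    advert_int 1
    virtual_ipaddress {
        192.168.1.100         # the single cluster address clients point to
    }
}
```

External clients (and your public DNS entry) then point only at that virtual IP; whichever ELB currently holds it receives the traffic, so no third load balancer is needed.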
I have a situation here.
I made 2 environments, prod and preprod, each with two VMs (i.e. two nodes per environment).
Now I have to create a load balancer keeping those two nodes on the back end. One of the nodes has SSL configured with a domain name (say example.com).
It's a Pega App Server with two nodes pointing to the same DB on Google Cloud SQL. Now the client wants a load balancer in front which will share or balance the traffic between these two nodes.
Is that possible?
If yes, the domain name has been registered with the IP of Node1, but the load balancer will have a different IP, right?
So the Pega URL that was working before, https://example.com/prweb, will not work, will it?
But the requirement is that users will just type the domain name and access the Pega app via the load balancer, without caring which node the requests go to.
Is that possible at all?
Honestly, I am a noob in all these cloud things, so please help me out if possible. I would really appreciate it. Thanks.
I tried to create a classic HTTPS load balancer and added those two instances in the backend, but with 1 target pool detected out of 2 instances, it's showing "instance xxxx is unhealthy for [the IP of the load balancer]".
So next I created an HTTPS-type load balancer with a network endpoint group, where I added those two nodes' private IPs. But I'm not sure how to do it. Please let me know if anybody knows how.
I am new to Google Cloud Platform and advanced networking in general, but I have been tasked with setting up an external HTTPS load balancer that can forward internet traffic to 2 separate virtual machines in the same VPC. I have created the load balancer, SSL certs, DNS, frontend, and a backend. I have also created an instance group containing the two VMs for use with the backend.
What I am failing to understand is, how do I determine which VM is going to receive the traffic? Example:
I want test.com/login to go to instance1/some/path/login.php
I want test.com/download to go to instance2/some/path/file.script
Any help is greatly appreciated here. Thanks
To detail what @John Hanley mentioned about configuring URL maps, you can follow these steps:
On your load balancers page, click the name of the load balancer, then look for Edit.
Select Host and path rules, then click Add host and path rule.
In the Host field, enter test.com. Then for the path, enter /login.
Once done, for the Backends, select the backend service associated with instance1. Do the same for /download by adding another host and path rule (host test.com, path /download) whose backend points to instance2.
Click Update.
You can check and refer to this guide for more details.
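If you prefer to do this outside the console, the same routing can be expressed as a URL map and imported with gcloud compute url-maps import. The sketch below is only an illustration: the map name, project ID, and backend service names (be-instance1, be-instance2) are assumptions, and note that each VM needs its own backend service (with its own instance group or NEG); a single instance group containing both VMs cannot be split by the URL map.

```yaml
# url-map.yaml -- hypothetical URL map for the routing described above
name: web-lb-url-map
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/be-instance1
hostRules:
- hosts:
  - test.com
  pathMatcher: test-paths
pathMatchers:
- name: test-paths
  defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/be-instance1
  pathRules:
  - paths:
    - /login
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/be-instance1
  - paths:
    - /download
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/be-instance2
```

Keep in mind the host and path rules only choose which backend receives the request; rewriting /login to /some/path/login.php on the instance itself would be a separate step (a URL rewrite in the load balancer's route rules, or a rewrite in the instance's web server).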
We have AWS ECS instances.
We're using an external service (Twilio) that needs to reach a specific container:port.
And it's SSL, so it has to be a DNS name
Currently, our upgrade script assigns each container an entry in Route 53, and I can use a combination of nslookup and my external IP address to discover my name (and then set an env var) on boot-up.
But if containers crash, my upgrade script won't have run, so updating Route 53 won't have happened.
Is this problem already solved in some way? At this point, I'm looking at 2 or 3 days to implement a solution.
I don't believe I can use Service Discovery, as SD uses the internal IP address and would be in foo.local, which isn't externally accessible.
At this point, I think I have to write a program that determines what my DNS name needs to be and updates Route 53. That seems simple, but I also have to add permissions to update Route 53 to the IAM user inside the container, and that sounds like a security problem. I'd write a different program to expire dead names.
Is there a better way? This doesn't seem like that unique a problem.
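For what it's worth, the Route 53 update I have in mind would be something like the sketch below (the hosted zone ID, record name, and the way the container discovers its public IP are placeholders):

```python
import boto3
import urllib.request

# Placeholders -- the IAM principal inside the container would need
# route53:ChangeResourceRecordSets on this hosted zone.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "task-42.example.com."

def current_public_ip() -> str:
    # One common trick; replace with however the container learns its public IP.
    with urllib.request.urlopen("https://checkip.amazonaws.com") as resp:
        return resp.read().decode().strip()

def upsert_record(ip: str) -> None:
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "register container on boot-up",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }],
        },
    )

if __name__ == "__main__":
    upsert_record(current_public_ip())
```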
Isn't this the problem that ECS Services and their integration with AWS Load Balancers solve? If you have an ECS task that needs to run for a long time, and it needs to be accessible at a public address, then it needs to run in an ECS service that is configured to use a public load balancer.
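As a sketch of what that looks like with boto3 (the cluster, service, task definition, target group ARN, subnet, and security group names here are all assumptions), the service keeps the task running and registers every replacement task with the load balancer's target group, so the load balancer's stable DNS name (or a Route 53 alias to it) is what Twilio points at:

```python
import boto3

ecs = boto3.client("ecs")

# Assumed names and ARNs -- substitute your own. The target group belongs to a
# public load balancer whose DNS name is what the external service calls.
ecs.create_service(
    cluster="my-cluster",
    serviceName="twilio-callback",
    taskDefinition="twilio-callback:1",
    desiredCount=2,
    launchType="FARGATE",          # or "EC2", depending on your cluster
    loadBalancers=[{
        "targetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/twilio-callback/abcdef1234567890"
        ),
        "containerName": "app",
        "containerPort": 8443,
    }],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```

If a container crashes, ECS starts a replacement and the load balancer keeps the same DNS name, so nothing inside the container has to touch Route 53.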
Our company just moved to a new office and therefore also got new network equipment. As it turns out, our new firewall does not allow pushing routes over VPN that it first has to look up IP addresses for.
As we all know, AWS does not offer static IP addresses for its Application Load Balancer.
So our idea was to simply put a Network Load Balancer in front of the Application Load Balancer. There is a pretty hacky way described by AWS itself (https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/) that seemed to work fine, even if I don't really like the approach with the Lambda script registering and deregistering targets.
So here is our problem: as it turns out, the Application Load Balancer only gets to see the Network Load Balancer's IP address. This prevents us from using security groups for IP whitelisting, which we do quite heavily. On top of that, some of our applications (Nginx/PHP based) also do IP address verification, and the ALB used to pass the client's IP address in an X-Forwarded-For header. Now our applications only see the one from the NLB.
We know of the possibility of using Global Accelerator, but that is a heavy investment as we don't really need what GA is trying to solve.
So how did you solve this problem?
Thankful for any help :)
Greetings
You could get the list of AWS IP addresses for the region your ALB is located in and allow them in your firewall. They do publish the list, and you can filter through it: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
I haven't done this myself, and I'm unsure whether the addresses for ALBs are included under the EC2 category or whether you would have to take the whole AMAZON service range "to be safe".
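As a sketch, the filtering could look like this (the JSON itself is published at https://ip-ranges.amazonaws.com/ip-ranges.json, which the page above links to; the region is just an example, and whether "EC2" covers the ALB addresses or you need the broader "AMAZON" entries is the open question mentioned above):

```python
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
REGION = "eu-central-1"   # example -- use the region your ALB runs in
SERVICE = "EC2"           # or "AMAZON" to be safe

# Download the published ranges and keep the prefixes for one region/service.
with urllib.request.urlopen(IP_RANGES_URL) as resp:
    data = json.load(resp)

prefixes = sorted(
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["region"] == REGION and p["service"] == SERVICE
)
print("\n".join(prefixes))
```

Note that these ranges change over time, so a firewall whitelist built from them would need to be refreshed periodically.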
Can you expand on this? "We know of the possibility of using Global Accelerator, but that is a heavy investment as we don't really need what GA is trying to solve."
GA should give you better, more consistent performance, especially if your office is far away from the AWS Region where the ALB is running.
I know this issue has already been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana on the self-managed AWS Elasticsearch domain which I have in my AWS account.
It could be that what I am about to say here is inaccurate or complete nonsense.
I am pretty much a novice when it comes to AWS VPCs and the ELK stack.
Architecture:
I have a VPC.
Within the VPC I have several subnets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity let's assume I have a single server.
The Elasticsearch HTTPS URL, which can be found in the Amazon console, resolves to an internal IP within the subnet that I have defined.
Resources:
I have found the following link, which suggests using one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: resource-based policy
One option is to allow access with a resource-based policy for Elasticsearch, introducing a condition which specifies a certain IP address.
This was discussed in the following thread but unfortunately did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should resolve this by using the security group.
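For reference, the kind of IP-condition policy I tried looks roughly like the snippet below (the account ID, domain name, and IP address are just placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:123456789012:domain/my-domain/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.25/32" }
      }
    }
  ]
}
```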
Security group rules:
I tried to set a rule which allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and even tried opening all ports to my public IP.
But that didn't work out.
I would be happy to get a more detailed explanation of why, but I'm guessing that's because Elasticsearch has only an internal IP and not a public IP, and because it is encapsulated within the VPC I am unable to access it from the outside even if I define a rule allowing a public IP to access it.
Option 2: using a proxy
I'd rather not use this solution unless I have no other choice.
I'm guessing that if I set up another server with both a public and an internal IP within the same subnet and VPC as the Elasticsearch domain, and use it as a proxy, I would then be able to access this server from the outside by defining the same rules in its newly created security group, like the article suggested.
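Roughly, what I have in mind on that proxy host is a plain reverse proxy along these lines (the server name, certificate paths, and the VPC endpoint hostname are placeholders, and a plain proxy like this only works as long as the domain does not require signed requests):

```
# /etc/nginx/conf.d/kibana-proxy.conf -- rough idea only
server {
    listen 443 ssl;
    server_name kibana-proxy.example.com;

    ssl_certificate     /etc/nginx/ssl/proxy.crt;
    ssl_certificate_key /etc/nginx/ssl/proxy.key;

    location / {
        # VPC endpoint of the Elasticsearch domain (placeholder hostname)
        proxy_pass https://vpc-my-domain-abc123.eu-west-1.es.amazonaws.com;
    }
}
```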
Sources:
I found an out-of-the-box solution that someone already made for this issue using a proxy server, at the following link:
It can be used either as an executable or as a Docker container.
https://github.com/abutaha/aws-es-proxy
Option 3: Other
Can you suggest another solution? Is it possible to use an Amazon load balancer or Amazon API Gateway to accomplish this task?
I just need a proof of concept, not something which goes into a production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to be able to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate A SINGLE USER. This is not a good way for the system you're building to access Elasticsearch.
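For the Kibana-in-a-browser part, enabling Cognito on an existing domain is roughly the call below (a sketch: the domain name, pool IDs, and role ARN are placeholders, and the Cognito user pool, identity pool, and the IAM role that Amazon ES uses must already exist):

```python
import boto3

es = boto3.client("es")

# Placeholders -- substitute your own domain, Cognito pools, and role.
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "eu-west-1_EXAMPLE",
        "IdentityPoolId": "eu-west-1:11111111-2222-3333-4444-555555555555",
        "RoleArn": "arn:aws:iam::123456789012:role/CognitoAccessForAmazonES",
    },
)
```

After that, opening the Kibana endpoint should redirect to the Cognito hosted sign-in page, which covers the "search the indexes from a browser" requirement in the question.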