I am trying to get a Windows Server EC2 instance to communicate with a running Kubernetes Service. I do not want to have to route through the internet, as both the EC2 Instance and Service are sitting within a private subnet.
I am able to get communication through when using the private IP address of the Service, but because of the nature of Kubernetes, if the Service goes down for whatever reason, the private IP can change. I want to avoid this if possible.
I either want to communicate with the Service using a static private DNS name, or via some kind of static private IP address that I can create and bind to the Service during creation. Is either of these possible to do?
P.S. I have tried looking into internal LoadBalancers, but I can't get them to work, and I'm not even sure if this is the right direction: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#traffic-routing. Currently I am using these annotations for EIP binding for some public-facing services.
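For reference, the internal-LoadBalancer direction I was looking at would be something roughly like this; the service name, selector, and ports below are placeholders, and the annotations are the ones from the linked controller guide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service                  # placeholder name
  annotations:
    # Annotations from the aws-load-balancer-controller guide linked above;
    # "internal" is meant to keep the NLB on private subnets only.
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: my-app                               # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```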
Why not create a kubeconfig to access the EKS services through kubectl?
See documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
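For reference, the kubeconfig that guide produces (e.g. via `aws eks update-kubeconfig`) looks roughly like the sketch below; the cluster name, endpoint, and certificate data are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-eks-cluster                                   # placeholder cluster name
    cluster:
      server: https://EXAMPLE.gr7.eu-west-1.eks.amazonaws.com   # placeholder cluster endpoint
      certificate-authority-data: <base64-encoded-CA>
contexts:
  - name: my-eks-cluster
    context:
      cluster: my-eks-cluster
      user: my-eks-user
current-context: my-eks-cluster
users:
  - name: my-eks-user
    user:
      exec:
        # Fetch a short-lived token from the AWS CLI for each kubectl call
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "my-eks-cluster"]
```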
Or do you want to send traffic to the services?
Related
Let's say I have a service running clustered on N EC2 instances. On top of that I have Amazon EKS and an Elastic Load Balancer. There is a service not managed by me running outside of AWS, where I have an account that my services in AWS use via HTTP requests. When I made an account with this external service, I was asked for the IP (range) of the services which will be using it. There is my problem.

Currently, let's say I have 3 EC2 instances with Elastic IP addresses (which are static), so I can just give those three IP addresses to the external service provider and everything works just fine. But in the future I might add more EC2 instances to scale out, and whitelisting new IP addresses with the external service is a pain. In some cases those whitelist change requests may take a week or more to be approved by the external service provider, and I don't have that time. Even further, accessing this external service is the only reason I use static IPs for the EC2 instances at all. So if possible, I would ditch the Elastic IPs.
So my question is: how can I arrange things so that when a request to the external service goes out from any random instance in my cluster, the external service provider always sees the same IP address for me as a service consumer?
Disclaimer: I don't actually have that setup running yet; I am in the middle of researching whether it would be a feasible option. So forgive me if my question sounds dumb for some obvious reason.
Something like Network Address Translation (NAT) can solve your problem: a NAT gateway with an Elastic IP attached, with all outbound traffic rerouted through it.
The NAT gateway provided by AWS as a managed service can be expensive if your data traffic is big, so you can run your own NAT instance instead, but that is a bit more complicated to set up and maintain.
The main differences between a NAT gateway and a NAT instance are listed here.
The example below assumes that the EC2 instances are in a private subnet, but that doesn't have to be the case.
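As a rough CloudFormation sketch of the managed NAT gateway variant, something like the following; the subnet and route table IDs are parameters you would supply, and the NAT gateway itself must sit in a public subnet:

```yaml
Parameters:
  PublicSubnetId:            # a public subnet for the NAT gateway (assumed to already exist)
    Type: AWS::EC2::Subnet::Id
  PrivateRouteTableId:       # the route table used by the private subnets holding your EC2 instances
    Type: String

Resources:
  NatEip:
    Type: AWS::EC2::EIP              # the static address the external provider will see
    Properties:
      Domain: vpc

  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnetId

  PrivateDefaultRoute:
    Type: AWS::EC2::Route            # send all outbound traffic from the private subnets via the NAT gateway
    Properties:
      RouteTableId: !Ref PrivateRouteTableId
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway
```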
I believe you need a proxy server in your environment with an Elastic IP. Basically, you can use something like NGINX or Apache and configure it with an Elastic IP. Configure the web server to provide an endpoint for your EC2 instances and proxy-pass requests to the external endpoint.
For high availability, you can run a proxy in each availability zone, ideally configured using an auto scaling group to keep at least one instance alive in each AZ. With this approach, you will need to make sure that you assign the public IP from your Elastic IP pool.
Generally, hostnames are a better alternative to IP addresses for avoiding such situations, as they provide a static endpoint no matter what IP sits behind them. I'm not sure whether you can explore that path with your external API provider; it can be challenging when static IP-based routing/whitelisting rules are already in place.
This is what a NAT Gateway is for. NAT Gateways have an Elastic IP attached and allow the instances inside a VPC to make outbound connections, transparently, using the gateway's static address.
I have an instance and an S3 bucket in AWS (which I'm limiting to a range of IPs). I want to create a VPN and be able to authenticate myself when logging into that VPN to get to that instance.
To simplify: I'm trying to set up a dev environment for my site. I want to make sure I can limit access to that instance, and I want to use a service to authenticate anybody trying to reach it. Is there a way to do all of this in AWS?
Have you looked at AWS Client VPN:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html
This allows you to create a managed VPN server in your VPC which you can connect to using any OpenVPN client. You could then allow traffic from this VPN to your instance using security group rules.
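As a sketch, the security group side of that could look like the CloudFormation snippet below; the VPC ID, port, and client CIDR are assumptions you would adjust to match your Client VPN endpoint and application:

```yaml
Resources:
  DevInstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow access to the dev instance only from Client VPN users
      VpcId: vpc-0123456789abcdef0           # placeholder VPC ID
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443                      # placeholder application port
          ToPort: 443
          CidrIp: 10.100.0.0/22              # assumed Client VPN client CIDR range
```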
Alternatively you can achieve the same effect using OpenVPN on an EC2 server, available from the marketplace:
https://aws.amazon.com/marketplace/pp/B00MI40CAE/ref=mkt_wir_openvpn_byol
It requires a bit more setup but works just fine, and is perfect if AWS Client VPN isn't available in your region yet.
Both these approaches ensure that your EC2 instance remains in a private subnet and is not accessible directly from the internet. Also, the OpenVPN client for Mac works just fine.
I have a Tomcat app deployed onto multiple EC2 instances behind an ELB... Is there any way to access each instance using JMX? Does AWS provide any service for this?
Thanks.
Is there any way to access each instance using JMX?
If each instance has a public IP or Elastic IP, and the appropriate port in the Security Group is open, then you could connect directly, bypassing the ELB. You'll have to go around the ELB somehow in order to connect via JMX. I suggest using a bastion host and SSH forwarding.
Does AWS provide any service for this?
AWS does not provide any service specifically for this. This is just general networking, which is provided by the VPC service.
I have a kubernetes cluster having a master and two minions.
I have a service running that uses the public IP of one of the minions as the external IP of the service.
I have a deployment which runs a pod providing the service. Using the Docker IP of the pod, I am able to access the service.
But I am not able to access it using the external IP or the cluster IP.
The security groups have the necessary ports open.
Can someone help with what I am missing here? The same setup works fine in my local VM cluster.
The easiest way to access the service is to use a NodePort; then, assuming your security groups allow that port, you can access the service via the node's public IP and the assigned NodePort.
Alternatively, and a better approach that avoids exposing your nodes to the public internet, is to set the cloud provider to AWS and create a Service of type LoadBalancer; the service will then be provisioned with an ELB publicly.
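For illustration, a minimal manifest for either approach might look like this; the service name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # placeholder name
spec:
  # Use NodePort to reach the service on <node public IP>:<nodePort>;
  # switch to LoadBalancer once the AWS cloud provider is configured, to get an ELB instead.
  type: NodePort
  selector:
    app: my-app                    # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080              # must fall in the cluster's NodePort range (default 30000-32767)
```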
I have two VNETs, VNET1 and VNET2, that are connected using a gateway. VNET2 has a VM which hosts a MongoDB instance. I have a WebJob running within an App Service Environment which is deployed into a subnet within VNET1. From this subnet I am able to access the VM in VNET2 via its DNS name, but I am unable to access the VM's internal IP. Any suggestions are welcome.
An internal IP address is internal to a VNET, and VNETs are isolated from one another by design. See this site for a good overview: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-overview/. If you want to connect internally, you might want to consider having multiple subnets within the same VNET instead.
At present, connecting two VNETs using a gateway allows IP communication but doesn't allow DNS name resolution. In this scenario we recommend managing a local DNS server. This page shows the requirements for using your own DNS server in Azure.
Hth, Gareth