We are a business that essentially provides proxy servers. Our servers currently run in the AWS Singapore region.
But we have Malaysian clients who would like to be able to view Malaysian content from streaming services (e.g., Netflix, iflix) through our servers. At the moment, they can only view Singaporean content, or nothing at all, because the streaming services detect Singaporean IPs being used with Malaysian user accounts.
Does AWS have a service for us to register instance IPs as being in a different country than where the AWS server farm is?
AWS does not have "a service for us to register instance IPs as being in a different country".
However, an AWS account can launch an Amazon EC2 instance in any region around the world and it will receive a public IP address that is (usually) mapped to that part of the world. If you use that EC2 instance, it will "appear" to be in that part of the world (because it is!).
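For illustration, a minimal boto3 sketch of launching an instance in a chosen region; the region name, AMI ID and instance type below are placeholders, not recommendations:

```python
import boto3

# Placeholders -- substitute a real AMI ID for whichever region you pick.
REGION = "eu-west-2"                 # e.g. to "appear" in London
AMI_ID = "ami-0123456789abcdef0"     # hypothetical AMI ID

# The region is fixed at client creation time; the instance's public IP
# is then drawn from that region's address ranges.
ec2 = boto3.client("ec2", region_name=REGION)

response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched", response["Instances"][0]["InstanceId"], "in", REGION)
```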
However, please note that many online services block Amazon EC2 instance IP address ranges to prevent such practices.
I am trying to get a Windows Server EC2 instance to communicate with a running Kubernetes Service. I do not want to have to route through the internet, as both the EC2 Instance and Service are sitting within a private subnet.
I am able to get communication through when using the private IP address of the Service, but because of the nature of Kubernetes, when the Service goes down for whatever reason, the private IP can change. I want to avoid this if possible.
I either want to communicate with the Service using a static private DNS name, or some kind of static private IP address I can create and bind to the Service at creation time. Is either of these possible?
P.S. I have tried looking into internal LoadBalancers, but I can't get it to work. Don't even know if this is the right direction. https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#traffic-routing. Currently I am using these annotations for EIP binding for some public-facing services.
Why not create a kubeconfig to access the EKS services through kubectl?
See the documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Or do you want to send traffic to the services?
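If kubectl access is the goal, here is a minimal sketch of what the linked documentation builds by hand, using boto3 and PyYAML; the cluster name my-cluster and the region are placeholders, and authentication is delegated to the aws eks get-token exec flow:

```python
import boto3
import yaml  # PyYAML

CLUSTER = "my-cluster"   # hypothetical cluster name
REGION = "us-east-1"     # substitute your cluster's region

eks = boto3.client("eks", region_name=REGION)
cluster = eks.describe_cluster(name=CLUSTER)["cluster"]

# Minimal kubeconfig skeleton, mirroring what the linked doc builds by hand.
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": CLUSTER,
        "cluster": {
            "server": cluster["endpoint"],
            "certificate-authority-data": cluster["certificateAuthority"]["data"],
        },
    }],
    "users": [{
        "name": CLUSTER,
        "user": {
            "exec": {  # delegate auth to the AWS CLI token generator
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", CLUSTER],
            },
        },
    }],
    "contexts": [{
        "name": CLUSTER,
        "context": {"cluster": CLUSTER, "user": CLUSTER},
    }],
    "current-context": CLUSTER,
}

with open("kubeconfig.yaml", "w") as f:
    yaml.safe_dump(kubeconfig, f)
```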
I was talking to one of my friends about API access. Say I have given another AWS account read-only access to my resources, e.g. EC2, and that account tries to scan all the metadata for my EC2 instances. Per my friend, this API call belongs to the control plane and connects to the AWS EC2 API endpoint over the internet; it cannot be blocked by any VPC controls such as NACLs or security groups, and only data-plane calls go through the VPC. I partly agree, but I am still not convinced that a call like listing all my EC2 instances cannot be blocked. Say I have granted read-only permission but now want to block that account: is it true that VPC controls do not protect against that call? Please help me understand this better, e.g. for the case where I am in my corporate network and the consuming account that wants to scan my EC2 instances is a SaaS provider.
The Amazon EC2 service is responsible for creating and managing Amazon EC2 instances, VPCs, networking, etc. The API endpoints for the EC2 service reside on the Internet. Permission to make API calls is controlled by AWS Identity and Access Management (IAM).
This is totally separate from the ability to connect to an Amazon EC2 instance. Any such connections would go via the virtualized network of the VPC.
For example, imagine an Amazon EC2 instance that is turned off (that is, in a Stopped state). There are no actual resources assigned to a Stopped instance -- it is just some metadata sitting in a database. It would not be possible to 'connect' with this instance because it does not exist. However, it would be possible to connect to the AWS EC2 service and issue a command to Start the instance. This API call is made via the Internet and does not require any connectivity to the VPC.
Your wording that "any data plane calls only goes to VPC" is not correct -- API calls go to the EC2 service endpoint and do not involve the VPC. The VPC is purely a network configuration that determines how resources can communicate with each other.
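To make this concrete, a small boto3 sketch: the listing call below travels to the public EC2 endpoint regardless of any VPC configuration, and the only thing that can stop it is an IAM (or SCP) deny on ec2:DescribeInstances:

```python
import boto3

# This is a control-plane call: it goes to the public endpoint
# https://ec2.<region>.amazonaws.com, not into any VPC. Security groups
# and NACLs never see it; only IAM decides whether it succeeds.
ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Metadata comes back even for Stopped instances, because it
            # lives in the EC2 service's database, not on the instance.
            print(instance["InstanceId"], instance["State"]["Name"])
```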
I can't figure out how to make my frontend talk to my EC2 instance using API calls. Previously I used API Gateway, which would trigger Lambdas; those Lambdas would interact with DynamoDB and other services and send me back a JSON response. Now I want to shift to EC2 instances, skip API Gateway entirely, and let a server I run on EC2 do the computation for me. Do I need to deploy a web service (Django REST Framework) on the EC2 instance and then call it from my frontend? If yes, I need a little guidance on how.
And suppose I want to access S3 storage from my Django REST service on EC2. Can I do it without having to enter the access key and ID, and use roles instead, just like how I would access S3 from the EC2 instance itself without an access key and ID? Traditionally with the SDK we have to use access keys and secret keys to even get authorized to use services, so I was wondering if there is a way around this since the program will be running on the EC2 instance itself. One really inefficient way would be to run a batch command that makes the EC2 instance interact with the services I need without the SDK, using roles instead, but that is really inefficient and too much work as far as I can see.
As you are familiar with API Gateway, you can use it to connect to your EC2 instance via its private integration, with the use of VPC Links.
You can create an API Gateway API with private integration to provide your customers access to HTTP/HTTPS resources within your Amazon Virtual Private Cloud (Amazon VPC). Such VPC resources are HTTP/HTTPS endpoints on an EC2 instance behind a Network Load Balancer in the VPC.
You can go through this document for step-by-step integration.
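As a rough sketch of the two key API calls involved (using boto3 against the REST API flavor of API Gateway; the NLB ARN, API id, resource id and URI below are all placeholders):

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Placeholders -- substitute your own Network Load Balancer ARN and API ids.
NLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123"
REST_API_ID = "a1b2c3d4e5"
RESOURCE_ID = "f6g7h8"

# 1. Create the VPC Link that lets API Gateway reach into the VPC.
vpc_link = apigw.create_vpc_link(
    name="my-vpc-link",
    targetArns=[NLB_ARN],
)

# 2. Wire a method to the service behind the NLB via the VPC Link.
apigw.put_integration(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="ANY",
    type="HTTP_PROXY",
    integrationHttpMethod="ANY",
    connectionType="VPC_LINK",
    connectionId=vpc_link["id"],
    uri="http://my-internal-service.example.com",  # hypothetical backend URI
)
```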
If you do not want to use API Gateway any more, then you can simply use Route53 to route traffic to the EC2 instance; all you need is the IP address of the EC2 instance and a hosted zone created in Route53.
Here is a tutorial for your reference.
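Regarding the second part of your question (using roles instead of access keys): yes, attach an IAM role to the EC2 instance through an instance profile and the SDK resolves temporary credentials from the instance metadata service on its own; no keys appear in code or config. A minimal sketch, with a hypothetical bucket and key:

```python
import boto3

# No access key or secret anywhere: when this runs on an EC2 instance with
# an IAM role attached (via an instance profile), boto3 walks its default
# credential chain and fetches temporary credentials from the instance
# metadata service automatically.
s3 = boto3.client("s3")

# "my-bucket" / "reports/latest.json" are placeholders.
obj = s3.get_object(Bucket="my-bucket", Key="reports/latest.json")
print(obj["Body"].read().decode("utf-8"))
```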
I have an application hosted on an AWS EC2 instance. A few developers access the instance via shell to do maintenance on the application. How can I limit them from accessing/downloading content from other websites through the EC2 instance? My end goal is to allow developers to access the instance via shell to do code management/deployment/maintenance and nothing else. Developers may change over time, so I'm not sure outgoing rules can be defined based on IP addresses. Any ideas? Thanks in advance.
I am trying to create a web application using AWS CloudFormation. This particular app will have 3 instances (web server, app server, RDS database). I want these instances to be able to talk to each other. For example, the web server should talk to the app server, and the app server should talk to the RDS database.
I can't understand how to configure the servers so that they know each other's IP address. I figure there are 3 ways to do this - but I'm not sure which of these is realistically possible or feasible:
I can assign a fixed private IP address (e.g. 192.168.0.2 and so on) during stack creation - this way I know beforehand the IP address of each instance
I can wait for AWS CloudFormation to return the IP addresses of the created instances and manually tweak my code to communicate using these IPs
I can somehow get the IP address of a created instance during the stack creation process and store it as a parameter in the next instance I create (not sure if CloudFormation allows this?)
Which is the best way to set this up? Also, please share a little detail around how I can do this in CloudFormation.
A solution would be to place your web server and app server behind an ELB (load balancer). This way, your web server will communicate with the app server using the ELB's URL (not the app server's IP). The app server can communicate with the RDS instance via the RDS instance's endpoint (which is again a URL).
Let's suppose you separate your infrastructure into 3 CloudFormation stacks: the RDS database, the app server and the web server. The RDS stack will expose the RDS instance's endpoint through the CloudFormation Outputs feature. This endpoint will in turn be used as a CloudFormation Parameter to the app server stack. You can insert the RDS endpoint into the app server LaunchConfiguration's UserData field, so that on startup, your app server will know the RDS instance's endpoint. Finally, your app server stack will expose the app server's ELB endpoint (again using the CloudFormation Outputs feature). Using the same recipe, the URL of your app server's ELB will be injected into and used by your web server stack.
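If you script the stack creation, the same wiring can be sketched with boto3; this assumes the RDS stack exposes an output named DbEndpoint and the app server template declares a matching parameter (both names, and the stack/template names, are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Read the RDS stack's outputs ("rds-stack" and "DbEndpoint" are placeholders).
outputs = cfn.describe_stacks(StackName="rds-stack")["Stacks"][0]["Outputs"]
db_endpoint = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "DbEndpoint")

# Feed the endpoint into the app server stack as a parameter; the template's
# UserData can then hand it to the instance on boot.
cfn.create_stack(
    StackName="app-server-stack",
    TemplateURL="https://s3.amazonaws.com/my-bucket/app-server.yaml",  # placeholder
    Parameters=[{"ParameterKey": "DbEndpoint", "ParameterValue": db_endpoint}],
)
```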
As a side note, it is also a good idea to run your services (web server, app server) in an Auto Scaling group. It is very probable that your instances will be terminated by factors outside of your control. In that case, you would want the Auto Scaling group to start a fresh new instance and place it behind your ELB.