Running Get command on EC2 from Lambda - amazon-web-services

I am new to the AWS environment. I have installed Apache Atlas on an EC2 instance, and from Lambda I am trying to get metadata from the Glue Data Catalog and post it to Apache Atlas (which exposes REST endpoints) running on EC2. I am able to get the Glue Data Catalog metadata in the Lambda function.
How can I make a curl/HTTP GET call from the Lambda function to reach the service running on port 21000 on localhost on my EC2 instance?
Update 1: Resolved by allowing all inbound traffic to the EC2 instance's private IP in the security group.
Update 2: Now I am able to access both the REST URL (via its private IP) and the Glue catalog from within Lambda. What I did is create a private and a public subnet and put my EC2 instance and Lambda in the same private subnet, with a NAT gateway configured in the public subnet.
Now my Lambda is working, but I am no longer able to SSH into my EC2 instance. Is there a way to get that working as well?

"localhost" is relative to each computer. What is "localhost" on your EC2 server is different from what is "localhost" on AWS Lambda, etc. You need to stop trying to access "locahost" and use the server's IP address instead.
To access port 21000 on the EC2 server, the Lambda function needs to be placed in the same VPC as the EC2 instance, and the EC2 server needs to be listening for external traffic on port 21000, not just localhost traffic. You would assign a security group to the Lambda function, and in the security group assigned to the EC2 server you would open port 21000 to traffic coming from the Lambda function's security group. Finally, the Lambda function would reach the EC2 server by addressing it via the server's private IP.
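Once the Lambda function is in the VPC, the call itself is an ordinary HTTP request to the private IP. A minimal sketch of the handler, assuming the instance's private IP is 10.0.1.25 and Atlas is protected by its default basic-auth credentials (both values are placeholders, not from the question):

import base64
import json
import urllib.request

ATLAS_BASE = "http://10.0.1.25:21000"  # placeholder: private IP of the EC2 instance
AUTH = base64.b64encode(b"admin:admin").decode()  # placeholder Atlas credentials

def lambda_handler(event, context):
    # Simple GET against an Atlas REST endpoint to verify connectivity from inside the VPC.
    req = urllib.request.Request(
        ATLAS_BASE + "/api/atlas/v2/types/typedefs",
        headers={"Accept": "application/json", "Authorization": "Basic " + AUTH},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.loads(resp.read())
    return {"statusCode": 200, "body": json.dumps(body)[:1000]}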

I'm not familiar with Apache Atlas and whether it exposes its own HTTP endpoints to external clients. What you need is a server running on EC2 for that.
An EC2 server doesn't magically accept HTTP calls from external connections and route them to the local resources you want (in this case, Atlas). Install Apache HTTP Server, nginx, or another web server on your EC2 instance, configure it properly, and write some code that takes the data POSTed by your Lambda and submits it to the local Apache Atlas API.
The following page contains some instructions in this direction. Search the web if you need more help, there are tons of tutorials for doing this already. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateWebServer.html
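As an illustration of the "small piece of code in front of Atlas" idea, here is a minimal sketch of a relay that could run on the EC2 instance: it accepts a POST from the Lambda and forwards the body to the local Atlas REST API. The relay port, Atlas path, and credentials are assumptions, not details from the question:

import base64
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ATLAS_URL = "http://localhost:21000/api/atlas/v2/entity"  # Atlas listening locally on the instance
AUTH = base64.b64encode(b"admin:admin").decode()          # placeholder credentials

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body that the Lambda function POSTed to this relay.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        # Forward it unchanged to the local Atlas REST API.
        req = urllib.request.Request(
            ATLAS_URL,
            data=payload,
            headers={"Content-Type": "application/json", "Authorization": "Basic " + AUTH},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            status, body = resp.status, resp.read()
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on all interfaces so the Lambda function (in the same VPC) can reach this port.
    HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()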

Related

How to retrieve the IP address of an EC2 (Server) from Lambda acting as a Client?

The app will be using gRPC, with the server listening and the Lambda connecting to the server. The Lambda will have access to the VPC, but I am not sure of the best way to retrieve the server's IP address.
VPC DNS routing can be enabled, but the actual name of the server appears to be a function of the IP address and can change on each reboot.
Thanks,
Created an EC2 instance in a VPC with DNS enabled; the name is based on an IP address and changes with each reboot.
You could create a Route53 Private Hosted Zone to give the EC2 server(s) whatever DNS names you want within your VPC.
Or you could do something like add a specific tag to the EC2 instance(s) that the Lambda function needs to connect to, and then have the Lambda function call the AWS API to query for EC2 instances with that tag, retrieving the IP address from the response.
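A minimal sketch of that tag-lookup approach with boto3, assuming the target instance is tagged Role=grpc-server (the tag key/value and region are placeholders); the Lambda's execution role would need ec2:DescribeInstances permission:

import boto3

def get_server_private_ip(region="us-east-1"):
    # Look up the private IP of the running instance tagged Role=grpc-server.
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Role", "Values": ["grpc-server"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            return instance["PrivateIpAddress"]
    raise RuntimeError("No running instance with tag Role=grpc-server found")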
You could use the AWS Cloud Map service, which is relatively new. [1]
It is very well integrated into container services such as ECS, where the scheduler manages registering and deregistering entries. For EC2, you might have to write a script which queries the EC2 instance metadata on startup and registers the instance with Cloud Map [2]; see the sketch below.
In order to deregister an instance properly, you could put it into an Auto Scaling group and register lifecycle hooks which call the appropriate Cloud Map API commands.
[1] https://aws.amazon.com/de/blogs/aws/aws-cloud-map-easily-create-and-maintain-custom-maps-of-your-applications/
[2] https://docs.aws.amazon.com/cloud-map/latest/api/API_RegisterInstance.html
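A rough sketch of such a startup script, assuming a Cloud Map service already exists and its service ID is known; the service ID is a placeholder, and the instance metadata calls assume IMDSv1 is enabled (adjust for IMDSv2 tokens if needed):

import boto3
import urllib.request

SERVICE_ID = "srv-xxxxxxxxxx"  # placeholder: your Cloud Map service ID

def register_self():
    # The instance metadata endpoint is only reachable from inside the EC2 instance.
    def meta(path):
        with urllib.request.urlopen("http://169.254.169.254/latest/meta-data/" + path, timeout=2) as r:
            return r.read().decode()

    instance_id = meta("instance-id")
    private_ip = meta("local-ipv4")

    sd = boto3.client("servicediscovery")
    sd.register_instance(
        ServiceId=SERVICE_ID,
        InstanceId=instance_id,
        Attributes={"AWS_INSTANCE_IPV4": private_ip},
    )

if __name__ == "__main__":
    register_self()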

How do I set up and log into a VPN from my Mac in AWS?

I have an instance and an S3 bucket in AWS (which I'm limiting to a range of IPs). I want to create a VPN and be able to authenticate myself when logging into that VPN to get to that instance.
To simplify: I'm trying to set up a dev environment for my site, and I want to make sure I can limit access to that instance and use a service to authenticate anybody wanting to reach it. Is there a way to do all of this in AWS?
Have you looked at AWS Client VPN:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html
This allows you to create a managed VPN server in your VPC which you can connect to using any OpenVPN client. You could then allow traffic from this vpn to your instance using security group rules.
Alternatively you can achieve the same effect using OpenVPN on an EC2 server, available from the marketplace:
https://aws.amazon.com/marketplace/pp/B00MI40CAE/ref=mkt_wir_openvpn_byol
It requires a bit more setup but works just fine, and it's a good option if AWS Client VPN isn't available in your region yet.
Both of these approaches ensure that your EC2 instance remains in a private subnet and is not directly accessible from the internet. The OpenVPN client for Mac also works just fine.
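For the security group rule mentioned above, a sketch of adding it with boto3; the group IDs and port are placeholders for your instance's security group and the security group associated with the VPN:

import boto3

ec2 = boto3.client("ec2")
# Allow traffic from the VPN's security group to reach the instance on port 22 (SSH).
ec2.authorize_security_group_ingress(
    GroupId="sg-0instance0000000",  # placeholder: security group of the EC2 instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": "sg-0clientvpn0000000"}],  # placeholder: VPN security group
    }],
)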

Connect to a web application in a private subnet through a bastion host from my local machine in AWS

I have an EC2 instance running in AWS and here's the scenario I'm trying to achieve. I have a VPC setup with 3 subnets. 2 of them are private with no access to the internet (even using a NAT gateway/NAT instance), and another is a public subnet.
Bastion Host configured with Public IP (55.55.55.55 for example) in the public subnet.
I have an EC2 instance launched in a private subnet that hosts my application, and I want my users to be able to access the application from their workstation browsers.
If I set up the SSH connection as discussed here, it works perfectly fine for the web page served from my bastion host. However, for my use case I need another level of SSH forwarding, since my application is in the private subnet, in order for it to be accessible from my local machine. Is that possible somehow? I also need to make sure there are no issues with DNS.
ssh -N <Bastion_IP/HostName> -L<LocalPort>:<Internal_IP_of_Web_Server>:<WebServer_Port>
Then you can access the web server at http://localhost:<LocalPort>/
Assuming you have a web application on EC2 in a private subnet and you want to make it accessible from outside AWS:
You can set up port forwarding on your bastion host following this tutorial, but I suggest you use a load balancer (ELB) instead, as described in this guide. To use an ELB you will need another public subnet in a different AZ. If your application serves HTTP traffic, it's even better to use an Application Load Balancer (ALB). Here is more info about ALB.

PgAdmin access to AWS Postgres instance in private subnet

I'm trying to create a realistic network setup for a multi-tiered web application. I've created a new VPC within AWS with 1 x public subnet & 2 x private subnet. I then created a Postgres instance within the private subnet and set it to not publicly accessible. This adds an extra layer of security around the database, but how do I then access the database from my local IP?
I created a security group, assigned my IP to the inbound rules, and assigned it to the DB instance during creation.
But I still have no way of connecting to it? Do I need to create a VPN and connect to my VPC via the VPN and then connect to the DB instance? Within the proposed architecture, how do you connect to the DB?
What I'm trying to achieve is an architecture which will allow me to create Lambda functions which communicate with the DB via the API Gateway and serve data to a web frontend. So I want the DB protected via the private subnet. But I also want to be able to connect directly to the DB from my local laptop.
At the moment the RDS instance is running in the VPC, but I don't know how to connect to it. Do I need to set up an Internet Gateway / VPN / EC2 instance and jump to the DB?
You have implemented excellent security by placing the Amazon RDS database into a private subnet. This means it is not accessible from the Internet, which blocks off the majority of potential security threats.
However, it also means that you cannot connect to it from the Internet.
The most common method to achieve your goals is to launch an Amazon EC2 instance in the public subnet and use it as a Bastion or Jump Box:
You SSH into the Bastion
The Bastion can then connect you to other resources within the VPC
Since you merely wish to connect to a database (as opposed to logging into another server), the best method is to use SSH with port forwarding.
In Windows, this can be done using your SSH client -- for example, if you are using PuTTY, you can configure Tunnelling. See: How to Configure an SSH Tunnel on PuTTY
For Mac/Linux, use this command:
ssh -i YOUR-KEYPAIR.pem -L 5555:RDS-ENDPOINT:5432 ec2-user@YOUR-BASTION-SERVER
You then point the SQL client on your laptop to: localhost:5555
The 5555 can be any number you wish. It is merely the "local port" on your own computer that will be used to forward traffic to the remote computer.
The RDS-ENDPOINT is the Endpoint of your RDS database as supplied in the RDS console. It will be similar to: db.cnrffgvaxtw8.us-west-2.rds.amazonaws.com
BASTION-SERVER is the IP address or DNS name of the Jump Box you will use to connect
Then, any traffic sent to localhost:5555 from your SQL client will be automatically sent over the SSH connection to the Bastion/Jump Box, which will then forward it to port 5432 on the RDS database. The traffic will be encrypted across the SSH connection, and establishment of the connection requires an SSH keypair.
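To illustrate, once the tunnel is up you could verify the connection with a few lines of Python; the psycopg2 package, database name, and credentials below are assumptions, not values from the question:

import psycopg2

# Connect to the local end of the SSH tunnel; traffic is forwarded to RDS on port 5432.
conn = psycopg2.connect(
    host="localhost",
    port=5555,
    dbname="mydatabase",  # placeholder
    user="masteruser",    # placeholder
    password="secret",    # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()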
I referred to a lot of articles and videos to find this answer.
Yes, you can connect to RDS instances in private subnets.
There are two ways to connect:
With a server: use an EC2 instance in the public subnet as a bastion host; you can then connect pgAdmin through an SSH tunnel.
Serverless: use a Client VPN endpoint. Create a Client VPN endpoint, associate the subnets, and allow access to the private subnets. Then download the configuration file, install the OpenVPN GUI, import the configuration file, add the keys, and connect the VPN. Now try to connect with pgAdmin; it will connect.
For steps: https://docs.google.com/document/d/1rSpA_kCGtwXOTIP2wwHSELf7j9KbXyQ3pVFveNBihv4/edit

Unable to access service running inside AWS

I have a kubernetes cluster having a master and two minions.
I have a service running that uses the public IP of one of the minions as the external IP of the service.
I have a deployment which runs a pod providing the service. Using the Docker IP of the pod, I am able to access the service.
But I am not able to access it using the external IP and the cluster IP.
The security groups have the necessary ports open.
Can someone help with what I am missing here? The same setup works fine in my local VM cluster.
The easiest way to access the service is to use a NodePort; then, assuming your security groups allow that port, you can access the service at the node's public IP on the assigned NodePort.
Alternatively, a better approach that avoids exposing your nodes to the public internet is to configure the cloud provider as AWS and create a service of type LoadBalancer; the service will then be provisioned with a public ELB.
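As a rough sketch of the LoadBalancer option, here is the service created with the official kubernetes Python client; the service name, selector label, and ports are placeholders for your own deployment:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()
# A Service of type LoadBalancer; with the AWS cloud provider configured, this provisions an ELB.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-service"),              # placeholder name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "my-app"},                               # placeholder pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],  # placeholder ports
    ),
)
v1.create_namespaced_service(namespace="default", body=service)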