This is more of an architectural question than a coding issue, so please pardon me if I am in the wrong place.
I have an EC2 instance running in a private VPC where we will eventually store PII data, and under no circumstances can it have internet access.
However, we need to install ETL tooling in Docker (Airflow, NiFi, Python, etc.) on it, and of course we need to SSH into it from our local company VPC.
As far as I can see, there are two approaches:
1. Create another EC2 instance in a public subnet, install all our tools there, and call the private-VPC EC2 instance from it, so that I can move the PII data to S3 through a private endpoint.
Con: doesn't this still raise a security concern, since the ETL EC2 instance is still reachable from the internet, from where someone could get at the PII data on the second instance?
The other option:
2. Create the EC2 instance in a public subnet, install all the tools, and then finally move it into the private VPC.
Con: if a tool crashes or any change is needed, we would have to move it back to the public subnet, which again does not look like the proper way of handling it.
I tried searching the internet for a tutorial or training on this, but could not find anything.
Any suggestions will be highly appreciated.
You don't need to use the internet at all if you don't want to. I assume that by "no internet access" you mean it works both ways: no access from the internet to the instance, and the instance cannot connect to the internet either (i.e. no NAT or any other proxy).
There are a couple of ways of doing this. One way is as follows:
Prepare a custom AMI with all the packages and software you require pre-installed.
Create a private VPC without any public subnets.
Add VPC endpoints to the VPC for S3 (usually a gateway endpoint), plus interface endpoints for AWS Systems Manager, ECR (to store your private Docker images) and any other AWS services you may require, e.g. KMS.
Launch your instance from the custom AMI in the private VPC.
Use SSM Session Manager to "ssh" to the instance without any internet access.
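To make the endpoint step concrete, here is a minimal boto3 sketch, not your exact setup: every ID, the region and the service list are placeholders you would replace with your own.

```python
# Minimal boto3 sketch of the endpoint step above. Every ID, the region and
# the service list are placeholders; adjust them to your own VPC.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# S3 access from the private subnet via a gateway endpoint on the route table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.eu-west-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0abc1234"],
)

# SSM Session Manager needs these three interface endpoints; add ECR/KMS the same way.
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        ServiceName=f"com.amazonaws.eu-west-1.{service}",
        VpcEndpointType="Interface",
        SubnetIds=["subnet-0abc1234"],
        SecurityGroupIds=["sg-0abc1234"],
        PrivateDnsEnabled=True,
    )
```

Once the endpoints exist and the instance profile includes the AmazonSSMManagedInstanceCore policy, running `aws ssm start-session --target i-...` from your laptop gives you the "ssh"-like shell mentioned above without the instance ever touching the internet.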
I think both approaches are inherently sub-optimal.
If all you're trying to do is avoid exposing your compute instances to the internet, and your setup is Docker-based, simply set up your own Docker registry, either using ECR or Sonatype Nexus (on another server), upload your Docker images there, and have that node use that ECR/Nexus registry as its Docker registry.
That way, you're enjoying free access to all resources exposed as Docker images while maintaining security compliance.
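As a rough sketch of the ECR variant (the repository name, region and image tag below are made up for illustration), you can create a private repository with boto3 and push your images into it; with an ECR VPC endpoint in place, the private instance can then pull them without internet access:

```python
# Illustrative only: create a private ECR repository and print the docker
# commands for pushing an image into it. Repository name, region and tag are made up.
import boto3

ecr = boto3.client("ecr", region_name="eu-west-1")

repo = ecr.create_repository(repositoryName="etl/airflow")["repository"]
registry = repo["repositoryUri"].split("/")[0]

print(f"aws ecr get-login-password | docker login --username AWS --password-stdin {registry}")
print(f"docker tag my-airflow:latest {repo['repositoryUri']}:latest")
print(f"docker push {repo['repositoryUri']}:latest")
```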
Related
I have more than 30 production Windows servers across all AWS regions. I would like to connect to all servers from one base bastion host. Can anyone please let me know which is the best choice? How can I set up one bastion host to communicate with all servers, which are in different regions and different VPCs? Any advice would be appreciated.
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or apply patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager, respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with the highest security principles built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
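For illustration, a hedged boto3 sketch of the Run Command idea; the tag, the regions and the command are hypothetical, and the instances must already have the SSM agent registered with an instance profile that allows SSM.

```python
# Hedged sketch only: run a PowerShell command on many Windows instances
# through SSM Run Command instead of RDP-ing via a bastion. The tag, the
# regions and the command are hypothetical.
import boto3

for region in ("us-east-1", "eu-west-1"):
    ssm = boto3.client("ssm", region_name=region)
    ssm.send_command(
        Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["Get-Service -Name W3SVC"]},
        Comment="example fleet-wide health check",
    )
```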
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs (across regions), one way to connect them all and access them from your bastion's VPC would be to set up an inter-region Transit Gateway. However, among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise, it will be impossible to arrange the routing tables properly.
Secondly, you need to make sure that access from your bastion is allowed in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all the instances in private subnets. From private subnets you can always reach the internet via NAT Gateways (or NAT instances) and stay protected from unauthorized external access attempts. So if your bastion is in a private subnet, you can use SSM to establish a port-forwarding session from your local machine; that way you get your connection while even the bastion stays secured in a private subnet, for example as sketched below.
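A small sketch of that port-forwarding idea, assuming the AWS CLI and the Session Manager plugin are installed locally; the instance ID is a placeholder, and afterwards you would point your RDP client at localhost:13389.

```python
# Sketch: open an RDP tunnel to a bastion in a private subnet through SSM.
# Requires the AWS CLI plus the Session Manager plugin installed locally;
# the instance ID is a placeholder.
import subprocess

subprocess.run([
    "aws", "ssm", "start-session",
    "--target", "i-0123456789abcdef0",
    "--document-name", "AWS-StartPortForwardingSession",
    "--parameters", '{"portNumber":["3389"],"localPortNumber":["13389"]}',
])
```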
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges on your AWS account. So it would be wise to consider what practical problem you are actually trying to solve (not shared in the question), and then evaluate whether there is an applicable managed service, like SSM, already provided by AWS. In the end, from a security perspective, granting access to all instances from a single bastion might not be best practice: if that bastion is ever compromised for whatever reason, you have basically compromised all of your instances across all regions.
I hope this gives you a slightly better understanding of your potential solution.
I have inherited a web server on AWS running on an EC2 instance,
which sits behind CloudFront.
I want to SSH in, but there is no keypair assigned to the EC2 instance.
The previous dev is not very helpful - all he told me was "use cloudfront".
Looking into CloudFront - I saw nothing that indicated I could SSH in that
way. Did I miss something?
Is it possible for me to access the instance via SSH without a private key,
via CloudFront?
I would appreciate any help
You can't SSH into your instance through CloudFront. If you don't have the private key to SSH with, there are some options you can use:
Try EC2 Instance Connect, which is a web-based SSH client. If it works, it will not ask for a private key.
Try AWS Systems Manager Session Manager, which is also a web-based client. This will work even if the instance was launched without any SSH key pair. You will need to read up on how to set it up, as it requires a special instance role and the ability of the instance to connect to the SSM service.
Use the AWSSupport-ResetAccess SSM Automation document to reset the SSH key for the instance.
Use a recovery instance, as shown in the official AWS video.
The best options to try would be 2 and 1. But depending on how the instance is set up (is it in a private or public subnet, does it have internet access, is it Amazon Linux 2 or some non-standard AMI, what kind of roles it has, etc.), you may need to perform extra steps to make them work.
Options 3 and 4 will require downtime, and making a backup before you attempt them would be a good choice. Options 1 and 2 may work without any downtime, depending on the instance's current setup.
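To make options 1 and 3 a bit more concrete, here is a hedged boto3 sketch; the instance ID, OS user, availability zone and key path are placeholders, and you should verify the ResetAccess parameters against the current document (and take a backup) before running it.

```python
# Hedged sketch of options 1 and 3. The instance ID, OS user, AZ and key file
# are placeholders; verify and back up before running option 3.
import boto3

# Option 1: push a temporary public key with EC2 Instance Connect, then
# run "ssh -i my_key ec2-user@<address>" within the next 60 seconds.
eic = boto3.client("ec2-instance-connect")
eic.send_ssh_public_key(
    InstanceId="i-0123456789abcdef0",
    InstanceOSUser="ec2-user",
    AvailabilityZone="us-east-1a",
    SSHPublicKey=open("my_key.pub").read(),
)

# Option 3: reset SSH access with the AWSSupport-ResetAccess automation
# (the instance is taken offline while it runs, so expect downtime).
ssm = boto3.client("ssm")
ssm.start_automation_execution(
    DocumentName="AWSSupport-ResetAccess",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)
```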
I made and deployed my Django application on AWS Elastic Beanstalk. It has a connection to a Postgres DB in RDS, set up through the Elastic Beanstalk console.
When I click Configuration -> Network in Elastic Beanstalk, I see: "This environment is not part of a VPC."
How can I make it a part of a VPC? Thanks!
NOTE: very new to this ;)
You have to recreate the Elastic Beanstalk environment and pick the VPC during the creation. It is not possible to move an existing environment into a VPC.
But, unless you have access to EC2-Classic, the EC2 servers that were launched are already in a VPC. They are just in the default VPC. As far as Elastic Beanstalk is concerned, though, it seems oblivious to this.
I am not sure if there are any features that are exclusively available to VPC environments. My suggestion is to try to use your current environment, and if you happen to recreate the environment later for some other reason, then you can try picking a VPC and see if it offers anything new.
As already explained by @stefansundin, you can't move an existing EB environment into a custom VPC. You have to create a new one.
These are general steps to consider:
Create a custom VPC with public and private subnets as described in the docs: VPC with public and private subnets (NAT). The NAT is needed for the instances and RDS in the private subnets to communicate with the internet, while no inbound internet traffic is allowed. This ensures that your instances and RDS are not accessible from the outside.
Create the new RDS instance external to EB. This is good practice, as otherwise the lifetime of your RDS is coupled to the EB environment. A starting point is the following AWS documentation: Launching and connecting to an external Amazon RDS instance in a default VPC.
Create the new EB environment and make sure to customize its settings to use the VPC. Pass the RDS endpoint to the EB instances using environment variables (a minimal sketch follows below). Depending on how you want to handle the RDS password, there are a few options, ranging from environment variables (low security) through SSM Parameter Store (free) to AWS Secrets Manager (not free).
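A rough boto3 sketch of that last step; every name, ID, endpoint and the solution stack below is a placeholder rather than anything specific to your application.

```python
# Rough sketch: create the new EB environment inside the custom VPC and hand
# the external RDS endpoint to the app as an environment variable. All values
# here are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="my-django-app",
    EnvironmentName="my-django-app-vpc",
    SolutionStackName="64bit Amazon Linux 2 v3.3.11 running Python 3.8",
    OptionSettings=[
        # Put the environment into the custom VPC, instances in private subnets.
        {"Namespace": "aws:ec2:vpc", "OptionName": "VPCId", "Value": "vpc-0abc1234"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets", "Value": "subnet-0aaa1111,subnet-0bbb2222"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "ELBSubnets", "Value": "subnet-0ccc3333,subnet-0ddd4444"},
        # Hand the external RDS endpoint to the application as an env variable.
        {"Namespace": "aws:elasticbeanstalk:application:environment",
         "OptionName": "RDS_HOSTNAME",
         "Value": "mydb.example123.eu-west-1.rds.amazonaws.com"},
    ],
)
```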
Setting this all up correctly can be difficult for someone new to AWS, but with patience and practice it can be done. Thus, I would recommend starting with the default VPC, as you have now. Then, once you are comfortable working with an external RDS, think about creating the custom VPC as described.
I want to connect to a database running in a different cloud provider; it is exposed publicly.
I need to connect to that database from a SageMaker notebook instance.
But the public IP of the SageMaker notebook instance needs to be whitelisted on the other side.
Is it possible to attach an Elastic IP to a SageMaker notebook instance? I don't see any option to attach an EIP to one.
No, it is not possible to assign a SageMaker notebook an Elastic IP, which is a disappointment. This missing feature makes the SageMaker product a lot more difficult to use with many sources of data, limiting its utility.
Official Amazon Answer
From the AWS SageMaker product forums on Dec 12, 2019: Possible to attach Elastic IP to sagemaker notebook instance?
Question> Is it possible to attach elastic ip to sagemaker notebook instance?
Answer> We are always re-evaluating our backlog of features based on customer requests,
so we appreciate the feedback on this feature.
You might want to start a new thread or chime in on that one if you want them to add this feature.
Possible Solutions
A general strategy for using a particular IP to access a resource is to set up a proxy machine, authorize its IP, and use it as a proxy to access your service. How hard this is depends on what you are doing - for S3 it doesn't seem possible - but for web-based requests this shouldn't be too hard. For AWS services you can use a proxy.
Personally, I am trying to access Algoseek's requester-pays S3 buckets directly from SageMaker notebooks and this isn't possible. I looked at setting up a proxy but couldn't figure out how. Instead I will copy the S3 data into our own S3 bucket each time they add a day.
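For anyone in the same spot, a small boto3 sketch of that copy workaround; the bucket names and key are made up, and for objects over 5 GB you would need a multipart copy instead of copy_object.

```python
# Sketch of the copy workaround. Bucket names and key are made up;
# copy_object only handles objects up to 5 GB.
import boto3

s3 = boto3.client("s3")

s3.copy_object(
    Bucket="my-own-bucket",
    Key="algoseek/2020-01-02/trades.csv.gz",
    CopySource={"Bucket": "algoseek-example-bucket", "Key": "2020-01-02/trades.csv.gz"},
    RequestPayer="requester",  # you, not the bucket owner, pay for the request/transfer
)
```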
In my case, I have whitelisted the NAT Gateway's IP in the external database.
EDIT: This works only for private subnets.
Before moving to Amazon Web Services, I was using Google Cloud Platform to develop my application (Cloud SQL, to be specific), and GCP has something called Cloud SQL Proxy that allows me to connect to my Cloud SQL instance from my computer, instead of having to deploy my code to the server and then test it. How can I do the same thing using AWS?
I have a Python environment on Elastic Beanstalk that uses Amazon RDS.
AWS is deny-by-default, so you cannot access an RDS instance from outside the VPC your application is running in. With that being said, you can connect to the RDS instance via a VPN stood up on EC2 with security group rules open to the RDS instance. This allows you to connect to the VPN from whatever developer machine you like and then access the RDS instance as if your dev box were in the VPC. This is my preferred method because it is more secure: only those with access to the VPN have access to the RDS instance. This has worked well for me in production.
The VPN provider that I use is https://aws.amazon.com/marketplace/pp/OpenVPN-Inc-OpenVPN-Access-Server/B00MI40CAE
Alternatively, you could open up a hole in your VPC to the RDS instance and make it publicly available. I don't recommend this, however, because it will leave your RDS instance open to attack, as it is publicly exposed.
You can expose your AWS RDS instance to the internet with the proper VPC settings; I have done it before.
But it has some risks.
So usually you would use one of these approaches instead:
Create a local database server and restore a snapshot from your AWS RDS instance into it,
or use a VPN to connect to the private subnet which holds your RDS instance.
A couple people have suggested putting your RDS instance in a public subnet, and allowing access from the internet.
This is generally considered to be a bad idea, and should be the last resort.
So you have a couple of options for getting access to RDS in a private subnet.
The first option is to set up networking between your local network and your AWS VPC. You can do this with Direct Connect, or with a point-to-point VPN. But based on your question, this isn't something you feel comfortable with.
The second option is to set up a bastion server in the public subnet, and use ssh port forwarding to get local access to the RDS over the SSH tunnel.
You don't say if you are on Linux or Windows, but this can be accomplished on either OS.
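For example, the tunnel can be as simple as the following; it is wrapped in Python only to match the other sketches here, and the key file, bastion address and RDS endpoint are placeholders. While it runs, point your local client at localhost:5432 as if the database were on your machine.

```python
# Sketch of the bastion SSH tunnel. The plain ssh command inside the list
# also works on its own; key file, bastion address and RDS endpoint are placeholders.
import subprocess

subprocess.run([
    "ssh", "-N",
    "-i", "bastion-key.pem",
    "-L", "5432:mydb.example123.us-east-1.rds.amazonaws.com:5432",
    "ec2-user@bastion.example.com",
])
```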
What I did to solve it was:
Go to the Elastic Beanstalk console
Choose your application
Go to Configuration
Click on the endpoint of your database under Databases
Click on the identifier of your DB instance
Under security group rules, click on the security group
Click on the Inbound tab
Click Edit
Change Type to All Traffic and Source to Anywhere
Save
This way you can expose the RDS instance connected to your Elastic Beanstalk application to the internet, which is not recommended, as people have suggested, but it is what I was looking for.
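For completeness, the equivalent change with boto3, with the caveat repeated: opening the group to 0.0.0.0/0 is not recommended. The group ID is a placeholder, and this sketch at least narrows the rule to the Postgres port rather than All Traffic.

```python
# Same change as the console steps above, scoped to the Postgres port.
# Not recommended for anything beyond throwaway dev use; group ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "temporary dev access"}],
    }],
)
```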