Heroku and Amazon VPC - amazon-web-services

Do Heroku apps run in the default VPC or in a custom VPC? (I assume by now everyone is using VPC and not the older EC2-Classic.)
Does anyone have information about the VPC ID of Heroku (if they are using a custom VPC)?
Earlier they used the AWS account number: 098166147350
as per AWS Forum.
Do they still use the same account?
However, Heroku recommends not using the account ID to control access, as noted in the Heroku Dev Center.
My idea is to restrict access to my service to only the VPC Heroku is using. I also want to add a VPC peering connection from my VPC.
On top of this, I will add other security features to further restrict access only to the relevant apps.

Heroku's Cedar stack currently still runs on EC2-Classic.
The Private Spaces beta allows you to create a VPC and host your apps inside it.

Related

Elastic Beanstalk in VPC

I built and deployed my Django application in AWS Elastic Beanstalk. It has a connection to a Postgres DB in RDS, set up through the Elastic Beanstalk console.
When I click Configuration -> Network in Elastic Beanstalk, I see: "This environment is not part of a VPC."
How can I make it a part of a VPC? Thanks!
NOTE: very new to this ;)
You have to recreate the Elastic Beanstalk environment and pick the VPC during the creation. It is not possible to move an existing environment into a VPC.
But unless you have access to EC2-Classic, the EC2 servers that were launched are already in a VPC. They are just in the default VPC. As far as Elastic Beanstalk is concerned, though, it seems oblivious to this.
I am not sure if there are any features that are exclusively available to VPC environments. My suggestion is to try to use your current environment, and if you happen to recreate the environment later for some other reason, then you can try picking a VPC and see if it offers anything new.
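If you do end up recreating the environment, a minimal sketch with the EB CLI looks like the following (the environment name, VPC ID and subnet IDs are placeholders you would replace with your own):
# Create a new environment inside an existing custom VPC (placeholder IDs)
eb create my-env-in-vpc --vpc.id vpc-0123456789abcdef0 --vpc.ec2subnets subnet-aaa111,subnet-bbb222 --vpc.elbsubnets subnet-ccc333,subnet-ddd444 --vpc.elbpublic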
As already explained by @stefansundin, you can't move an existing Elastic Beanstalk environment into a custom VPC. You have to create a new one.
These are general steps to consider:
Create a custom VPC with public and private subnets as described in the docs: VPC with public and private subnets (NAT). The NAT is needed so that instances and RDS in the private subnets can reach the internet, while no inbound internet traffic is allowed. This ensures that your instances and RDS are not accessible from the outside.
Create the new RDS instance external to EB. This is good practice, as otherwise the lifetime of your RDS instance is coupled to the EB environment. A starting point is the following AWS documentation: Launching and connecting to an external Amazon RDS instance in a default VPC.
Create the new EB environment and make sure to customize its settings to use the VPC. Pass the RDS endpoint to the EB instances using environment variables (a sketch follows below). Depending on how you want to handle the RDS password, there are a few options, ranging from environment variables (low security) through SSM Parameter Store (free) to AWS Secrets Manager (not free).
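For instance, a minimal sketch with the EB CLI and AWS CLI (the endpoint, parameter name and values are hypothetical placeholders):
# Pass the external RDS connection details to the EB instances as environment variables (placeholder values)
eb setenv RDS_HOSTNAME=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com RDS_PORT=5432 RDS_DB_NAME=mydb RDS_USERNAME=myuser
# Instead of storing the password in plain text, the app can read it from SSM Parameter Store at startup
aws ssm get-parameter --name /myapp/db-password --with-decryption --query Parameter.Value --output text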
Setting this all up correctly can be difficult for someone new to AWS, but with patience and practice it can be done. Thus, I would recommend starting with the default VPC, as you have now. Then, once you are comfortable working with an external RDS instance, think about creating the custom VPC as described.

Can EC2 instances from one account access EC2 instances in another account?

We are currently working with two environments/accounts:
dev and staging.
We are planning to spin up a new instance to install Jenkins for CI/CD in the dev environment.
We are also wondering if we can use that same instance in dev as the CI/CD server for the staging account as well.
How will access work?
How can the CI/CD instance access the instances in staging for CI/CD?
Do we need to set up a cross-account role for this that allows the dev CI/CD instance to access the staging instances?
Or is the private key enough to access an EC2 instance irrespective of the account?
You can definitely enable this. Take a look at VPC peering.
This feature enables two VPCs, whether in different accounts or different regions, to connect to each other; their networks become linked via a tunnel between them.
When you implement this, the following factors are important:
The CIDR ranges of the two VPCs must not overlap.
The VPC peering connection must be added to the route table(s) in both VPCs so that each VPC knows how to reach the other.
You will need to whitelist the peer network in your security groups to allow access for the instances that need to connect.
By doing this you also benefit from the network connections traversing the AWS backbone rather than the public internet, which improves both security and performance.
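A rough sketch with the AWS CLI (all VPC, account, route table and security group IDs below are placeholders, and the CIDRs are examples):
# In the dev account: request a peering connection to the staging account's VPC
aws ec2 create-vpc-peering-connection --vpc-id vpc-dev1111 --peer-vpc-id vpc-stg2222 --peer-owner-id 222222222222
# In the staging account: accept the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678
# In both accounts: route the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-dev1111 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-12345678
# In the staging security group: allow SSH from the dev VPC's CIDR
aws ec2 authorize-security-group-ingress --group-id sg-stg2222 --protocol tcp --port 22 --cidr 10.0.0.0/16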

Why do hostnames of connections to Aurora Serverless come from outside the VPC?

I have a PHP website running in Elastic Beanstalk behind a load balancer.
The website connects to a MySQL-compatible database running as Aurora Serverless.
Both the Elastic Beanstalk instance and Aurora are set up in the same VPC.
The VPC CIDR is 10.10.0.0/24
The elastic beanstalk instance has local IP 10.10.0.18
The serverless Aurora cluster is using VPC endpoints in the two subnets of the VPC and their IP addresses are 10.10.0.30 and 10.10.0.75.
Even though Aurora Serverless is limited to only accepting connections from within the VPC, out of habit I have still only granted my user permission if they are coming from the VPC.
So, for instance, I have granted permissions to 'user'@'10.10.0.%'.
When my website tries to connect to the database, however, it gets permission denied, because the host it connects from is not in the 10.10.0.0/24 subnet and so no matching user was granted permission.
Here are some of the errors that I am getting:
Access denied for user 'user'@'10.1.17.79' (using password: YES)
Access denied for user 'user'@'10.1.18.17' (using password: YES)
Access denied for user 'user'@'10.1.19.1' (using password: YES)
Access denied for user 'user'@'10.1.19.177' (using password: YES)
As you can see, none of those hosts are within my VPC.
Is this because the cluster is running in its own VPC, linked to mine via the private links?
And if so, is my only option to use % as the host for the users I grant privileges to?
Personally I would like to have locked it down to only my VPC just in case Serverless Aurora opens up for connections from the internet in the future.
Don't whitelist specific IP addresses like that for RDS. Especially with Aurora Serverless, where the node IPs can change at a moment's notice as it scales, you will find there is no way to know the true IP address of the node.
Remember that all the RDS database services technically run within an AWS-managed VPC; you do, however, get an ENI attached to your VPC that allows you to connect to the instance. This lets you communicate as if the resource were actually created within your VPC.
The best way to enhance security is through security groups and NACLs, combined with TLS in transit and encryption at rest. Finally, ensure your passwords are strong and rotated frequently.
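For example (a minimal sketch; the security group IDs are placeholders), you can restrict the MySQL port on the cluster's security group to the security group of your Beanstalk instances instead of to IP ranges:
# Allow MySQL/Aurora traffic only from instances carrying the application's security group (placeholder IDs)
aws ec2 authorize-security-group-ingress --group-id sg-aurora111 --protocol tcp --port 3306 --source-group sg-beanstalk222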
The RDS Best Practices should help you to dive into other practices you can follow to enhance your security.

How to develop an AWS Web App that uses AWS RDS locally?

Before moving to Amazon Web Services, I was using Google Cloud Platform to develop my application (Cloud SQL, to be specific), and GCP has something called Cloud SQL Proxy that allows me to connect to my Cloud SQL instance from my computer, instead of having to deploy my code to the server and then test it. How can I do the same thing with AWS?
I have a Python environment on Elastic Beanstalk that uses Amazon RDS.
AWS is deny by default, so you cannot access an RDS instance from outside of the VPC that your application is running in. With that being said, you can connect to the RDS instance via a VPN that can be stood up on EC2 with rules open to the RDS instance. This allows you to connect to the VPN from whatever developer machine you use and then access the RDS instance as if your dev box were in the VPC. This is my preferred method because it is more secure: only those with access to the VPN have access to the RDS instance. This has worked well for me in a production sense.
The VPN provider that I use is https://aws.amazon.com/marketplace/pp/OpenVPN-Inc-OpenVPN-Access-Server/B00MI40CAE
Alternatively you could open up a hole in your VPC to the RDS instance and make it publicly available. I don't recommend this however because it will leave your RDS instance open to attack as it is publicly exposed.
You can expose your AWS RDS instance to the internet with the proper VPC settings; I have done it before.
But it has some risks.
So usually you can use one of these approaches instead:
Create a local database server and restore a snapshot of your AWS RDS instance into it.
Or use a VPN to connect to the private subnet that holds your RDS instance.
A couple people have suggested putting your RDS instance in a public subnet, and allowing access from the internet.
This is generally considered to be a bad idea, and should be the last resort.
So you have a couple of options for getting access to an RDS instance in a private subnet.
The first option is to set up networking between your local network and your AWS VPC. You can do this with Direct Connect or with a point-to-point VPN. But based on your question, this isn't something you feel comfortable with.
The second option is to set up a bastion server in the public subnet and use SSH port forwarding to get local access to the RDS instance over the SSH tunnel.
You don't say if you are on Linux or Windows, but this can be accomplished on either OS.
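A minimal sketch of the tunnel approach (the key file, bastion address and RDS endpoint below are placeholders):
# Forward local port 3306 to the RDS endpoint through the bastion host
ssh -i mykey.pem -N -L 3306:mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion.example.com
# In another terminal, connect as if the database were running locally
mysql -h 127.0.0.1 -P 3306 -u myuser -p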
What I did to solve this was:
Go to the Elastic Beanstalk console
Choose your application
Go to Configuration
Click on the endpoint of your database under Databases
Click on the identifier of your DB instance
In the security group rules, click on the security group
Click on the Inbound tab
Click Edit
Change the type to All Traffic and the source to Anywhere
Save
This way you expose the RDS instance connected to your Elastic Beanstalk application to the internet, which is not recommended, as people suggested, but it is what I was looking for.

Access an RDS instance created in Elastic Beanstalk

When you set up a new Elastic Beanstalk cluster you can access your EC2 instance by doing this:
eb ssh
However, it's not clear how to access the RDS instance.
How do you access an RDS in an Elastic Beanstalk context in order to perform CRUD operations?
The RDS instance can be accessed from anywhere on the command line by adjusting the RDS security group.
Check your AWS VPC configuration.
The security group will need to be adjusted to allow you to connect from a new source/port.
Find the security group ID for the RDS instance.
Find that group in AWS Console > VPC > Security Groups.
Adjust the Inbound and Outbound Rules accordingly.
You need to allow access to/from the IP or security group that needs to connect to the RDS instance.
FROM: https://stackoverflow.com/a/37200075/1589379
After that, all that remains is configuring whatever local DB tool you would like to use to operate on the database.
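As a hedged example of that adjustment (the security group ID and workstation IP are placeholders), you could allow only your own public IP to reach the MySQL port:
# Allow only your workstation's public IP to reach the database port (placeholder values)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 203.0.113.10/32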
EDIT:
Of additional note: if the Elastic Beanstalk environment is configured to use RDS, the EC2 instances will have environment variables set with the information needed to connect to the RDS instance.
This means that you can import those variables into any code that needs access.
Custom environment variables may also be set in the Elastic Beanstalk environment configuration, and these too may be read the same way.
PHP
// Elastic Beanstalk provides RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME and RDS_PASSWORD
$db = new mysqli(getenv('RDS_HOSTNAME'), getenv('RDS_USERNAME'), getenv('RDS_PASSWORD'), getenv('RDS_DB_NAME'), (int) getenv('RDS_PORT'));
Linux CommandLine
mysql --host=$RDS_HOSTNAME --port=$RDS_PORT -u $RDS_USERNAME -p$RDS_PASSWORD
RDS is a managed database service, which means you can only access it through database calls; you don't get shell access to the underlying host.
If it is a MySQL database, you can access it from your EC2 instance with the mysql client like this:
mysql -h rds.instance.endpoint.region.rds.amazonaws.com -u user -p
or configure your app with the connection settings it needs.
Make sure that you set up security groups correctly so that your EC2/other service has access to your RDS instance.
Update:
If you want that kind of direct access, you could instead use an EC2 instance with a MySQL server on it. It would cost the same (even though a fraction of the performance is lost in comparison), and you can also stop an EC2 instance when you are not using it.