Heroku + Amazon EC2 security groups - Django

So I have read many things about connecting a Heroku Django app to my Amazon EC2 instance (which will serve as an external API for my site), and most of the solutions were either not free or no longer advised.
I have 2 problems.
1 - Basically my problem is related to Amazon's security groups, where I need to add a static IP to allow external connections into Amazon. The problem is that Heroku's dynos have dynamic IPs, and making them static requires a NON-FREE add-on, something like Proximo for example.
I was looking for a free solution. Any ideas on what I can do to achieve that?
2 - Amazon EC2 instances also have dynamic IPs. What's the best (free) solution for getting a static IP, so I can give it to my Heroku app and tell it to always connect to that API sitting on Amazon EC2?
Thanks in advance

There is no way to reliably have static IPs for your dynos. In this case you'll have to do without firewall-level security (meaning, open up your relevant ports to 0.0.0.0/0) and make sure that you have proper authentication mechanisms for your API, such that no malicious user can gain access to your data. Most web frameworks support this out of the box, or have modules for API authentication (OAuth comes to mind).
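For illustration, a minimal boto3 sketch of that security-group change; the group ID is a hypothetical placeholder, and you'd use whatever port your API actually listens on:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Open the API port to the world, since the dynos' IPs cannot be pinned
# down; authentication must then be enforced at the application layer.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "0.0.0.0/0",
            "Description": "Heroku dynos (dynamic IPs); auth handled by the app",
        }],
    }],
)
```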
Amazon EC2 Elastic IPs are static. You can request a new Elastic IP and attach it to a running instance. Unless explicitly detached and released, it will stay with the instance for as long as the instance is alive. (Note that a plain EC2 public IP changes when the instance is stopped and started; an Elastic IP does not.)
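A rough boto3 sketch of that, with a hypothetical instance ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new Elastic IP in the VPC scope...
allocation = ec2.allocate_address(Domain="vpc")

# ...and attach it to the running instance (hypothetical instance ID).
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)

print("Static IP for the Heroku app to target:", allocation["PublicIp"])
```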

Related

External requests in Cloud Run project

Currently, my Cloud Run projects that make external requests go out with a random IP from Google's IP pool.
I am developing a new microservice that needs to make requests to a critical external microservice that restricts access by IP.
Does Google Cloud Platform have any solution to channel outbound traffic through a specific IP? Some kind of proxy for this kind of need?
Thanks
As clarified in this other case here, there is no way to directly set up a static or specific IP for outbound requests from Cloud Run. As clarified in this answer from a Google developer, unless Cloud Run starts supporting Cloud NAT or Serverless VPC Access, you won't be able to achieve such a configuration.
There are some workarounds.
One of them would be to create a SOCKS proxy by running an SSH client that routes the traffic through a GCE VM instance that has a static external IP address. More details here.
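As a rough illustration of consuming such a tunnel from Python (the tunnel, the local port, and the target URL are all hypothetical assumptions here; requests needs its SOCKS extra, installed with pip install "requests[socks]"):

```python
import requests

# Hypothetical setup: an SSH SOCKS tunnel to a GCE VM that has a static
# external IP, started separately with e.g.: ssh -N -D 1080 user@vm-host
proxies = {
    "http": "socks5h://localhost:1080",
    "https": "socks5h://localhost:1080",
}

# The IP-restricted service now sees the VM's static IP, not Cloud Run's.
resp = requests.get("https://api.example.com/endpoint", proxies=proxies, timeout=10)
print(resp.status_code)
```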
Another solution is to send your outbound requests through a proxy that has a static IP. You can get details here.
Both of these workarounds were provided by Google developers, so they should be good to go.

SSL Install on AWS

I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance, but the client no longer has contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I also see the Certificate Manager section in AWS, but don't currently see an SSL certificate in there. Will installing it there attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, this requires some downtime, and must not be run on an instance-store-backed instance. If you're new to AWS you might not be able to determine whether that is the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
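A rough boto3 sketch of that setup, assuming an Application Load Balancer (the elbv2 API); the domain and ARNs below are hypothetical placeholders:

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Request a certificate for the site; ACM asks you to prove ownership
# through a DNS record before issuing it. The domain is hypothetical.
cert = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)

# Once the certificate is ISSUED, attach it to an HTTPS listener on the
# load balancer (both ARNs are hypothetical placeholders).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-lb/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app/1234567890abcdef",
    }],
)
```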
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to make a snapshot of the live instance and attach it to a different instance to learn how it's configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
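A sketch of that snapshot-and-attach flow in boto3, with hypothetical volume and instance IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the live instance's root EBS volume (hypothetical volume ID).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Inspect legacy web server configuration",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a volume from the snapshot and attach it to a throwaway test
# instance as a secondary disk, where you can mount and inspect it.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0fedcba9876543210",  # hypothetical test instance
    Device="/dev/sdf",
)
```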
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.

How to set up Tomcat session state in AWS EC2 for failover and security

I am setting up a Tomcat application in EC2. For reliability, I am running two or more instances. If one server goes down, my users should be redirected to the other instance. This suggests that session state should be kept in an external source, or mirrored between the servers.
AWS offers a hosted service, ElastiCache, which seems like it would work well. I even found a nice library, memcached-session-manager. However, I soon ran into some issues.
Unless someone can convince me otherwise, I need the session states to be encrypted in transit. Otherwise someone could intercept the network traffic and pretend to be someone else on my site. I don't see any built-in Amazon method to keep traffic off the internet. (Is peering available here?)
The library mentioned earlier does have Redis support with SSL, but it does not support a Redis cluster. Someone put in a pull request for this but it has not been incorporated and this library is a complex build. I may talk myself into living without the cluster, but that puts us back at a single point of failure.
Tomcat is running on EC2 in your VPC, and ElastiCache is in your VPC. Your AWS VPC is an isolated network; traffic between the EC2 and ElastiCache servers never traverses the public internet, so nobody can intercept it unless your VPC network itself becomes compromised in some way.
If you want to use Redis instead, with SSL connections, then I believe at this time you would need a Tomcat Session Manager implementation that uses Jedis. This one uses Jedis, but you would need to upgrade the version of Jedis it uses in order to use SSL connections.
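Purely as an illustration of what an encrypted Redis connection looks like (shown with Python's redis-py rather than the Jedis/Tomcat path discussed above; the endpoint is a hypothetical placeholder):

```python
import redis

# Connect to an in-transit-encrypted ElastiCache Redis endpoint over TLS.
# Hostname and port are hypothetical placeholders.
r = redis.Redis(
    host="my-sessions.abc123.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,
    ssl_cert_reqs="required",
)
r.set("session:abc", "serialized-session-state")
print(r.get("session:abc"))
```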

Sails app with multiple instances on AWS - Redis/Elasticache/ALB

I'm building a Sails app that uses socket.io, and I see that Sails offers a method for using multiple servers via Redis:
http://sailsjs.org/documentation/concepts/realtime/multi-server-environments
Since I will be placing the app on AWS, preferably behind an ELB (Elastic Load Balancer) with an Auto Scaling group of multiple EC2 instances, I was wondering how I can handle this so it doesn't need a separate Redis instance.
Maybe we can use AWS ElastiCache? If so, how would this be done?
Now that AWS has released the new ALB (Application Load Balancer), which supports WebSockets, could this be used to help simplify things?
Thanks in advance
Update - use cases in the application:
- Allow end users to update data dynamically from their own dashboard, and display analytics/stats in real time to an administrator.
- Application statuses change based on specific timings, e.g. at a given start date/time the app allows users to update data.
Regarding your first question, you don't want to run Redis on the same servers that Sails is running on, especially if you are using Auto Scaling. The Redis server needs to be a separate server that won't disappear if your environment experiences a "scale-in" event. So Redis is going to have to be on a separate "server" somewhere.
ElastiCache is just separate EC2 instances running Redis, where AWS handles most of the management for you, to the point that you can't even SSH into the instance. It's similar to how RDS works. ElastiCache will certainly work for your scenario. You might also want to look at the third-party service RedisLabs, which also manages Redis instances on AWS for you.
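A minimal boto3 sketch of standing up a single-node ElastiCache Redis cluster and reading back its endpoint (names and node type are hypothetical assumptions):

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Create a single-node Redis cluster for the socket.io pub/sub state
# (hypothetical ID; choose a node type sized for your traffic).
elasticache.create_cache_cluster(
    CacheClusterId="sails-realtime",
    Engine="redis",
    CacheNodeType="cache.t2.micro",
    NumCacheNodes=1,
)

# Wait until the cluster is available, then read its endpoint to plug
# into Sails' Redis adapter configuration.
elasticache.get_waiter("cache_cluster_available").wait(CacheClusterId="sails-realtime")
desc = elasticache.describe_cache_clusters(
    CacheClusterId="sails-realtime",
    ShowCacheNodeInfo=True,
)
node = desc["CacheClusters"][0]["CacheNodes"][0]
print(node["Endpoint"]["Address"], node["Endpoint"]["Port"])
```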
Regarding your second question, an Application Load Balancer will have no bearing on your Redis usage. It will, however, bring actual support for WebSockets, which it sounds like you are using. So yes, you should be using an ALB instead of an ELB.

Amazon EC2: load balancing / way to sync files / EC2 + CF

As I understand it, I can use EC2 as a web server for my application. But how does the load balancer work?
For example, if I have one EC2 instance, the load balancer will not work. Am I right?
If I have a few EC2 instances, I can configure the load balancer to balance between all my EC2 servers.
Am I right?
Should application files be synced across all instances? Is there any Amazon tool for syncing, or should I use something like rsync or post-commit hooks to sync files between EC2 instances?
Is it possible to use one EC2 instance both for the web application (PHP + nginx) and for the CDN (CloudFront)? Or what is the better way to achieve this: I need to store static files, but I also need to access them from the web application (PHP scripts) through the file system. So I am going to use EC2 and CloudFront. But how can I get access?
Thanks for your time.
Technically, the load balancer will work; it's just that it'll only balance the traffic to one instance.
Correct. You register the instances with the Elastic Load Balancer, and whilst those instances are healthy, it will route traffic to them.
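A minimal boto3 sketch of that registration step, using the classic ELB API the question mentions (names and IDs are hypothetical):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

# Register instances with the load balancer; traffic is only routed to
# instances that pass the health checks.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-web-elb",
    Instances=[
        {"InstanceId": "i-0123456789abcdef0"},
        {"InstanceId": "i-0fedcba9876543210"},
    ],
)
```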
There are many different ways to sync files - it all depends on what you want to sync. Cloud architecture is a little different from traditional architecture. For example, rather than loading the images onto the EBS volume, you'd try to offload them to (and serve them from) S3. Then the only things you'd need to "sync" would be the web server files themselves. You could use CloudFormation to roll out updates; post-commit hooks and rsync are also good options. The challenge is to remember that the environment can scale or fail almost at will - so you need to ensure that each instance knows how to get the information and keep itself updated in isolation.
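As a small illustration of the S3 offload idea (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Upload a static asset once to S3 instead of keeping a copy on every
# instance's EBS volume; each instance then serves the same S3 (or
# CloudFront) URL, so there is nothing left to sync.
s3.upload_file("static/img/logo.png", "my-site-assets", "img/logo.png")
print("https://my-site-assets.s3.amazonaws.com/img/logo.png")
```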
Yes. It's called a custom origin. What you want to do, though, is put a URL rewrite on the origin server that rewrites the local URLs to CloudFront domains.
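A toy Python sketch of that rewrite idea (in a real PHP + nginx stack this would live in the template layer or the web server config; the CloudFront domain is hypothetical):

```python
# Hypothetical CloudFront distribution domain.
CLOUDFRONT_DOMAIN = "https://d111111abcdef8.cloudfront.net"

def rewrite_static_urls(html: str) -> str:
    """Point local static asset URLs at the CloudFront distribution."""
    return html.replace('src="/static/', 'src="' + CLOUDFRONT_DOMAIN + '/static/')

print(rewrite_static_urls('<img src="/static/logo.png">'))
```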
Hope that helps