How to set up an EC2 Security Group to allow working with Firebase?

I am preparing a system of EC2 workers on AWS that use Firebase as a queue of tasks they should work on.
My node.js app that reads the queue and works on tasks is done and working, and I would now like to properly set up a firewall (an EC2 Security Group) that allows my machines to connect only to my Firebase.
Each rule of that Security Group contains:
protocol
port range
and destination (IP address with mask, so it supports whole subnets)
My question is: how can I set up this rule for Firebase? I suppose that the IP address of my Firebase is dynamic (it resolves to different IPs from different instances). Is there a list of possible addresses, or how would you address this issue? Could some kind of proxy be a solution that would not slow down my Firebase drastically?

Since using node to interact with Firebase is outbound traffic, the default security group should work fine (you don't need to allow any inbound traffic).
If you want to lock it down further for whatever reason, it's a bit tricky. As you noticed, there are a bunch of IP addresses serving Firebase. You could get a list of them all with "dig -t A firebaseio.com" and add all of them to your Security Group rules. That would work for today, but new servers could be added next week and you'd be broken. To be a bit more general, you could perhaps allow all of 75.126.*.*, but that is probably overly permissive and could still break if new Firebase servers were added in a different data center.
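A minimal sketch of that approach (the security group ID and CIDR below are placeholders, and the resolved addresses will change over time):

    # Resolve the current set of A records for firebaseio.com
    dig -t A +short firebaseio.com

    # Allow outbound HTTPS to one of the resolved ranges
    # (sg-0123456789abcdef0 and the CIDR are placeholders)
    aws ec2 authorize-security-group-egress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 \
        --cidr 75.126.0.0/16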
FWIW, I wouldn't worry about it. Blocking inbound traffic is generally much more important than outbound (since to generate outbound traffic, an attacker must already have managed to somehow run software on the box).

Related

Fixed IP address for service behind AWS Application Load Balancer

Our company just moved to a new office and therefore also got new network equipment. As it turns out, our new firewall does not allow pushing routes over VPN for hostnames whose IP addresses it first has to look up.
As we all know, AWS does not offer static IP addresses for its Application Load Balancer.
So our idea was to simply put a Network Load Balancer in front of the Application Load Balancer. There is a pretty hacky way described by AWS itself (https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/) that seemed to work fine, even if I don't really like the approach with the Lambda script registering and deregistering targets.
So here is our problem: as it turns out, the Application Load Balancer only gets to see the Network Load Balancer's IP address. This prevents us from using security groups for IP whitelisting, which we do quite heavily. On top of that, some of our applications (Nginx/PHP based) also do IP address verification, and the ALB used to pass the client's IP address in the X-Forwarded-For header. Now our applications only see the one from the NLB.
We know of the possibility of using Global Accelerator, but that is a heavy investment, as we don't really need what GA is trying to solve.
So how did you guys solve this problem?
Thankful for any help :)
Greetings
You could get the list of AWS IP addresses for the region your ALB is located in, and allow them in your firewall. They do publish the list, and you can filter through it: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
I haven't done this myself, and I'm unsure whether the addresses for ALBs are included under the EC2 category or whether you would take the whole "AMAZON" service list to be safe.
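A rough sketch of the filtering step (the region and service values below are just examples to adjust):

    # Pull the published ranges and keep only EC2 prefixes for one region
    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
      | jq -r '.prefixes[]
               | select(.region == "eu-central-1" and .service == "EC2")
               | .ip_prefix'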
Can you expand on this? "We know of the possibility to use the global accelerator but that is a heavy investment as we don't really need what the GA is trying to solve."
GA should give you better, more consistent performance, especially if your office is far away from the AWS Region where the ALB is running.

How can I use one SSL certificate to secure multiple, dynamic, EC2 instances?

I am working with an application deployed on AWS EC2. One head instance, www.me.com, provides login & admin such that a user can spawn an instance of the application. A new AWS EC2 instance is started to run the application, then the user's browser is redirected to that instance (with a long URL ending in amazonaws.com), and the application is available until the user closes it (at which point the EC2 instance is stopped).
We now wish to move the application to use SSL. We can get a certificate from some CA, and tie it to *.me.com. The question is, how do we use the same certificate to secure the application instances?
One idea is to use Elastic IP - we reserve N IPs, and tie N sub-domains (foo1.me.com, foo2, ...) to these. Then we start each application instance associated with one of these, and direct the user to the associated sub-domain. Certificate is valid, all is well. I think this works?
Trouble is, the application should scale to thousands of simultaneous users, but may well spend the majority of its time bouncing along around zero users, so we'd pay significant penalty costs for reserving the unused IPs; besides, we might exceed N and have to deny access.
Simpler, perhaps, would be to provide access to the application routed through the head server, www.me.com, using either "a.me.com" or "www.me.com/a". The former, I think, doesn't work, because it would need the DNS records to be updated to be useful to the user, which cannot happen quickly enough to be offered to a user on the fly. The latter might work, but I don't know web infrastructure well enough to imagine how to engineer it, even if we were only serving port 80; and in fact we are also providing services on other ports. So we'd need something like:
www.me.com/a:80 <--> foo.amazonaws.com:80
www.me.com/a:8001 <--> foo.amazonaws.com:8001
www.me.com/a:8002 <--> foo.amazonaws.com:8002
...
It seems to me there are two options: either a head server that handles all traffic (under me.com, and thus under the certificate) and somehow hands it off to the application instances, or some method of allowing users to connect directly to the application instances, but in such a way that we can reasonably manage securing those connections using one (or a small number of) certificates.
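For concreteness, the head-server option could be a reverse proxy that terminates TLS under the *.me.com certificate and forwards by path. A minimal Nginx sketch, in which the backend hostname, ports, and certificate paths are all placeholders:

    server {
        listen 443 ssl;
        server_name www.me.com;

        # certificate covering *.me.com (paths are placeholders)
        ssl_certificate     /etc/ssl/me.com.crt;
        ssl_certificate_key /etc/ssl/me.com.key;

        # forward /a/ to one application instance (placeholder hostname);
        # repeat a listen/location block per extra port (8001, 8002, ...)
        location /a/ {
            proxy_pass http://foo.amazonaws.com:80/;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }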
Can anyone suggest the right way to do this? I'm assuming it's not Route 53, since, again, that's a DNS thing, with DNS lag. Unless I've misunderstood.
Thanks.

Several firewalls when creating Google Cloud Platform instance. Which to use?

I created a Google Cloud Platform Ubuntu 16.04 instance. It seems GCP has several places where traffic can be filtered:
The Instances section of the GCP console lets me allow or disallow HTTP and HTTPS traffic.
In the Networking section I can create additional firewall rules which limit access to the network.
Finally, in the Ubuntu instance itself I can configure UFW to block/allow certain ports.
Should I configure all of these? Would it be better to just configure one and allow all in the others?
As a note, this instance will serve a website, so I would only allow HTTP/HTTPS traffic.
The complete answer is that it depends.
For number one, the only thing that happens is that the default-allow-http rule gets applied to that instance (the checkbox just adds the http-server network tag that the rule targets).
The Networking section is where you define your own rules that you want applied to instances. It becomes easier to maintain all your networking configs in Google Cloud once you start having multiple instances and load balancers. You can apply a single rule to several machines, and you can compose rules.
Finally, I would use ufw/iptables only as a last-resort config. For example, if I had some machines behind a load balancer and one of them was doing something weird, I would SSH into it, block port 80, and investigate.
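As a sketch of the second approach (the rule name and target tag below are placeholders):

    # Allow HTTP/HTTPS from anywhere to instances tagged "web"
    gcloud compute firewall-rules create allow-web \
        --allow tcp:80,tcp:443 \
        --target-tags web \
        --source-ranges 0.0.0.0/0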

Using Redis behind an AWS load balancer

We're using Redis to collect events from our web application (pub/sub based) behind AWS ELB.
We're looking for a solution that will allow us to scale up, with high availability across the different servers. We do not wish to have these two servers in a Redis cluster; our plan is to monitor them using CloudWatch and switch between them if necessary.
We tried a simple test of placing two Redis servers behind the ELB, telnetting to the ELB DNS name, and watching what happens using 'redis-cli monitor', but we don't see anything (when trying the same without the ELB, it works fine).
Any suggestions?
Thanks
I came across this while looking for a similar question, but disagree with the accepted answer. Even though this is pretty old, hopefully it will help someone in the future.
It's more appropriate for your question to use DNS failover with a Redis replication auto-failover configuration. DNS failover provides groups of availability (if you need that level of scale), and the replication group provides cache uptime.
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html
Active-passive failover should provide the high availability you're looking for:
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
After you set up the DNS, you would point it at the ElastiCache Redis failover group's URL and add multiple groups for higher availability during a failover operation.
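A rough sketch of the primary record in that setup (the zone ID, record name, endpoint, and health check ID are all placeholders; a matching SECONDARY record would point at the standby group):

    # Create/update the PRIMARY failover record pointing at the
    # ElastiCache primary endpoint (all identifiers are placeholders)
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z123EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "redis.example.com",
            "Type": "CNAME",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 30,
            "HealthCheckId": "<health-check-id>",
            "ResourceRecords": [{"Value": "my-group.xxxxxx.ng.0001.use1.cache.amazonaws.com"}]
          }
        }]
      }'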
However, you might need to set up your application to write and read from different endpoints to maximize the architecture's scalability.
Sources:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Replication.html
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
Placing a pair of independent Redis nodes behind a load balancer will likely not be what you want. What will happen is that the ELB will try to balance connections across the instances, sending half to one and half to the other. This means that commands issued over one connection may not be seen over another, and no data is shared. So client A could publish a message, and client B, being subscribed to the other server, won't see the message.
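You can see the split directly (the hostnames below are placeholders for the two independent nodes):

    # Subscribe on node A...
    redis-cli -h redis-a.internal SUBSCRIBE events

    # ...then, from another shell, publish on node B; the subscriber
    # above never sees the message because the nodes share no data
    redis-cli -h redis-b.internal PUBLISH events "hello"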
For pub/sub behind an ELB you have a secondary problem: the ELB will close idle connections. So if you subscribe to a channel that isn't busy, the ELB will drop your connection. The idle timeout defaults to 60s, meaning that unless a message is published at least every minute (or you raise the timeout), your clients will be disconnected.
How much of a problem that is depends on your client library, and frankly, in my experience most don't handle it well, in that they are unaware of the need to re-subscribe upon re-establishing the connection, meaning you would have to code that yourself.
That said, a Sentinel + Redis solution would be quite ideal if your client has proper Sentinel support. In this scenario, your client asks the Sentinels for the master to talk to, and on a connection failure it repeats this process. This would handle the setup you describe without the problems of being behind an ELB.
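A quick sketch of that discovery step (the Sentinel hostname and the master group name "mymaster" are placeholders):

    # Ask a Sentinel which node is currently the master for "mymaster"
    redis-cli -h sentinel.internal -p 26379 SENTINEL get-master-addr-by-name mymaster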
Assuming you are running in VPC:
did you register the EC2 instances with the ELB?
did you add the correct security group setting to the ELB (allowing inbound traffic on Redis's port, 6379)?
did you add an ELB listener that maps port 6379 on the ELB to port 6379 on the instances?
did you set sensible ELB health checks (e.g. TCP on port 6379) so that the ELB thinks the EC2 instances are healthy?
If the ELB thinks the servers behind it are not healthy then ELB will not send them any traffic.
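A quick way to check that last point (the load balancer name below is a placeholder):

    # List registered instances and their health as seen by a classic ELB
    aws elb describe-instance-health --load-balancer-name my-redis-elb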

Amazon Elastic IP issues

I've read a lot of questions already posted on this topic but none seem to provide an answer that helps, so forgive me for the duplicate post if I missed one...
I set up an Elastic Beanstalk single-instance application. I then ensured that the EC2 instance it spawned had a security group allowing incoming requests on port 80. I then created an Elastic IP and associated it with the EC2 instance, but neither the public DNS name nor the Elastic IP will respond to HTTP requests.
Any ideas why this might be an issue for me?
In my case the problem was that, even though I'd associated my Elastic IP with my instance and created firewall rules in new security groups to provide access, I hadn't associated the new security groups with my instance. To fix this, I used the Change Security Groups menu on the Instances screen.
In the popup that appeared, sure enough, my new security groups existed but weren't associated with my instance.
After I checked the appropriate boxes and clicked Assign Security Groups, all was well.
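The same fix from the CLI would look roughly like this (the instance and group IDs are placeholders; note that --groups replaces the instance's entire group list):

    aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --groups sg-01234567 sg-89abcdef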
In a classic-EC2 scenario:
Make sure port 80 is allowed in your AWS security group.
Make sure port 80 is allowed in the local operating-system firewall on your system, or disable the local firewall for the time being to narrow down the issue.
Make sure that your application is indeed listening on port 80. You can check this by running telnet 127.0.0.1 80.
If the above 3 points are satisfied, I don't see a reason why you would not be able to access your application on port 80.
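A quick way to verify the third point from the instance itself:

    # Confirm something is listening on port 80
    sudo ss -tlnp | grep ':80'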
Let us know in case you are using VPC and not classic-EC2.
BTW, when you attach an Elastic IP, the instance drops the public DNS name it had earlier, so from then on you should work with the Elastic IP only.
I have had a case where the Elastic IP address itself was not responding on a specific port number. When I associated the instance with a different Elastic IP, everything worked fine, so I resolved the issue by allocating a new Elastic IP address. Root cause: Amazon evidently does not have an effective internal process for validating the integrity of an Elastic IP. Obviously that's a tall order, considering the things outside their control that can happen, such as denial-of-service attacks.
It cost me a day of doing progressive isolation to get to this, which I would have never otherwise suspected.
Any chance there is also a firewall running on the machine? I know on Windows I usually need to open the port in the Windows firewall AND in Amazon's security group.