Why doesn't GCP's "Memorystore for Redis" offer an option to add a Public IP? - google-cloud-platform

Currently, when creating a "Memorystore for Redis" instance in GCP, there is no option to add a Public IP.
This poses a problem: I am unable to connect to it from a Compute Engine instance in an external network, since the Redis instance lives in another network.
Why is this option missing?

Redis is designed to be accessed by trusted clients inside trusted
environments. This means that usually it is not a good idea to expose
the Redis instance directly to the internet or, in general, to an
environment where untrusted clients can directly access the Redis TCP
port or UNIX socket.
Redis Security

I think this is a design decision, but in general it's not something we can know for certain, since we are not part of the product team, so I don't think this question can be definitively answered on SO.
According to this Issue Tracker entry, there are no plans to support this in the near future.
That said, you may want to take a look at this doc, which shows some workarounds for connecting from a network outside the VPC.
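One of the workarounds described there is port forwarding through a small Compute Engine VM that sits in the same VPC as the Redis instance. A minimal sketch, assuming a VM named forwarder-vm in us-central1-a and a Redis private IP of 10.0.0.3 (all placeholders for your own values):

```shell
# Open an SSH tunnel from your external machine: local port 6379 is
# forwarded to the Redis instance's private IP via the VM
gcloud compute ssh forwarder-vm \
  --zone=us-central1-a \
  -- -N -L 6379:10.0.0.3:6379

# In another terminal, talk to Redis through the tunnel
redis-cli -h 127.0.0.1 -p 6379 PING
```

This keeps Redis itself unexposed; only the VM's SSH port faces outward.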

Related

Restrict access to some endpoints on Google Cloud

I have a k8s cluster that runs my app (GCE as an ingress), and I want to restrict access to some endpoints ("/test/*") while keeping all other endpoints publicly available. I don't want to restrict to specific IPs, so that I keep some flexibility and can access the restricted endpoints from any device, such as my phone.
I considered IAP, but it restricts access to the whole service when I need it only for some endpoints, so it's more than I need.
I have thought about a VPN, but I don't understand how to set one up, or whether it would even resolve my issue.
I have heard about proxies, but it seems to me they can't fulfill my requirements (?)
The solution doesn't have to be especially extensible or generic, because only a few people will use this feature.
I want the solution to be light, flexible, and simple while still fulfilling my needs. If the only solutions are complex, I would consider restricting access by IP, but I worry about how viable the restricted-IPs approach is in real life. Would it be too cumbersome to add my phone's IP every time I change location, and so on?
You can use API Gateway for that. It approximately meets your needs, though it isn't especially flexible or simple.
But it's fully managed and can scale with your traffic.
For a more convenient solution, you have to use a software proxy (or API Gateway), or break the bank and use Apigee.
I set up OpenVPN.
It was a somewhat tedious process because of various small obstacles, but I encourage you to do the same.
Get a host (machine, cluster, or whatever) with a static IP.
Set up an OpenVPN instance. I used Docker: https://hub.docker.com/r/kylemanna/openvpn/ (follow the instructions, but set the host with -u YOUR_IP).
Ensure that the VPN setup works from your local machine.
For the routes you need to protect, limit IP access to the VPN's IP. Nginx example:
allow x.x.x.x;
deny all;
Make sure that nginx sees the right client IP. I had an issue where nginx was treating the Load Balancer's IP as the client IP, so I had to mark it as trusted. http://nginx.org/en/docs/http/ngx_http_realip_module.html
Test the setup
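The trusted-LB fix from the last steps can be sketched as an nginx fragment (the LB range and VPN egress IP are placeholders for your own values):

```
# Trust the load balancer so the real client IP is taken from the header
set_real_ip_from 10.128.0.0/16;   # placeholder: your LB's address range
real_ip_header X-Forwarded-For;
real_ip_recursive on;

location /test/ {
    allow 203.0.113.5;   # placeholder: your VPN's egress IP
    deny all;
}
```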

Question about how GCP resources communicate

I've got a few people I know who operate under the assumption that all GCP resources communicate over Google's internal network, even if something is configured to talk to an instance's public IP address (VM, SQL, etc). This seems possible through some complicated NAT management, but I'm not sure it's true.
For example, a web server is set up in one project to use a MySQL database that lives in another project. Without setting up VPC peering, we set the website to use the public IP of the MySQL database, since the private IP isn't available without peering. At this point, is the web server communicating with the MySQL database over the internet? Does this traffic leave Google's network at any point?
I've been unable to find docs that answer this question, but I could be searching the wrong terms. If someone could provide docs, that would be very helpful.
Please let me know if I should clarify further.
Thanks!

Why doesn't a software VPN take advantage of an already existing Direct Connect connection?

The official sample of AWS Advanced Networking Speciality questions contains a question about the most cost-effective connection between your on-premises data centre and AWS that ensures confidentiality and integrity of the data in transit to your VPC (question #7).
The correct answer implies establishing a managed VPN connection between the customer gateway appliance and the virtual private gateway over the Direct Connect connection.
However, one of the possible options in the list of answers offers a software VPN solution ("Set up an IPsec tunnel between your customer gateway and a software VPN on Amazon EC2 in the VPC"). The explanation of why this answer is incorrect says that:
it would not take
advantage of the already existing Direct Connect connection
My question is: why wouldn't this software VPN connection take advantage of the already existing DC connection? What's the principal difference here?
Option 1: The question is flawed.
If you built a tunnel between a customer gateway device and an EC2 instance with traffic routing through the Direct Connect interconnection, then you are quite correct -- that traffic would use the existing Direct Connect connection.
If, on the other hand, you built a tunnel from the customer gateway to an EC2 instance over the Internet, then of course that traffic would not use the Direct Connect route.
There appears to be an implicit assumption that a tunnel between a device on the customer side and an EC2 instance would necessarily traverse the Internet, and that is a flawed assumption.
There are, of course, other reasons why the native solution might be preferable to a hand-rolled one with EC2 (e.g. survival of the complete loss of an AZ, or avoidance of downtime due to eventual instance hardware failures), but that isn't part of the scenario.
Option 2. The answer is wrong for a different reason than the explanation offered.
Having written and reflected on the above, I realized there might be a much simpler explanation: "it would not take advantage of the already existing Direct Connect connection" is simply the wrong justification for rejecting this answer.
It must be rejected on procedural grounds, because of the instruction to Choose 3. Here are the other two correct answers.
A) Set up a VPC with a virtual private gateway.
C) Configure a public virtual interface on your Direct Connect connection.
You don't need either of these things in order to implement a roll-your-own IPsec tunnel between on-premises and EC2 over Direct Connect. A Virtual Private Gateway is the AWS side of an AWS-managed VPN, and a Public Virtual Interface is necessary to make one of those accessible from inside Direct Connect (among other things), but it is not necessary in order to access VMs inside a VPC using private IPs over Direct Connect.
I would suggest that the answer you selected may simply be incorrect because it doesn't belong with the other two, and that the explanation offered not only misses the point entirely but is itself incorrect.

Large organizations connecting to EC2

I work for a rather large organization, and we recently started working on a cloud transition. We are not currently looking into Direct Connect as an option, but we would like to establish connectivity to our EC2 machines.
Of course, as an org we block port 22 and RDP, so our current model is a VPC that we connect to via a VPN. But this model is not scalable, nor is it that convenient (RDP over VPN).
I have gone over several options on this site as well as in the AWS documentation, but I can't find a reasonably scalable one. I need to be able to allow multiple users to access the resources at once while still having secure connections. Thoughts and suggestions are appreciated.
Thanks!

How To Secure Erlang Cluster Behind Private Subnet

I am testing Erlang and have a few questions related to the security of Distribution. (There is a lot of mixed information out there.) These types of questions attract lots of opinions that depend on the situation and on your personal comfort level with the type of data you are dealing with. For the sake of this question, let's assume it is a simple chat server that users can connect to and chat together on.
Example Diagram:
The cluster will be behind a private-subnet VPC with elastic load balancing directing all connections to these nodes (to and from). The load balancer will be the only direct path to these nodes (there would be no way to connect to a node via name@privatesubnet).
My question is the following:
Based on this question and answer: Distributed erlang security how to?
There are two different types of inter-node communication that can take place: either directly connecting nodes using the built-in functionality, or doing everything over a TCP connection with a custom protocol. The first is the easiest, but I believe it comes with a few security issues, and I was wondering, based on the above diagram, whether it would be good enough. (Er, okay, "good enough" is never really good when dealing with sensitive information, but there can always be better ways to do everything...)
How do you secure an Erlang cluster behind a private subnet? I would like to hide the nodes, connect them manually, and of course use cookies on them. Are there any flaws in this approach? And if a custom protocol over TCP would be the best option, what kind of impact does that have on performance? I want to know the potential security flaws. (As I said, there is a lot of mixed information out there on how to do this.)
I would be interested in hearing from people who have used Erlang in this manner!
On AWS, with your EC2 nodes in a private subnet, you are pretty safe from unwanted connections to your nodes. You can verify this by trying to connect (in any way) to the machines running your code: if you're using a private subnet you will be unable to do so because the instances are not even addressable outside the subnet.
Your load-balancer should not be forwarding Erlang node traffic.
You can do a little better than the above with some security-group rules. Configure your nodes to use a fixed range of ports. Then make a security group "erlang" that allows connections to that port range from members of the "erlang" group and denies them otherwise. Finally, assign that security group to all your Erlang-running instances. This prevents instances that don't need to talk to Erlang from being able to do so.
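A sketch of that setup, with the port range, security-group id, node name, and cookie all as placeholder values:

```shell
# Pin the Erlang distribution listener to a known port range on each node
erl -name chat@10.0.1.5 -setcookie "$SECRET_COOKIE" \
    -kernel inet_dist_listen_min 9100 \
    -kernel inet_dist_listen_max 9110

# Allow that range, plus EPMD's port 4369, only from members of the
# same security group (sg-0abc123 is a placeholder id)
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 \
    --protocol tcp --port 9100-9110 --source-group sg-0abc123
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 \
    --protocol tcp --port 4369 --source-group sg-0abc123
```

Pinning the distribution ports is what makes a narrow security-group rule possible; otherwise the listener picks an ephemeral port.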
I think you have a very "classic" setup over there.
You aren't going to connect to the cluster from the Internet ― "outside" the ELB. Assuming the "private" subnet is shared with something else, you can allow only certain IPs (or ranges) to connect via EPMD.
In any case, some machines must be "trusted" to connect via EPMD, and some other(s) can only establish a connection to some other port(s); otherwise, whatever is running your Erlang cluster is useless.
Something to think about: you might want to (and indeed you will have to) connect to the cluster to do some administrative task(s), either from the Internet or from somewhere else. I've seen this done via SSH; Erlang supports that out of the box.
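For example, assuming the nodes run an OTP release built with rebar3/relx (the release name "chat", the bastion host, and the addresses are placeholders), an admin session could look like:

```shell
# SSH to the node's host through a bastion, then attach a shell
# to the running node without restarting it
ssh -J admin@bastion.example.com admin@10.0.1.5
/opt/chat/bin/chat remote_console
```

Exit the remote console with Ctrl+G then q, not q()., since the latter would stop the remote node itself.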
A final word on doing everything over a TCP connection with a custom protocol: please don't. You will end up implementing something on your own that hardly has what Erlang offers and is really awesome at, and in the end you'll have the same constraints.