It is feasible to pull messages from a GCP Pub/Sub subscription over the public Internet by reaching the public GCP Pub/Sub API endpoint.
However, is it feasible to pull messages over GCP Dedicated Interconnect for a more stable network connection? I would like to reduce the load on the proxy we use to reach the public Internet by going through the private Dedicated Interconnect channel.
GCP provides private access options for private routing. One example that suits this use case is a Private Service Connect endpoint.
A private endpoint is deployed in the GCP project. An on-premises host can then reach Google APIs via this endpoint through the VPC or the Interconnect, instead of the public API endpoint, so the traffic is never exposed to the public Internet.
https://cloud.google.com/vpc/docs/private-access-options
Please also see the docs for the service APIs overview, which explain that:
Requests to the global endpoint made over Interconnect are routed similarly to requests originating in the region associated with the interconnection. In case Pub/Sub becomes unavailable in a region, requests originating within the same Google Cloud region are not load balanced to a different region.
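As a rough illustration, a subscriber can be pointed at a Private Service Connect endpoint by overriding the client's api_endpoint. This is only a minimal sketch: the endpoint DNS name, project ID, and subscription ID below are placeholders, and it assumes a PSC endpoint for Google APIs is already reachable from on-prem over the Interconnect.

```python
# Minimal sketch, assuming a Private Service Connect endpoint for Google APIs
# exists and is reachable from on-prem. "pubsub-yourendpoint.p.googleapis.com",
# the project ID, and the subscription ID are placeholders.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient(
    client_options={"api_endpoint": "pubsub-yourendpoint.p.googleapis.com:443"}
)
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# Synchronous pull over the private path instead of the public endpoint.
response = subscriber.pull(subscription=subscription_path, max_messages=10)
for received in response.received_messages:
    print(received.message.data)
    subscriber.acknowledge(subscription=subscription_path, ack_ids=[received.ack_id])
```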
Related
I have a Slack bot which is running on an EC2 instance in a VPC.
The VPC/API Gateway is supposed to be exposed only to Slack (for Slack event listening); it's not supposed to be publicly accessible.
How would I filter based on Slack's DNS? https://api.slack.com/robots
I saw that API Gateway has resource policies, but they are only IP / AWS account / VPC based.
Any other AWS services that can help?
If the only reason you're exposing it to the web is for Slack to access it, then you could try using Socket Mode, which moves all the Slack traffic to WebSockets, meaning you don't need a public endpoint anymore.
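A minimal Socket Mode sketch using slack_bolt; the tokens are placeholders that would come from your Slack app configuration, and the app_mention handler is just an example event.

```python
# Minimal Socket Mode sketch; SLACK_BOT_TOKEN (xoxb-) and SLACK_APP_TOKEN
# (xapp-, with connections:write) are placeholders from your app config.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    # Events arrive over the WebSocket, not a public URL.
    say(f"Hi <@{event['user']}>!")

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```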
What is the difference between PrivateLink and a VPC endpoint? As per the documentation, it seems like a VPC endpoint is a gateway to access AWS services without exposing the data to the Internet. But the definition of AWS PrivateLink also looks similar.
Reference Link:
https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html
Is PrivateLink a superset of VPC endpoints?
It would be really helpful if anyone could explain the difference between these two with examples!
Thanks in advance!
AWS defines them as:
VPC endpoint — The entry point in your VPC that enables you to connect privately to a service.
AWS PrivateLink — A technology that provides private connectivity between VPCs and services.
So PrivateLink is the technology that allows you to privately (without the Internet) access services in VPCs. These services can be your own, or provided by AWS.
Let's say that you've developed some application and you are hosting it in your VPC. You would like to give services in other VPCs and other AWS users/accounts access to this application, but you don't want to set up any VPC peering or use the Internet for that. This is where PrivateLink can be used. Using PrivateLink you can create your own VPC endpoint service, which will enable other services to use your application.
In the above scenario, a VPC interface endpoint is the resource that users of your application would have to create in their VPCs to connect to your application. This is the same as when you create a VPC interface endpoint to access AWS-provided services privately (no Internet), such as Lambda, KMS or SMS.
There are also gateway VPC endpoints, which are an older technology, replaced by PrivateLink. Gateways can only be used to access S3 and DynamoDB, nothing else.
To sum up, PrivateLink is a general technology that can be used by you or by AWS to allow private access to internal services. A VPC interface endpoint is the resource that the users of such VPC endpoint services create in their own VPCs to interact with those services.
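A minimal sketch of the consumer side: creating an interface VPC endpoint that gives private access to an AWS-provided service (KMS is used here as an example). The VPC, subnet, and region are placeholders, not values from the question.

```python
# Consumer-side sketch: interface endpoint to an AWS-provided service (KMS).
# IDs are placeholders; us-east-1 is assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kms",  # AWS-provided endpoint service
    SubnetIds=["subnet-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve the KMS hostname to the ENI's private IP
)
```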
Suppose there is a website xyz.com that I am hosting on a bunch of EC2 instances, exposed to the outside world through a Network Load Balancer.
Now, a client who has his/her own AWS account wants to access xyz.com from an EC2 instance running in their AWS account.
One approach is to go through the Internet.
However, the client wants to avoid the Internet route.
He/she wants to use the AWS backbone to reach xyz.com.
The technology that enables that is AWS PrivateLink.
(Note that if you search for PrivateLink in the AWS services, there will be none; you will get "Endpoint Services" as the closest hit.)
So, this is how to route traffic through the AWS backbone:
1. I, the owner of xyz.com, will create a VPC Endpoint Service (NOTE the keyword "Service" here).
2. The VPC Endpoint Service will point to my Network Load Balancer.
3. I will then give my VPC Endpoint Service name to the client.
4. The client will create a VPC Endpoint (NOTE: this is different from #1).
5. While creating it, the client will specify the VPC Endpoint Service name (from #1) that he got from me.
6. I can choose to be prompted to accept the connection from the client to my VPC Endpoint Service.
7. As soon as I accept it, the client can reach xyz.com from his/her EC2 instance.
There is no Internet, no Direct Connect, no VPN... this simply works, and it's secure (a rough boto3 sketch of these steps follows below).
And which technology enabled it? AWS PrivateLink!
Note: PrivateLink is the only technology that allows two VPCs with overlapping CIDR ranges to connect.
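A rough boto3 sketch of the walkthrough above. The NLB ARN, VPC ID, and subnet ID are placeholders, and IAM permissions, DNS, and security groups are omitted for brevity.

```python
# Sketch of the provider/consumer PrivateLink flow; all IDs and ARNs are placeholders.
import boto3

# Provider account: create the VPC Endpoint Service backed by the NLB (steps 1-2).
provider_ec2 = boto3.client("ec2")
service = provider_ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/xyz-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=True,  # step 6: provider must approve each connection
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Consumer account: create an interface VPC endpoint pointing at that service name (steps 4-5).
consumer_ec2 = boto3.client("ec2")
endpoint = consumer_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service_name,  # the name shared by the provider (step 3)
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Provider account: accept the pending connection (step 7).
provider_ec2.accept_vpc_endpoint_connections(
    ServiceId=service["ServiceConfiguration"]["ServiceId"],
    VpcEndpointIds=[endpoint["VpcEndpoint"]["VpcEndpointId"]],
)
```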
A useful way to understand the differences is to look at how they technically connect private resources to public services.
Gateway endpoints route traffic by adding prefix lists to a VPC route table that target the gateway endpoint. It is a logical gateway object, similar to an Internet Gateway.
In contrast, an interface endpoint uses PrivateLink to inject itself into a VPC at the subnet level via an Elastic Network Interface (ENI). This gives it network-interface functionality, and therefore DNS and private IP addressing, as a means to connect to AWS public services, rather than simply being routed to them.
The different connection mechanisms come with different advantages and disadvantages (availability, resiliency, access, scalability, etc.), which then dictate how best to connect private resources to public services.
PrivateLink is simply a heavily abstracted technology that allows a simpler connection by using DNS. The following AWS re:Invent talk offers a great overview of PrivateLink: https://www.youtube.com/watch?v=abOFqytVqBU
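For contrast with the interface endpoint shown earlier, here is a minimal sketch of a gateway endpoint for S3; the VPC and route table IDs are placeholders and us-east-1 is assumed.

```python
# Gateway endpoint sketch: no ENI is created; prefix-list routes to S3 are
# added to the specified route tables, targeting the gateway endpoint.
# IDs are placeholders; us-east-1 is assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```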
As you correctly mentioned in the question, both a VPC endpoint and AWS PrivateLink keep traffic off the Internet. In the AWS console under VPC, there is a clear option to create an endpoint, but there is no option/label to create "AWS PrivateLink". There is, however, another option/label called "endpoint service". Creating an endpoint service is one way to establish a PrivateLink connection. On one side of this private link is your endpoint service, and on the other side is the endpoint itself. Interestingly, we create these two sides in two different VPCs. In other words, you are connecting two VPCs with this private link (instead of using the Internet or VPC peering).
Think of it like this:
VPC1 has endpoint service ----> private link ----> VPC2 has endpoint
Here the endpoint service side is the service provider, while the endpoint is the service consumer. So when you have some service (maybe some application or software) that you think endpoints in other VPCs can consume, you create an endpoint service at your end, and consumers create endpoints at their end. When consumers create their endpoints, they have to give/select your service name, and thus the private link is established with your service.
Ultimately you can have multiple consumers of your service, just like a one-to-many relationship.
I have done a clean sweep of the AWS docs but couldn't find an answer for my scenario. I'm looking for a solution that gives me private connectivity (no data flows through the Internet, only within the AWS network) between my two VPCs, plus VPC-to-on-premises connectivity. I'm aware of AWS PrivateLink and Direct Connect, but they have some limitations, e.g. an RDS instance cannot be exposed as an endpoint service to be consumed, and things like that.
Is there any way I can achieve the above?
AWS Transit Gateway allows you to set up direct networking between VPCs and your on-premises environment. It supports both VPN and Direct Connect for the on-premises leg of the connection.
https://aws.amazon.com/transit-gateway/
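A minimal boto3 sketch of attaching two VPCs to a Transit Gateway; the VPC and subnet IDs are placeholders, and the VPN/Direct Connect leg to on-premises is omitted.

```python
# Sketch: create a Transit Gateway and attach two VPCs; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="Hub for VPCs and on-prem")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC to the Transit Gateway.
for vpc_id, subnet_ids in [
    ("vpc-0aaaaaaaaaaaaaaaa", ["subnet-0aaaaaaaaaaaaaaaa"]),
    ("vpc-0bbbbbbbbbbbbbbbb", ["subnet-0bbbbbbbbbbbbbbbb"]),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )

# Each VPC's route tables still need routes for the other networks pointing at
# the Transit Gateway (e.g. ec2.create_route with TransitGatewayId=tgw_id).
```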
With AWS API Gateway, is there a way to send a request through a corporate proxy? Let's say that I have a service that will only accept traffic sourced from http://proxy.my-proxy.domain.com:8000.
If the above is not possible, is there a way to send requests with an IP from my VPC CIDR?
NOTE - This is a private API Gateway with all VPC-E (VPC endpoints) configured correctly.
NOTE - As I am merely a simpleton, I do not have privileges to modify this proxy.
NOTE - I'd rather not use Lambda (if possible).
Private endpoints are only private within the AWS ecosystem; they cannot be used from outside it unless you establish connectivity between the AWS VPC and your corporate network.
There are three ways to achieve this, as far as I know:
1. Make your API Gateway public and use WAF to control access to it. You can whitelist only your corporate proxy IP addresses, so that only they are allowed to reach the gateway (see the sketch after this list).
2. Establish a VPN connection between your AWS VPC and the corporate network. This lets you use private endpoints, without making them public, over a secure encrypted pipe.
3. Set up AWS Direct Connect between your AWS VPC and the corporate network. This may not be an option considering the cost relative to the value.
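A rough sketch of option 1 using WAFv2; the proxy CIDR (from the documentation range) and resource names are placeholders, and the web ACL still has to be associated with the API Gateway stage afterwards.

```python
# Sketch: allow only the corporate proxy's IPs via a regional WAFv2 web ACL.
# Addresses and names are placeholders.
import boto3

wafv2 = boto3.client("wafv2")

# IP set containing only the corporate proxy's egress addresses.
ip_set = wafv2.create_ip_set(
    Name="corporate-proxy-only",
    Scope="REGIONAL",  # REGIONAL scope covers API Gateway stages
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],
)

# Web ACL that blocks by default and allows only traffic matching the IP set.
wafv2.create_web_acl(
    Name="allow-corporate-proxy",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-corporate-proxy",
    },
    Rules=[
        {
            "Name": "allow-proxy-ips",
            "Priority": 0,
            "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-proxy-ips",
            },
        }
    ],
)
# Associate the web ACL with the API Gateway stage (wafv2.associate_web_acl)
# for the rule to take effect.
```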
I just ended up using Lambda attached to my VPC w/ API Gateway proxy integration.
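For reference, a minimal sketch of what a Lambda proxy-integration handler for that setup can look like; the internal URL is a placeholder, and urllib is used to keep it dependency-free. Because the function is attached to the VPC, requests to the internal service originate from a VPC-sourced IP.

```python
# Minimal Lambda proxy-integration handler sketch; INTERNAL_URL is a placeholder.
import json
import urllib.request

INTERNAL_URL = "http://internal-service.example.internal:8000"

def handler(event, context):
    # Forward the incoming path to the internal service from inside the VPC.
    path = event.get("path", "/")
    with urllib.request.urlopen(INTERNAL_URL + path) as resp:
        body = resp.read().decode("utf-8")

    # API Gateway proxy integrations expect this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"upstream": body}),
    }
```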
The official documentation for the Pub/Sub service states that push is available to listeners that are reachable on the public network:
An HTTPS server with non-self-signed certificate accessible on the public web.
That sounds pretty clear, but I wonder if I haven't missed something. Is it in any way possible to have the Pub/Sub service push messages to on-premises machines that are not on the public Internet?
You should be able to achieve this with Cloud NAT:
Reserve a static IP.
Link your DNS with this IP.
Create a subnet.
Create a route from this subnet to your VPN.
Create a NAT with your external IP that forwards requests to your subnet.
Deploy an on-prem webserver (Apache, nginx) with a valid certificate for your DNS name (a minimal push handler sketch follows this list).
Update your on-prem routes so the webserver is reachable, and don't forget to route the return flow back!
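A minimal sketch of the on-prem push handler, assuming Flask is available; the endpoint path is a placeholder. Pub/Sub push delivers each message in a JSON envelope with base64-encoded data, and a 2xx response acknowledges it.

```python
# Minimal Pub/Sub push handler sketch; "/pubsub/push" is a placeholder path.
import base64
from flask import Flask, request

app = Flask(__name__)

@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    message = envelope.get("message", {})
    data = base64.b64decode(message.get("data", "")).decode("utf-8")
    print("Received:", data, "attributes:", message.get("attributes", {}))
    # Returning 2xx acknowledges the message; anything else triggers a retry.
    return ("", 204)

if __name__ == "__main__":
    # TLS with a valid (non-self-signed) certificate must terminate in front of
    # this server, e.g. at the nginx/Apache layer described above.
    app.run(host="0.0.0.0", port=8080)
```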
Is it in any way possible to have Pub/Sub service push messages to on-premise machines, that are not on public internet?
Not easily, if at all. You might be able to use a Reverse Proxy. This introduces several layers to manage: proxy configuration, proxy compute instance, SSL Certificates, VPC routing, on-prem router, etc. See guillaume blaquiere's answer.
An on-prem resource can reach Pub/Sub via the public Internet or via VPN to private.googleapis.com, but Pub/Sub cannot connect to on-prem or VPC resources configured with private IP addresses.
Cloud Pub/Sub push subscriptions require a publicly accessible HTTPS endpoint. If you want to reach on-premise machines, that would have to be done via a proxy/router accessible via the public internet (as others have mentioned). Cloud Pub/Sub does not currently support VPC for push subscriptions.
Please see the note section under https://cloud.google.com/pubsub/docs/push
Previous answers are outdated. You can use the restricted virtual IP (restricted.googleapis.com) with Private Google Access to provide a private network route for requests to Google Cloud services without exposing the requests to the Internet.