Consider two GCP Cloud Run services that communicate with each other.
Their URLs will look like the following:
http(s)://service1-gcphash.a.run.app/
http(s)://service2-gcphash.a.run.app/
Note that the DNS names of these URLs are public and, if the permissions allow, the services may be accessed from the outside world.
Now, imagine these two services communicate with each other. My questions are:
If one service calls the other service directly, will the request be routed ONLY within the internal GCP network, or is it possible that it will pass through the outside world?
In case the request stays inside the GCP network, does it make sense to encrypt it via HTTPS, or is plain HTTP secure enough?
If one service calls the other service directly, will the request be routed ONLY within the internal GCP network, or is it possible that it will pass through the outside world?
Network traffic between Google services stays on Google's private backbone.
In case the request stays inside the GCP network, does it make sense to encrypt it via HTTPS, or is plain HTTP secure enough?
If you attempt to connect via HTTP, Cloud Run will respond with an HTTP redirect whose Location header is set to the secure (HTTPS) URL.
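As an illustration of that exchange: the http:// request gets a 3xx response whose Location header carries the https:// URL for the same host. Below is a minimal helper that classifies such a response; this is illustrative logic only, not a Cloud Run API, and the hostname is the placeholder from the question.

```python
from urllib.parse import urlparse

def is_https_upgrade(status: int, location: str, original_url: str) -> bool:
    """Return True if a response looks like an HTTP -> HTTPS redirect
    to the same host (the behavior described above for http:// requests)."""
    if status not in (301, 302, 307, 308):
        return False
    orig, dest = urlparse(original_url), urlparse(location)
    return dest.scheme == "https" and dest.hostname == orig.hostname

# A hypothetical redirect response for the http:// URL:
print(is_https_upgrade(
    302,
    "https://service1-gcphash.a.run.app/",
    "http://service1-gcphash.a.run.app/",
))  # -> True
```

Note the same-host check: a redirect that points at a different hostname would not be the plain HTTPS upgrade described here.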
This is the first time that I am using a load balancer... I have spent quite a bit of time going through the documentation and I am still quite confused.
I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer.
I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2).
Questions:
Should the communication between the backend servers and myALB be through HTTP or HTTPS?
I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
I want to use external identity login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.
I would appreciate any general advice.
You can use both HTTP and HTTPS listeners.
Yes, you can achieve that with an ALB. You can add a rule to it that says that any request coming in on port 80 is permanently redirected to port 443. Check out the rules for ALB.
If you make a request from your instances to Facebook, it depends on Facebook whether your communication is encrypted, because in that case you are the client. However, if you set up a webhook, Facebook is now the client, and to let it communicate with you, you give it your load balancer's DNS name. Due to point 2 in this list, Facebook will then be forced to use TLS.
I'm not sure I fully understood your question number three, but here's something you may also find useful: ALB has a feature that allows it to authenticate users with Cognito. The documentation explicitly says that your EC2 instances can be abstracted away from any authentication, including when it uses Facebook ID or Google ID or whatever. I've never tried it, though.
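The HTTP-to-HTTPS redirect from point 2 can be sketched as configuration data. Below is the shape of an ELBv2 redirect action you would set as the default action on the port-80 listener; the commented boto3 call and the ARN are placeholders, not values from the question.

```python
# The default action for the port-80 listener: a permanent (HTTP 301)
# redirect to HTTPS on port 443, keeping the original host, path, and query.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",
        "Path": "/#{path}",
        "Query": "#{query}",
        "StatusCode": "HTTP_301",
    },
}

# With boto3 this would be passed when creating the HTTP listener, e.g.:
# elbv2 = boto3.client("elbv2")
# elbv2.create_listener(
#     LoadBalancerArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
#     Protocol="HTTP", Port=80, DefaultActions=[redirect_action],
# )
print(redirect_action["RedirectConfig"]["StatusCode"])  # -> HTTP_301
```

The `#{host}`, `#{path}`, and `#{query}` tokens tell the load balancer to reuse the corresponding parts of the incoming request, so only the scheme and port change.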
According to this thread, the endpoint is the URL where your service can be accessed by a client application.
But it sounds to me like a kind of server. In that case, is an endpoint always a URL? What is the difference between an endpoint and a server?
An endpoint is a URL which allows you to access a (web) service running on a server. A server (program) may actually host multiple such services, exposing them through different endpoints.
e.g. to access the Twitter search API, https://api.twitter.com/1.1/search/tweets.json is the endpoint. But the same server also has another endpoint, https://api.twitter.com/oauth/authenticate, for authentication. Both endpoints are hosted on the same server, which runs on a machine with the domain name twitter.com.
A server is something that hosts your site/data or runs multiple services, like PHP, MySQL, etc. An endpoint is where something points; for example, we speak of the endpoint of phpMyAdmin, or the endpoint of some API:
api.example.com/getusers
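One way to state the distinction in code: the server is the host part of the URL, while the endpoint is the full URL a client calls. A small illustration with Python's standard library, using the example URLs above (the https:// scheme on the last one is assumed):

```python
from urllib.parse import urlparse

endpoints = [
    "https://api.twitter.com/1.1/search/tweets.json",
    "https://api.twitter.com/oauth/authenticate",
    "https://api.example.com/getusers",
]

for url in endpoints:
    parsed = urlparse(url)
    # parsed.netloc is the server (host); the whole URL is the endpoint.
    print(parsed.netloc, parsed.path)
```

The first two endpoints print the same host, which is exactly the point: one server, two endpoints.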
tl;dr: see the generic questions under the headings below.
I have built the infrastructure outlined below (in the attachment) in AWS. OAuth specifies an auth server which issues tokens (authorization) and then authenticates those tokens on each request, allowing a proxy to the internal ALB.
It's based on a micro-services architecture and uses OAuth to issue tokens and authenticate them from the client apps. The client apps could be apps inside the VPC or apps external to it. As you can see, I want all requests to go through the OAuth server before they get to the internal ALB. Now, the different types of apps should use different grant types to get access tokens. Those access tokens will contain a scope which relates to the routes (API endpoints) of the internal ALB.
Now I have a few questions which I hope are as succinct as possible:
AWS VPC ALB Questions
What is the most secure way of ensuring that only the OAuth apps communicate with the internal ALB, and not other apps in the public subnet, so we can be sure that all requests to the internal ALB are authenticated? Do I have to somehow attach a new OAuth-only subnet to the input of the internal ALB, and how do I restrict the internal ALB's input?
To the same end, how do I ensure apps in the same subnet do not communicate with each other? Basically, how do I ensure that no internal apps communicate with each other directly, and that all traffic from the private subnet must pass all the way through the external load balancer, and therefore through OAuth?
Route 53 SSL termination ALB
Does SSL termination on a certain port stop traffic directed from different domains? If I make a call to the ALB on port 443 from the internal ALB with SSL termination, do I have to call the host (Route 53, something.com) specified by the certificate, or can I use the DNS hostname of the ALB (something.elb.amazonaws.com) resolved by AWS?
Scopes and OAuth
How do I compare each request's URL and its token with OAuth scopes? I want to relate OAuth scopes to API endpoints, so each request goes to a route endpoint with an access_token which contains scopes. This scope has to be compared with the request URL on each request to make sure it's allowed. Does OAuth come with this functionality? I would guess not. But then what's the point of scopes? It seems like scope is just an array I need to do some processing on after authentication, rather than something special in OAuth. I'm probably missing something. :-)
This post is too long already, so I can't get into all the details, but if you would like more detail I would of course provide it. Even a nudge in the right direction would be useful at this point.
Thanks in advance.
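On the scopes question: OAuth itself (RFC 6749) only defines scopes as strings carried by the token; enforcing them against routes is indeed left to the resource server, so the "array I need to do some processing on" intuition is right. A minimal sketch of that per-request check, with route patterns and scope names invented for illustration:

```python
# Hypothetical mapping from URL path prefixes to the scope each requires.
ROUTE_SCOPES = {
    "/users": "users.read",
    "/orders": "orders.read",
}

def is_allowed(path: str, token_scopes: set[str]) -> bool:
    """Compare the request path against the scopes carried in the access token."""
    for prefix, required in ROUTE_SCOPES.items():
        if path.startswith(prefix):
            return required in token_scopes
    return False  # unknown route: deny by default

print(is_allowed("/users/42", {"users.read"}))  # -> True
print(is_allowed("/orders/7", {"users.read"}))  # -> False
```

In practice this check usually lives in a middleware or gateway in front of the API, so individual handlers never see unauthorized requests.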
I have a website that depends on multiple auxiliary services, to which it makes HTTP requests. (These services e.g. provide information from data probes to update the website via AJAX in real time, and are written in a different language from the website, so I can't use RMI or similar.)
I'm trying to secure the website with SSL so that it displays as secure in browsers, but due to the HTTP requests to the auxiliary services I'm getting mixed-content warnings. I'm hosting on AWS, which only seems to allow HTTPS requests on the single port 443, on which my website itself is listening. How can I set things up so that I can access my auxiliary services securely when they need to listen on a different port from the website?
EDIT: I should add that this is for our test website, so there's no load balancing enabled...
This seems very easily solved with CloudFront.
Forget all about caching and the fact that CloudFront is a CDN. It isn't just those things. It has a number of other useful tricks up its sleeve.
CloudFront is also a reverse proxy that can route requests to multiple destinations ("origin servers"), including multiple ports on a single instance, based on the request path ("cache behaviors"). Meanwhile, the browser thinks everything is coming from a single web site and speaks HTTPS to CloudFront, yet CloudFront can optionally speak ordinary HTTP to the back-end services if that is a security policy you allow. A single CloudFront distribution can have up to 25 path mappings (including the default catchall mapping for the main site), referencing up to 25 different backend IP/hostname + port combinations.
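To make the path-to-origin mapping concrete, here is a rough sketch of the resolution a distribution performs; the patterns and origins are invented for illustration, and `fnmatch` only approximates CloudFront's actual pattern matching:

```python
import fnmatch

# Hypothetical cache behaviors: path pattern -> origin (host:port).
BEHAVIORS = [
    ("/probes/*", "backend.example.com:8081"),  # an auxiliary service
    ("/stream/*", "backend.example.com:8082"),  # another auxiliary port
    ("*",         "backend.example.com:443"),   # default: the website itself
]

def resolve_origin(path: str) -> str:
    """Pick the first cache behavior whose pattern matches the request path."""
    for pattern, origin in BEHAVIORS:
        if fnmatch.fnmatch(path, pattern):
            return origin
    raise ValueError("no matching behavior")

print(resolve_origin("/probes/temp"))  # -> backend.example.com:8081
print(resolve_origin("/index.html"))   # -> backend.example.com:443
```

The browser only ever sees the single CloudFront hostname over HTTPS; which backend port actually serves each path is decided by this kind of table.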
I googled, but I cannot determine the difference between an endpoint and a gateway. Based on their definitions, they seem alike.
Description of Gateway
What is Web Service Gateway? Web Service Gateway is a server-side application that opens a communication channel between Bentley’s Apps for mobile devices and Bentley’s project information management systems.
Description of Endpoint
Web services expose one or more endpoints to which messages can be sent. A web service endpoint is an entity, processor, or resource that can be referenced and to which web services messages can be addressed. Endpoint references convey the information needed to address a web service endpoint. Clients need to know this information before they can access a service.
Endpoint:
The endpoint is a connection point where HTML files or Active Server Pages are exposed. An endpoint is the URL where your service can be accessed by a client application. The same web service can have multiple endpoints. An endpoint indicates a specific location for accessing a service using a specific protocol and data format.
GateWay:
A service gateway provides a central access point for managing, monitoring, and securing access to your publicly exposed web services. It also allows you to consolidate services across disparate endpoints as if they were all coming from a single host. A service gateway encapsulates all the details of accessing the service into a single component and hides that component behind an interface that has no direct dependencies on the underlying communication channel.
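The "consolidate services across disparate endpoints" idea can be sketched as a routing table: the gateway exposes a single host and forwards each public path to a hidden backend endpoint. All hostnames and paths below are made up for illustration:

```python
# Hypothetical gateway routing table: public path -> internal endpoint.
GATEWAY_ROUTES = {
    "/search": "https://search-svc.internal/api/v1/query",
    "/auth":   "https://auth-svc.internal/oauth/authenticate",
}

def forward(public_path: str) -> str:
    """Resolve the internal endpoint the gateway would proxy this path to."""
    try:
        return GATEWAY_ROUTES[public_path]
    except KeyError:
        raise ValueError(f"no route for {public_path}") from None

# Clients see only the gateway's single host; the backend endpoints stay hidden.
print(forward("/search"))  # -> https://search-svc.internal/api/v1/query
```

This also shows the relationship between the two terms: each value in the table is an endpoint, while the gateway is the component that fronts all of them.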