AWS CloudFront and OAuth2 with Authorization Code callback - amazon-web-services

I am hosting my React.js app on CloudFront; Lambda is the backend.
I am integrating with a third-party OAuth2 server, which supports only the authorization code grant type. I need to handle the callback with the authorization code. My problem is that the callback will be directed to the CloudFront address, like this:
https://dicla0olcdd7.cloudfront.net/callback?code=ss540azzC7xL6nCJDWto
Do you think this is a safe approach? I am a bit worried that the code is sent to a service outside my control. The code should never reach any place outside my app, right?
What other solutions do I have?
thx.

CloudFront has received certifications of compliance with relevant security standards for processing credit card and healthcare data.
CloudFront is compliant with the PCI DSS and HIPAA standards.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/compliance.html
It thus stands to reason that transient authentication tokens should also be quite safe traversing that network.
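If it helps, here is a minimal sketch (TypeScript, Node 18+ Lambda) of what the /callback handler could look like: the short-lived code is exchanged for tokens server-side, so the tokens themselves never pass through the browser. The token URL, client ID/secret and environment variable names are placeholders for your provider's values.

```typescript
// Hypothetical Lambda behind the CloudFront /callback behavior.
// TOKEN_URL, CLIENT_ID and CLIENT_SECRET are placeholders for the third-party
// provider's values (keep the secret in Lambda config or Secrets Manager).
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const code = event.queryStringParameters?.code;
  if (!code) {
    return { statusCode: 400, body: "Missing authorization code" };
  }

  // Exchange the one-time code for tokens server-side; the code expires quickly
  // and is useless without the client secret held here.
  const response = await fetch(process.env.TOKEN_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://dicla0olcdd7.cloudfront.net/callback",
      client_id: process.env.CLIENT_ID!,
      client_secret: process.env.CLIENT_SECRET!,
    }),
  });
  const tokens = (await response.json()) as Record<string, unknown>;
  console.log("received token fields:", Object.keys(tokens)); // never log token values

  // Store or forward the tokens as needed (e.g. set an HttpOnly session cookie);
  // avoid echoing them back into client-side JavaScript.
  return { statusCode: 302, headers: { Location: "/" }, body: "" };
};
```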

Customer data and privacy are of the utmost concern to AWS. Your concern is valid, but it applies to using any platform as a service.
Many enterprises raised similar concerns about data, security and privacy before moving to any cloud infrastructure, but ultimately the SLAs and security commitments enforced by AWS, along with the long-term cost benefits, outweighed those concerns.
The only way to resolve this concern is to host your own data center, which is probably not the way you want to go.

Related

How to securely access API Gateway from a frontend application hosted in AWS amplify?

I have the following:
A Vue.js frontend app hosted in AWS Amplify.
An API Gateway that triggers several Lambdas that make changes in a MongoDB hosted in an EC2 instance.
My idea is that the frontend calls the API Gateway to GET/POST data.
The problem is that I would like to make the API Gateway accessible only from my app (nobody should be able to make requests without authorization).
How should I handle it?
If I provide API Keys to the API Gateway, how do I inject them securely in the frontend app? Those will be accessible to anyone, right? If so, where should I put that API Key? Inside an .env file? Would that be secure enough?
Using API Gateway authorizers?
I've seen some examples where people place an intermediate backend in Amplify in order to do so, but I'd like to avoid that if possible.
This answer isn't prescriptive, but hopefully has enough detail and buzzwords to start you on your journey.
Your front-end requests should send an Authorization header with a JWT from Cognito or your auth provider.
API Gateway can wire up to Cognito (or you can wire up a "custom authorizer") and only pass through requests that have a valid token (see the sketch below).
Your Lambda will know the user is validated (as a known user or guest) but must still determine if they are authorized to perform the action they're attempting. If using something like AppSync, you can pass the user's Authorization header through to the AppSync API and the resolvers can allow/reject based on the authorization rules in your schema. I'm not familiar with EC2-hosted MongoDB, but I imagine you'll need to wire it up to your auth so it can behave similarly.
I wouldn't recommend API keys. You can't put them client-side, they need to be managed and rotated, you're letting the "lambda" have permissions instead of the "user", etc.
I do recommend Amplify. You may be able to ditch MongoDB and the EC2 (yuck, you don't want to manage that) for AppSync backed by DynamoDB. And if you also use Cognito, you can do much of the above with very little effort.
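For illustration, a minimal front-end sketch of the first two points, assuming an Amplify v5-style Auth module (newer Amplify versions expose fetchAuthSession instead) and a placeholder API URL:

```typescript
// Hypothetical front-end call: attach the user's JWT so the API Gateway authorizer
// (Cognito or custom) can validate it before the Lambda is ever invoked.
import { Auth } from "aws-amplify";

export async function getItems(): Promise<unknown> {
  const session = await Auth.currentSession();
  const idToken = session.getIdToken().getJwtToken();

  // API_URL is a placeholder for your API Gateway invoke URL.
  const response = await fetch(`${process.env.API_URL}/items`, {
    headers: { Authorization: idToken },
  });
  if (!response.ok) {
    throw new Error(`Request rejected: ${response.status}`);
  }
  return response.json();
}
```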
EDIT To address your comment:
While your website may be hosted on your servers (or your account on a cloud provider's servers), it runs on people's personal devices. The HTTP requests (e.g. REST) your server receives don't indicate they originated from your website and there is no 'link' that ties YOUR front-end to your backend. HTTP does have a Referer Header that indicates the webpage the request is associated with, but you can't trust it.
Because your site is public, your API will receive requests from anywhere and everywhere. There is no way to prevent that. You can put less expensive request handlers in front of your API handlers to catch and discard invalid requests (or return cached responses when appropriate).
Your server could require requests include a special header (e.g. an API-KEY) that only your website will include in requests. But anyone can look at your website code and/or the network traffic (even simply via the browser debugging tools) and learn about that secret header.
You can look into XSRF tokens. This is where the front-end provides a unique token when serving a page (usually in conjunction with a form), and it must be included when sending data back to the server or the data will be considered invalid.
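A toy sketch of that idea (not a full CSRF library; session storage and framework wiring are deliberately left out):

```typescript
// Server issues a random token with the page and rejects posts that don't echo it back.
import { randomBytes, timingSafeEqual } from "node:crypto";

const issued = new Map<string, string>(); // sessionId -> token (in-memory toy store)

export function issueXsrfToken(sessionId: string): string {
  const token = randomBytes(32).toString("hex");
  issued.set(sessionId, token);
  return token; // embed it in the rendered page / a form field
}

export function verifyXsrfToken(sessionId: string, received: string): boolean {
  const expected = issued.get(sessionId);
  if (!expected || expected.length !== received.length) return false;
  // Constant-time comparison avoids leaking the token via timing differences.
  return timingSafeEqual(Buffer.from(expected), Buffer.from(received));
}
```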
Cognito / Amplify will generate tokens for guest/unauthenticated users as well, so they can be used for what you want. It doesn't guarantee the requests are coming from your website's JavaScript, but it'd be annoying to work around.
You can use CORS headers in your server responses. That prevents other websites from directly calling your APIs from the browser: your server would still be called and return data, but an unmodified browser will check the CORS headers and throw away the data before making it available to the calling JavaScript.
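For example, a Lambda proxy integration might attach CORS headers like this (the allowed origin is a placeholder for your own domain):

```typescript
// Wrap a Lambda proxy result with CORS headers so browsers only expose the
// response to scripts served from the allowed origin.
import type { APIGatewayProxyResult } from "aws-lambda";

export function withCors(result: APIGatewayProxyResult): APIGatewayProxyResult {
  return {
    ...result,
    headers: {
      ...result.headers,
      "Access-Control-Allow-Origin": "https://app.example.com", // placeholder origin
      "Access-Control-Allow-Headers": "Authorization,Content-Type",
      "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    },
  };
}
```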
At the end of the day if your APIs are on the public internet, anyone and everyone can poke them.

How to secure an API behind Kong Gateway for both public and internal traffic

We currently have multiple APIs that are not behind a gateway. The APIs that are exposed publicly use OpenID Connect for authentication and claims authorization. Some of the APIs are internal only and are network secured behind a firewall.
We plan to setup Kong Gateway Enterprise in front of our APIs. We should be able to centralize token validation from public clients at the gateway. We could possibly centralize some basic authorization as well (e.g. scopes). Some logic will probably still need to happen in the upstream API. So, those APIs will still need to know the context of the caller (client and user).
Ideally, we would like to be able to have APIs that can be exposed publicly and also called internally to avoid duplicating logic. I'd like to understand some secure approaches for making this happen with Kong. Exactly how to setup the system behind the gateway is still unclear to me.
Some questions I have are:
Should we have both an internal gateway and an external? Is there guidance on how to choose when to create separate gateways?
If we have multiple upstream services in a chain, how do you pass along the auth context?
Custom headers?
Pass along the original JWT?
How can we make a service securely respond to both internal and external calls?
We could set up a mesh and use mTLS, but wouldn't the method of passing the auth context be different between mTLS and the gateway?
We could set custom headers from Kong and have other internal services render them as well. But since this isn't in a JWT, aren't we losing the authenticity of the claims?
We could have every caller, including internal services, get their own token but that could make the number of clients and secrets difficult to manage. Plus, it doesn't handle the situation when those services are still acting on behalf of the user as a part of an earlier request.
Or we could continue to keep separate internal and external services but duplicate some logic.
Some other possibly helpful notes:
There is no other existing PKI other than our OIDC provider.
Services will not all be containerized. Think IIS on EC2.
Services are mostly REST-ish.
There is a lot to unpack here, and the answer is: it depends.
Generally, you should expose the bare minimum API externally, so a separate gateway in the DMZ with only the API endpoints required by external clients. You're generally going to be making more internal changes, so you don't want to expose a sensitive endpoint by accident.
Don't be too concerned about duplication when it comes to APIs; it's quite common to have multiple API gateways, even egress gateways for external communication. There are patterns like BFF (Backend for Frontend) where each client has its own gateway for orchestration, security, routing, logging and auditing. The more clients are isolated from each other, the easier and less risky it is to make API changes.
In regards to propagating the Auth context, it really comes down to trust, and how secure your network and internal actors are. If you're using a custom header then you have to consider the "Confused Deputy Problem". Using a signed JWT solves that, but if the token gets leaked it can be used maliciously against any service in the chain.
You can use RFC 8693 token exchange to mitigate that, and even combine it with mTLS, but again that could be overkill for your app. If the JWT is handled by an external client, it becomes even riskier. In that case, it should ideally be opaque and only accepted by the external-facing gateway. That gateway can then exchange it for a new token for all internal communication.
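As a rough illustration, a token-exchange call made by the external-facing gateway (or a small service it delegates to) could look like the sketch below; the endpoint, client credentials and audience are placeholders and depend on what your OIDC provider actually supports.

```typescript
// Swap the inbound (external) token for a new token scoped to internal use,
// per RFC 8693. Upstream services then only ever see internally issued tokens.
async function exchangeToken(inboundToken: string): Promise<string> {
  const response = await fetch(process.env.TOKEN_ENDPOINT!, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
      subject_token: inboundToken,
      subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
      audience: "internal-api", // which internal service the new token is for
      client_id: process.env.GATEWAY_CLIENT_ID!,
      client_secret: process.env.GATEWAY_CLIENT_SECRET!,
    }),
  });
  const body = (await response.json()) as { access_token: string };
  return body.access_token; // forward this to the upstream service
}
```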

Attaching a usage plan to a public API Gateway endpoint

For learning purposes, I have developed a front-end application in Angular with AWS as back-end.
As this is a didactic project, I must prevent any possible cost explosion, above all where API Gateway calls are concerned.
At the moment, I have a single public GET endpoint for providing the information to the public homepage in the front-end.
I need to attach a usage plan to this endpoint for limiting the maximum number of calls to this endpoint. For example, max 10000 calls/week.
I already tried with an API-KEY:
Created the Usage Plan with "Quota: 10,000 requests per week"
Created the API KEY connected to the Usage Plan
Connected the API KEY to the authentication method of the endpoint
It works, but in this way I need to hard code the API KEY on the front-end.
I know that hard coding sensitive information on the front-end is a bad practice, but I thought that in this case the API KEY is needed only for connecting a Usage Plan, not for securing private information or operations. So I'm not sure if in this case it should be acceptable or not.
Is this solution safe, or could the same API KEY be used for other malicious purposes?
Are there any other solutions?
To add to the other answer, API Keys are not Authorization Tokens.
API Keys associated with Usage Plans are sent to the gateway on the x-api-key header (by default).
Tokens used by authorizers live on the standard Authorization header.
Use API Keys to set custom throttling and quotas for a particular caller, but you still need an Authorizer on any secure endpoints.
When using a custom authorizer, you can optionally have the authorizer return an API key that gets applied to the Usage Plan, which prevents you from having to distribute keys to clients in addition to their secrets (sketched below).
APIG Key Source Docs
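For illustration, a Lambda authorizer that returns a usage-plan key (for an API whose key source is set to AUTHORIZER) might look roughly like this; the JWT validation here is a stand-in for your own logic:

```typescript
import type {
  APIGatewayTokenAuthorizerEvent,
  APIGatewayAuthorizerResult,
} from "aws-lambda";

// Placeholder: replace with real JWT validation against your identity provider.
async function validateJwt(token: string): Promise<{ sub: string; apiKey: string }> {
  if (!token) throw new Error("Unauthorized");
  return { sub: "user-123", apiKey: "usage-plan-key-for-this-caller" };
}

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  const claims = await validateJwt(event.authorizationToken);
  return {
    principalId: claims.sub,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        { Action: "execute-api:Invoke", Effect: "Allow", Resource: event.methodArn },
      ],
    },
    // Ties this caller to a usage plan without ever shipping the key to the client.
    usageIdentifierKey: claims.apiKey,
  };
};
```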
As the documentation says, generally you wouldn't want to rely on a hardcoded API key for authentication.
In addition, based on the information you provided, the usage plan limits are tied to use of the API key, by whoever holds it. So you could also set up throttling on the account or stage itself.
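For example, if you happen to manage the API with the AWS CDK, stage-level throttling can be set roughly as in the sketch below (the same limits are available in the console under the stage settings); the construct names are placeholders:

```typescript
import { App, Stack } from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

const app = new App();
const stack = new Stack(app, "LearningProjectStack");

// Stage-wide throttling: caps the request rate regardless of who is calling.
const api = new apigateway.RestApi(stack, "PublicApi", {
  deployOptions: {
    throttlingRateLimit: 20,  // steady-state requests per second
    throttlingBurstLimit: 50, // short burst allowance
  },
});

// Placeholder method so the API has something to deploy.
api.root.addMethod("GET", new apigateway.MockIntegration());
```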
Finally, if possible, you could set up security group rules on your server or network ACLs on the VPC that is serving your front end so that not everyone can access it.
Other ideas are to shut down the API gateway when you aren't using it and also rotate the API key frequently.
Obviously none of these are going to be enough if this were hosting sensitive information or doing anything remotely important. But as long as the only information in the client side is that API Key to access the API Gateway and that API Gateway itself doesn't interact with anything sensitive that's probably enough for a learning project.
But you should do all of this at your own risk as for all you know I'm the person who's going to try to hack you ;)

How to secure communication between Pact Broker, Consumer and Provider

We are planning to implement CDC in our project and Pact is being considered as the primary candidate. Currently I am working on a POC to set up the end-to-end flow with CI/CD integration with GitLab. I have a couple of questions related to Authentication/Authorization/security.
Consumer - Pact Broker: Consumers here are external partners. I see client side certificates as an option. I am not able to find much documentation or info on Web for the options available. Pact broker will be hosted in AWS. Can we place this behind a gateway?
Pact Broker and Provider: Both components are part of our infrastructure. In this case I understand that we will be generating a GitLab trigger token which will be passed as part of future requests to Provider pipeline. We will be using same token every time.
Could you please advise options available in both cases to make the communication more secure.
Thanks in advance.
We are planning to implement CDC in our project and Pact is being considered as the primary candidate.
Good choice! :)
I have a couple of questions related to Authentication/Authorization/security
The OSS broker doesn’t have any security controls other than basic auth and read-only/read-write access permissions (which isn’t very appropriate for external use for obvious reasons). There is basic support for redacting credentials in the UI, but you can still get them through API calls (even for read-only accounts).
Consumer - Pact Broker: Consumers here are external partners. I see client side certificates as an option. I am not able to find much documentation or info on Web for the options available. Pact broker will be hosted in AWS. Can we place this behind a gateway?
Where did you see that client certificates were supported? I’m sorry to say that is incorrect.
You can definitely put it behind a gateway/reverse proxy type thing: https://docs.pact.io/pact_broker/configuration/#running-the-broker-behind-a-reverse-proxy
You would need to add your own authentication layer for this purpose, so using an API gateway for this might be a good starting point.
Pact Broker and Provider: Both components are part of our infrastructure. In this case I understand that we will be generating a GitLab trigger token which will be passed as part of future requests to Provider pipeline. We will be using same token every time.
The provider side authentication is the same as consumer.
Alternatively, we have created Pactflow, a commercial version of the OSS Broker designed for enterprise use, which has a full security model wrapped over the OSS broker, including API tokens, secrets, teams management and other useful features (see https://pactflow.io/features/ for more). We are also almost ready to release CI users and fine-grained permissions management.

API Gateway Best Practices

We are looking at utilising AWS API Gateway for better management of APIs. However, at an enterprise level, what will be the best practice? Will a common API gateway used by all app teams be necessary (in this case, we might need an administrator for this common API gateway, which adds overhead), or should each app team build their own API gateway and administer their own API calls?
Hope to have someone share their experiences.
I have used AWS API Gateway for different web/mobile application projects. Let me try to answer your questions one by one here.
Limitations Based Design
API gateway comes with limitations. You can find answers based on these limitations.
For example, there is a soft limit on "Resources per API", set at 300, which can be increased up to a maximum of 500. This means that if more than 500 resources are needed in the future, a new API gateway needs to be created.
So, it's better to logically segregate the APIs and have different API gateways depending on the purpose.
The throttle limit per region across REST APIs, WebSocket APIs, and WebSocket callback APIs is a soft limit of 10,000 requests per second (RPS), with additional burst capacity provided by the token bucket algorithm, using a maximum bucket capacity of 5,000 requests (see the toy sketch at the end of this section).
So the API gateway needs to be designed based on the expected traffic.
There are many such limitations https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
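To make the burst behaviour concrete, here is a toy token-bucket model (not API Gateway's actual implementation): each request spends a token from a bucket that refills at the steady-state rate, up to the burst capacity.

```typescript
// Toy token bucket: refills at `ratePerSec` up to `burst` tokens; a request
// either spends one token or gets throttled (HTTP 429 in API Gateway's case).
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private ratePerSec: number, private burst: number) {
    this.tokens = burst;
  }

  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // request throttled
  }
}

// Roughly the default per-region limits described above:
const regionalLimit = new TokenBucket(10_000, 5_000);
console.log(regionalLimit.tryConsume()); // true while burst capacity remains
```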
Features Based Design
API Gateway uses the OpenAPI standard and facilitates JSON/YAML import and export. So if a new API gateway is created with a Swagger file from an application, it's better not to mix it with other applications.
There are many features, like 1) API caching, 2) throttling, 3) Web Application Firewall and 4) client certificates, which cannot be common for all APIs in an enterprise. So again, it's better to have separate APIs based on the requirements.
AWS API Gateway is engineered with different logging mechanisms and each API gateway implementation will need a tailored approach.
SDK generation comes in very handy for mobile development, and again there is no point in bundling all APIs into one SDK and providing access.
So my suggestion for an enterprise is to use multiple API gateways, based on specific needs.