How to protect AWS S3 uploaded / downloaded data, in transit? - amazon-web-services

When we upload data to S3, is it protected in transit by default (via HTTPS maybe)?
I found this article which, if I understand correctly, states S3 does not use HTTPS:
Amazon Simple Storage Service: You can still use HTTP with Amazon S3
and securely make authenticated requests. The service uses a different
secure signing protocol.
Should we in this case protect the data in transit with Client-Side Encryption?

The article you cited is obsolete. It was originally written in 2008, and apparently when updated in 2015, some of the outdated information was left in place.
The version refers to the particular algorithm for signing the request. These AWS services have deprecated the older, less-secure methods (signature versions 0 and 1) and will no longer allow them after September 2009.
Indeed, versions 0 and 1 are not supported.
A few AWS services don't support signature version 2:
Amazon Simple Storage Service: You can still use HTTP with Amazon S3 and securely make authenticated requests. The service uses a different secure signing protocol.
This is also inaccurate. S3 supports Signature Version 2 in all regions where Signature Version 2 was originally deployed. Regions launched in 2014 or later do not support V2 at all; they require Signature Version 4, and in those regions S3 also requires Signature Version 4.
Importantly, though, none of this has anything at all to do with HTTPS.
From the same document:
Most AWS services accept HTTPS requests, including:
...
Amazon Simple Storage Service
Okay, so, let's revisit this line:
The service uses a different secure signing protocol.
This statement is not about encryption or the security of the payload. It is a statement about the security of the request authentication and authorization process -- its resistance to forgery and reverse-engineering -- whether or not the request is sent encrypted.
HTTPS is supported by S3, to protect data in transit.

Quoting from the Security section of the S3 FAQs:
You can securely upload/download your data to Amazon S3 via SSL
endpoints using the HTTPS protocol.
If you're using the https:// endpoint for S3, then your data in transit should be encrypted properly. The quote that you referred to in the question means that it's also possible to access S3 using http:// protocol, in which case the data wouldn't be encrypted in transit. See this related question.
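Whichever SDK you use, if you ever configure a custom endpoint it's cheap to guard against silently falling back to plain HTTP. A minimal stdlib sketch (the endpoint URL below is just an example value):

```python
from urllib.parse import urlparse

def require_https(endpoint_url):
    """Refuse to proceed with a non-TLS endpoint."""
    if urlparse(endpoint_url).scheme != "https":
        raise ValueError(f"refusing non-HTTPS endpoint: {endpoint_url}")
    return endpoint_url

require_https("https://s3.amazonaws.com")   # fine
# require_https("http://s3.amazonaws.com")  # raises ValueError
```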
If you were asking specifically about whether AWS CLI encrypts data in transit, then the answer is yes. See this question.
Also, please note that the primary purpose of client-side encryption is to encrypt data at rest, using an encryption algorithm of your own choosing. If you use client-side encryption but still use the http:// endpoint, your communication over the wire is still unencrypted, technically speaking, because the ciphertexts being passed over the wire could be captured by an attacker for analysis.
Update:
If you were asking specifically about AWS Java SDK, the default protocol is again https. Quoting from javadocs for AWS Java SDK:
By default, all service endpoints in all regions use the https
protocol. To use http instead, specify it in the ClientConfiguration
supplied at construction.
And from the javadocs for ClientConfiguration.getProtocol:
The default configuration is to use HTTPS for all requests for
increased security.
Client-side/server-side encryption's primary purpose is to secure data at rest. If anyone were to break into your cloud provider's data center somehow and steal the disks that held your data, encrypting it client-side or server-side makes it difficult for them to get hold of your data in plaintext. Doing it client-side gives you more control over the encryption algorithm, with the seemingly additional side effect of your data not being transmitted in plaintext over the wire. However, the communication channel itself is not encrypted. If you were using a weak encryption algorithm, for example, an attacker could still sniff the encrypted data over the wire and decrypt it. Also, it's important to know that using SSL means:
You as the client can be sure you're talking to AWS
Your communication with AWS is encrypted, so others can't intercept it
You have verification that the message received is the same as the message sent
In essence, you definitely want to use SSL irrespective of whether you want to use client-side encryption or not.

Related

How to securely access API Gateway from a frontend application hosted in AWS amplify?

I have the following:
A Vue.js frontend app hosted in AWS Amplify.
An API Gateway that triggers several Lambdas that make changes in a MongoDB hosted in an EC2 instance.
My idea is that the frontend approaches the API Gateway and GET/POST data.
The problem is that I would like to make the API Gateway accessible only from my App (nobody could make requests without authorization).
How should I handle it?
If I provide API Keys to the API Gateway, how do I inject them securely in the frontend app? Those will be accessible to anyone, right? If so, where should I put that API Key? Inside an .env file? Would that be secure enough?
Using API Gateway authorizers?
I've seen some examples where people place an intermediate backend in Amplify in order to do so, but I'd like to avoid that if possible.
This answer isn't prescriptive, but hopefully has enough detail and buzzwords to start you on your journey.
Your front-end requests should send an Authorization header with a JWT token from Cognito or your auth provider.
API Gateway can wire up to Cognito (or you can wire up a "custom authorizer") and only pass through requests that have a valid token.
Your Lambda will know the user is validated (as a known user or guest) but must still determine whether they are authorized to perform the action they're attempting. If using something like AppSync, you can pass the user's Authorization header through to the AppSync API, and the resolvers can allow/reject based on the authorization rules in your schema. I'm not familiar with EC2-hosted MongoDB, but I imagine you'll need to wire it up to your auth so it can behave similarly.
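For illustration: the token in the Authorization header is just three base64url-encoded segments, and the claims can be inspected without verification, while the real signature check happens in API Gateway/Cognito. A stdlib sketch (the token below is fabricated, not a real Cognito token):

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT payload WITHOUT verifying it. The signature check
    is done server-side by API Gateway / Cognito; never trust these
    claims on their own."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated example token: header.payload.signature
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip('=')
payload = base64.urlsafe_b64encode(b'{"sub":"user-123"}').decode().rstrip('=')
token = f"{header}.{payload}.sig"
print(jwt_claims(token))  # {'sub': 'user-123'}
```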
I wouldn't recommend API keys. You can't put them client-side, they need to be managed and rotated, you're letting the "lambda" have permissions instead of the "user", etc.
I do recommend Amplify. You may be able to ditch MongoDB and the EC2 (yuck, you don't want to manage that) for AppSync backed by DynamoDB. And if you also use Cognito, you can do much of the above with very little effort.
EDIT To address your comment:
While your website may be hosted on your servers (or your account on a cloud provider's servers), it runs on people's personal devices. The HTTP requests (e.g. REST) your server receives don't indicate they originated from your website and there is no 'link' that ties YOUR front-end to your backend. HTTP does have a Referer Header that indicates the webpage the request is associated with, but you can't trust it.
Because your site is public, your API will receive requests from anywhere and everywhere. There is no way to prevent that. You can put less expensive request handlers in front of your API handlers to catch and discard invalid requests (or return cached responses when appropriate).
Your server could require requests include a special header (e.g. an API-KEY) that only your website will include in requests. But anyone can look at your website code and/or the network traffic (even simply via the browser debugging tools) and learn about that secret header.
You can look into XSRF tokens. This is where the front-end provides a unique token when serving a page (usually in conjunction with a form), and it must be included when sending data back to the server or the data will be considered invalid.
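A minimal sketch of that idea, binding the token to the user's session with an HMAC (the secret and session IDs are placeholders):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side secret, per deployment

def issue_xsrf_token(session_id):
    # Token is bound to the session, so it can't be replayed across users
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_xsrf_token(session_id, token):
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(issue_xsrf_token(session_id), token)

t = issue_xsrf_token("session-abc")
print(check_xsrf_token("session-abc", t))  # True
print(check_xsrf_token("session-xyz", t))  # False
```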
Cognito / Amplify will generate tokens for guest/unauthenticated users as well, so they can be used for what you want. It doesn't guarantee the requests are coming from your website's JavaScript, but it'd be annoying to work around.
You can use CORS in your server responses. That prevents other websites from directly calling your APIs. Your server would still be called and return data, but an unmodified browser will see the CORS header and throw away the data before making it available to the calling javascript.
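A sketch of the allow-list logic behind that CORS behavior (the domain is a placeholder; with API Gateway you'd configure this on the method responses rather than write it yourself):

```python
ALLOWED_ORIGINS = {"https://myapp.example.com"}  # your Amplify domain(s)

def cors_headers(request_origin):
    """Echo the Origin back only if it is on the allow-list; an
    unmodified browser discards the response body otherwise."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # keep caches from mixing origins
        }
    return {}  # no CORS header -> browser blocks cross-origin JS access

print(cors_headers("https://myapp.example.com"))
print(cors_headers("https://evil.example"))  # {}
```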
At the end of the day if your APIs are on the public internet, anyone and everyone can poke them.

Verifying a Sigv4 signature with differing temporary credentials, but for the same long-term credentials

So I have a scenario which I'm trying to solve. Requests are coming into my api, which is hosted on-prem, and included is an X-Amz-Security-Token header. This is because the caller of said api is using a set of long-term credentials to assume an IAM role and is using the returned temporary credentials to sign the request. This is specified in the note here
In my api, I am doing the same logic of assuming the role using the same long-term credentials, but what seems to be happening is that because I need to sign the request myself in order to compare the signatures to validate the request, I'm getting a different X-Amz-Security-Token generated, since the temporary credentials granted are different, and therefore the signatures don't match. How do I get around this? And more to the point, if I wasn't using an on-prem hosted api, how would other AWS services validate the request by default? Is X-Amz-Security-Token able to be passed back to the STS for validation somehow? I'm confused how this header adds any value to the sigv4 as it seems to only be causing me problems.
Apologies if this is a bit vague. I'm new to the AWS world and only need to get involved in order to integrate on-prem systems with an AWS-hosted partner. And SigV4 is a contractual requirement for both inbound and outbound communications.
Any help would be much appreciated. Cheers
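For background on why the signatures can't match: SigV4 derives the signing key from the secret access key itself, so two different sets of temporary credentials always produce different signatures, and only a party holding the same credentials can recompute them. A stdlib sketch of the key derivation (per the public SigV4 spec; the secrets are placeholders):

```python
import hashlib
import hmac

def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret_key, date, region, service):
    """Derive the SigV4 signing key. Because the chain starts from the
    (possibly temporary) secret access key, two different temporary
    credential sets always yield different signing keys."""
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

a = sigv4_signing_key("tempSecretA", "20240101", "us-east-1", "execute-api")
b = sigv4_signing_key("tempSecretB", "20240101", "us-east-1", "execute-api")
print(a != b)  # True: different temporary secrets -> different signatures
```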

Attaching a usage plan to a public Api Gateway Endpoint

For learning purposes, I have developed a front-end application in Angular with AWS as back-end.
As this is a didactic project, I must prevent any possible cost explosion, above all for API Gateway calls.
At the moment, I have a single public GET endpoint for providing the information to the public homepage in the front-end.
I need to attach a usage plan to this endpoint for limiting the maximum number of calls to this endpoint. For example, max 10000 calls/week.
I already tried with an API-KEY:
Created the Usage Plan with "Quota: 10,000 requests per week"
Created the API KEY connected to the Usage Plan
Connected the API KEY to the authentication method of the endpoint
It works, but in this way I need to hard code the API KEY on the front-end.
I know that hard coding sensitive information on the front-end is a bad practice, but I thought that in this case the API KEY is needed only for connecting a Usage Plan, not for securing private information or operations. So I'm not sure if in this case it should be acceptable or not.
Is this solution safe, or could the same API KEY be used for other malicious purposes?
Are there any other solutions?
To add to the other answer, API Keys are not Authorization Tokens.
API Keys associated with Usage Plans are sent to the gateway on the x-api-key header (by default).
Tokens used by authorizers live on the standard Authorization header.
Use API Keys to set custom throttling and quotas for a particular caller, but you still need an Authorizer on any secure endpoints.
You can optionally set an API Key to be returned from the Authorizer when using a custom authorizer to be applied to the Usage Plan, which prevents you from having to distribute keys to clients in addition to their secrets.
APIG Key Source Docs
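For reference, the authorizer-returned key mentioned above rides on the documented Lambda authorizer output; a sketch of that response shape (all values are placeholders):

```python
def authorizer_response(principal_id, method_arn, api_key):
    """Shape of a Lambda (custom) authorizer result that also hands the
    caller's API key back to API Gateway for usage-plan metering
    (requires the API's key source to be set to AUTHORIZER)."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": method_arn,
            }],
        },
        "usageIdentifierKey": api_key,
    }

print(authorizer_response("user-1", "arn:example-method-arn", "key-123"))
```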
As the documentation says, generally you wouldn't want to rely on a hardcoded API key for authentication.
In addition, based on the information you provided, the usage plan limits are tied to that user's use of the API key. So you could also set up throttling on the account or the stage itself.
Finally, if possible, you could set up security group rules on your server, or network ACLs on the VPC serving your front end, so that not everyone can access it.
Other ideas are to shut down the API gateway when you aren't using it and also rotate the API key frequently.
Obviously none of these are going to be enough if this were hosting sensitive information or doing anything remotely important. But as long as the only information in the client side is that API Key to access the API Gateway and that API Gateway itself doesn't interact with anything sensitive that's probably enough for a learning project.
But you should do all of this at your own risk as for all you know I'm the person who's going to try to hack you ;)

Can Server sent events (sse) work with AWS Cloudfront?

Is there a way to make sse (server sent events) work using Cloudfront?
I know they announced websockets support few years ago but I can not find any reference or cases related to using sse communication through Cloudfront.
I did a test and the client response ends with a 504 Gateway Time-out after approximately a minute.
Yes, you can use SSE (Server-Sent Events) with CloudFront.
There are many different ways to implement your API behind CloudFront. So, in some cases, there could be limitations. But let me describe one standard and straightforward way you could set up your application that is tested to work with SSE.
Let's say you have an EC2 instance (at least one) behind an ALB (Application Load Balancer). Even if you don't need more than one EC2 instance, you might need the ALB in order to use HTTPS. Besides importing your TLS/SSL certificate into your CloudFront distribution, you will also need your API to be accessible (by CloudFront itself) via HTTPS (don't forget it could be located on another continent).
In CloudFront you can create a Distribution with an Origin that basically maps https://yourapp.com/api to that ALB. Note that CloudFront also allows you to forward traffic to a different (sub)domain if that's where your API/ALB is (that setup I've also tested successfully).
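Whatever runs behind the ALB just needs to hold the connection open and keep flushing text/event-stream frames; the wire format itself is simple. A stdlib sketch of formatting one event frame (your origin would write these with Content-Type: text/event-stream and flush after each one):

```python
def sse_event(data, event=None, retry_ms=None):
    """Format one Server-Sent Events frame (text/event-stream)."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if retry_ms:
        lines.append(f"retry: {retry_ms}")
    # Multi-line payloads become multiple "data:" lines per the SSE format
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event

print(sse_event("hello", event="status"))  # event: status / data: hello
```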
Websockets work with AWS API Gateway. You can also use AppSync (GraphQL) subscriptions. CloudFront can't push anything to clients on its own.
AWS resources are linked with EventBridge (basically an async way to trigger an event), and it's stateless, so this is not possible. The only way is to deploy your app in some sort of web container with which you can achieve your expected behaviour.
Another way is to use AWS API Gateway's WebSocket support to open a (full-duplex) connection and transfer whatever data you want back and forth.

Verify that API call was made from AWS Lambda (or another server)

Is there any way I can check that a call was made from an expected Lambda function (or, more generally, from another trusted server)?
The use case is: I want to emit a websocket event to notify my web app when my EC2 status changes. (Do suggest a better way if any) My web socket server will be an EC2 instance. So I am wondering, if there is some way to allow calls to this specific API to be only from a valid AWS Lambda function?
Currently I am thinking maybe I can just use a shared secret ... but since this never expires, I wonder if it's a security risk?
UPDATE
A thought came to mind: is it OK to use asymmetric encryption like RSA for this?
Oh wait, is RSA not suitable for this kind of data encryption? I read it's for signing and key exchange?
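(On the RSA aside: RSA is indeed normally used for signatures and key exchange rather than bulk data encryption. If you do go the shared-secret route, the usual mitigation for the no-expiry concern is to bind each request to a timestamp and rotate the secret; a stdlib sketch with placeholder values:)

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"rotate-me"  # placeholder; distribute out-of-band

def sign(body, now=None):
    """Return (timestamp, signature) binding the body to the current time."""
    ts = str(int(now if now is not None else time.time()))
    mac = hmac.new(SHARED_SECRET, f"{ts}.{body}".encode(), hashlib.sha256)
    return ts, mac.hexdigest()

def verify(body, ts, sig, max_age=300, now=None):
    now = now if now is not None else time.time()
    if abs(now - int(ts)) > max_age:  # reject stale / replayed requests
        return False
    expected = hmac.new(SHARED_SECRET, f"{ts}.{body}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

ts, sig = sign('{"status":"running"}', now=1000)
print(verify('{"status":"running"}', ts, sig, now=1100))  # True (fresh)
print(verify('{"status":"running"}', ts, sig, now=9999))  # False (stale)
```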
I assume you have already set an EC2 state-change trigger to an SNS topic you have created.
You can verify the signature of an SNS notification by processing the JSON body as instructed in the AWS SNS documentation. It also has a Java example, but if you want to use Python, here is an example of the X509 verification part using the M2Crypto module:
```python
from M2Crypto import X509
from base64 import b64decode

def verify_sns_signature(cert_pem, str_to_sign, signature):
    # cert_pem: the PEM text fetched from the notification's SigningCertURL
    cert = X509.load_cert_string(cert_pem)
    pubkey = cert.get_pubkey()
    pubkey.reset_context(md='sha1')  # SNS SignatureVersion 1 is SHA1withRSA
    pubkey.verify_init()
    pubkey.verify_update(str_to_sign.encode())
    result = pubkey.verify_final(b64decode(signature))
    if result != 1:
        raise Exception('Signature could not be verified')
    return True
```
As advised in the doc, the AWS SDK for your language may already have this ability built in, as it does for Ruby. It's a good idea to check that before implementing it yourself.
So basically, the receiving server needs the X509 certificate that SNS used for signing.
It is also recommended to subscribe your SNS topic to an HTTPS endpoint, so that the authenticity of the server is validated before the notification is sent as a secure request. SNS trusts the certificate authorities listed here.