I am writing an application using the AWS SDK for C++. I would like to enable integrity checking for S3 transfers, even transfers that require multiple requests due to the size of the file.
How can I do this? The documentation for the C++ version of the AWS SDK is scanty.
I scanned the source code to the SDK and found this in AmazonWebServiceRequest:
inline virtual bool ShouldComputeContentMd5() const { return false; }
but it's not clear to me how to get the S3 classes to use an overridden version of this method.
While we're on the subject, I'd rather use the relatively new SHA256 AWS feature instead of MD5, but there seem to be even fewer hooks for that hash algorithm in the C++ SDK.
Can anyone help? Thanks.
S3 has an ETag feature. Once an object is uploaded, either partially or fully, you can get the ETag from the S3 API call by reading it from the response headers.
The links below discuss ETags in more detail.
What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB?
S3 Documentation on ETag Header:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html
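For example, with boto3 (the bucket and key names here are just placeholders), the ETag is available both in the upload response and from a later HEAD request:

import boto3

s3 = boto3.client("s3")

# The ETag comes back directly in the PutObject response...
put_response = s3.put_object(Bucket="my-bucket", Key="backups/data.bin", Body=b"example payload")
print("ETag after upload:", put_response["ETag"])

# ...and can be read again later with a HEAD request.
head_response = s3.head_object(Bucket="my-bucket", Key="backups/data.bin")
print("ETag from HEAD:", head_response["ETag"])

Keep in mind that for single-part uploads the ETag is the hex MD5 of the object, while for multipart uploads it is not, which the first link above explains.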
Occasionally, a client requests a large chunk of data to be transferred to them.
We host our data in AWS S3, and a solution we use is to generate presign URLs for the data they need.
My question:
When should data integrity checks actually be performed on data migration, or is relying on TLS good enough?
From my understanding, most uploads/downloads done via the AWS CLI will automatically perform data integrity checks.
One potential solution I have is to manually generate MD5SUMS for all files transferred, and for them to perform a local comparison.
I understand that the ETag is a checksum of sorts, but because a lot of the files are multipart uploads, the ETag becomes a very complicated mess to use as a comparison value.
You can activate "Additional checksums" in AWS S3.
The GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.
Check out this release blog: https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/
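As a rough sketch of what that looks like with boto3 (the bucket, key, and inline body are placeholders, not a definitive implementation): ask S3 to compute a SHA-256 checksum at upload time, then read it back with GetObjectAttributes.

import boto3

s3 = boto3.client("s3")

# Ask S3 to compute and store a SHA-256 checksum alongside the object.
s3.put_object(
    Bucket="my-bucket",
    Key="exports/data.csv",
    Body=b"col1,col2\n1,2\n",
    ChecksumAlgorithm="SHA256",
)

# Retrieve the stored checksum, plus per-part checksums for multipart uploads.
attrs = s3.get_object_attributes(
    Bucket="my-bucket",
    Key="exports/data.csv",
    ObjectAttributes=["Checksum", "ObjectParts", "ETag"],
)
print(attrs.get("Checksum"))     # e.g. {'ChecksumSHA256': '...'}
print(attrs.get("ObjectParts"))  # present only for multipart uploads

Clients can then compare these values against locally computed hashes (per part, for multipart uploads) rather than trying to reverse-engineer multipart ETags.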
I have an application that has been running for a long time that uploads files (images) to S3 storage.
Now I've been requested to update this application and upload file using SSE-C encryption (Server Side Encryption with Customer provided key). So I did it.
I'm also able to upload SSE-C encrypted files using aws cli.
What I need now, and here is my question, is to find a way to apply SSE-C encryption to earlier files that are already on S3 without SSE-C encryption.
Could someone explain to me whether and how this can be accomplished, or point me to some documentation or support page where I can find a solution?
One (maybe inefficient) way I found is doing the following for each file:
copy filename to filename.encrypted applying the SSE-C encryption
move filename.encrypted to filename
Is this the only way to do it, or is there a better one?
NOTES:
Since I have many, many files, I have obviously ruled out downloading each file and uploading it again with SSE-C encryption, because that would be too slow and too expensive.
I'm looking for a solution that applies SSE-C without transferring data out of and back into S3.
Thank you very much for any feedback on this.
You can apply encryption to already-existing objects by simply copying the object on top of itself:
aws s3 cp s3://bucket/foo.txt s3://bucket/foo.txt --sse-c --sse-c-key fileb://key.bin
This works as long as something (e.g. the encryption) is changing.
I got the --sse-c syntax from: How to supply a key on the command line that's not Base 64 encoded
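If you would rather script it than use the CLI, the same in-place copy can be sketched with boto3; the bucket, key, and key file below are placeholders, and note that a single CopyObject call only handles objects up to 5 GB (larger objects need a multipart copy).

import boto3

s3 = boto3.client("s3")

# 32-byte customer-provided key; reading it from a local file is just an example.
with open("key.bin", "rb") as f:
    sse_c_key = f.read()

bucket, key = "my-bucket", "images/photo.jpg"  # placeholders

# Copy the object onto itself, adding SSE-C. The change in encryption is what
# makes the in-place copy acceptable to S3.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_c_key,
)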
Specifically, in an origin-response-triggered function (e.g. with a 404 status), how can I read an HTML file stored in S3 and use its content for the response body?
(I would like to manually return a custom error page just as CloudFront does, but choosing it based on cookies).
NOTE: The HTML file in S3 is stored in the same bucket as my website. OAI is enabled.
Thank you very much!
Lambda@Edge functions don't currently¹ have direct access to any body content from the origin.
You will need to grant your Lambda Execution Role the necessary privileges to read from the bucket, and then use s3.getObject() from the JavaScript SDK to fetch the object from the bucket, then use its body.
The SDK is already in the environment,² so you don't need to bundle it with your code. You can just require it, and create the S3 client globally, outside the handler, which saves time on subsequent invocations.
'use strict';
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-2' }); // use the correct region for your bucket
exports.handler ...
Note that one of the perceived hassles of updating a Lambda@Edge function is that the Lambda console gives the impression that redeploying it is annoyingly complicated... but you don't have to use the Lambda console to do this. The wording of the "enable trigger and replicate" checkbox gives you the impression that it's doing something important, but it turns out... it isn't. Changing the version number in the CloudFront configuration and saving changes accomplishes the same purpose.
After you create a new version of the function, you can simply go to the Cache Behavior in the CloudFront console and edit the trigger ARN to use the new version number, then save changes.
¹Currently, that is; I have submitted this as a feature request, which could potentially allow a response trigger to receive a copy of the response body and rewrite it. It would necessarily be limited to the maximum response size of the Lambda API (or smaller, as generated responses are currently limited), and might not be applicable in this case, since I assume you may be fetching a language-specific response.
²Already in the environment: if I remember right, long ago, Lambda@Edge didn't include the SDK, but it is always there now.
What I would like to do:
What I would like to do is have a URL that returns a CSV file to the caller, essentially an export of data. I would like this to remain a serverless solution.
What I have done:
I have created an AWS API Gateway with the URL I want. I have created a Lambda that will query the database and create a CSV string of that data. That data is placed in a JSON object and returned. API Gateway then gets the CSV data from the JSON object and returns the CSV to the caller with appropriate headers to indicate that it is a CSV attachment. Testing from the browser, I get the download automatically, just as I intended.
The problem I see:
This works well until there is a sizable amount of data at which point I start getting "body size is too long".
My attempts to resolve:
I did some googling around and I see others have had similar issues. In one solution I saw, they return a link to the file that they created. That solution seems viable for them because they had a server; for my serverless architecture it seems a little trickier. I could store the file in S3, but then I would have to return a link to S3. That seems like it could work, but it doesn't feel right, as if I'm missing a configuration option. It also feels like I'm exposing the implementation by returning the S3 URLs.
I have looked around for tutorials and examples of people doing similar things and I haven't found any.
My Questions:
Is there a way to do this?
Is there another solution that I don't know of?
How do I return a larger file, in this case a CSV, from API Gateway?
There is a limit of 6 MB for AWS Lambda response payloads. If the files you need to serve are larger than that, you won't be able to serve them directly from Lambda.
Using S3 to store and serve the files is the standard way of doing something like this. I would leave the S3 bucket private and generate S3 Pre-signed URLs in the Lambda function. That will limit the time that the CSV file is available for download, and it will prevent people from being able to guess the URLs of files you are serving. You would use an S3 Lifecycle Policy to archive or delete the files after a period of time.
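A minimal sketch of that pattern in a Python Lambda handler (the bucket name, key prefix, expiry, and redirect-style response are all assumptions, not a definitive implementation):

import csv
import io
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-export-bucket"  # placeholder

def handler(event, context):
    # Build the CSV in memory; in practice this would come from your database query.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name"])
    writer.writerow([1, "example"])

    key = f"exports/{uuid.uuid4()}.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue().encode("utf-8"),
                  ContentType="text/csv")

    # Return a short-lived pre-signed URL instead of the CSV body itself.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # 5 minutes
    )
    return {"statusCode": 302, "headers": {"Location": url}, "body": ""}

Returning a redirect (or just the URL in a small JSON body) keeps the Lambda response tiny no matter how large the CSV grows.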
I'm using S3 to backup large files that are critical to my business. Can I be confident that once uploaded, these files are verified for integrity and are intact?
There is a lot of documentation around scalability and availability but I couldn't find any information talking about integrity and/or checksums.
When uploading to S3, there's an optional request header (which in my opinion should not be optional, but I digress), Content-MD5. If you set this value to the base64 encoding of the MD5 hash of the request body, S3 will outright reject your upload in the event of a mismatch, thus preventing the upload of corrupt data.
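As an illustration with boto3 (file, bucket, and key names are placeholders), Content-MD5 is the base64 encoding of the raw MD5 digest of the request body:

import base64
import hashlib
import boto3

s3 = boto3.client("s3")

with open("backup.tar.gz", "rb") as f:  # placeholder file name
    body = f.read()
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

# S3 rejects the upload if the body it receives does not hash to this value.
s3.put_object(Bucket="my-bucket", Key="backups/backup.tar.gz",
              Body=body, ContentMD5=content_md5)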
The ETag header will be set to the hex-encoded MD5 hash of the object, for single part uploads (with an exception for some types of server-side encryption).
For multipart uploads, the Content-MD5 header works the same way, but it applies to each individual part.
When S3 combines the parts of a multipart upload into the final object, the ETag header is set to the hex-encoded MD5 hash of the concatenated binary-encoded (raw bytes) MD5 hashes of each part, followed by a hyphen (-) and the number of parts.
When you ask S3 to do that final step of combining the parts of a multipart upload, you have to give it back the ETags it gave you during the uploads of the original parts, which is supposed to assure that what S3 is combining is what you think it is combining. Unfortunately, there's an API request you can make to ask S3 about the parts you've uploaded, and some lazy developers will just ask S3 for this list and then send it right back, which the documentation warns against, but hey, it "seems to work," right?
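To make the ETag construction concrete, here is a sketch that reproduces it locally; the 8 MB part size is only an example and has to match whatever part size the uploading tool actually used:

import hashlib

def multipart_etag(path, part_size=8 * 1024 * 1024):
    # MD5 of the concatenated raw per-part MD5 digests, followed by '-' and the part count.
    part_digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            part_digests.append(hashlib.md5(chunk).digest())
    combined = hashlib.md5(b"".join(part_digests)).hexdigest()
    return f"{combined}-{len(part_digests)}"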
Multipart uploads are required for objects over 5GB and optional for uploads over 5MB.
Correctly used, these features provide assurance of intact uploads.
If you are using Signature Version 4, which is also optional in older regions, there is an additional integrity mechanism, and this one isn't optional (if you're actually using V4): uploads must have a request header, x-amz-content-sha256, set to the hex-encoded SHA-256 hash of the payload, and the request will be denied if there's a mismatch here, too.
My take: Since some of these features are optional, you can't trust that any tools are doing this right unless you audit their code.
I don't trust anybody with my data, so for my own purposes, I wrote my own utility, internally called "pedantic uploader," which uses no SDK and speaks directly to the REST API. It calculates the sha256 of the file and adds it as x-amz-meta-... metadata so it can be fetched with the object for comparison. When I upload compressed files (gzip/bzip2/xz) I store the sha of both compressed and uncompressed in the metadata, and I store the compressed and uncompressed size in octets in the metadata as well.
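That utility isn't public, but the general idea of recording your own hashes as object metadata at upload time can be sketched with any SDK. A rough boto3 equivalent (not the author's no-SDK tool, and with placeholder names):

import hashlib
import boto3

s3 = boto3.client("s3")

path = "data.json.gz"  # placeholder
with open(path, "rb") as f:
    body = f.read()

s3.put_object(
    Bucket="my-backup-bucket",  # placeholder
    Key=path,
    Body=body,
    # User-defined metadata is stored and returned as x-amz-meta-* headers;
    # the uncompressed hash and size could be added the same way.
    Metadata={
        "sha256-compressed": hashlib.sha256(body).hexdigest(),
        "size-compressed": str(len(body)),
    },
)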
Note that Content-MD5 and x-amz-content-sha256 are request headers. They are not returned with downloads. If you want this information later, you need to save it yourself at upload time, for example in the object metadata as described above.
Within EC2, you can easily download an object without actually saving it to disk, just to verify its integrity. If the EC2 instance is in the same region as the bucket, you won't be billed for data transfer if you use an instance with a public IPv4 or IPv6 address, a NAT instance, an S3 VPC endpoint, or through an IPv6 egress gateway. (You'll be billed for NAT Gateway data throughput if you access S3 over IPv4 through a NAT Gateway). Obviously there are ways to automate this, but manually, if you select the object in the console, choose Download, right-click and copy the resulting URL, then do this:
$ curl -v '<url from console>' | md5sum # or sha256sum etc.
Just wrap the URL from the console in single ' quotes since it will be pre-signed and will include & in the query string, which you don't want the shell to interpret.
You can perform an MD5 checksum locally, and then verify that against the MD5 checksum of the object on S3 to ensure data integrity. Here is a guide
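For single-part uploads (where the ETag is simply the hex MD5 of the object, and no SSE-KMS is involved), the comparison can be as simple as this boto3 sketch, with placeholder names:

import hashlib
import boto3

s3 = boto3.client("s3")

with open("report.csv", "rb") as f:  # placeholder file
    local_md5 = hashlib.md5(f.read()).hexdigest()

etag = s3.head_object(Bucket="my-bucket", Key="report.csv")["ETag"].strip('"')
print("match" if local_md5 == etag else "MISMATCH")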