AWS service to verify data integrity of file in S3 via checksum? - amazon-web-services

One method of ensuring a file in S3 is what it claims to be is to download it, get its checksum, and match the result against the checksum you were expecting.
Does AWS provide any service that allows this to happen without the user needing to first download the file? (i.e. ideally a simple request/url that provides the checksum of an S3 file, so that it can be verified before the file is downloaded)
What I've tried so far
I can think of a DIY solution along the lines of
Create an API endpoint that accepts a POST request with the S3 file url
Have the API run a lambda that generates the checksum of the file
Respond with the checksum value
This could work, but it is already a little complicated and would raise further considerations, e.g. large files may take a long time (perhaps > 60 seconds) to checksum.
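A minimal sketch of what the Lambda in that DIY approach might look like, assuming boto3 and a bucket/key pair passed in the event (the event shape and function names here are my own, not an AWS convention):

```python
import hashlib

def sha256_of_chunks(chunks):
    """Compute the hex SHA-256 digest over an iterable of byte chunks."""
    digest = hashlib.sha256()
    for chunk in chunks:
        digest.update(chunk)
    return digest.hexdigest()

def lambda_handler(event, context):
    """Hypothetical handler: stream the object from S3 and checksum it.

    Assumes an event like {"bucket": "...", "key": "..."}.
    """
    import boto3  # imported here so the sketch loads without the SDK installed
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=event["bucket"], Key=event["key"])["Body"]
    return {"sha256": sha256_of_chunks(body.iter_chunks())}
```

Streaming the body chunk by chunk keeps memory flat, but the Lambda still has to read every byte, so the long-runtime concern for large files stands.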
I'm hoping AWS has some simple way of validating S3 files?

There is an ETag created against each object, which is an MD5 of the object contents.
However, there are some exceptions.
From Common Response Headers - Amazon Simple Storage Service:
ETag: The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag may or may not be an MD5 digest of the object data. Whether or not it is depends on how the object was created and how it is encrypted as described below:
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.
Also, the calculation of an ETag for a multi-part upload can be complex. See: s3cmd - What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB? - Stack Overflow
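For the single-part, non-KMS case described above, comparing a local MD5 against the ETag can be sketched like this (the helper names are mine; boto3's head_object is the real call that returns the ETag without downloading the object):

```python
import hashlib

def etag_matches(data: bytes, etag: str):
    """Compare local bytes against an S3 ETag.

    Returns True/False for single-part ETags; returns None for
    multipart-style ETags (those containing '-'), which are not a
    plain MD5 of the whole object.
    """
    etag = etag.strip('"')  # S3 returns the ETag wrapped in double quotes
    if "-" in etag:
        return None
    return hashlib.md5(data).hexdigest() == etag

def remote_etag(bucket: str, key: str) -> str:
    """Fetch the ETag via a HEAD request, without downloading the object."""
    import boto3  # local import so the sketch loads without the SDK
    return boto3.client("s3").head_object(Bucket=bucket, Key=key)["ETag"]
```

Note the quote-stripping: the ETag header value is literally `"5d41..."` including the quotes, which trips up naive string comparisons.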

Related

Checking data integrity of downloaded AWS S3 data when using presigned URLS

Occasionally, a client requests a large chunk of data to be transferred to them.
We host our data in AWS S3, and a solution we use is to generate presigned URLs for the data they need.
My question:
When should data integrity checks actually be performed on data migration, or is relying on TLS good enough...
From my understanding, most uploads/downloads performed via the AWS CLI will automatically perform data integrity checks.
One potential solution I have is to manually generate MD5SUMS for all files transferred, so the client can perform a local comparison.
I understand that the ETag is a checksum of sorts, but because a lot of the files are multipart uploads, the ETag becomes very complicated to use as a comparison value.
You can activate "Additional checksums" in AWS S3.
The GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.
Check out this release blog: https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/
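A sketch of reading such an additional checksum via boto3's get_object_attributes (this assumes the object was uploaded with ChecksumAlgorithm='SHA256'; the helper that picks the value out of the response is my own):

```python
def extract_checksum(attrs: dict, algorithm: str = "SHA256"):
    """Pull a checksum value out of a GetObjectAttributes-style response.

    Returns None if the object has no additional checksum stored.
    """
    return attrs.get("Checksum", {}).get(f"Checksum{algorithm}")

def fetch_object_checksum(bucket: str, key: str):
    """Ask S3 for the object's stored checksum without downloading it."""
    import boto3  # local import so the sketch loads without the SDK
    s3 = boto3.client("s3")
    attrs = s3.get_object_attributes(
        Bucket=bucket,
        Key=key,
        ObjectAttributes=["Checksum", "ObjectParts"],
    )
    return extract_checksum(attrs)
```

Requesting "ObjectParts" as well gives you the per-part checksums mentioned above, which is what makes this workable for multipart uploads where the ETag is not.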

How to download object from S3 with checksum validation

I have a number of objects in an S3 bucket which I need to download to my local storage and validate the downloaded copy ideally using a checksum.
Is there a way of validating objects' integrity after downloading with the following assumptions:
those objects have been delivered to my bucket by a third party and have the S3-native multipart checksum instead of MD5 in the metadata.
I have no way to figure out what part size was used for uploading those objects, which I could use to calculate the multipart checksum on the ground.
I have no influence over that third party to provide either the MD5 or the chunk size they used when they delivered the file to my bucket.
Any ideas?

ObjectCreated:Post on S3 Console upload?

My S3 Lambda Event listener is only seeing ObjectCreated:Put events when a file is uploaded via the S3 console. This is both for new files and overwriting existing files. Is this the expected behavior?
It seems like a new file upload should generate ObjectCreated:Post in keeping with the POST == Create, PUT == Update norm.
S3 has 4 APIs for object creation:
PUT is used for requests that send only the raw object bytes in the HTTP request body. It is the most common API used for creation of objects up to 5 GB in size.
POST uses specially-crafted HTML forms with attributes, authentication, and a file all as part of a multipart/form-data HTTP request body.
Copy is used where the source bytes come from an existing object in S3 (which incidentally also uses HTTP PUT on the wire, but is its own event type). The Copy API is also used any time you edit the metadata of an existing object: once stored in S3, objects and their metadata are completely immutable. The console allows you to "edit" metadata, but it accomplishes this by copying the object on top of itself (which is a safe operation in S3, even when bucket versioning is not enabled, because the old object is untouched until the new object creation has succeeded) while supplying revised metadata. S3 does not support move or rename -- these are done with a copy followed by a delete. The maximum size of object that can be copied with the Copy API is 5 GB.
Multipart, which is mandatory for creating objects exceeding 5 GB and recommended for multi-megabyte objects. Multipart can be used for objects of any size, but each part (other than the last) must be at least 5 MiB in size, so it is not typically used for smaller uploads. This API also allows safe retrying of any parts that failed, uploading parts in parallel, and has multiple integrity checks to prevent any defects from appearing in the object that S3 reassembles. Multipart is also used to copy large objects.
The console communicates with S3 using the standard public APIs, the same as the SDKs use, and uses either PUT or multipart, depending on the object size, and Copy for editing object metadata, as mentioned above.
For best results, always use the s3:ObjectCreated:* event, unless you have a specific reason not to.
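Following that advice, a bucket notification configuration subscribing a Lambda function to s3:ObjectCreated:* could be sketched like so (the ARN is a placeholder, and the Lambda's resource policy allowing S3 to invoke it must already be in place):

```python
def creation_notification_config(lambda_arn: str) -> dict:
    """Build a notification config that catches all four creation APIs."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                # Matches Put, Post, Copy, and CompleteMultipartUpload events
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }

def apply_config(bucket: str, lambda_arn: str):
    """Attach the notification configuration to the bucket."""
    import boto3  # local import so the sketch loads without the SDK
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=creation_notification_config(lambda_arn),
    )
```

Subscribing to the wildcard means your listener keeps working even if the console (or an SDK) silently switches between PUT and multipart based on object size.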

AWS S3 Upload Integrity

I'm using S3 to backup large files that are critical to my business. Can I be confident that once uploaded, these files are verified for integrity and are intact?
There is a lot of documentation around scalability and availability but I couldn't find any information talking about integrity and/or checksums.
When uploading to S3, there's an optional request header (which in my opinion should not be optional, but I digress), Content-MD5. If you set this value to the base64 encoding of the MD5 hash of the request body, S3 will outright reject your upload in the event of a mismatch, thus preventing the upload of corrupt data.
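Computing the value for the Content-MD5 header can be sketched as follows (ContentMD5 is the real boto3 parameter name on put_object; the helper name is mine):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Base64 encoding of the raw MD5 digest, as Content-MD5 expects.

    Note: base64 of the 16 raw digest bytes, NOT of the hex string.
    """
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

def upload_with_md5(bucket: str, key: str, body: bytes):
    """Upload; S3 rejects the request outright if the MD5 doesn't match."""
    import boto3  # local import so the sketch loads without the SDK
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=body, ContentMD5=content_md5(body)
    )
```

A common mistake is base64-encoding the hex digest string instead of the raw digest bytes, which S3 rejects with an InvalidDigest error.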
The ETag header will be set to the hex-encoded MD5 hash of the object, for single part uploads (with an exception for some types of server-side encryption).
For multipart uploads, a Content-MD5 header can be supplied in the same way, but on each individual part.
When S3 combines the parts of a multipart upload into the final object, the ETag header is set to the hex-encoded MD5 hash of the concatenated binary (raw bytes) MD5 hashes of each part, followed by a - and the number of parts.
When you ask S3 to do that final step of combining the parts of a multipart upload, you have to give it back the ETags it gave you during the uploads of the original parts, which is supposed to assure that what S3 is combining is what you think it is combining. Unfortunately, there's an API request you can make to ask S3 about the parts you've uploaded, and some lazy developers will just ask S3 for this list and then send it right back, which the documentation warns against, but hey, it "seems to work," right?
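That multipart ETag algorithm can be reproduced locally if you know the part size that was used (a sketch; the function name is mine):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """Reproduce S3's multipart ETag: hex MD5 of the concatenated raw
    per-part MD5 digests, followed by '-' and the part count."""
    digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    combined = hashlib.md5(b"".join(digests)).hexdigest()
    return f"{combined}-{len(digests)}"
```

Comparing this against the object's ETag only works when you know (or can guess) the original part size, which is exactly why the ETag is awkward to use as a checksum for objects you didn't upload yourself.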
Multipart uploads are required for objects over 5GB and optional for uploads over 5MB.
Correctly used, these features provide assurance of intact uploads.
If you are using Signature Version 4, which is also optional in older regions, there is an additional integrity mechanism, and this one isn't optional (if you're actually using V4): uploads must include a request header, x-amz-content-sha256, set to the hex-encoded SHA-256 hash of the payload, and the request will be denied if there's a mismatch here, too.
My take: Since some of these features are optional, you can't trust that any tools are doing this right unless you audit their code.
I don't trust anybody with my data, so for my own purposes, I wrote my own utility, internally called "pedantic uploader," which uses no SDK and speaks directly to the REST API. It calculates the sha256 of the file and adds it as x-amz-meta-... metadata so it can be fetched with the object for comparison. When I upload compressed files (gzip/bzip2/xz) I store the sha of both compressed and uncompressed in the metadata, and I store the compressed and uncompressed size in octets in the metadata as well.
Note that Content-MD5 and x-amz-content-sha256 are request headers; they are not returned with downloads. If you want that information available later, save it in the object metadata, as described above.
Within EC2, you can easily download an object without actually saving it to disk, just to verify its integrity. If the EC2 instance is in the same region as the bucket, you won't be billed for data transfer if you use an instance with a public IPv4 or IPv6 address, a NAT instance, an S3 VPC endpoint, or an egress-only internet gateway (IPv6). (You'll be billed for NAT Gateway data throughput if you access S3 over IPv4 through a NAT Gateway.) Obviously there are ways to automate this, but manually, if you select the object in the console, choose Download, right-click and copy the resulting URL, then do this:
$ curl -v '<url from console>' | md5sum # or sha256sum etc.
Just wrap the URL from the console in single ' quotes since it will be pre-signed and will include & in the query string, which you don't want the shell to interpret.
You can compute an MD5 checksum locally, and then verify it against the MD5 checksum of the object on S3 to ensure data integrity. Here is a guide

Data integrity check during upload to S3 with server side encryption

Data integrity checking is something the AWS Java SDK claims to provide by default: either the client calculates the object checksum itself and adds it as the "Headers.CONTENT_MD5" header in the S3 client, or, if that is left null or unset, the S3 client internally computes an MD5 checksum on the client side and compares it to the ETag (which is simply the MD5 of the created object) returned in the object-creation response, throwing an error back to the client in case of a data integrity failure. Note that in the latter case the integrity check happens on the client side and not on the S3 server side, which means the object will still be created successfully and the client would need to clean it up explicitly.
Hence, using the header is recommended (the check then happens on the S3 side itself and fails early), but as TransferManager uses multipart upload, it is not possible for the client to explicitly set the MD5 for a specific part. TransferManager should take care of computing the MD5 of each part and setting the header, but I don't see that happening in the code.
As we want to use TransferManager for multipart uploads, we would need to depend on the client-side check, which is enabled by default. However, there is a caveat to that too. When we enable SSE-KMS or SSE-C on the object in S3, this data integrity check is skipped, apparently because (as a comment in the code mentions) S3 returns an MD5 of the ciphertext in that case, which can't be verified against the MD5 computed on the client side.
What should I use to enable the data integrity check with SSE in S3?
Note: Please verify that the above understanding is correct.