Amazon S3 concatenate small files - amazon-web-services

Is there a way to concatenate small files (each less than 5 MB) on Amazon S3?
Multipart Upload is not an option because of the small file sizes.
Pulling all of these files down and concatenating them locally is not an efficient solution.
So, can anybody point me to some APIs for doing this?

Amazon S3 does not provide a concatenate function. It is primarily an object storage service.
You will need some process that downloads the objects, combines them, then uploads them again. The most efficient way to do this would be to download the objects in parallel, to take full advantage of available bandwidth. However, that is more complex to code.
I would recommend doing the processing "in the cloud" to avoid downloading the objects across the Internet. Doing it on Amazon EC2 or AWS Lambda would be more efficient and less costly.
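For illustration, a minimal sketch of the download-and-combine approach described above (the bucket and key names are placeholder assumptions; parallelising the downloads is left out for brevity):
import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'                      # placeholder bucket name
keys = ['part-0.json', 'part-1.json']     # placeholder keys of the small objects

# download each object and append its bytes to a local file
with open('combined.json', 'wb') as out_file:
    for key in keys:
        body = s3.get_object(Bucket=bucket, Key=key)['Body']
        out_file.write(body.read())

# upload the combined file back to S3
s3.upload_file('combined.json', bucket, 'combined.json')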

Based on @wwadge's comment, I wrote a Python script.
It bypasses the 5 MB limit by first uploading a dummy object slightly bigger than 5 MB, then appending each small file as if it were the last part. At the end it strips the dummy bytes out of the merged file.
import boto3
import os

bucket_name = 'multipart-bucket'
merged_key = 'merged.json'
mini_file_0 = 'base_0.json'
mini_file_1 = 'base_1.json'
dummy_file = 'dummy_file'

s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')

# we need a garbage/dummy file with size > 5MB,
# so we create and upload one here;
# this key will also be the key of the final merged file
with open(dummy_file, 'wb') as f:
    # slightly > 5MB
    f.seek(1024 * 5200)
    f.write(b'0')

with open(dummy_file, 'rb') as f:
    s3_client.upload_fileobj(f, bucket_name, merged_key)

os.remove(dummy_file)

# get the number of bytes of the garbage/dummy file,
# needed to strip these garbage/dummy bytes from the final merged file
bytes_garbage = s3_resource.Object(bucket_name, merged_key).content_length

# for each small file you want to concatenate:
# when this loop has finished, merged.json will contain
# (dummy bytes + base_0.json + base_1.json)
for key_mini_file in [mini_file_0, mini_file_1]:  # include more files if you want
    # initiate multipart upload with the merged.json object as target
    mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=merged_key)

    part_responses = []
    # perform a multipart copy where merged.json is the first part
    # and the small file is the second part
    for n, copy_key in enumerate([merged_key, key_mini_file]):
        part_number = n + 1
        copy_response = s3_client.upload_part_copy(
            Bucket=bucket_name,
            CopySource={'Bucket': bucket_name, 'Key': copy_key},
            Key=merged_key,
            PartNumber=part_number,
            UploadId=mpu['UploadId']
        )

        part_responses.append(
            {'ETag': copy_response['CopyPartResult']['ETag'], 'PartNumber': part_number}
        )

    # complete the multipart upload;
    # merged.json now contains (previous merged.json + mini file)
    response = s3_client.complete_multipart_upload(
        Bucket=bucket_name,
        Key=merged_key,
        MultipartUpload={'Parts': part_responses},
        UploadId=mpu['UploadId']
    )

# get the number of bytes of the final merged file
bytes_merged = s3_resource.Object(bucket_name, merged_key).content_length

# initiate a new multipart upload
mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=merged_key)
# do a single copy from the merged file, specifying a byte range that
# excludes the dummy/garbage bytes
response = s3_client.upload_part_copy(
    Bucket=bucket_name,
    CopySource={'Bucket': bucket_name, 'Key': merged_key},
    Key=merged_key,
    PartNumber=1,
    UploadId=mpu['UploadId'],
    CopySourceRange='bytes={}-{}'.format(bytes_garbage, bytes_merged - 1)
)

# complete the multipart upload;
# after this step merged.json contains (base_0.json + base_1.json)
response = s3_client.complete_multipart_upload(
    Bucket=bucket_name,
    Key=merged_key,
    MultipartUpload={'Parts': [
        {'ETag': response['CopyPartResult']['ETag'], 'PartNumber': 1}
    ]},
    UploadId=mpu['UploadId']
)
If you already have a >5MB object that you want to add smaller parts to, skip creating the dummy file and skip the final byte-range copy. Also, I have no idea how this performs on a large number of very small files; in that case it might be better to download each file, merge them locally, and then upload.
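A minimal sketch of that simplified variant (the object names here are placeholder assumptions, not from the script above): the existing large object is copied as part 1 and the small object as part 2, with no dummy bytes to strip afterwards.
import boto3

s3_client = boto3.client('s3')
bucket_name = 'multipart-bucket'
target_key = 'large.json'   # existing object, already larger than 5 MB
small_key = 'extra.json'    # small object to append

# start a multipart upload that will overwrite the large object in place
mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=target_key)

parts = []
for part_number, copy_key in enumerate([target_key, small_key], start=1):
    copy_response = s3_client.upload_part_copy(
        Bucket=bucket_name,
        CopySource={'Bucket': bucket_name, 'Key': copy_key},
        Key=target_key,
        PartNumber=part_number,
        UploadId=mpu['UploadId']
    )
    parts.append({'ETag': copy_response['CopyPartResult']['ETag'],
                  'PartNumber': part_number})

s3_client.complete_multipart_upload(
    Bucket=bucket_name,
    Key=target_key,
    MultipartUpload={'Parts': parts},
    UploadId=mpu['UploadId']
)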

Edit: I didn't see the 5 MB requirement at first. This method will not work because of that minimum part size.
From https://ruby.awsblog.com/post/Tx2JE2CXGQGQ6A4/Efficient-Amazon-S3-Object-Concatenation-Using-the-AWS-SDK-for-Ruby:
While it is possible to download and re-upload the data to S3 through
an EC2 instance, a more efficient approach would be to instruct S3 to
make an internal copy using the new copy_part API operation that was
introduced into the SDK for Ruby in version 1.10.0.
Code:
require 'rubygems'
require 'aws-sdk'

s3 = AWS::S3.new()
mybucket = s3.buckets['my-multipart']

# First, let's start the Multipart Upload
obj_aggregate = mybucket.objects['aggregate'].multipart_upload

# Then we will copy into the Multipart Upload all of the objects in a certain S3 directory.
mybucket.objects.with_prefix('parts/').each do |source_object|
  # Skip the directory object
  unless (source_object.key == 'parts/')
    # Note that this section is thread-safe and could greatly benefit from parallel execution.
    obj_aggregate.copy_part(source_object.bucket.name + '/' + source_object.key)
  end
end

obj_completed = obj_aggregate.complete()

# Generate a signed URL to enable a trusted browser to access the new object without authenticating.
puts obj_completed.url_for(:read)
Limitations (among others)
With the exception of the last part, there is a 5 MB minimum part size.
The completed Multipart Upload object is limited to a 5 TB maximum size.

Related

Data loss in AWS Lambda function when downloading file from S3 to /tmp

I have written a Lambda function in AWS to download a file from an S3 location to the /tmp directory (local Lambda space).
I am able to download the file; however, the file size changes during the download and I am not sure why.
import os
import boto3

def data_processor(event, context):
    print("EVENT:: ", event)
    bucket_name = 'asr-collection'
    fileKey = 'cc_continuous/testing/1645136763813.wav'
    path = '/tmp'
    output_path = os.path.join(path, 'mydir')
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    s3 = boto3.client("s3")
    new_file_name = output_path + '/' + os.path.basename(fileKey)
    s3.download_file(
        Bucket=bucket_name, Key=fileKey, Filename=new_file_name
    )
    print('File size is: ' + str(os.path.getsize(new_file_name)))
    return None
Output:
File size is: 337964
Actual size: 230 MB
Downloaded file size: ~330 KB
I tried download_fileobj() as well.
Any idea how I can download the file as it is, without any data loss?
The issue may be that the bucket you are downloading from is in a different region than the one the Lambda is hosted in. Apparently, this does not make a difference when running the code locally.
Check your bucket's location relative to your Lambda's region.
Note that setting the region on your client allows you to use a Lambda in a different region from your bucket. However, if you intend to pull down larger files, you will get network-latency benefits from keeping your Lambda in the same region as your bucket.
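For illustration, a hedged sketch of checking where the bucket lives and pinning the client to that region (the bucket name is taken from the question; get_bucket_location reports no constraint for us-east-1):
import boto3

s3 = boto3.client('s3')

# find out which region the bucket lives in
location = s3.get_bucket_location(Bucket='asr-collection')
bucket_region = location.get('LocationConstraint') or 'us-east-1'
print('Bucket region:', bucket_region)

# create a client explicitly bound to the bucket's region
s3_regional = boto3.client('s3', region_name=bucket_region)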
Working with an S3 resource instance instead of a client fixed it.
import os
import boto3

s3 = boto3.resource('s3')
bucket_name = 'asr-collection'
keys = ['TestFolder1/testing/1651219413148.wav']
for KEY in keys:
    local_file_name = '/tmp/' + KEY
    # create the nested /tmp subdirectories before downloading
    os.makedirs(os.path.dirname(local_file_name), exist_ok=True)
    s3.Bucket(bucket_name).download_file(KEY, local_file_name)

File truncated on upload to GCS

I am uploading a relatively small (<1 MiB) .jsonl file to Google Cloud Storage using the Python API. The function I use is from the GCP documentation:
from google.cloud import storage

def upload_blob(key_path, bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client.from_service_account_json(key_path)
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    blob.upload_from_filename(source_file_name)

    print(
        "File {} uploaded to {}.".format(
            source_file_name, destination_blob_name
        )
    )
The issue I am having is that the .jsonl file is getting truncated at 9500 lines after the upload. In fact, the 9500th line is not complete. I am not sure what the issue is and don't think there would be any limit for this small file. Any help is appreciated.
I had a similar problem some time ago. In my case, the upload to the bucket was called inside a Python with clause, right after the line where I wrote the contents to source_file_name. I just needed to move the upload call outside the with block so that the local file was properly flushed and closed before being uploaded.
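For illustration, a minimal sketch of the problematic pattern and the fix, reusing the upload_blob() function from the question (the file name, bucket name, and contents here are placeholder assumptions):
contents = '{"example": 1}\n' * 10000     # placeholder data
source_file_name = 'data.jsonl'           # placeholder local file name

# Problematic pattern: calling the upload while the file is still open inside
# the `with` block, so buffered data may not have been flushed to disk yet:
#
# with open(source_file_name, 'w') as f:
#     f.write(contents)
#     upload_blob('key.json', 'my-bucket', source_file_name, 'data.jsonl')

# Fix: leave the `with` block (which flushes and closes the file) before uploading
with open(source_file_name, 'w') as f:
    f.write(contents)
upload_blob('key.json', 'my-bucket', source_file_name, 'data.jsonl')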

Combine all txt files in an S3 bucket into 1 large file

Problem: I am trying to combine a large number of small text files into one large file in an S3 bucket, using Python.
The code I tested locally is below (obtained from another post), and it works perfectly:
import glob
import shutil

outfilename = 'combined.txt'  # name of the combined output file (placeholder)

with open(outfilename, 'wb') as outfile:
    for filename in glob.glob('UBXEvents*'):
        if filename == outfilename:  # don't want to copy the output into the output
            continue
        with open(filename, 'rb') as readfile:
            shutil.copyfileobj(readfile, outfile)
Now, since my files are located in an S3 bucket, I have trouble referencing them. I wanted to run this code for all files (using the wildcard *) in an S3 bucket, but I am having a hard time connecting the two.
Below is the S3 object call I created:
object = client.get_object(
    Bucket='my_bucket_name',
    Key='bucket_path/prefix_of_file_name*'
)
Question: How would I reference the S3 bucket/path in my combining code above?
Obtaining a list of files
You can obtain a list of files in the bucket like this:
import boto3

s3_client = boto3.client('s3')

response = s3_client.list_objects_v2(Bucket='my-bucket', Prefix='folder1/')

for object in response['Contents']:
    # Do stuff here
    print(object['Key'])
Reading & Writing to Amazon S3
Normally, you would need to download each file from Amazon S3 to the local disk (using download_file()) and then read the contents. However, you might instead want to use smart-open (available on PyPI), a library that allows files on S3 to be opened using similar syntax to the normal Python open() command.
Here's a program that uses smart-open to read files from S3 and combine them into an output file in S3:
import boto3
from smart_open import open

BUCKET = 'my-bucket'
PREFIX = 'folder1/'  # Optional

s3_client = boto3.client('s3')

# Open output file with smart-open
with open(f's3://{BUCKET}/out.txt', 'w') as out_file:
    response = s3_client.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for object in response['Contents']:
        print(f"Copying {object['Key']}")
        # Open input file with smart-open
        with open(f"s3://{BUCKET}/{object['Key']}", 'r') as in_file:
            # Read content from input file
            for line in in_file:
                # Write content to output file
                out_file.write(line)
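Note that list_objects_v2 returns at most 1000 keys per call. If the prefix holds more objects than that, a paginator can be used in place of the single call above; a minimal sketch reusing the same s3_client, BUCKET, and PREFIX:
# iterate over every page of results instead of a single list_objects_v2 call
paginator = s3_client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for object in page.get('Contents', []):
        print(object['Key'])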

How to extract files in S3 on the fly with boto3?

I'm trying to find a way to extract .gz files in S3 on the fly, that is, without needing to download them locally, extract them, and then push them back to S3.
With boto3 + Lambda, how can I achieve my goal?
I didn't see any extract functionality in the boto3 documentation.
You can use BytesIO to stream the file from S3, run it through gzip, then pipe it back up to S3 using upload_fileobj to write the BytesIO.
# python imports
import boto3
from io import BytesIO
import gzip

# setup constants
bucket = '<bucket_name>'
gzipped_key = '<key_name.gz>'
uncompressed_key = '<key_name>'

# initialize s3 client, this is dependent upon your aws config being done
s3 = boto3.client('s3', use_ssl=False)  # optional

s3.upload_fileobj(                      # upload a new obj to s3
    Fileobj=gzip.GzipFile(              # read in the output of gzip -d
        None,                           # just return output as BytesIO
        'rb',                           # read binary
        fileobj=BytesIO(s3.get_object(Bucket=bucket, Key=gzipped_key)['Body'].read())),
    Bucket=bucket,                      # target bucket, writing to
    Key=uncompressed_key)               # target key, writing to
Ensure that your key is reading in correctly:
# read the body of the s3 key object into a string to ensure download
s = s3.get_object(Bucket=bucket, Key=gzipped_key)['Body'].read()
print(len(s)) # check to ensure some data was returned
The above answers are for gzip files; for zip files, you may try:
import boto3
import zipfile
from io import BytesIO

bucket = 'bucket1'

s3 = boto3.client('s3', use_ssl=False)
Key_unzip = 'result_files/'
prefix = "folder_name/"

zipped_keys = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, Delimiter="/")
file_list = []
for key in zipped_keys['Contents']:
    file_list.append(key['Key'])
# This will give you a list of files under the folder you mentioned as prefix

s3_resource = boto3.resource('s3')
# Now create a zip object; the code below handles the 1st file in file_list
zip_obj = s3_resource.Object(bucket_name=bucket, key=file_list[0])
print(zip_obj)
buffer = BytesIO(zip_obj.get()["Body"].read())

z = zipfile.ZipFile(buffer)
for filename in z.namelist():
    file_info = z.getinfo(filename)
    s3_resource.meta.client.upload_fileobj(
        z.open(filename),
        Bucket=bucket,
        Key=Key_unzip + filename)
This will work for your zip file, and the resulting unzipped data will be in the result_files folder. Make sure to increase the memory and timeout on AWS Lambda to the maximum, since some files are pretty large and need time to write.
Amazon S3 is a storage service. There is no in-built capability to manipulate the content of files.
However, you could use an AWS Lambda function to retrieve an object from S3, decompress it, then upload the content back again. Please note that by default Lambda only has 512 MB of temporary disk space (/tmp), so avoid decompressing too much data at the same time.
You could configure the S3 bucket to trigger the Lambda function when a new file is created in the bucket. The Lambda function would then:
Use boto3 to download the new file
Use the gzip Python library to extract files
Use boto3 to upload the resulting file(s)
Sample code:
import gzip
import io
import boto3

bucket = '<bucket_name>'
key = '<key_name>'

s3 = boto3.client('s3', use_ssl=False)
compressed_file = io.BytesIO(
    s3.get_object(Bucket=bucket, Key=key)['Body'].read())
uncompressed_file = gzip.GzipFile(None, 'rb', fileobj=compressed_file)
s3.upload_fileobj(Fileobj=uncompressed_file, Bucket=bucket, Key=key[:-3])
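If the function is wired to an S3 event trigger as described above, the bucket and key would normally be read from the event rather than hard-coded. A minimal sketch (the handler name and event shape assume the standard S3 "object created" notification):
import gzip
import io
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # each record describes one newly created object
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        if not key.endswith('.gz'):
            continue
        compressed_file = io.BytesIO(
            s3.get_object(Bucket=bucket, Key=key)['Body'].read())
        uncompressed_file = gzip.GzipFile(None, 'rb', fileobj=compressed_file)
        # strip the '.gz' suffix for the decompressed copy
        s3.upload_fileobj(Fileobj=uncompressed_file, Bucket=bucket, Key=key[:-3])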

Error when using continuation token on S3 download

I'm trying to download a large number of small files from an S3 bucket. I'm doing this by using the following:
import boto3

s3 = boto3.client('s3')
kwargs = {'Bucket': bucket}

with open('/Users/hr/Desktop/s3_backup/files.csv', 'w') as file:
    while True:
        # The S3 API response is a large blob of metadata.
        # 'Contents' contains information about the listed objects.
        resp = s3.list_objects_v2(**kwargs)
        try:
            contents = resp['Contents']
        except KeyError:
            break  # nothing listed

        for obj in contents:
            key = obj['Key']
            file.write(key)
            file.write('\n')

        # The S3 API is paginated, returning up to 1000 keys at a time.
        # Pass the continuation token into the next request, until we
        # reach the final page (when this field is missing).
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break
However, after a certain amount of time I received this error message: 'EndpointConnectionError: Could not connect to the endpoint URL'.
I know that there are still considerably more files in the S3 bucket. I have three questions:
Why is this error occurring when I haven't downloaded all files in the bucket?
Is there a way to restart my code from the last file I downloaded from the S3 bucket (I don't want to have to re-download the file names I've already downloaded)?
Is there a default ordering of objects in the S3 bucket? Is it alphabetical?