I am uploading multipart audio and PDF files to Amazon S3 using the Java Play framework. The files are uploaded to Amazon S3 successfully, but I am unable to access them by URL, even though I set the canned Access Control List to "PublicRead" while uploading the files.
Here is the code I used for uploading the files:
public void uploadFile2(String bucketName, String folderName,File file)
throws Exception{
try {
validateCrediantals();
}catch (Exception e){
throw e;
}
List<PartETag> partETags = new ArrayList<PartETag>();
// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(bucketName, folderName).withCannedACL(CannedAccessControlList.PublicRead);
InitiateMultipartUploadResult initResponse =
amazonS3.initiateMultipartUpload(initRequest);
long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.
try {
// Step 2: Upload parts.
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Last part can be less than 5 MB. Adjust part size.
partSize = Math.min(partSize, (contentLength - filePosition));
// Create request to upload a part.
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(bucketName).withKey(folderName)
.withUploadId(initResponse.getUploadId()).withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);
// Upload part and add response to our list.
partETags.add(
amazonS3.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;
}
// Step 3: Complete.
CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(
bucketName,
folderName,
initResponse.getUploadId(),
partETags);
amazonS3.completeMultipartUpload(compRequest);
} catch (Exception e) {
amazonS3.abortMultipartUpload(new AbortMultipartUploadRequest(
bucketName, folderName, initResponse.getUploadId()));
}
}
This is the error I am getting when I try to hit the file URL
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3A791F42F7EE2DBB</RequestId>
<HostId>
jzdu7uNv4RAkqMcaX2BGIK2Zs77j3kO0xPozxS6vcje0MOZIAVm55ZfC9LhgIg1lIuqXWwonpB4=
</HostId>
</Error>
This is the policy I am using
public void setBucketPolicy(String bucketName){
if(isBucketExist(bucketName)){
Statement allowPublicReadStatement = new Statement(Statement.Effect.Allow)
.withPrincipals(Principal.AllUsers)
.withActions(S3Actions.GetObject)
.withResources(new S3ObjectResource(bucketName, "*"));
Policy policy = new Policy()
.withStatements(allowPublicReadStatement
);
amazonS3.setBucketPolicy(bucketName, policy.toJson());
}
}
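For debugging, a sketch like the one below (assuming the same amazonS3 client, and that the object key equals the folderName used above; ensurePublicRead is just an illustrative helper, not part of the original code) can dump the grants that actually ended up on the uploaded object and re-apply the public-read canned ACL if needed:
// imports assumed: com.amazonaws.services.s3.model.*
public void ensurePublicRead(String bucketName, String key) {
    // List the grants that are actually attached to the object.
    AccessControlList objectAcl = amazonS3.getObjectAcl(bucketName, key);
    for (Grant grant : objectAcl.getGrantsAsList()) {
        System.out.println(grant.getGrantee().getIdentifier() + " -> " + grant.getPermission());
    }
    // Re-apply the canned public-read ACL on the finished object.
    amazonS3.setObjectAcl(bucketName, key, CannedAccessControlList.PublicRead);
}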
Related
I am trying to upload a 2GB+ file to my bucket using MultipartFile and AmazonS3. The controller:
@PostMapping("/uploadFile")
public String uploadFile(@RequestPart(value = "file") MultipartFile file) throws Exception {
String fileUploadResult = this.amazonClient.uploadFile(file);
return fileUploadResult;
}
amazonClient-uploadFile:
public String uploadFile(MultipartFile multipartFile) throws Exception {
StringBuilder fileUrl = new StringBuilder();
try {
File file = convertMultiPartToFile(multipartFile);
String fileName = generateFileName(multipartFile);
fileUrl.append(endpointUrl);
fileUrl.append("/");
fileUrl.append(bucketName);
fileUrl.append("/");
fileUrl.append(fileName);
uploadFileTos3bucket(fileName, file);
file.delete();
} catch (Exception e) {
e.printStackTrace();
throw e;
}
return fileUrl.toString();
}
amazonClient-convertMultiPartToFile:
private File convertMultiPartToFile(MultipartFile file) throws IOException {
File convFile = new File(file.getOriginalFilename());
FileOutputStream fos = new FileOutputStream(convFile);
fos.write(file.getBytes());
fos.close();
return convFile;
}
amazonClient-uploadFileTos3bucket:
private void uploadFileTos3bucket(String fileName, File file) {
s3client.putObject(
new PutObjectRequest(bucketName, fileName, file)
.withCannedAcl(CannedAccessControlList.PublicRead));
}
The process works well for small files; to deal with large ones I defined in my application.properties:
spring.servlet.multipart.max-file-size=5GB
spring.servlet.multipart.max-request-size=5GB
spring.servlet.multipart.enabled=true
And I am getting java.lang.OutOfMemoryError, so:
1- How can I upload the file without loading it into memory (not sure that is possible)?
2- How can I load it in smaller parts?
Exception -
{"timestamp":"2018-11-12T12:50:38.250+0000","status":500,"error":"Internal Server Error","message":"No message available","trace":"java.lang.OutOfMemoryError\r\n\tat java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)\r\n\tat java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)\r\n\tat java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)\r\n\tat java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)\r\n\tat org.springframework.util.StreamUtils.copy(StreamUtils.java:143)\r\n\tat org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:110)\r\n\tat org.springframework.util.FileCopyUtils.copyToByteArray(FileCopyUtils.java:162)\r\n\tat org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile.getBytes(StandardMultipartHttpServletRequest.java:245)\r\n\tat com.siemens.plm.it.aws.connect.handels.AmazonClient.convertMultiPartToFile(AmazonClient.java:51)\r\n\tat com.siemens.plm.it.aws.connect.handels.AmazonClient.uploadFile(AmazonClient.java:75)\r\n\tat com.siemens.plm.it.aws.connect.controllers.BucketController.uploadFile(BucketController.java:48)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:215)\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:142)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)\r\n\tat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)\r\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)\r\n\tat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)\r\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:998)\r\n\tat org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:901)\r\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:660)\r\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:875)\r\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:741)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)\r\n\tat 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)\r\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)\r\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)\r\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)\r\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\r\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)\r\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)\r\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)\r\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)\r\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:770)\r\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)\r\n\tat org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\r\n\tat org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n","path":"/storage/uploadFile"}
I too had this problem, and I solved it by uploading the file directly to the bucket as a stream.
If you are using a Java servlet, the lines below read the video from the form data.
Part filePart = request.getPart("video");
//Now convert it to an InputStream
InputStream in = filePart.getInputStream();
I then made an S3 class; below is the relevant part of the upload code from its function.
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
.build();
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(in.available());
meta.setContentType("video/mp4");
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();
PutObjectRequest request = new PutObjectRequest(bucketName, keyName, in, meta);
Upload upload = tm.upload(request);
// Optionally, you can wait for the upload to finish before continuing.
upload.waitForCompletion();
if(upload.isDone())
{
System.out.println("Total bytes transferred is : " + upload.getProgress().getBytesTransferred());
tm.shutdownNow();
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
output = "Couldn't Upload the file.";
output_code = 0;
System.out.println("Inside exception");
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
output = "Couldn't Upload the file.";
output_code = 0;
e.printStackTrace();
}
P.S. Don't forget to call setContentLength() on the ObjectMetadata, or it will lead to an OOM error.
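Applied to the Spring question above, a rough streaming variant could look like the sketch below; it assumes the same s3client, bucketName and endpointUrl fields from the question, and uploadFileStreaming is just an illustrative name. The MultipartFile's InputStream is handed straight to TransferManager, so the whole file never sits in a byte array.
// imports assumed: java.io.InputStream, org.springframework.web.multipart.MultipartFile,
// com.amazonaws.services.s3.model.ObjectMetadata, com.amazonaws.services.s3.transfer.*
public String uploadFileStreaming(MultipartFile multipartFile, String fileName) throws Exception {
    ObjectMetadata metadata = new ObjectMetadata();
    // The MultipartFile already knows its size, so the SDK does not need to buffer the stream.
    metadata.setContentLength(multipartFile.getSize());
    metadata.setContentType(multipartFile.getContentType());

    TransferManager tm = TransferManagerBuilder.standard()
            .withS3Client(s3client)
            .build();
    try (InputStream in = multipartFile.getInputStream()) {
        // TransferManager switches to multipart upload automatically for large objects.
        Upload upload = tm.upload(bucketName, fileName, in, metadata);
        upload.waitForCompletion();
    } finally {
        tm.shutdownNow(false); // keep the shared s3client open
    }
    return endpointUrl + "/" + bucketName + "/" + fileName;
}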
We have some code that downloads a bunch of S3 files to a local directory. The list of files to retrieve is from a query we run. It only lists files that actually exist in our S3 bucket.
As we loop to retrieve these files, about 10% of them return a 404 error as if the file doesn't exist. I log the name/location of each such file so I can go to S3 and check, and sure enough every single one of them IS on S3, in the exact location we went looking for it.
Why does S3 throw a 404 when the file exists?
Here is the Groovy code of the script.
class RetrieveS3FilesFromCSVLoader implements Loader {
private static String missingFilesFile = "00-MISSED_FILES.csv"
private static String csvFileName = "/csv/s3file2.csv"
private static String saveFilesToLocation = "/tmp/retrieve/"
public static final char SEPARATOR = ','
@Autowired
DocumentFileService documentFileService
private void readWithCommaSeparatorSQL() {
int counter = 0
String fileName
String fileLocation
File missedFiles = new File(saveFilesToLocation + missingFilesFile)
PrintWriter writer = new PrintWriter(missedFiles)
File fileCSV = new File(getClass().getResource(csvFileName).toURI())
fileCSV.splitEachLine(SEPARATOR as String) { nextLine ->
//if (counter < 15) {
if (nextLine != null && (nextLine[0] != 'FileLocation')) {
counter++
try {
//Remove 0, only if client number start with "0".
fileLocation = nextLine[0].trim()
byte[] fileBytes = documentFileService.getFile(fileLocation)
if (fileBytes != null) {
fileName = fileLocation.substring(fileLocation.indexOf("/") + 1, fileLocation.length())
File file = new File(saveFilesToLocation + fileName)
file.withOutputStream {
it.write fileBytes
}
println "$counter) Wrote file ${fileLocation} to ${saveFilesToLocation + fileName}"
} else {
println "$counter) UNABLE TO RETRIEVE FILE ELSE: $fileLocation"
writer.println(fileLocation)
}
} catch (Exception e) {
println "$counter) UNABLE TO RETRIEVE FILE: $fileLocation"
println(e.getMessage())
writer.println(fileLocation)
}
} else {
counter++;
}
//}
}
writer.close()
}
Here is the code for getFile(fileLocation) and client creation.
public byte[] getFile(String filename) throws IOException {
AmazonS3Client s3Client = connectToAmazonS3Service();
S3Object object = s3Client.getObject(S3_BUCKET_NAME, filename);
if(object == null) {
return null;
}
byte[] fileAsArray = IOUtils.toByteArray(object.getObjectContent());
object.close();
return fileAsArray;
}
/**
* Connects to Amazon S3
*
* @return instance of AmazonS3Client
*/
private AmazonS3Client connectToAmazonS3Service() {
AWSCredentials credentials;
try {
credentials = new BasicAWSCredentials(S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY);
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (~/.aws/credentials), and is in valid format.",
e);
}
AmazonS3Client s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_EAST_1);
s3.setRegion(usWest2);
return s3;
}
The code above works for 90% of the files in the list passed to the script, but we know for a fact that all 100% of the files exist in S3, at the exact location strings we are passing.
I am just an idiot. I thought it had the production AWS credentials in the properties file; instead it had the development credentials. So I had the wrong credentials.
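For anyone debugging something similar: instead of only logging e.getMessage(), it helps to surface the S3 error code, since hitting the wrong bucket or environment can come back as a plain 404 rather than an obvious credentials error. A small sketch, assuming the same s3Client setup and constants as above (getFileWithDiagnostics is just an illustrative name):
// imports assumed: com.amazonaws.services.s3.model.AmazonS3Exception
public byte[] getFileWithDiagnostics(String filename) throws IOException {
    AmazonS3Client s3Client = connectToAmazonS3Service();
    try {
        S3Object object = s3Client.getObject(S3_BUCKET_NAME, filename);
        byte[] fileAsArray = IOUtils.toByteArray(object.getObjectContent());
        object.close();
        return fileAsArray;
    } catch (AmazonS3Exception e) {
        // "NoSuchKey" means the key really is absent for THESE credentials/region;
        // "AccessDenied" or "NoSuchBucket" usually points at a credentials or configuration mix-up.
        System.err.println("S3 returned " + e.getStatusCode() + " " + e.getErrorCode()
                + " for key " + filename + " (request id " + e.getRequestId() + ")");
        throw e;
    }
}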
Here is my controller...
public ActionResult RequestImage(string url, int width, int height)
{
Stream stream = fileService.Request(url, width, height);
return new FileStreamResult(stream, "image/jpg");
}
The file service...
public Stream Request(string url, int width, int height)
{
var request = new S3StorageRequest(url, null, null);
var stream = s3Service.Request(request);
var outputStream = new MemoryStream();
var settings = $"maxwidth={width}";
if (height > 0)
{
settings = string.Concat(settings, $"&maxheight={height}");
}
imageResizer.Build(stream, outputStream, settings);
stream.Dispose();
outputStream.Position = 0;
return outputStream;
}
S3 request...
public Stream Request(S3StorageRequest s3Request)
{
GetObjectRequest request = new GetObjectRequest { BucketName = bucketName };
request.Key = s3Request.Path;
GetObjectResponse response = S3Client.GetObject(request);
return response.ResponseStream;
}
When I run this locally (which is still downloading from S3 over the internet), the S3 request takes around 130ms (as low as 78ms).
The images resizing takes around 50ms.
Here is the request according to the browser...
On the live server, this is what the browser says...
It's a t2.micro instance; my Windows instance is running via Parallels on a 2.4GHz i5. Any ideas what it's doing for 2s? The TTFB can go down to around 854ms.
Without the image resizing and download, it's about 450ms. So the image resizing or the file response seems to be the slow part.
I am looking for a service to upload videos and images from my mobile applications (frontend). I have heard about Amazon S3 and CloudFront. I am looking for a service that will store them, and will also be able to check if they meet certain criteria (for example, maximum file size of 3MB per picture), and return an error to the client if the file doesn't meet the criteria. Does Amazon S3 or CloudFront provide this? If not, is there any other recommended service for this?
You could use the AWS SDK. Below is an example using the Java SDK (Amazon provides SDKs for different languages):
/**
* Stores the given file in S3 and returns the URL of the stored resource
* @param resource
* @param bucketName
* @param username
* @return the resource URL, or null if the upload failed
*/
public String storeProfileImage(File resource, String bucketName, String username) {
String resourceUrl = null;
if (!resource.exists()) {
throw new IllegalArgumentException("The file " + resource.getAbsolutePath() + " doesn't exist");
}
long lengthInBytes = resource.length();
//For demo purposes. You should use a configurable property for the max size
if (lengthInBytes > (3 * 1024 * 1024)) { // 3 MB
//Your error handling here
}
AccessControlList acl = new AccessControlList();
acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
String key = username + "/profilePicture." + FilenameUtils.getExtension(resource.getName());
try {
s3Client.putObject(new PutObjectRequest(bucketName, key, resource).withAccessControlList(acl));
resourceUrl = s3Client.getResourceUrl(bucketName, key);
} catch (AmazonClientException ace) {
LOG.error("A client exception occurred while trying to store the profile" +
" image {} on S3. The profile image won't be stored", resource.getAbsolutePath(), ace);
}
return resourceUrl;
}
You can also perform other operations, e.g. check if the bucket exists before storing the image
/**
* Returns the root URL where the bucket name is located.
* <p>Please note that the URL does not contain the bucket name</p>
* @param bucketName The bucket name
* @return the root URL where the bucket name is located.
*/
public String ensureBucketExists(String bucketName) {
String bucketUrl = null;
try {
if (!s3Client.doesBucketExist(bucketName)) {
LOG.warn("Bucket {} doesn't exist... Creating one", bucketName);
s3Client.createBucket(bucketName);
LOG.info("Created bucket: {}", bucketName);
}
bucketUrl = s3Client.getResourceUrl(bucketName, null) + bucketName;
} catch (AmazonClientException ace) {
LOG.error("An error occurred while connecting to S3. Will not execute action" +
" for bucket: {}", bucketName, ace);
}
return bucketUrl;
}
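A rough usage sketch tying the two methods together (the region and bucket name here are placeholders, and credentials come from the default provider chain):
// imports assumed: java.io.File, com.amazonaws.services.s3.AmazonS3,
// com.amazonaws.services.s3.AmazonS3ClientBuilder
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion("eu-west-1")   // placeholder region
        .build();

String bucketName = "my-profile-images";   // placeholder bucket
ensureBucketExists(bucketName);
String url = storeProfileImage(new File("/tmp/avatar.png"), bucketName, "someUser");
if (url != null) {
    System.out.println("Profile image available at " + url);
}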
I am trying to append a string to the end of a text file stored in S3.
Currently I just read the contents of the file into a String, append my new text and resave the file back to S3.
Is there a better way to do this? I am thinking that when the file is much larger than 10MB, reading the entire file would not be a good idea, so how should I do this correctly?
Current code
private void saveNoteToFile( String p_note ) throws IOException, ServletException
{
String str_infoFileName = "myfile.json";
String existingNotes = s3Helper.getfileContentFromS3( str_infoFileName );
existingNotes += p_note;
writeStringToS3( str_infoFileName , existingNotes );
}
public void writeStringToS3(String p_fileName, String p_data) throws IOException
{
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream( p_data.getBytes());
try {
streamFileToS3bucket( p_fileName, byteArrayInputStream, p_data.getBytes().length);
}
catch (AmazonServiceException e)
{
e.printStackTrace();
} catch (AmazonClientException e)
{
e.printStackTrace();
}
}
public void streamFileToS3bucket( String p_fileName, InputStream input, long size)
{
//Create sub folders if there is any in the file name.
p_fileName = p_fileName.replace("\\", "/");
if( p_fileName.charAt(0) == '/')
{
p_fileName = p_fileName.substring(1, p_fileName.length());
}
String folder = getFolderName( p_fileName );
if( folder.length() > 0)
{
if( !doesFolderExist(folder))
{
createFolder( folder );
}
}
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(size);
AccessControlList acl = new AccessControlList();
acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
s3Client.putObject(new PutObjectRequest(bucket, p_fileName , input,metadata).withAccessControlList(acl));
}
It's not possible to append to an existing file on AWS S3. When you upload an object it creates a new version if it already exists:
If you upload an object with a key name that already exists in the
bucket, Amazon S3 creates another version of the object instead of
replacing the existing object
Source: http://docs.aws.amazon.com/AmazonS3/latest/UG/ObjectOperations.html
The objects are immutable.
It's also mentioned in these AWS Forum threads:
https://forums.aws.amazon.com/message.jspa?messageID=179375
https://forums.aws.amazon.com/message.jspa?messageID=540395
It's not possible to append to an existing file on AWS S3, but you can delete the existing file and upload a new file with the same name.
Configuration
private string bucketName = "my-bucket-name-123";
private static string awsAccessKey = "AKI............";
private static string awsSecretKey = "+8Bo..................................";
IAmazonS3 client = new AmazonS3Client(awsAccessKey, awsSecretKey,
RegionEndpoint.APSoutheast2);
string awsFile = "my-folder/sub-folder/textFile.txt";
string localFilePath = "my-folder/sub-folder/textFile.txt";
To Delete
public void DeleteRefreshTokenFile()
{
try
{
var deleteFileRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = awsFile
};
DeleteObjectResponse fileDeleteResponse = client.DeleteObject(deleteFileRequest);
}
catch (Exception ex)
{
throw new Exception(ex.Message);
}
}
To Upload
public void UploadRefreshTokenFile()
{
FileInfo file = new FileInfo(localFilePath);
try
{
PutObjectRequest request = new PutObjectRequest()
{
InputStream = file.OpenRead(),
BucketName = bucketName,
Key = awsFile
};
PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
throw new Exception(ex.Message);
}
}
One option is to write the new lines/information to a new version of the file. This would create a LARGE number of versions, but whatever program uses the file could then read ALL the versions and stitch them back together when reading (this seems like a really bad idea as I write it out).
Another option would be to write a new object each time, with a timestamp appended to the object name: my-log-file-date-time. Then whatever program is reading from it could append them all together after downloading my-log-file-*.
You would want to delete objects older than a certain time just like log rotation.
Depending on how busy your events are this might work. If you have thousands per second, I don't think this would work. But if you just have a few events per minute it may be reasonable.
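A minimal sketch of that timestamped-object approach (assuming an s3Client field; the bucket, prefix and method names are illustrative):
// imports assumed: com.amazonaws.services.s3.model.*, com.amazonaws.util.IOUtils,
// java.io.*, java.nio.charset.StandardCharsets
public void appendAsNewObject(String bucket, String prefix, String line) {
    String key = prefix + "-" + System.currentTimeMillis(); // e.g. my-log-file-1542027038250
    byte[] bytes = line.getBytes(StandardCharsets.UTF_8);
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentLength(bytes.length);
    s3Client.putObject(bucket, key, new ByteArrayInputStream(bytes), meta);
}

public String readAllParts(String bucket, String prefix) throws IOException {
    StringBuilder sb = new StringBuilder();
    // Millisecond timestamps of equal length sort lexicographically in time order.
    // Note: only the first page of results (up to 1,000 keys) is read in this sketch.
    for (S3ObjectSummary summary : s3Client.listObjects(bucket, prefix).getObjectSummaries()) {
        try (S3Object object = s3Client.getObject(bucket, summary.getKey())) {
            sb.append(IOUtils.toString(object.getObjectContent()));
        }
    }
    return sb.toString();
}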
You can do it with s3api put-object.
First download the version you want, then use the command below; it will be uploaded as the latest version.
ᐅ aws s3api put-object --bucket $BUCKET --key $FOLDER/$FILE --body $YOUR_LOCAL_DOWNLOADED_VERSION_FILE