I am successfully uploading multi-part files to AWS S3, but now I'm attempting to add an MD5 checksum to each part:
static void sendPart(existingBucketName, keyName, multipartResponse, partNum,
sendBuffer, partSize, vertx, partETags, s3, req, resultClosure)
{
// Create request to upload a part.
MessageDigest md = MessageDigest.getInstance("MD5")
byte[] digest = md.digest(sendBuffer.bytes)
println(digest.toString())
InputStream inputStream = new ByteArrayInputStream(sendBuffer.bytes)
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(existingBucketName).withKey(keyName)
.withUploadId(multipartResponse.getUploadId()).withPartNumber(partNum)
.withInputStream(inputStream)
.withMD5Digest(Base64.getEncoder().encode(digest).toString())
.withPartSize(partSize);
// Upload part and add response to our list.
vertx.executeBlocking({ future ->
// Do the blocking operation in here
// Imagine this was a call to a blocking API to get the result
try {
println("Sending chunk for ${keyName}")
PartETag eTag = s3.uploadPart(uploadRequest).getPartETag()
partETags.add(eTag);
println("Etag: " + eTag.ETag)
req.response().write("Sending Chunk\n")
} catch(Exception e) {
}
def result = "success!"
future.complete(result)
}, resultClosure)
}
However I get the following error:
AmazonS3Exception: The XML you provided was not well-formed or did not
validate against our published schema (Service: Amazon S3; Status
Code: 400; Error Code: MalformedXML; Request ID: 91542E819781FDFC), S3
Extended Request ID:
yQs45H/ozn5+xlxV9lRgCQWwv6gQysT6A4ablq7/Epq06pUzy0qGvMc+YAkJjo/RsHk2dedH+pI=
What am I doing incorrectly?
Looks like I was converting the digest incorrectly. Base64.getEncoder().encode(digest) returns a byte[], and calling toString() on a byte array yields the array's object identity (something like [B@6d06d69c) rather than the encoded text, so S3 received a malformed Content-MD5 value. Base64.getEncoder().encodeToString(digest) returns the Base64 string directly:
static void sendPart(existingBucketName, keyName, multipartResponse, partNum,
sendBuffer, partSize, vertx, partETags, s3, req, resultClosure)
{
// Create request to upload a part.
MessageDigest md = MessageDigest.getInstance("MD5")
byte[] digest = md.digest(sendBuffer.bytes)
InputStream inputStream = new ByteArrayInputStream(sendBuffer.bytes)
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(existingBucketName).withKey(keyName)
.withUploadId(multipartResponse.getUploadId()).withPartNumber(partNum)
.withInputStream(inputStream)
.withMD5Digest(Base64.getEncoder().encodeToString(digest))
.withPartSize(partSize)
// Upload part and add response to our list.
vertx.executeBlocking({ future ->
try {
println("Sending chunk for ${keyName}")
PartETag eTag = s3.uploadPart(uploadRequest).getPartETag()
partETags.add(eTag);
req.response().write("Sending Chunk\n")
} catch(Exception e) {
// log instead of silently swallowing upload failures
println("Failed to upload part ${partNum}: ${e.message}")
}
def result = "success!"
future.complete(result)
}, resultClosure)
}
I'm trying to put an image on an S3 bucket via an upload URL. Here's what I've done so far:
//Get Upload URL & Access the file details
string path = HttpUtility.UrlPathEncode("image/getUploadURL/" + imageFileDetails.Extension + "/" + imageFileDetails.Name + "/" + sessionID);
string uploadURL = "";
string mimeType = MimeTypesMap.GetMimeType(imageFileDetails.FullName);
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
var getResponse = await response.Content.ReadAsStringAsync();
dynamic JSONObject = JObject.Parse(getResponse);
uploadURL = JSONObject.data.uploadURL;
if (uploadURL != "" && uploadURL != null)
{
using (var content = new MultipartFormDataContent())
{
Console.WriteLine("UploadURL: {0}", uploadURL);
Console.WriteLine("newFilePath: {0}", newFilePath);
FileStream fileStream = new FileStream(newFilePath, FileMode.Open);
content.Add(new StreamContent(fileStream), "data");
using (
var message = await client.PutAsync(uploadURL, content))
{
var putResponse = await message.Content.ReadAsStringAsync();
Console.WriteLine("PutResponse: {0} " , putResponse);
}
}
Console.ReadKey();
}
}
I continue to get the following error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Should I be approaching this differently? I've tried with RestSharp as well and have not gotten any positive results yet. Any ideas, or workarounds would be appreciated!
Use the Amazon S3 strongly typed C# client. To upload an object using a presigned URL, you can use this code:
namespace UploadUsingPresignedURLExample
{
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
/// <summary>
/// This example shows how to upload an object to an Amazon Simple Storage
/// Service (Amazon S3) bucket using a presigned URL. The code first
/// creates a presigned URL and then uses it to upload an object to an
/// Amazon S3 bucket using that URL. The example was created using the
/// AWS SDK for .NET version 3.7 and .NET Core 5.0.
/// </summary>
public class UploadUsingPresignedURL
{
public static void Main()
{
string bucketName = "doc-example-bucket";
string keyName = "samplefile.txt";
string filePath = $"source\\{keyName}";
// Specify how long the signed URL will be valid in hours.
double timeoutDuration = 12;
// If the AWS Region defined for your default user is different
// from the Region where your Amazon S3 bucket is located,
// pass the Region name to the S3 client object's constructor.
// For example: RegionEndpoint.USWest2.
IAmazonS3 client = new AmazonS3Client();
var url = GeneratePreSignedURL(client, bucketName, keyName, timeoutDuration);
var success = UploadObject(filePath, url);
if (success)
{
Console.WriteLine("Upload succeeded.");
}
else
{
Console.WriteLine("Upload failed.");
}
}
/// <summary>
/// Uploads an object to an S3 bucket using the presigned URL passed in
/// the url parameter.
/// </summary>
/// <param name="filePath">The path (including file name) to the local
/// file you want to upload.</param>
/// <param name="url">The presigned URL that will be used to upload the
/// file to the S3 bucket.</param>
/// <returns>A Boolean value indicating the success or failure of the
/// operation, based on the HttpWebResponse.</returns>
public static bool UploadObject(string filePath, string url)
{
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
var buffer = new byte[8000];
using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
dataStream.Write(buffer, 0, bytesRead);
}
}
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
return response.StatusCode == HttpStatusCode.OK;
}
/// <summary>
/// Generates a presigned URL which will be used to upload an object to
/// an S3 bucket.
/// </summary>
/// <param name="client">The initialized S3 client object used to call
/// GetPreSignedURL.</param>
/// <param name="bucketName">The name of the S3 bucket to which the
/// presigned URL will point.</param>
/// <param name="objectKey">The name of the file that will be uploaded.</param>
/// <param name="duration">How long (in hours) the presigned URL will
/// be valid.</param>
/// <returns>The generated URL.</returns>
public static string GeneratePreSignedURL(
IAmazonS3 client,
string bucketName,
string objectKey,
double duration)
{
var request = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Verb = HttpVerb.PUT,
Expires = DateTime.UtcNow.AddHours(duration),
};
string url = client.GetPreSignedURL(request);
return url;
}
}
}
You can find this example and other Amazon S3 examples on GitHub here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/dotnetv3/S3
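Note that a presigned PUT expects the raw bytes of the object as the request body; wrapping the file in MultipartFormDataContent, as the question does, changes the payload and its headers so the signature no longer matches. If you prefer HttpClient over HttpWebRequest, here is a minimal sketch of the upload half, assuming the URL was generated as above (class and method names are illustrative):

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class PresignedUploader
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<bool> UploadAsync(string presignedUrl, string filePath)
    {
        using (FileStream fileStream = File.OpenRead(filePath))
        using (var content = new StreamContent(fileStream))
        {
            // If a Content-Type was specified when the URL was signed,
            // the identical value must be set on this request, or S3
            // rejects the signature.
            HttpResponseMessage response = await Client.PutAsync(presignedUrl, content);
            return response.IsSuccessStatusCode;
        }
    }
}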
After some extensive research, I was able to locate the error in my code. When doing the GET call I was passing the incorrect file type; then, when trying to do the PUT call with the MIME type, it 403'd.
I had to update my initial GET path with the matching file type, and the appropriate encoding.
In my example below the file type is hard-coded (that will change, of course), but that was the error:
string path = HttpUtility.UrlPathEncode("image/getUploadURL/image%2Fpng/" + imageFileDetails.Name + "/" + sessionID);
I am trying to update my AppSync client to authenticate with IAM credentials. In the case of API_KEY I set the API_KEY_HEADER like so: request.addHeader(API_KEY_HEADER, this.apiKey); Is there a similar way to authenticate in a Java client with IAM credentials? Is there a header I can use to pass the secret and access keys, like here: https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js#iam? Or should I just be using a Cognito user pool as a way to authenticate the request?
According to the AWS documentation, we need to sign requests using the Signature Version 4 process documented here: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html and the steps listed here: https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html.
I also found an implementation here: https://medium.com/@tridibbolar/aws-lambda-as-an-appsync-client-fbb0c1ce927d. Using the code from that article:
private void signRequest(final Request<AmazonWebServiceRequest> request) {
final AWS4Signer signer = new AWS4Signer();
signer.setRegionName(this.region);
signer.setServiceName("appsync");
signer.sign(request, this.appsyncCredentials);
}
private Request<AmazonWebServiceRequest> getRequest(final String data) {
final Request<AmazonWebServiceRequest> request =
new DefaultRequest<AmazonWebServiceRequest>("appsync");
request.setHttpMethod(HttpMethodName.POST);
request.setEndpoint(URI.create(this.appSyncEndpoint));
final byte[] byteArray = data.getBytes(Charset.forName("UTF-8"));
request.setContent(new ByteArrayInputStream(byteArray));
request.addHeader(AUTH_TYPE_HEADER, AWS_IAM_AUTH_TYPE);
request.addHeader(HttpHeaders.CONTENT_TYPE, APPLICATION_GRAPHQL);
request.addHeader(HttpHeaders.CONTENT_LENGTH, String.valueOf(byteArray.length));
signRequest(request);
return request;
}
private HttpResponseHandler<String> getResponseHandler() {
final HttpResponseHandler<String> responseHandler = new HttpResponseHandler<String>() {
@Override
public String handle(com.amazonaws.http.HttpResponse httpResponse) throws Exception {
final String result = IOUtils.toString(httpResponse.getContent());
if(httpResponse.getStatusCode() != HttpStatus.SC_OK) {
final String errorText = String.format(
"Error posting request. Response status code was %s and text was %s. ",
httpResponse.getStatusCode(),
httpResponse.getStatusText());
throw new RuntimeException(errorText);
} else {
final ObjectMapper objectMapper = new ObjectMapper();
//custom class to parse appsync response.
final AppsyncResponse response = objectMapper.readValue(result, AppsyncResponse.class);
if(CollectionUtils.isNotEmpty(response.getErrors())){
final String errorMessages = response
.getErrors()
.stream()
.map(Error::getMessage)
.collect(Collectors.joining("\n"));
final String errorText = String.format(
"Error posting appsync request. Errors were %s. ",
errorMessages);
throw new RuntimeException(errorText);
}
}
return result;
}
@Override
public boolean needsConnectionLeftOpen() {
return false;
}
};
return responseHandler;
}
private Response<String> makeGraphQlRequest(final Request<AmazonWebServiceRequest> request) {
return this.httpClient.requestExecutionBuilder()
.executionContext(new ExecutionContext())
.request(request)
.execute(getResponseHandler());
}
I am trying to upload large images to AWS S3 using the Multipart Upload API. From the UI, I am sending the chunks (blobs) of an image and, when the last part arrives, completing the upload and getting the uploaded file URL. It is working very nicely.
Sample Code:
public UploadPartResponse UploadChunk(Stream stream, string fileName, string uploadId, List<PartETag> eTags, int partNumber, bool lastPart)
{
stream.Position = 0;
//Step 1: build and send a multi upload request
if (partNumber == 1)
{
var initiateRequest = new InitiateMultipartUploadRequest
{
BucketName = _settings.Bucket,
Key = fileName
};
var initResponse = _s3Client.InitiateMultipartUpload(initiateRequest);
uploadId = initResponse.UploadId;
}
//Step 2: upload each chunk (this is run for every chunk unlike the other steps which are run once)
var uploadRequest = new UploadPartRequest
{
BucketName = _settings.Bucket,
Key = fileName,
UploadId = uploadId,
PartNumber = partNumber,
InputStream = stream,
IsLastPart = lastPart,
PartSize = stream.Length
};
var response = _s3Client.UploadPart(uploadRequest);
//Step 3: build and send the multipart complete request
if (lastPart)
{
eTags.Add(new PartETag
{
PartNumber = partNumber,
ETag = response.ETag
});
var completeRequest = new CompleteMultipartUploadRequest
{
BucketName = _settings.Bucket,
Key = fileName,
UploadId = uploadId,
PartETags = eTags
};
try
{
var res = _s3Client.CompleteMultipartUpload(completeRequest);
// res.Location is a string and cannot be returned from a method
// declared to return UploadPartResponse; surface it via metadata.
response.ResponseMetadata.Metadata["location"] = res.Location;
return response;
}
catch
{
//do some logging and return null response
return null;
}
}
response.ResponseMetadata.Metadata["uploadid"] = uploadRequest.UploadId;
return response;
}
Now I need to get the thumbnail of the uploaded image and upload that image too, in a Thumbnails directory.
So basically, when the last part (chunk) arrives for the original image, I am completing the upload and retrieving the file URL. At that time, I need to upload the thumbnail as well and get back the thumbnail URL.
I saw that people refer to a Lambda function, but I don't know how to incorporate that into my multipart API code setup.
Can anyone give me some direction here? Thanks in advance.
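One possible direction, sketched here as a hedged example rather than a drop-in implementation: once CompleteMultipartUpload succeeds, read the finished object back, scale it down, and upload the result under a Thumbnails/ prefix. The sketch assumes System.Drawing.Common is available, uses a fixed 128x128 thumbnail size, and reuses the _s3Client and _settings fields from the question:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using Amazon.S3.Model;

public string UploadThumbnail(string fileName)
{
    // Read the completed original back from S3.
    using (var getResponse = _s3Client.GetObject(_settings.Bucket, fileName))
    using (var original = Image.FromStream(getResponse.ResponseStream))
    using (var thumbnail = original.GetThumbnailImage(128, 128, null, IntPtr.Zero))
    using (var thumbStream = new MemoryStream())
    {
        thumbnail.Save(thumbStream, ImageFormat.Png);
        thumbStream.Position = 0;
        var thumbKey = "Thumbnails/" + fileName;
        _s3Client.PutObject(new PutObjectRequest
        {
            BucketName = _settings.Bucket,
            Key = thumbKey,
            InputStream = thumbStream
        });
        // Build the thumbnail URL from this key the same way as for the original.
        return thumbKey;
    }
}

The usual serverless alternative is an S3 event notification that invokes a Lambda function on each new object; the Lambda performs the same resize-and-put, which keeps the upload API fast and requires no changes to the multipart code.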
I'm trying to upload a large video file (800 MB) to my S3 bucket, but the upload appears to time out. It works just fine for smaller files. My project is an ASP.NET Core 2.1 application.
This is the exception that is thrown:
An unhandled exception occurred while processing the request.
SocketException: The I/O operation has been aborted because of either a thread exit or an application request.
Unknown location
IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request.
System.Net.Sockets.Socket+AwaitableSocketAsyncEventArgs.ThrowException(SocketError error)
TaskCanceledException: The operation was canceled.
GamerPilot.Video.AWSS3Helper.UploadFileAsync(Stream stream, string key, S3CannedACL acl, bool useReducedRedundancy, bool throwOnError, CancellationToken cancellationToken) in AWSS3Helper.cs, line 770
My source code looks like this:
public async Task<IVideo> AddVideoAsync(int instructorId, int lectureId, string videoName, string filePath, CancellationToken cancellationToken = default(CancellationToken))
{
if (string.IsNullOrEmpty(filePath)) { throw new ArgumentNullException("filePath", "Video filepath is missing"); }
if (!File.Exists(filePath)) { throw new ArgumentNullException("filePath", "Video filepath does not exists"); }
//Test video file upload and db row insertion
using (var stream = File.OpenRead(filePath))
{
return await AddVideoAsync(instructorId, lectureId, videoName, stream, cancellationToken);
}
}
public async Task<IVideo> AddVideoAsync(int instructorId, int lectureId, string videoName, Stream videoFile, CancellationToken cancellationToken = default(CancellationToken))
{
var video = (Video) await GamerPilot.Video.Helper.Create(_awsS3AccessKey, _awsS3SecretKey, _awsS3BucketName, _awsS3Region)
.AddVideoAsync(instructorId, lectureId, videoName, videoFile, cancellationToken);
using (var db = new DbContext(_connectionString))
{
db.Videos.Add(video);
var count = await db.SaveChangesAsync();
}
return video;
}
public async Task<IVideo> AddVideoAsync(int instructorId, int lectureId, string videoName, Stream videoFile, CancellationToken cancellationToken = default(CancellationToken))
{
if (string.IsNullOrEmpty(videoName)) { throw new ArgumentNullException("videoName", "Video name cannot be empty or null"); }
if (videoFile == null) { throw new ArgumentNullException("video", "Video stream is missing"); }
var videoNameCleaned = videoName.Replace(" ", "-").ToLower().Replace(".mp4", "");
var videoKey = string.Join('/', "videos", instructorId, lectureId, videoNameCleaned + ".mp4");
using (var aws = new AWSS3Helper(_awsS3AccessKey, _awsS3SecretKey, _awsS3BucketName, _awsS3Region))
{
try
{
//THIS FAILS ------
await aws.UploadFileAsync(videoFile, videoKey, Amazon.S3.S3CannedACL.PublicRead, true, true, cancellationToken);
}
catch (Exception ex)
{
throw;
}
}
return new Video
{
InstructorId = instructorId,
LectureId = lectureId,
Name = videoName,
S3Key = videoKey,
S3Region = _awsS3Region.SystemName,
S3Bucket = _awsS3BucketName,
Created = DateTime.Now
};
}
How can I work around this?
There is no general constraint on S3 itself that would prevent you from uploading an 800 MB file. However, there are requirements for the handling of retries and timeouts when working with AWS. It is not clear from your question whether or not you are using Amazon's SDK (I can't find the origin of GamerPilot.Video.AWSS3Helper.UploadFileAsync). However, Amazon's SDK for .NET should handle this for you if you use it in accordance with the following:
Programming with the AWS SDK for .NET - Retries and Timeouts
Using the AWS SDK for .NET for Multipart Upload (High-Level API)
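For illustration, a hedged sketch of the high-level API with explicit timeout and retry settings; the bucket name and key below are placeholders:

using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public static class LargeVideoUpload
{
    public static void Upload(string filePath)
    {
        var config = new AmazonS3Config
        {
            RegionEndpoint = RegionEndpoint.USEast1,
            Timeout = TimeSpan.FromMinutes(30),          // overall request timeout
            ReadWriteTimeout = TimeSpan.FromMinutes(30), // per read/write operation
            MaxErrorRetry = 4                            // retry transient failures
        };
        using (var client = new AmazonS3Client(config))
        using (var transfer = new TransferUtility(client))
        {
            // TransferUtility switches to multipart upload for large files
            // and retries failed parts individually.
            transfer.Upload(filePath, "my-video-bucket", "videos/large-video.mp4");
        }
    }
}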
I am trying to retrieve images from my bucket to send to my mobile apps. I currently have the devices accessing AWS directly; however, I am adding a layer of security and having my apps (iOS and Android) make requests to my server, which will then respond with DynamoDB and S3 data.
I am trying to follow the documentation and code samples provided by AWS for .NET. They worked seamlessly for DynamoDB, but I am running into problems with S3.
S3 .NET Documentation
My problem is that if I provide no credentials, I get the error:
Failed to retrieve credentials from EC2 Instance Metadata Service
This is expected as I have IAM roles set up and only want my apps and this server (in the future, only this server) to have access to the buckets.
But when I provide the credentials, the same way I provided credentials for DynamoDB, my server waits forever and doesn't receive any responses from AWS.
Here is my C#:
<%@ WebHandler Language="C#" Class="CheckaraRequestHandler" %>
using System;
using System.Web;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System.IO;
using System.Threading.Tasks;
public class CheckaraRequestHandler : IHttpHandler
{
private const string bucketName = "MY_BUCKET_NAME";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USEast1;
public static IAmazonS3 client = new AmazonS3Client("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);
public void ProcessRequest(HttpContext context)
{
if (context.Request.HttpMethod.ToString() == "GET")
{
string userID = context.Request.QueryString["User"];
string Action = context.Request.QueryString["Action"];
if (userID == null)
{
context.Response.ContentType = "text/plain";
context.Response.Write("TRY AGAIN!");
return;
}
if (Action == "GetPhoto")
{
ReadObjectDataAsync(userID).Wait();
}
var client = new AmazonDynamoDBClient("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);
Console.WriteLine("Getting list of tables");
var table = Table.LoadTable(client, "TABLE_NAME");
var item = table.GetItem(userID);
if (item != null)
{
context.Response.ContentType = "application/json";
context.Response.Write(item.ToJson());
}
else
{
context.Response.ContentType = "text/plain";
context.Response.Write("0");
}
}
}
public bool IsReusable
{
get
{
return false;
}
}
static async Task ReadObjectDataAsync(string userID)
{
string responseBody = "";
try
{
string formattedKey = userID + "/" + userID + "_PROFILEPHOTO.jpeg";
//string formattedKey = userID + "_PROFILEPHOTO.jpeg";
//formattedKey = formattedKey.Replace(":", "%3A");
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = formattedKey
};
using (GetObjectResponse response = await client.GetObjectAsync(request))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
string contentType = response.Headers["Content-Type"];
Console.WriteLine("Object metadata, Title: {0}", title);
Console.WriteLine("Content type: {0}", contentType);
responseBody = reader.ReadToEnd(); // Now you process the response body.
}
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
}
}
When I debug, this line waits forever:
using (GetObjectResponse response = await client.GetObjectAsync(request))
This is the same line that throws the credentials error when I don't provide them. Is there something that I am missing here?
Any help would be greatly appreciated.
I suspect that the AWS .NET SDK has some issues, specifically with the async call to S3.
The async call to DynamoDB works perfectly, but the S3 one hangs forever.
What fixed my problem was simply removing the async functionality (even though, per the AWS docs, the async call is supposed to be used).
Before:
using (GetObjectResponse response = await client.GetObjectAsync(request))
After:
using (GetObjectResponse response = myClient.GetObject(request))
Hopefully this helps anyone else encountering this issue.
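A note on a likely root cause, offered as a possibility rather than a confirmed SDK bug: ProcessRequest blocks on ReadObjectDataAsync(userID).Wait(), and blocking on async code in classic ASP.NET is the textbook synchronization-context deadlock; the await inside the async method tries to resume on the request context that .Wait() is still holding. If you want to keep the async call, adding ConfigureAwait(false) to the awaits avoids the hang. A minimal sketch of the method from the question, rewritten that way (it relies on the same bucketName and client fields):

static async Task ReadObjectDataAsync(string userID)
{
    string formattedKey = userID + "/" + userID + "_PROFILEPHOTO.jpeg";
    GetObjectRequest request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = formattedKey
    };
    // ConfigureAwait(false) resumes on a thread-pool thread instead of
    // the blocked ASP.NET request context, so .Wait() cannot deadlock it.
    using (GetObjectResponse response = await client.GetObjectAsync(request).ConfigureAwait(false))
    using (Stream responseStream = response.ResponseStream)
    using (StreamReader reader = new StreamReader(responseStream))
    {
        string responseBody = reader.ReadToEnd();
        // Process the response body here.
    }
}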