I'm trying to put an image in an S3 bucket via an upload URL. Here's what I've done so far:
//Get Upload URL & Access the file details
string path = HttpUtility.UrlPathEncode("image/getUploadURL/" + imageFileDetails.Extension + "/" + imageFileDetails.Name + "/" + sessionID);
string uploadURL = " ";
string mimeType = MimeTypesMap.GetMimeType(imageFileDetails.FullName);
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
var getResponse = await response.Content.ReadAsStringAsync();
dynamic JSONObject = JObject.Parse(getResponse);
uploadURL = JSONObject.data.uploadURL;
if (uploadURL != "" && uploadURL != null)
{
using (var content = new MultipartFormDataContent())
{
Console.WriteLine("UploadURL: {0}", uploadURL);
Console.WriteLine("newFilePath: {0}", newFilePath);
FileStream fileStream = new FileStream(newFilePath, FileMode.Open);
content.Add(new StreamContent(fileStream), "data");
using (var message = await client.PutAsync(uploadURL, content))
{
var putResponse = await message.Content.ReadAsStringAsync();
Console.WriteLine("PutResponse: {0} " , putResponse);
}
}
Console.ReadKey();
}
}
I continue to get the following error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Should I be approaching this differently? I've tried with RestSharp as well and have not gotten any positive results yet. Any ideas or workarounds would be appreciated!
Use the strongly typed Amazon S3 C# client. To upload an object using a presigned URL, you can use the following code.
namespace UploadUsingPresignedURLExample
{
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
/// <summary>
/// This example shows how to upload an object to an Amazon Simple Storage
/// Service (Amazon S3) bucket using a presigned URL. The code first
/// creates a presigned URL and then uses it to upload an object to an
/// Amazon S3 bucket using that URL. The example was created using the
/// AWS SDK for .NET version 3.7 and .NET Core 5.0.
/// </summary>
public class UploadUsingPresignedURL
{
public static void Main()
{
string bucketName = "doc-example-bucket";
string keyName = "samplefile.txt";
string filePath = $"source\\{keyName}";
// Specify how long the signed URL will be valid in hours.
double timeoutDuration = 12;
// If the AWS Region defined for your default user is different
// from the Region where your Amazon S3 bucket is located,
// pass the Region name to the S3 client object's constructor.
// For example: RegionEndpoint.USWest2.
IAmazonS3 client = new AmazonS3Client();
var url = GeneratePreSignedURL(client, bucketName, keyName, timeoutDuration);
var success = UploadObject(filePath, url);
if (success)
{
Console.WriteLine("Upload succeeded.");
}
else
{
Console.WriteLine("Upload failed.");
}
}
/// <summary>
/// Uploads an object to an S3 bucket using the presigned URL passed in
/// the url parameter.
/// </summary>
/// <param name="filePath">The path (including file name) to the local
/// file you want to upload.</param>
/// <param name="url">The presigned URL that will be used to upload the
/// file to the S3 bucket.</param>
/// <returns>A Boolean value indicating the success or failure of the
/// operation, based on the HttpWebResponse.</returns>
public static bool UploadObject(string filePath, string url)
{
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
var buffer = new byte[8000];
using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
dataStream.Write(buffer, 0, bytesRead);
}
}
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
return response.StatusCode == HttpStatusCode.OK;
}
/// <summary>
/// Generates a presigned URL which will be used to upload an object to
/// an S3 bucket.
/// </summary>
/// <param name="client">The initialized S3 client object used to call
/// GetPreSignedURL.</param>
/// <param name="bucketName">The name of the S3 bucket to which the
/// presigned URL will point.</param>
/// <param name="objectKey">The name of the file that will be uploaded.</param>
/// <param name="duration">How long (in hours) the presigned URL will
/// be valid.</param>
/// <returns>The generated URL.</returns>
public static string GeneratePreSignedURL(
IAmazonS3 client,
string bucketName,
string objectKey,
double duration)
{
var request = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Verb = HttpVerb.PUT,
Expires = DateTime.UtcNow.AddHours(duration),
};
string url = client.GetPreSignedURL(request);
return url;
}
}
}
You can find this example and other Amazon S3 examples on GitHub here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/dotnetv3/S3
After some extensive research, I was able to locate the error in my code. In the GET call I was passing the incorrect file type, so when I then tried to do the PUT call with the MIME type, it 403'd.
I had to update my initial GET path with the matching file type and the appropriate encoding.
In the example below it's hard-coded (that will change, of course), but that was the error:
string path = HttpUtility.UrlPathEncode("image/getUploadURL/image%2Fpng/" + imageFileDetails.Name + "/" + sessionID);
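For anyone hitting the same signature error, a rough sketch of the corrected flow is below. It reuses client, imageFileDetails, sessionID, newFilePath, and MimeTypesMap from the question, so treat the names as assumptions rather than drop-in code: the MIME type is URL-encoded into the GET path, and the PUT sends the raw file bytes with a matching Content-Type instead of multipart form data.
string mimeType = MimeTypesMap.GetMimeType(imageFileDetails.FullName); // e.g. "image/png"
// Encode the MIME type so the slash becomes %2F, matching what the upload service expects.
string path = HttpUtility.UrlPathEncode(
    "image/getUploadURL/" + Uri.EscapeDataString(mimeType) + "/" + imageFileDetails.Name + "/" + sessionID);

HttpResponseMessage response = await client.GetAsync(path);
response.EnsureSuccessStatusCode();
dynamic json = JObject.Parse(await response.Content.ReadAsStringAsync());
string uploadURL = (string)json.data.uploadURL;

using (FileStream fileStream = File.OpenRead(newFilePath))
using (var body = new StreamContent(fileStream))
{
    // A presigned PUT takes the raw object bytes as the request body, not multipart form data.
    // If the Content-Type was included when the URL was signed, the same value must be sent here,
    // otherwise S3 answers with the "signature does not match" error.
    body.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue(mimeType);
    using (HttpResponseMessage putResponse = await client.PutAsync(uploadURL, body))
    {
        Console.WriteLine("PutResponse: {0}", await putResponse.Content.ReadAsStringAsync());
    }
}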
Related
In my project there is a need to create a share link for external users who do not have an AWS user. From my research I found a couple of ways of doing this:
A bucket policy based on a tag
A Lambda that creates a signed URL every time a user requests the file
The question is: what is the best practice for doing so?
I need the download to be available until the user sharing the file stops it.
Thanks for any answers.
Using the AWS SDK, you can use the Amazon S3 presign functionality. You can perform this task in any of the supported programming languages (Java, JavaScript, Python, etc.).
The following code shows how to generate a presigned URL for an object via the Amazon S3 Java V2 API.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.time.Duration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.utils.IoUtils;
/**
* To run this AWS code example, ensure that you have set up your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GetObjectPresignedUrl {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" GetObjectPresignedUrl <bucketName> <keyName> \n\n" +
"Where:\n" +
" bucketName - the Amazon S3 bucket name. \n\n"+
" keyName - a key name that represents a text file. \n\n";
if (args.length != 2) {
System.out.println(USAGE);
System.exit(1);
}
String bucketName = args[0];
String keyName = args[1];
Region region = Region.US_WEST_2;
S3Presigner presigner = S3Presigner.builder()
.region(region)
.build();
getPresignedUrl(presigner, bucketName, keyName);
presigner.close();
}
public static void getPresignedUrl(S3Presigner presigner, String bucketName, String keyName ) {
try {
GetObjectRequest getObjectRequest =
GetObjectRequest.builder()
.bucket(bucketName)
.key(keyName)
.build();
GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder()
.signatureDuration(Duration.ofMinutes(10))
.getObjectRequest(getObjectRequest)
.build();
// Generate the presigned request
PresignedGetObjectRequest presignedGetObjectRequest =
presigner.presignGetObject(getObjectPresignRequest);
// Log the presigned URL
System.out.println("Presigned URL: " + presignedGetObjectRequest.url());
HttpURLConnection connection = (HttpURLConnection) presignedGetObjectRequest.url().openConnection();
presignedGetObjectRequest.httpRequest().headers().forEach((header, values) -> {
values.forEach(value -> {
connection.addRequestProperty(header, value);
});
});
// Send any request payload that the service needs (not needed when isBrowserExecutable is true)
if (presignedGetObjectRequest.signedPayload().isPresent()) {
connection.setDoOutput(true);
try (InputStream signedPayload = presignedGetObjectRequest.signedPayload().get().asInputStream();
OutputStream httpOutputStream = connection.getOutputStream()) {
IoUtils.copy(signedPayload, httpOutputStream);
}
}
// Download the result of executing the request
try (InputStream content = connection.getInputStream()) {
System.out.println("Service returned response: ");
IoUtils.copy(content, System.out);
}
} catch (S3Exception e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
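Since most of the other examples on this page are .NET, here is a rough C# equivalent of the presigned GET above, sketched with the AWS SDK for .NET (the bucket name, key, and region are placeholders):
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class GetPresignedUrlExample
{
    static void Main()
    {
        IAmazonS3 client = new AmazonS3Client(RegionEndpoint.USWest2);

        // Presign a GET for the object; the URL is valid for 10 minutes, mirroring the Java example.
        var request = new GetPreSignedUrlRequest
        {
            BucketName = "doc-example-bucket",
            Key = "samplefile.txt",
            Verb = HttpVerb.GET,
            Expires = DateTime.UtcNow.AddMinutes(10),
        };

        string url = client.GetPreSignedURL(request);
        Console.WriteLine("Presigned URL: " + url);
    }
}
Anyone holding that URL can download the object until it expires, which is what makes it useful for sharing with users who have no AWS identity.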
I'm using Flutter's aws_s3_upload plugin, which I found on GitHub. I am able to upload images to my AWS S3 bucket. However, the uploads are missing the "image/jpeg" MIME type required so that I can view them in a browser window as images.
At the moment, clicking on the URL downloads the image instead of displaying it in my browser. Can I update this code so that it is uploaded to my S3 bucket as an image?
library aws_s3_upload;
import 'dart:io';
import 'package:amazon_cognito_identity_dart_2/sig_v4.dart';
import 'package:http/http.dart' as http;
import 'package:path/path.dart' as path;
import './src/policy.dart';
class AwsS3 {
static Future<String> uploadFile(
{
String accessKey,
String secretKey,
String bucket,
String destDir,
String region = 'us-east-2',
File file,
String filename}) async {
final endpoint = 'https://$bucket.s3-$region.amazonaws.com';
final uploadDest = '$destDir/${filename ?? path.basename(file.path)}';
final stream = http.ByteStream(Stream.castFrom(file.openRead()));
final length = await file.length();
final uri = Uri.parse(endpoint);
final req = http.MultipartRequest("POST", uri);
final multipartFile = http.MultipartFile('file', stream, length, filename: path.basename(file.path));
final policy = Policy.fromS3PresignedPost(uploadDest, bucket, accessKey, 15, length, region: region);
final key = SigV4.calculateSigningKey(secretKey, policy.datetime, region, 's3');
final signature = SigV4.calculateSignature(key, policy.encode());
req.files.add(multipartFile);
req.fields['key'] = policy.key;
req.fields['acl'] = 'public-read';
req.fields['X-Amz-Credential'] = policy.credential;
req.fields['X-Amz-Algorithm'] = 'AWS4-HMAC-SHA256';
req.fields['X-Amz-Date'] = policy.datetime;
req.fields['Policy'] = policy.encode();
req.fields['X-Amz-Signature'] = signature;
try {
final res = await req.send();
if (res.statusCode == 204) return '$endpoint/$uploadDest';
} catch (e) {
print(e.toString());
}
}
}
So I went with using Minio, like this:
await minio.fPutObject('mybucket', objectpath, croppedFile.path,
{
'x-amz-acl': 'public-read'
}
);
In order to do that I needed the following imports:
import 'package:minio/minio.dart';
import 'package:minio/io.dart';
I am using ASP.NET Core and the AWSSDK.S3 NuGet package.
I am able to upload a file by providing the accessKeyID, secretKey, bucketName, and region, like this:
var credentials = new BasicAWSCredentials(accessKeyID, secretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
var request = new PutObjectRequest
{
AutoCloseStream = true,
BucketName = bucketName,
InputStream = storageStream,
Key = fileName
};
await client.PutObjectAsync(request);
}
But I am given only a URL to upload the file to:
11.11.11.111:/aa-bb-cc-dd-useast1
How do I upload a file through that URL? I am new to AWS and would be grateful for some help.
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadFileAsync().Wait();
}
private static async Task UploadFileAsync()
{
try
{
var fileTransferUtility =
new TransferUtility(s3Client);
// Option 1. Upload a file. The file name is used as the object key name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
}
}
}
https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html
You can use the provided access point in place of the bucket name.
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/TPutObjectRequest.html
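For example, if what you were given is an S3 access point, a minimal sketch with the AWS SDK for .NET could look like the following; the ARN, key, and file path are placeholders for whatever you were actually given:
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class AccessPointUploadExample
{
    static async Task Main()
    {
        // Placeholder access point ARN; substitute the one provided to you.
        const string accessPointArn = "arn:aws:s3:us-east-1:111122223333:accesspoint/example-access-point";

        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        var request = new PutObjectRequest
        {
            // The SDK accepts an access point ARN anywhere a bucket name is expected.
            BucketName = accessPointArn,
            Key = "samplefile.txt",
            FilePath = @"C:\temp\samplefile.txt",
        };

        var response = await client.PutObjectAsync(request);
        Console.WriteLine("PutObject HTTP status: " + response.HttpStatusCode);
    }
}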
I am trying to retrieve images from my bucket to send to my mobile apps. Currently the devices access AWS directly, but I am adding a layer of security and having my apps (iOS and Android) make requests to my server, which will then respond with DynamoDB and S3 data.
I am trying to follow the documentation and code samples provided by AWS for .NET. They worked seamlessly for DynamoDB, but I am running into problems with S3.
S3 .NET Documentation
My problem is that if I provide no credentials, I get the error:
Failed to retrieve credentials from EC2 Instance Metadata Service
This is expected as I have IAM roles set up and only want my apps and this server (in the future, only this server) to have access to the buckets.
But when I provide the credentials the same way I provided them for DynamoDB, my server waits forever and doesn't receive any response from AWS.
Here is my C#:
<%@ WebHandler Language="C#" Class="CheckaraRequestHandler" %>
using System;
using System.Web;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System.IO;
using System.Threading.Tasks;
public class CheckaraRequestHandler : IHttpHandler
{
private const string bucketName = "MY_BUCKET_NAME";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USEast1;
public static IAmazonS3 client = new AmazonS3Client("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);
public void ProcessRequest(HttpContext context)
{
if (context.Request.HttpMethod.ToString() == "GET")
{
string userID = context.Request.QueryString["User"];
string Action = context.Request.QueryString["Action"];
if (userID == null)
{
context.Response.ContentType = "text/plain";
context.Response.Write("TRY AGAIN!");
return;
}
if (Action == "GetPhoto")
{
ReadObjectDataAsync(userID).Wait();
}
var client = new AmazonDynamoDBClient("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);
Console.WriteLine("Getting list of tables");
var table = Table.LoadTable(client, "TABLE_NAME");
var item = table.GetItem(userID);
if (item != null)
{
context.Response.ContentType = "application/json";
context.Response.Write(item.ToJson());
}
else
{
context.Response.ContentType = "text/plain";
context.Response.Write("0");
}
}
}
public bool IsReusable
{
get
{
return false;
}
}
static async Task ReadObjectDataAsync(string userID)
{
string responseBody = "";
try
{
string formattedKey = userID + "/" + userID + "_PROFILEPHOTO.jpeg";
//string formattedKey = userID + "_PROFILEPHOTO.jpeg";
//formattedKey = formattedKey.Replace(":", "%3A");
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = formattedKey
};
using (GetObjectResponse response = await client.GetObjectAsync(request))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
string title = response.Metadata["x-amz-meta-title"]; // Assumes you have "title" added as metadata on the object.
string contentType = response.Headers["Content-Type"];
Console.WriteLine("Object metadata, Title: {0}", title);
Console.WriteLine("Content type: {0}", contentType);
responseBody = reader.ReadToEnd(); // Now you process the response body.
}
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
}
}
When I debug, this line waits forever:
using (GetObjectResponse response = await client.GetObjectAsync(request))
This is the same line that throws the credentials error when I don't provide them. Is there something that I am missing here?
Any help would be greatly appreciated.
I suspect that the AWS .NET SDK has some issues, specifically with the async call to S3.
The async call to DynamoDB works perfectly, but the S3 one hangs forever.
What fixed my problem was simply removing the async functionality (even though, per the AWS docs, the async call is supposed to be used).
Before:
using (GetObjectResponse response = await client.GetObjectAsync(request))
After:
using (GetObjectResponse response = myClient.GetObject(request))
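For context, the whole read method from the question, rewritten synchronously, looks roughly like this (a sketch only; bucketName and client are the same static fields as in the question, and myClient above refers to that same client). Calling GetObject directly also means the handler no longer blocks on a Task with .Wait(), a pattern that can hang in classic ASP.NET:
static string ReadObjectData(string userID)
{
    string responseBody = "";
    try
    {
        string formattedKey = userID + "/" + userID + "_PROFILEPHOTO.jpeg";
        var request = new GetObjectRequest
        {
            BucketName = bucketName,
            Key = formattedKey
        };

        // GetObject blocks until the response arrives, so there is no Task to Wait() on.
        using (GetObjectResponse response = client.GetObject(request))
        using (Stream responseStream = response.ResponseStream)
        using (StreamReader reader = new StreamReader(responseStream))
        {
            responseBody = reader.ReadToEnd();
        }
    }
    catch (AmazonS3Exception e)
    {
        Console.WriteLine("Error encountered ***. Message:'{0}' when reading an object", e.Message);
    }
    return responseBody;
}
The ProcessRequest handler would then call ReadObjectData(userID) instead of ReadObjectDataAsync(userID).Wait().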
Hopefully this helps anyone else encountering this issue.
I am successfully uploading multi-part files to AWS S3, but now I'm attempting to add an MD5 checksum to each part:
static void sendPart(existingBucketName, keyName, multipartRepsonse, partNum,
sendBuffer, partSize, vertx, partETags, s3, req, resultClosure)
{
// Create request to upload a part.
MessageDigest md = MessageDigest.getInstance("MD5")
byte[] digest = md.digest(sendBuffer.bytes)
println(digest.toString())
InputStream inputStream = new ByteArrayInputStream(sendBuffer.bytes)
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(existingBucketName).withKey(keyName)
.withUploadId(multipartRepsonse.getUploadId()).withPartNumber(partNum)
.withInputStream(inputStream)
.withMD5Digest(Base64.getEncoder().encode(digest).toString())
.withPartSize(partSize);
// Upload part and add response to our list.
vertx.executeBlocking({ future ->
// Do the blocking operation in here
// Imagine this was a call to a blocking API to get the result
try {
println("Sending chunk for ${keyName}")
PartETag eTag = s3.uploadPart(uploadRequest).getPartETag()
partETags.add(eTag);
println("Etag: " + eTag.ETag)
req.response().write("Sending Chunk\n")
} catch(Exception e) {
}
def result = "success!"
future.complete(result)
}, resultClosure)
}
However, I get the following error:
AmazonS3Exception: The XML you provided was not well-formed or did not
validate against our published schema (Service: Amazon S3; Status
Code: 400; Error Code: MalformedXML; Request ID: 91542E819781FDFC), S3
Extended Request ID:
yQs45H/ozn5+xlxV9lRgCQWwv6gQysT6A4ablq7/Epq06pUzy0qGvMc+YAkJjo/RsHk2dedH+pI=
What am I doing incorrectly?
Looks like I was converting the digest incorrectly. Base64.getEncoder().encode(digest) returns a byte[], and calling toString() on a byte array yields the array's identity (something like [B@6d06d69c) rather than the Base64 text, so the value sent to S3 was garbage and the request was rejected. Base64.getEncoder().encodeToString(digest) produces the proper Base64 string:
static void sendPart(existingBucketName, keyName, multipartRepsonse, partNum,
sendBuffer, partSize, vertx, partETags, s3, req, resultClosure)
{
// Create request to upload a part.
MessageDigest md = MessageDigest.getInstance("MD5")
byte[] digest = md.digest(sendBuffer.bytes)
InputStream inputStream = new ByteArrayInputStream(sendBuffer.bytes)
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(existingBucketName).withKey(keyName)
.withUploadId(multipartRepsonse.getUploadId()).withPartNumber(partNum)
.withInputStream(inputStream)
.withMD5Digest(Base64.getEncoder().encodeToString(digest))
.withPartSize(partSize)
// Upload part and add response to our list.
vertx.executeBlocking({ future ->
try {
println("Sending chunk for ${keyName}")
PartETag eTag = s3.uploadPart(uploadRequest).getPartETag()
partETags.add(eTag);
req.response().write("Sending Chunk\n")
} catch(Exception e) {
}
def result = "success!"
future.complete(result)
}, resultClosure)
}