How to generate an AWS S3 Pre-Signed URL - amazon-web-services

I'm trying to get a pre-signed URL for an Amazon S3 object using the Aws\S3\S3Client::createPresignedRequest() method:
$s3 = new S3Client($config);
$command = $s3->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => $key,
    'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
));
$request = $s3->createPresignedRequest($command, $time);
// Get the actual pre-signed URL
$this->signedUrl = (string) $request->getUri();
I get a pre-signed URL like this:
https://s3.amazonaws.com/img/1c9a149e-57bc-11e5-9347-58743fdfa18a?X-Amz-Content-Sha256=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=13JZVPMFV04D8A3AQPG2%2F20150910%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20150910T181455Z&X-Amz-SignedHeaders=Host&X-Amz-Expires=1200&X-Amz-Signature=0d99ae98ea13e2974322575f95f5a19e94e13dc859b2509cecc21cd41c01c65d
and this URL returns an error:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
....

Generating a pre-signed URL is done entirely on the client side, with no interaction with the S3 service APIs. As such, there is no validation that the object actually exists at the time the pre-signed URL is created. (A pre-signed URL can technically even be created before the object is uploaded.)
The NoSuchKey error means exactly that -- there is no such object with the specified key in the bucket, where key, in S3 parlance, refers to the path+filename (the URI) of the object. (It's referred to as a key as in the term key/value store -- which S3 is -- the path to the object is the "key," and the object body/payload is the "value.")
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
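To make the key/value terminology concrete, here is a small illustrative sketch (not part of the original answer) that splits a path-style S3 URL like the one in the question into its bucket and key; in that URL, "img" is the bucket and the UUID-like path is the key:

```javascript
// Split a path-style S3 URL (https://s3.amazonaws.com/<bucket>/<key>)
// into its bucket and key parts.
function splitPathStyleUrl(objectUrl) {
  const u = new URL(objectUrl);
  // pathname is "/<bucket>/<key...>"; drop the leading slash and split
  const [bucket, ...rest] = u.pathname.slice(1).split('/');
  return { bucket, key: rest.join('/') };
}
```

A NoSuchKey error means the key part of this split does not match any object in that bucket.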

The above error indicates that the key doesn't exist -- that is, the file you are generating the pre-signed URL for doesn't exist. You can generate a pre-signed URL even if the object or key is not present.
I have faced multiple similar issues, and I solved them by using await in my code to wait until the S3 key is uploaded.
In my scenario, I was using Lambda to upload the file to S3, generate the pre-signed URL, and send it to the client.
Lambda Code:
const AWS = require('aws-sdk');

// Must be async so that `await` is valid inside
async function uploadAndGeneratePreSignedUrlHandler () {
    const s3 = new AWS.S3();
    const basicS3ParamList = {
        Bucket: "BUCKET_NAME",
        Key: "FILE_NAME", // path or file name
    };
    const uploadS3ParamList = {
        ...basicS3ParamList,
        Body: "DATA" // Buffer data or file
    };
    try {
        // Wait for the upload to finish so the key exists before signing
        await s3.upload(uploadS3ParamList).promise();
        const presignedURL = s3.getSignedUrl('getObject', basicS3ParamList);
        return presignedURL;
    } catch (error) {
        console.log(error);
    }
}
Client side:
I created a pop-up for the download.
// React
const popUpEventForDownload = async (testParams) => {
    try {
        const fetchResponse = await axios({ method: 'GET', url: 'GATEWAY_URL', data: testParams });
        const { presignedURL } = fetchResponse.data;
        downloadCSVFile(presignedURL, 'test');
    } catch (error) {
        console.log(error);
    }
};
const downloadCSVFile = (sUrl, fileName) => {
    // If in Chrome, Safari or Firefox - download via virtual link click
    // Creating new link node.
    var link = document.createElement('a');
    link.href = sUrl;
    if (link.download !== undefined) {
        // Set HTML5 download attribute. This will prevent file from opening if supported.
        link.download = `${fileName}.CSV`;
    }
    // Dispatching click event.
    if (document.createEvent) {
        var e = document.createEvent('MouseEvents');
        e.initEvent('click', true, true);
        link.dispatchEvent(e);
        return true;
    }
    // Force file download (whether supported by server).
    var query = '?download';
    window.open(sUrl + query);
};

Related

AWS SDK generates wrong presigned url for reading an object

I am using S3 to store videos, and now I am using pre-signed URLs to restrict access to them.
I am using these functions to generate the pre-signed URL:
var getVideoReadSignedUrl = async function (url) {
    const key = awsS3Helpers.parseVideoKeyFromVideoUrl(url);
    return new Promise((resolve, reject) => {
        s3.getSignedUrl(
            "getObject",
            {
                Bucket: AWS_BUCKET_NAME,
                Key: key,
                Expires: 300,
            },
            (err, url) => {
                console.log("🚀 ~ file: s3-config.js ~ line 77 ~ returnnewPromise ~ err", err);
                console.log("🚀 ~ file: s3-config.js ~ line 77 ~ returnnewPromise ~ url", url);
                if (err) {
                    reject(err);
                } else {
                    resolve(url);
                }
            }
        );
    });
};
And this:
const parseVideoKeyFromVideoUrl = (object_url) => {
    const string_to_remove =
        "https://" + AWS_BUCKET_NAME + ".s3." + REGION + ".amazonaws.com/";
    const object_key = object_url.replace(string_to_remove, "");
    return object_key;
};
This is an example of a video url:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2F60e589xxxxxxxxx463c.mp4
So I call getVideoReadSignedUrl to get the signed url to give access to it:
getVideoReadSignedUrl(
"https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2F60e589xxxxxxxxx463c.mp4"
);
parseVideoKeyFromVideoUrl correctly parses the key from the url:
videos%2F60e589xxxxxxxxx463c.mp4
BUT, this is what getVideoReadSignedUrl generates:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%252F60e589xxxxxxxxx463c.mp4?X-Amz-Algorithm=AWS4-xxx-xxxxx&X-Amz-Credential=AKIARxxxxxMVEWKUW%2F20221120%2Feu-xxxx-3%2Fs3%2Faws4_request&X-Amz-Date=20221120xxxx19Z&X-Amz-Expires=300&X-Amz-Signature=0203efcfaxxxxxxc53815746f75a357ff9d53fe581491d&X-Amz-SignedHeaders=host
When I open that url in the browser it tells me that the key does not exist:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>videos%2F60e589xxxxxxxxx463c.mp4</Key>
Even though in the error message the key is the same as in the original url.
But, I noticed a slight difference in the key between the original url and the presigned url:
Key in original url:
videos%2F60e589xxxxxxxxx463c.mp4
Key in signed url:
videos%252F60e589xxxxxxxxx463c.mp4
(the %2F in the original has become %252F in the signed url)
Not sure if this is causing the issue.
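As a small sketch of the suspected mechanism (an illustration, not from the SDK source): URL-signing percent-encodes the key it is given, so a key that already contains "%2F" gets encoded a second time, because "%" itself encodes to "%25":

```javascript
// A key whose '/' was already stored percent-encoded as %2F
const storedKey = 'videos%2F60e589xxxxxxxxx463c.mp4';

// Encoding it again turns %2F into %252F ('%' -> '%25')
const reEncoded = encodeURIComponent(storedKey);
```

This matches the %2F vs %252F difference observed between the two URLs.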
NOTE:
For the upload, I am using multipart upload and the reason the video url is structured like that is because of file_name here:
const completeMultiPartUpload = async (user_id, parts, upload_id) => {
    let file_name;
    file_name = `videos/${user_id}.mp4`;
    let params = {
        Bucket: AWS_BUCKET_NAME,
        Key: file_name,
        MultipartUpload: {
            Parts: parts,
        },
        UploadId: upload_id,
    };
    // (the call producing `data` was omitted from the snippet; presumably:)
    const data = await s3.completeMultipartUpload(params).promise();
    return data;
};
However, it should be stored like this:
/videos/e589xxxxxxxxx463c.mp4
and not like this:
/videos%2F60e589xxxxxxxxx463c.mp4
I am not sure why it replaces / with %2F (the 60 that follows is just the start of the file name), and this may be what's causing the whole issue.
After more investigation, I found that completeMultiPartUpload returns this:
{
    Location:
        "https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4",
    Bucket: "lodeep-storage-3",
    Key: "videos/xxxxxxxxxxxxxxxxx.mp4",
    ETag: '"ee6xxxxxxxxxxxxf11-1"',
}
And so the actual object key is saved like this in S3:
"videos/xxxxxxxxxxxxxxxxx.mp4"
And this is the object url if I get it from AWS console:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos/xxxxxxxxxxxxxxxxx.mp4
But, in the database, this url gets saved like this since it's what completeMultiPartUpload function finally returns:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4
Notice how / is replaced with %2F.
So, when I am generating the signed url to read the video, the function that parses the key from the url (parseVideoKeyFromVideoUrl), instead of getting the correct url in S3:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos/xxxxxxxxxxxxxxxxx.mp4
It gets the one stored in the database:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4
And so instead of returning this as a key:
/videos/xxxxxxxxxxxxxxxxx.mp4
It returns this:
/videos%2Fxxxxxxxxxxxxxxxxx.mp4
Which is an incorrect key. And so the signed url to read the video is incorrect. And I get this key doesn't exist error.
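One possible fix, following the asker's own diagnosis (this is a sketch under that assumption, not a confirmed answer): decode the percent-encoded key after stripping the base URL, so "%2F" becomes "/" again before the key is handed to getSignedUrl:

```javascript
// Parse the key from the stored object URL, then decode it so that a
// percent-encoded '/' (%2F) is restored before signing.
const parseVideoKeyFromVideoUrl = (object_url, base_url) => {
  const object_key = object_url.replace(base_url, '');
  return decodeURIComponent(object_key);
};
```

With the decoded key, the signing step encodes it exactly once, avoiding the %252F double encoding.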

Amazon S3 bucket upload by using createPresignedPost(). Is it safe to return result to client?

I have an api server which is calling the aws-sdk s3 function -> createPresignedPost(), then returning the result from the end point to the client which allows the client to perform the upload.
I need to know if the information which is returned is safe to return to the client or if there is any sensitive information returned here. See the fields returned below...
"url": "",
"key": "",
"bucket": "",
"X-Amz-Algorithm": "",
"X-Amz-Credential": "",
"X-Amz-Date": "",
"Policy": "",
"X-Amz-Signature": ""
Can I project any of these fields out? or are they all necessary to perform the upload?
Edit - If I remove any of the various other parameters, they each return an error similar to below
You are pre-signing an object to upload to an Amazon S3 bucket using the AWS SDK. For example, in Java, you use this:
public static void signBucket(S3Presigner presigner, String bucketName, String keyName) {
    try {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(keyName)
                .contentType("text/plain")
                .build();
        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))
                .putObjectRequest(objectRequest)
                .build();
        PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
        String myURL = presignedRequest.url().toString();
        System.out.println("Presigned URL to upload a file to: " + myURL);
        System.out.println("Which HTTP method needs to be used when uploading a file: " +
                presignedRequest.httpRequest().method());

        // Upload content to the Amazon S3 bucket by using this URL
        URL url = presignedRequest.url();

        // Create the connection and use it to upload the new object by using the presigned URL
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "text/plain");
        connection.setRequestMethod("PUT");
        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        out.write("This text was uploaded as an object by using a presigned URL.");
        out.close();
        connection.getResponseCode();
        System.out.println("HTTP response code is " + connection.getResponseCode());
    } catch (S3Exception e) {
        e.getStackTrace();
    } catch (IOException e) {
        e.getStackTrace();
    }
}
As shown in this Java example (which pre-signs a PUT request rather than using createPresignedPost), the only value you need to return to the client is the URL, which the client can use to upload an object. Also notice that you can control how long the URL stays valid via the signatureDuration value.
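For createPresignedPost specifically (the API the question asks about), the situation is different from a presigned PUT: every returned field must be submitted as a form field alongside the file, which is consistent with the errors you saw when removing parameters. None of the fields are sensitive; the policy and signature are derived from the secret key but do not reveal it. A hypothetical helper (the field names assume a SigV4 presigned POST response) to check that nothing was dropped:

```javascript
// Hypothetical helper, not an AWS API: returns which SigV4 POST policy
// fields are missing from the set being returned to the client.
function missingPostFields(fields) {
  const required = [
    'key', 'Policy', 'X-Amz-Algorithm',
    'X-Amz-Credential', 'X-Amz-Date', 'X-Amz-Signature',
  ];
  return required.filter((name) => !(name in fields));
}
```

If this returns a non-empty list, the client's upload will be rejected with a policy/signature error.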

Upload an image to Amazon S3 using @aws-sdk/client-s3 and get its location

I am trying to upload an image file to S3 but get this error:
ERROR: MethodNotAllowed: The specified method is not allowed against this resource.
My code uses the @aws-sdk/client-s3 package to upload, as follows:
const s3 = new S3({
    region: 'us-east-1',
    credentials: {
        accessKeyId: config.accessKeyId,
        secretAccessKey: config.secretAccessKey,
    }
});

exports.uploadFile = async options => {
    options.internalPath = options.internalPath || (`${config.s3.internalPath + options.moduleName}/`);
    options.ACL = options.ACL || 'public-read';
    logger.info(`Uploading [${options.path}]`);
    const params = {
        Bucket: config.s3.bucket,
        Body: fs.createReadStream(options.path),
        Key: options.internalPath + options.fileName,
        ACL: options.ACL
    };
    try {
        const s3Response = await s3.completeMultipartUpload(params);
        if (s3Response) {
            logger.info(`Done uploading, uploaded to: ${s3Response.Location}`);
            return { url: s3Response.Location };
        }
    } catch (err) {
        logger.error(err, 'unable to upload:');
        throw err;
    }
};
I am not sure what this error means, and once the file is uploaded I need to get its location in S3.
Thanks for any help.
For uploading a single image file you need to call s3.upload(), not s3.completeMultipartUpload().
If you had very large files and wanted to upload them in multiple parts, the workflow would look like:
s3.createMultipartUpload()
s3.uploadPart()
s3.uploadPart()
...
s3.completeMultipartUpload()
Looking at the official documentation, it looks like the new way to do a simple S3 upload in the JavaScript SDK is this:
s3.send(new PutObjectCommand(uploadParams));
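As for getting the location: the v3 PutObjectCommand response does not appear to include a Location field the way v2's upload() did, so if you need the object's URL you can construct it yourself from the bucket, region, and key. A sketch (a hypothetical helper; it assumes a virtual-hosted-style URL and no custom endpoint):

```javascript
// Hypothetical helper: build the object's URL from its parts.
// Each path segment is encoded, but the '/' separators are kept.
function objectLocation(bucket, region, key) {
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return `https://${bucket}.s3.${region}.amazonaws.com/${encodedKey}`;
}
```

For example, objectLocation('imageakal', 'us-east-1', 'uploads/cat.png') yields the same shape of URL the S3 console shows for an object.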

Internal error Unable to get object metadata from S3. Check object key, region and/or access permissions in aws Textract awssdk.core

I am trying to run a document analysis request against an S3 bucket, but it is giving me an internal error. I am extracting table values from a document. Here is my code; please note I am using the AWS SDK for .NET.
public async Task<IActionResult> Index()
{
    var res = await StartDocumentAnalysis(BucketName, S3File, "TABLES");
    return View();
}

public async Task<string> StartDocumentAnalysis(string bucketName, string key, string featureType)
{
    var request = new StartDocumentAnalysisRequest();
    var s3Object = new S3Object
    {
        Bucket = bucketName,
        Name = key
    };
    request.DocumentLocation = new DocumentLocation
    {
        S3Object = s3Object
    };
    request.FeatureTypes = new List<string> { featureType };

    // await the call rather than blocking on .Result in an async method
    var response = await _textract.StartDocumentAnalysisAsync(request);
    WaitForJobCompletion(response.JobId, 5000);
    return response.JobId;
}
Error message:
Internal error Unable to get object metadata from S3. Check object key, region and/or access permissions in aws Textract awssdk.core

Image upload to S3 bucket from deployed .NET Core AWS Lambda serverless app not working (broken image)

When I make a POST request to upload an image file to an AWS S3 bucket from my local .NET Core AWS Lambda serverless application, it works. But from my deployed application, the image is still uploaded to the S3 bucket, yet it is broken (shows a black empty image).
Here is the code:
[HttpPut("PostImageFile")]
public async Task FileImageAsync(string Id)
{
    var s3Client = new AmazonS3Client("*******", "*******", Amazon.RegionEndpoint.USEast1);
    try
    {
        var httpRequest = HttpContext.Request;
        // posted file
        var file = httpRequest.Form.Files[0];
        byte[] fileBytes = new byte[file.Length];
        file.OpenReadStream().Read(fileBytes, 0, Int32.Parse(file.Length.ToString()));
        var fileName = Guid.NewGuid() + file.FileName;
        PutObjectResponse response = null;
        using (var stream = new MemoryStream())
        {
            file.CopyTo(stream);
            var request = new PutObjectRequest
            {
                BucketName = "imageakal",
                Key = fileName,
                InputStream = stream,
                ContentType = file.ContentType,
                CannedACL = S3CannedACL.PublicReadWrite
            };
            response = await s3Client.PutObjectAsync(request);
        }
    }
    catch (Exception ex)
    {
        Console.Write("Upload Failed: " + ex.Message);
    }
}
Without many more details, I would guess that your AWS settings could have a list of permitted/denied domains. I would check that your AWS instance is configured to allow requests from your domain.
Just put "multipart/form-data" in the 'Binary Media Types' section of the API Gateway settings tab, and deploy it (don't forget to deploy).
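If the API is defined with AWS SAM rather than through the console, the same setting can be expressed in the template. A sketch (the resource name is hypothetical; note that SAM requires '/' in a media type to be written as '~1'):

```yaml
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      BinaryMediaTypes:
        # '/' must be escaped as '~1' in SAM templates
        - multipart~1form-data
```

Without this, API Gateway treats the multipart body as text and corrupts the binary image payload, which matches the "broken image" symptom.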