TimeoutError while generating AWS S3 presigned URL

I've created a route that generates a pre-signed URL to upload an object to AWS S3. It was working initially but lately, it's returning a Timeout Error.
Here is my controller code:
async (req, res, next) => {
  const BUCKET = req.params.bucket;
  const KEY = 'myKey_' + uuid.v4();
  const EXPIRATION = 60 * 60 * 1000; // one hour, in milliseconds
  let signedUrl;
  try {
    // Credentials are also available in the ~/.aws/credentials file
    const s3 = new S3({
      accessKeyId: process.env.AWS_ACCESS_KEY,
      secretAccessKey: process.env.AWS_SECRET_KEY,
      region: 'ap-south-1'
    });
    // create an S3 presigner object
    const signer = new S3RequestPresigner({ ...s3.config });
    // create the file upload request
    const AWSUploadRequest = await createRequest(s3, new PutObjectCommand({
      Bucket: BUCKET,
      Key: KEY
    }));
    const expire = new Date(Date.now() + EXPIRATION);
    // create & format the presigned URL
    signedUrl = formatUrl(await signer.presign(AWSUploadRequest, expire));
    console.log(`Generated Signed URL: ${signedUrl}`);
  } catch (err) {
    console.log(`Error creating presigned url ${err}`);
    return next(
      new ErrorResponse(
        `Error while generating aws s3 presigned url`,
        500
      )
    );
  }
  res.status(200).json({
    signedUrl
  });
}
Here are my logs:
AWSUploadReq: {"method":"PUT","hostname":"s3.ap-south-1.amazonaws.com","query":{"x-id":"PutObject"},"headers":{"Content-Type":"application/octet-stream","user-agent":"aws-sdk-nodejs-v3-@aws-sdk/client-s3/1.0.0-rc.7 win32/v10.15.3"},"protocol":"https:","path":"/dammnn/myKey_file-c6f198d6-9e91-4892-8c88-8a49e15406c1"}
Error creating presigned url Error: TimeoutError
The error gives no indication of why the request is timing out. Looking for some guidance on this. Thanks.
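One sanity check that needs no SDK calls: the presigner's second argument is an absolute expiry Date, and SigV4 presigned URLs are rejected outright if they are valid for more than seven days, so it is worth confirming the computed expiry. A minimal stdlib-only sketch (the helper name presignExpiry is hypothetical, not an SDK API):

```javascript
// Hypothetical helper: compute the expiry Date passed to signer.presign(),
// clamped to the seven-day maximum that SigV4 presigned URLs allow.
const MAX_PRESIGN_MS = 7 * 24 * 60 * 60 * 1000; // 604800 seconds

function presignExpiry(ttlMs, now = Date.now()) {
  const clamped = Math.min(ttlMs, MAX_PRESIGN_MS);
  return new Date(now + clamped);
}
```

Worth noting as well: presigning is generally a local signing computation, so a TimeoutError tends to point at something else in the handler's network path rather than the signing step itself.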

Related

How to abort Multipart Upload while uploading using AWS SDK for JavaScript v3?

I'm trying to upload a large file using the AWS SDK for JavaScript v3 multipart upload.
Basically I'm using the Upload class from @aws-sdk/lib-storage to upload. But after some time, when the sessionToken expires, AWS starts throwing a 400 Bad Request error.
I'm calling uploadReq.abort() in the catch block, and I expected the catch block to run immediately when AWS started throwing the 400 error, with no further part upload requests being triggered. Instead, it continues to trigger the part upload requests, and the catch block is only called once all the subsequent part requests have finished and failed. Is there a way to tell the AWS S3 client not to trigger any more part upload requests when there is an error?
Here is the code I'm trying:
import {
  AbortMultipartUploadCommandOutput,
  CompleteMultipartUploadCommandOutput,
  S3,
  Tag
} from '@aws-sdk/client-s3';
import { Progress, Upload } from '@aws-sdk/lib-storage';
...
const s3Client = new S3({
  region: config.region,
  credentials: {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
    sessionToken: config.sessionToken
  }
});
const uploadReq = new Upload({
  client: s3Client,
  params: {
    Bucket: <bucketName>,
    Key: <key>,
    Body: <file_body>
  },
  tags: [], // optional tags
  queueSize: 4, // optional concurrency configuration
  partSize: 1024 * 1024 * 10, // optional size of each part, in bytes (10 MB here)
  leavePartsOnError: false // optional manually handle dropped parts
});
const uploadReq$ = from(uploadReq.done()).pipe(
  catchError(() => {
    uploadReq.abort();
    return of(null);
  })
);
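For comparison, the behaviour being asked for here, stop scheduling new part requests as soon as any part fails, can be sketched in plain Node without any AWS dependency. uploadParts and the injected uploadPart are hypothetical names for illustration, not lib-storage APIs:

```javascript
// Sketch: a part-upload queue whose workers stop pulling new parts
// the moment any part fails, instead of draining the whole queue.
async function uploadParts(parts, uploadPart, concurrency = 4) {
  let failed = false;
  let next = 0;
  const errors = [];
  async function worker() {
    while (!failed && next < parts.length) {
      const part = parts[next++];
      try {
        await uploadPart(part);
      } catch (err) {
        failed = true; // flag makes the other workers' loops exit
        errors.push(err);
      }
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
  if (failed) throw errors[0];
  return parts.length;
}
```

The key design point is that in-flight parts still complete (they cannot be un-sent), but no *new* part request is started after the first failure, which is the gap the question describes with lib-storage.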

How do I make a putObject request to a presigned URL using AWS S3?

I am working with an AWS S3 bucket and trying to upload an image from a React Native project managed by Expo, with Express on the backend. I have created an s3 file on the backend that handles getting the presigned URL; this works and returns the URL to the front end inside the thunk function below from Redux Toolkit. I used axios to send the request to my server, and that works too.

I have tried both axios and fetch for the final PUT to the presigned URL, but when it reaches the S3 bucket there is nothing in the file, just an empty file of 200 bytes every time. When I use the same presigned URL from Postman, add an image in the binary section, and send the request, the image uploads to the bucket with no problems. When I send binary or base64 to the bucket from the RN app, it just uploads those values in text form. I also attempted react-native-image-picker but was having problems with that too. Any ideas would be helpful, thanks. I have included a snippet from my redux slice; if you need more info let me know.
redux slice projects.js
// create a project
// fancy function here ......
export const createProject = createAsyncThunk(
  "projects/createProject",
  async (postData) => {
    // sending image to s3 bucket and getting a url to store in d
    const response = await axios.get("/s3");
    // PUT image directly to s3 bucket
    const s3Url = await fetch(response.data.data, {
      method: "PUT",
      body: postData.image
    });
    console.log(s3Url);
    console.log(response.data.data);
    // make another request to my server to store extra data
    try {
      const response = await axios.post('/works', postData);
      return response.data.data;
    } catch (err) {
      console.log("Create projects failed: ", err);
    }
  }
)
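Since the question notes that binary or base64 sent from the app "uploads those values in text form", the core issue can be illustrated without S3 at all: if the PUT body is the base64 string itself, the object stores those characters, not the image bytes. A minimal Node sketch of the two bodies (the three-byte "image" is fabricated for illustration):

```javascript
// Fake three-byte "image" (the JPEG magic bytes) for illustration.
const imageBytes = Buffer.from([0xff, 0xd8, 0xff]);
const base64Text = imageBytes.toString('base64');

// Sending the string uploads its characters verbatim -> text in the bucket.
const textBody = Buffer.from(base64Text);
// Decoding first recovers the original binary payload.
const binaryBody = Buffer.from(base64Text, 'base64');
```

On the React Native side, one commonly used approach along these lines is to turn the local file URI into a Blob first (for example via fetch(localUri) followed by response.blob()) and use that Blob as the PUT body, rather than the URI string or a base64 string.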

AWS Lambda is returning truncated image (binary response)

I am using the following architecture:
CloudFront sits in front of API Gateway, and API Gateway has a Lambda proxy as its endpoint.
When a user requests an image, the request goes to CloudFront, then to API Gateway, and then to the Lambda proxy.
The Lambda proxy makes an API call to the CDN and fetches an image.
Now, API Gateway is returning a truncated image every time. I am not sure what's wrong with the code below:
const axios = require('axios')
// const sharp = require('sharp')
const redis = require('redis')
const aws = require("aws-sdk")
var redisClient;
exports.handler = async (event, context, callback) => {
  const result = await axios({
    url: "https://cdn.pixabay.com/photo/2020/09/18/19/31/laptop-5582775_1280.jpg",
    method: "GET",
    responseType: 'arraybuffer',
    // headers: lambdaEvent.headers
  })
  const buffer = Buffer.from(result.data)
  const base64Buffer = buffer.toString("base64")
  console.log("Base64 encoded image is " + base64Buffer)
  const imageResponse = {
    "statusCode": result.status,
    "headers": { "Content-type": "image/jpeg" },
    "body": base64Buffer,
    "isBase64Encoded": true
  }
  return imageResponse
}
I have validated that result.data is the correct image by writing it to a file in S3.
I have validated that base64Buffer is the correct encoded string by decoding it back to an image with an online tool; it shows the expected image.
However, the API Gateway response to the browser is always truncated.
Can anyone help with what's wrong with this code?
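One thing worth ruling out before digging into API Gateway's binary media type settings: base64 inflates the body by roughly 4/3, and a synchronous Lambda invocation caps the response payload at 6 MB, so a large enough image cannot come back whole through this path. A stdlib-only sketch (fitsLambdaResponse is a hypothetical helper, not an AWS API):

```javascript
// Sketch: check whether an image, once base64-encoded for a Lambda proxy
// response, still fits under the 6 MB synchronous-invocation payload cap.
const LAMBDA_RESPONSE_LIMIT = 6 * 1024 * 1024;

function fitsLambdaResponse(imageBuffer) {
  // Base64 encodes every 3 input bytes as 4 output characters.
  const base64Length = Math.ceil(imageBuffer.length / 3) * 4;
  return base64Length <= LAMBDA_RESPONSE_LIMIT;
}
```

If the size is fine, the usual suspects are the binary media types configuration on the API Gateway stage and the Accept/Content-Type negotiation between CloudFront and API Gateway.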

How do I know the signature of a signed URL?

I have a lambda which generates a signed URL for users to upload files to s3 bucket. The code is in nodejs:
export const getSignedURL = async (): Promise<{ signedURL: string }> => {
  try {
    const s3 = new S3();
    const params = {
      Bucket: CONFIG.s3Bucket,
      Key: `${CONFIG.s3PictureFolder}/${uuidv4()}`,
      Expires: CONFIG.presignedURLExpires,
    };
    const signedURL = await s3.getSignedUrlPromise('putObject', params);
    console.log(`generate signedURL url: ${signedURL}`);
    return { signedURL };
  } catch (err) {
    console.error(err);
    throw err;
  }
};
I am able to get the URL successfully. However, when I test it via curl:
curl -XPUT PRESIGNED_URL --data "my data"
I got this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXXX</AWSAccessKeyId><StringToSign>
It seems that this URL requires an access key. Does this key mean the AWS credential key issued by IAM?
This URL was generated by the Lambda function; how do I know which key it uses? And I'd like to generate a public presigned URL for users to upload. Is there a way to do that?
After some debugging, I found that I need to add the Content-Type when generating the presigned URL, like below:
const params = {
  Bucket: CONFIG.s3Bucket,
  Key: `${CONFIG.s3PictureFolder}/${uuidv4()}`,
  Expires: CONFIG.presignedURLExpires,
  ContentType: 'text/plain'
};

Pre-signed AWS S3 URL slowness

I am trying to secure S3 resources, and I have created temporary credentials using the STS service in the same region where the bucket exists. I am using these credentials to create a pre-signed URL which expires after one hour. These URLs are for images that will be shown in a mobile app. But after implementation, we are experiencing slowness in the response from S3. Has anyone experienced the same behaviour?
This is how I am generating the pre-signed URL:
try {
    AmazonS3 s3Client = new AmazonS3Client(getTemporaryAwsCredentials());
    // Generate the pre-signed URL.
    LOGGER.info("Generating pre-signed URL for " + unsignedUrl);
    GeneratePresignedUrlRequest generatePresignedUrlRequest =
            new GeneratePresignedUrlRequest(bucketName, fileName).withMethod(HttpMethod.GET);
    URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
    LOGGER.info("Pre-Signed URL: " + url.toString());
    return url.toString();
} catch (Exception e) {
    LOGGER.error("Error while generating pre signed url ", e);
}

private AWSCredentials getTemporaryAwsCredentials() {
    AWSSecurityTokenServiceClient sts_client = new AWSSecurityTokenServiceClient(getCredentials());
    sts_client.setEndpoint("sts.us-west-2.amazonaws.com");
    GetSessionTokenRequest session_token_request = new GetSessionTokenRequest();
    session_token_request.setDurationSeconds(43200);
    GetSessionTokenResult session_token_result = sts_client.getSessionToken(session_token_request);
    Credentials session_creds = session_token_result.getCredentials();
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            session_creds.getAccessKeyId(),
            session_creds.getSecretAccessKey(),
            session_creds.getSessionToken());
    return sessionCredentials;
}