AWS S3: Can't do anything with one file

I'm having issues trying to remove a file from my S3 bucket with the following name: Patrick bla bla 1 PV#05-06-2018-19:42:01.jpg
If I try to rename it through the S3 console, it just says that the operation failed. If I try to delete it, the operation will "succeed", but the file will still be there.
I've also tried removing it through the AWS CLI. When listing the object, I get this back:
{
    "LastModified": "2018-06-05T18:42:05.000Z",
    "ETag": "\"b67gcb5f8166cab8145157aa565602ab\"",
    "StorageClass": "STANDARD",
    "Key": "test/\bPatrick bla bla 1 PV#05-06-2018-19:42:01.jpg",
    "Owner": {
        "DisplayName": "dev",
        "ID": "bd65671179435c59d01dcdeag231786bbf6088cb1ca4881adf3f5e17ea7e0d68"
    },
    "Size": 1247277
},
But if I try to delete or head it, the CLI won't find it:
aws s3api head-object --bucket mybucket --key "test/\bPatrick bla bla 1 PV#05-06-2018-20:09:37.jpg"
An error occurred (404) when calling the HeadObject operation: Not Found
Is there any way to remove, rename or just move this image from the folder?
Regards

It looks like your object's key begins with a backspace (\b) character. I'm sure there is a way to manage this using the AWS CLI, but I haven't worked out what it is yet.
Here's a Python script that works for me:
import boto3

s3 = boto3.client('s3')

bucket = 'avondhupress'
key = 'test/\bPatrick bla bla 1 PV#05-06-2018-19:42:01.jpg'

s3.delete_object(Bucket=bucket, Key=key)
Or the equivalent in Node.js:
const aws = require('aws-sdk');

const s3 = new aws.S3({ region: 'us-east-1', signatureVersion: 'v4' });

const params = {
  Bucket: 'avondhupress',
  Key: 'test/\bPatrick bla bla 1 PV#05-06-2018-19:42:01.jpg',
};

s3.deleteObject(params, (err, data) => {
  if (err) console.error(err, err.stack);
});
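If typing the control character by hand is the obstacle, a minimal boto3 sketch (the bucket name and prefix are assumptions) can list the keys under the prefix and delete any that contain non-printable characters, so the exact key never has to be retyped:
import boto3

s3 = boto3.client('s3')
bucket = 'mybucket'   # assumed bucket name
prefix = 'test/'      # assumed prefix

# Page through the keys under the prefix and delete any that contain
# non-printable characters such as \b.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if any(not ch.isprintable() for ch in key):
            print('Deleting %r' % key)
            s3.delete_object(Bucket=bucket, Key=key)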


I want to create a presigned URL POST, but it always fails

Thanks for the great package!
I have a problem when developing with LocalStack, using the S3 service to create a presigned URL POST.
I run LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
My settings are AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I have made sure the bucket exists:
> awslocal s3api list-buckets
{
    "Buckets": [
        {
            "Name": "my-bucket",
            "CreationDate": "2021-11-16T08:43:23+00:00"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}
I try to create the presigned URL by running this in the console:
s3_client_sync.create_presigned_post(bucket_name=settings.S3_Bucket, object_name="application/test.png", fields={"Content-Type": "image/png"}, conditions=[["Expires", 3600]])
and it returns this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I test it using Insomnia, and then read the logs in LocalStack:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing, so that I cannot create the presigned URL POST?
The problem is with your AWS configuration:
AWS_ACCESS_KEY_ID=test // should be an actual access key for the IAM user
AWS_SECRET_ACCESS_KEY=test // should be an actual secret key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // this endpoint seems wrong
S3_Bucket=my-bucket // should be the actual bucket name in the AWS S3 console
For more information, read up on setting up your environment with the correct AWS credentials: Setup AWS Credentials
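Once the credentials and endpoint are sorted out, a minimal boto3 sketch for generating the presigned POST directly looks like this (the bucket name, key, and LocalStack endpoint are assumptions; when targeting real AWS, drop endpoint_url and use real credentials):
import boto3

# Assumed LocalStack endpoint and test credentials; omit endpoint_url and
# use real credentials when targeting actual AWS.
s3 = boto3.client(
    's3',
    region_name='us-east-1',
    endpoint_url='http://localhost:4566',
    aws_access_key_id='test',
    aws_secret_access_key='test',
)

response = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='application/test.png',
    Fields={'Content-Type': 'image/png'},
    Conditions=[{'Content-Type': 'image/png'}],
    ExpiresIn=3600,
)
print(response['url'])
print(response['fields'])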

AWS SDK v3 Assume role for a client

I'm writing a Node.js app using the AWS JS SDK v3. I am correctly using STS to assume a role and retrieve credentials. The response to the assume-role call looks like this:
{
    "$metadata": {
        "httpStatusCode": 200,
        "requestId": "xxx",
        "attempts": 1,
        "totalRetryDelay": 0
    },
    "Credentials": {
        "AccessKeyId": "xxx",
        "SecretAccessKey": "xxx",
        "SessionToken": "xxx",
        "Expiration": "2021-02-26T15:40:17.000Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "xxx",
        "Arn": "arn:aws:sts::xxx"
    }
}
I am taking the credentials and passing them to a DynamoDBClient constructor (along with region).
import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

public async getTablesList(region: string) {
  const credentials = await getCredentialsFromSTS(region);
  const clientParams = Object.assign(credentials, region); // This just merges two JSONs into one
  const dbClient = new DynamoDBClient(clientParams);
  const command = new ListTablesCommand({});
  const response = await dbClient.send(command);
  console.log(response);
}
What I get in the response is a list of only the tables in the account that runs my code. The tables in the account where I am assuming the role are not present. Maybe I am assuming the role incorrectly? I tried to package the credentials in a "Credentials" key, but that didn't help either.
const clientParams = Object.assign({ Credentials: credentials }, region);
Any idea what I am doing wrong here? Maybe the DDBClient is not assuming it as it should, but is there a way to check the assumed role for the DBClient? I found a namespace called RoleInfo in the documentation, but I don't know how to invoke it.
Please read the documentation of the DynamoDBClient parameter object: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB.html#constructor-property
The credentials fields are lower-case whereas in the STS result they are upper-case.
There are also more errors in your code, such as the Object.assign(credentials, region); call. region here is a string, so by assigning it into credentials you will not get a property named "region" with the value of that region variable, but you will get numeric properties, each with one character of the region string.
The correct initialization should be:
const credentials = await getCredentialsFromSTS(region);

const dbClient = new DynamoDBClient({
  region: region,
  credentials: {
    accessKeyId: credentials.AccessKeyId,
    secretAccessKey: credentials.SecretAccessKey,
    sessionToken: credentials.SessionToken,
  }
});

Empty S3 bucket | Access point ARN region is empty

I am trying to list the contents of my S3 bucket. I went through all the configuration steps and created and authenticated a user (I'm using Amplify). For the record,
Auth.currentCredentials() gives
Object {
    "accessKeyId": "ASIA6NIS4CMFGX3FWTNF",
    "authenticated": true,
    "expiration": 2020-10-14T13:30:49.000Z,
    "identityId": "eu-west-2:6890ebd2-e3f3-4e1d-9725-9f9218241f60",
    "secretAccessKey": "CGZsahSl53ulG9BJqGueM78xlMGhKcOs33UP2GUC",
    "sessionToken": "IQoJb3JpZ2luX2VjEIX//////////wEaCWV1LXdlc3QtM ...
}
My code:
async function listObjects() {
  Storage.list('s3://amplify-mythirdapp-dev-170201-deployment/', {
    level: 'public',
    region: 'eu-west-2',
    bucket: 'arn:aws:s3:::amplify-mythirdapp-dev-170201-deployment'
  })
    .then(result => console.log(result))
    .catch(err => console.log(err));
}
Which throws an error: Access point ARN region is empty
If I instead pass bucket: 'amplify-mythirdapp-dev-170201-deployment', it just returns Array [].
But aws s3api list-objects --bucket amplify-mythirdapp-dev-170201-deployment correctly lists objects
What am I missing?
FYI I've also asked this question at aws-amplify/amplify-js/issues/2828.

Read and Copy S3 inventory data from SNS topic trigger with AWS lambda function

I am a data analyst and new to AWS Lambda functions. I have an S3 bucket where I store the inventory data from our data lake, which is generated using the Inventory feature under the S3 Management tab.
So let's say the inventory data (reports) looks like this:
s3://my-bucket/allobjects/data/report-1.csv.gz
s3://my-bucket/allobjects/data/report-2.csv.gz
s3://my-bucket/allobjects/data/report-3.csv.gz
Regardless of the file contents, I have an event set up for s3://my-bucket/allobjects/data/ which notifies an SNS topic on any event like GET or PUT. (I can't change this workflow due to strict governance.)
Now, I am trying to create a Lambda function with this SNS topic as a trigger that simply moves the inventory-report files generated by the S3 Inventory feature under
s3://my-bucket/allobjects/data/
and repartitions them as follows:
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-1.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-2.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-3.csv.gz
How can I achieve this using a Lambda function (Node.js or Python is fine) reading from the SNS topic? Any help is appreciated.
I tried something like this, based on some sample code I found online, but it didn't help.
console.log('Loading function');

var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

exports.handler = function(event, context) {
  console.log("\n\nLoading handler\n\n");
  var sns = new AWS.SNS();
  sns.publish({
    Message: 'File(s) uploaded successfully',
    TopicArn: 'arn:aws:sns:_my_ARN'
  }, function(err, data) {
    if (err) {
      console.log(err.stack);
      return;
    }
    console.log('push sent');
    console.log(data);
    context.done(null, 'Function Finished!');
  });
};
The preferred method would be for the Amazon S3 event to trigger the AWS Lambda function directly. But since you cannot alter this part of the workflow, the flow would be:
The Amazon S3 event sends a message to an Amazon SNS topic.
The AWS Lambda function is subscribed to the SNS topic, so it is triggered and receives the message from S3.
The Lambda function extracts the Bucket and Key, then calls S3 copy_object() to copy the object to another location. (There is no move command; you will need to copy the object to a new bucket/key. A sketch of this step follows the sample handler below.)
The content of the event field is something like:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "...",
            "Sns": {
                "Type": "Notification",
                "MessageId": "1c3189f0-ffd3-53fb-b60b-dd3beeecf151",
                "TopicArn": "...",
                "Subject": "Amazon S3 Notification",
                "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"ap-southeast-2\",\"eventTime\":\"2019-01-30T02:42:07.129Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:AIDAIZCFQCOMZZZDASS6Q\"},\"requestParameters\":{\"sourceIPAddress\":\"54.1.1.1\"},\"responseElements\":{\"x-amz-request-id\":\"...",\"x-amz-id-2\":\"..."},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"...\",\"bucket\":{\"name\":\"stack-lake\",\"ownerIdentity\":{\"principalId\":\"...\"},\"arn\":\"arn:aws:s3:::stack-lake\"},\"object\":{\"key\":\"index.html\",\"size\":4378,\"eTag\":\"...\",\"sequencer\":\"...\"}}}]}",
                "Timestamp": "2019-01-30T02:42:07.212Z",
                "SignatureVersion": "1",
                "Signature": "...",
                "SigningCertUrl": "...",
                "UnsubscribeUrl": "...",
                "MessageAttributes": {}
            }
        }
    ]
}
Thus, the name of the uploaded Object needs to be extracted from the Message.
You could use code like this:
import json

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = record2['s3']['object']['key']
            # Do something here with bucket and key
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
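As a hedged sketch of the "Do something here" step, the copy into the year=/month=/day= layout could look like the following. Deriving the partition date from the record's eventTime and deleting the original after the copy (to emulate a move) are assumptions:
import json
import urllib.parse
from datetime import datetime

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            # Keys in S3 event records are URL-encoded (spaces become '+').
            key = urllib.parse.unquote_plus(record2['s3']['object']['key'])

            # Build the partitioned destination key from the event time, e.g.
            # allobjects/data/report-1.csv.gz ->
            # allobjects/partitiondata/year=2019/month=01/day=29/report-1.csv.gz
            event_time = datetime.fromisoformat(
                record2['eventTime'].replace('Z', '+00:00'))
            filename = key.rsplit('/', 1)[-1]
            dest_key = (
                'allobjects/partitiondata/'
                f'year={event_time:%Y}/month={event_time:%m}/day={event_time:%d}/'
                f'{filename}'
            )

            # There is no "move" in S3: copy to the new key, then delete the
            # original (omit the delete if a copy is enough).
            s3.copy_object(
                Bucket=bucket,  # assumes the same bucket as the destination
                Key=dest_key,
                CopySource={'Bucket': bucket, 'Key': key},
            )
            s3.delete_object(Bucket=bucket, Key=key)

    return {'statusCode': 200, 'body': json.dumps(event)}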

Create a Lambda function which will parse emails uploaded to S3 with an SES receipt rule

I would like to create a Lambda function which will parse emails uploaded to an S3 bucket through an SES receipt rule.
Uploading to the S3 bucket through the SES receipt rule works fine; it is already tested and confirmed that the file is uploaded correctly.
My AWS Lambda function:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var bucketName = 'bucket_name/folder/destination';

exports.handler = function(event, context, callback) {
  console.log('Process email');

  var sesNotification = event.Records[0].ses;
  console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));

  // Retrieve the email from your bucket
  s3.getObject({
    Bucket: bucketName,
    Key: sesNotification.mail.messageId
  }, function(err, data) {
    if (err) {
      console.log(err, err.stack);
      callback(err);
    } else {
      console.log("Raw email:\n" + data.Body);
      // Custom email processing goes here
      callback(null, null);
    }
  });
};
When a file is uploaded it triggers the Lambda, but I get a SignatureDoesNotMatch error:
{
    "errorMessage": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
    "errorType": "SignatureDoesNotMatch",
    "stackTrace": [
        "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:524:35)",
        "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
        "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
        "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:615:14)",
        "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        "Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        "Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:617:12)",
        "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
    ]
}
If anyone can help me approach this problem, that would be great!
Thanks
Okay, I solved it. The bucketName should contain only the bucket's name, while the Key can contain the rest of the path to your file's exact "directory".
So, basically, bucketName should not contain subfolders, but the Key should.
Thanks
For reference: AWS Lambda S3 GET/POST - SignatureDoesNotMatch error
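For completeness, here is the same bucket/key split expressed as a small boto3 sketch (the SDK language differs from the snippet above, and the bucket name, prefix, and messageId are assumptions):
import boto3

s3 = boto3.client('s3')

# Bucket holds only the bucket name; any "folder" path belongs in the Key.
bucket_name = 'bucket_name'                      # assumed bucket
key_prefix = 'folder/destination/'               # assumed prefix moved out of bucketName
message_id = '0000014a-f4d4-4f89-b044-example'   # hypothetical SES messageId

obj = s3.get_object(Bucket=bucket_name, Key=key_prefix + message_id)
print(obj['Body'].read()[:200])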