I'm using AWS SES to send emails to my customers, and I wonder if there's any solution available to attach files directly to my email using SES and Lambda functions. I did some research and only found solutions that recommend including a link to the S3 file, not attaching the file itself. I want to attach the file itself from S3, downloadable from the email, not a link or reference to the attachment.
As folks mentioned in the comments above, there's no way to automatically send a file "directly" from S3 via SES. It sounds like you will need to write a Lambda function which performs the following steps:
Fetch file object from S3 into memory
Build multi-part MIME message with text body and file attachment
Send your raw message through SES
Step 1 is a simple matter of using S3.getObject with the appropriate Bucket/Key parameters.
I do not know which language you are using, but in Node.js step #2 can be accomplished using the npm package mailcomposer like so:
const mailOptions = {
  from: 'no-reply@example.tld',
  to: 'whoever@example.tld',
  subject: 'The Subject Line',
  text: 'Body of message. File is attached...\n\n',
  attachments: [
    {
      filename: 'file.txt',
      content: fileData,
    },
  ],
};
const mail = mailcomposer(mailOptions);
mail.build(<callback>);
Step 3 is again a simple matter of using SES.sendRawEmail with the RawMessage.Data parameter set to the message you built in step 2.
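If it helps, here is a rough sketch of steps 1 and 3 wrapped around the mailcomposer call above, using the AWS SDK for JavaScript v2 in a Node.js Lambda handler; the bucket, key, and addresses are placeholders:

const AWS = require('aws-sdk');
const mailcomposer = require('mailcomposer');

const s3 = new AWS.S3();
const ses = new AWS.SES();

exports.handler = async (event) => {
  // Step 1: fetch the file object from S3 into memory
  const s3Object = await s3.getObject({
    Bucket: 'my-bucket',       // placeholder
    Key: 'path/to/file.txt',   // placeholder
  }).promise();

  // Step 2: build the multi-part MIME message (as above)
  const mail = mailcomposer({
    from: 'no-reply@example.tld',
    to: 'whoever@example.tld',
    subject: 'The Subject Line',
    text: 'Body of message. File is attached...\n\n',
    attachments: [{ filename: 'file.txt', content: s3Object.Body }],
  });
  const rawMessage = await new Promise((resolve, reject) => {
    mail.build((err, message) => (err ? reject(err) : resolve(message)));
  });

  // Step 3: send the raw message through SES
  await ses.sendRawEmail({ RawMessage: { Data: rawMessage } }).promise();
};

Note that SES enforces a maximum raw message size, so very large attachments may still require the link-to-S3 approach.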
Nodemailer comes to mind.
There is a good Medium tutorial covering how to do it here.
I am using AWS Lambda to process a CSV file and send out a daily message in Chime to my team. I need to make part of the message a hyperlink.
Current output: ID: https://www.example.com/ID=123456
Required output: ID: 123456
and when one clicks on the ID, it should take the user to a link like "https://www.google.com/ID=123456"
I am using urllib3 to send the output from AWS Lambda to the Chime group. I believe Chime only offers Markdown or code-type formatting. I would like to know if it is possible to implement a solution in the AWS Lambda function itself.
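One hedged possibility inside the Lambda itself: as far as I know, Chime room webhooks accept a JSON body with a Content field, and a message that starts with /md is rendered as Markdown, so the ID can be sent as a Markdown link. A sketch with urllib3, where the webhook URL is a placeholder:

import json
import urllib3

http = urllib3.PoolManager()
WEBHOOK_URL = "https://hooks.chime.aws/incomingwebhooks/your-webhook-id"  # placeholder

def send_id_link(record_id):
    # Markdown link: the visible text is the ID, the target is the full URL.
    content = f"/md ID: [{record_id}](https://www.example.com/ID={record_id})"
    resp = http.request(
        "POST",
        WEBHOOK_URL,
        body=json.dumps({"Content": content}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return resp.status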
I have a task to send an email with multiple attachments. The S3 bucket will receive 2 files at approximately the same time.
By using the S3 bucket Put event, I am able to send an email with a single attachment using Lambda + SES.
Now the task is as follows:
I am getting 2 files in S3 like "XXXYYYZZZ" and "XXXYYYZZZ.20190712111820".
The prefix is the same for both files, and the second file's name includes its timestamp (20190712111820).
Here I need to send a single email with the above 2 files as attachments.
How can I achieve this? I understand that the Put event fires for every new file created in S3.
I was able to achieve this by attaching multiple MimeBodyPart objects to the message.
For each attachment, create a MimeBodyPart, read the file into it, and add it to the MimeMultipart.
It worked fine for me.
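A minimal javax.mail sketch of what that looks like (one MimeBodyPart per attachment added to a MimeMultipart); the readObjectFromS3 helper, file keys, and addresses are placeholders:

import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.Properties;
import javax.activation.DataHandler;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.mail.util.ByteArrayDataSource;

public class MultiAttachmentMail {

    static byte[] buildRawMessage() throws Exception {
        MimeMessage message = new MimeMessage(Session.getDefaultInstance(new Properties()));
        message.setFrom(new InternetAddress("no-reply@example.tld"));
        message.setRecipients(Message.RecipientType.TO, "whoever@example.tld");
        message.setSubject("Files attached");

        MimeMultipart multipart = new MimeMultipart("mixed");

        // Text body part
        MimeBodyPart body = new MimeBodyPart();
        body.setText("Both files are attached.");
        multipart.addBodyPart(body);

        // One MimeBodyPart per attachment
        for (String key : Arrays.asList("XXXYYYZZZ", "XXXYYYZZZ.20190712111820")) {
            byte[] bytes = readObjectFromS3(key); // hypothetical S3 read helper
            MimeBodyPart attachment = new MimeBodyPart();
            attachment.setDataHandler(new DataHandler(
                    new ByteArrayDataSource(bytes, "application/octet-stream")));
            attachment.setFileName(key);
            multipart.addBodyPart(attachment);
        }

        message.setContent(multipart);

        // Serialize to raw bytes for SES sendRawEmail
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        message.writeTo(out);
        return out.toByteArray();
    }

    private static byte[] readObjectFromS3(String key) {
        // Placeholder: read the object from S3 (e.g. with AmazonS3#getObject).
        return new byte[0];
    }
}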
I'm trying to use Go to send objects in an S3 bucket to Textract and collect the response.
I'm using the AWS Go SDK package and am able to connect to my S3 bucket and list all the objects contained within. So far so good. I now need to be able to send one of those objects (a .pdf file) to Textract and collect the response(s).
The AWS Go SDK content for interacting with Textract seems to be quite extensive, but I cannot find a good example of how to do this.
I would be very grateful for a sample or advice on how to do this.
To start a job, you invoke StartDocumentTextDetection, using a DocumentLocation to specify the file, and you specify an SNS topic where Textract will publish a notification when it has finished processing your job.
You now have two possibilities:
Subscribe to the SNS topic, and when you receive a message, retrieve the result
Create a Lambda function triggered by the SNS topic, which retrieves the result.
The second option is IMO better because it uses less computation time (nothing runs until the job has finished).
To retrieve the job results, you use GetDocumentTextDetection.
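A minimal sketch of that flow with the aws-sdk-go (v1) textract package, assuming svc is an already-configured Textract client (see the next answer about creating the session); the bucket, key, topic ARN, and role ARN are placeholders:

package textractflow

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/textract"
)

func detectText(svc *textract.Textract) {
    // Start the asynchronous job; Textract publishes to the SNS topic when it finishes.
    start, err := svc.StartDocumentTextDetection(&textract.StartDocumentTextDetectionInput{
        DocumentLocation: &textract.DocumentLocation{
            S3Object: &textract.S3Object{
                Bucket: aws.String("my-bucket"),     // placeholder
                Name:   aws.String("docs/file.pdf"), // placeholder
            },
        },
        NotificationChannel: &textract.NotificationChannel{
            SNSTopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:textract-done"), // placeholder
            RoleArn:     aws.String("arn:aws:iam::123456789012:role/TextractSNSRole"),   // placeholder
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    // Later (from your SNS subscriber or the Lambda), fetch the results by JobId.
    result, err := svc.GetDocumentTextDetection(&textract.GetDocumentTextDetectionInput{
        JobId: start.JobId,
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, block := range result.Blocks {
        fmt.Println(aws.StringValue(block.Text))
    }
}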
If anyone else reaches this site searching for an answer:
I understood the documentation as saying I could just call the StartDocumentAnalysis function through the Textract SDK, but what was missing was the fact that you need to create a new Session first and make the calls based on that session:
https://docs.aws.amazon.com/sdk-for-go/api/service/textract/#New
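For illustration, a short sketch of that missing piece; the region is a placeholder, and the client built from the session is what you then use for StartDocumentAnalysis / StartDocumentTextDetection and the corresponding Get calls (as in the earlier sketch):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/textract"
)

func main() {
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String("us-east-1"), // placeholder region
    })
    if err != nil {
        log.Fatal(err)
    }

    // All Textract calls go through a client created from this session.
    svc := textract.New(sess)
    _ = svc
}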
I'm fairly new to GraphQL and AWS AppSync, and I'm running into an issue downloading files (PDFs and PNGs) from a public S3 bucket via AWS AppSync. I've looked at dozens of tutorials and dug through a mountain of documentation, and I'm just not certain what's going on at this point. This may be nothing more than a misunderstanding about the nature of GraphQL or AppSync functionality, but I'm completely stumped.
For reference, I've heavily sourced from other posts like How to upload file to AWS S3 using AWS AppSync (specifically, from the suggestions by the accepted answer author), but none of the solutions (or the variations I've attempted) are working.
The Facts
S3 bucket is publicly accessible – i.e., included folders and files are not tied to individual users with Cognito credentials
Files are uploaded to S3 outside of AppSync (so there's no GraphQL mutation); it's a manual file upload
Schema works for all other queries and mutations
We are using AWS Cognito to authenticate users and queries
Abridged Schema and DynamoDB Items
Here's an abridged version of the relevant GraphQL schema types:
type MetroCard implements TripCard {
  id: ID!
  cardType: String!
  resIds: String!
  data: MetroData!
  file: S3Object
}

type MetroData implements DataType {
  sourceURL: String!
  sourceFileURL: String
  metroName: String!
}

type S3Object {
  bucket: String!
  region: String!
  key: String!
}
Metadata about the files is stored in DynamoDB and looks something like this:
{
  "data": {
    "metroName": "São Paulo Metro",
    "sourceFileURL": "http://www.metro.sp.gov.br/pdf/mapa-da-rede-metro.pdf",
    "sourceURL": "http://www.metro.sp.gov.br/en/your-trip/index.aspx"
  },
  "file": {
    "bucket": "test-images",
    "key": "some_folder/sub_folder/bra-sbgr-metro-map.pdf",
    "region": "us-east-1"
  },
  "id": "info/en/bra/sbgr/metro"
}
VTL Request/Response Resolvers
For our getMetroCard(id: ID!): MetroCard query, the mapping templates are pretty vanilla. The request template is a standard query on a DynamoDB table. The response template is a basic $util.toJson($ctx.result).
For the field-level resolver on MetroCard.file, we've attached a local data source with an empty {} payload for the request and the following for the response (see referenced link for reasoning):
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file)) // we've played with this bit in a couple of ways, including simply returning $context.result but no change
Results
All of the query fields resolve appropriately; however, the file field inevitably always returns null no matter what the field-level resolver is mapped to. Interestingly, I've noticed in the CloudWatch logs the value of context.result does change from null to {} with the above mapping template.
Questions
Given the above, I have several questions:
Does AppSync file download require files to be uploaded to S3 with user credentials through a mutation with a complex object handler in order to make them retrievable?
What should a successful response look like in the AppSync console? I have no client implementation (like a React Native app) to verify successful file downloads, so, more directly: is it actually retrieving the files and I just don't know it? (Note: I did briefly test with a React Native client, but nothing rendered, so I've been using the AppSync console returns as direction ever since.)
Does it make more sense to remove the file download process entirely from our schema? (I'm assuming the answers I need reveal that AppSync just wasn't built for file transfer like this, and so we'll need to rethink our approach.)
Update
I've started playing around with the data source for MetroCard.file per the suggestion of this recent post: https://stackoverflow.com/a/52142178/5989171. If I make the data source the same as the database storing the file metadata, I now get the error mentioned in that answer, but the author's solution doesn't seem to be working for me. Specifically, I now get the following:
"message": "Value for field '$[operation]' not found."
Our Solution
For our use case, we've decided to go ahead and use the AWS Amplify Storage module as suggested here: https://twitter.com/presbaw/status/1040800650790002689. Despite that, I'm keeping this question open and unanswered, because I'm just genuinely curious about what I'm not understanding here, and I have a feeling I'm not the only one!
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
You can only use this if your DynamoDB table saves the file field in the format: {"s3":{"key":"file.jpg","bucket":"bucket_name/folder","region":"us-east-1"}}
I'm looking to allow multiple clients to upload files to an S3 bucket (or buckets). The S3 create event would trigger a notification that would add a message to an SNS topic. This works, but I'm having issues deciding how to identify which client uploaded the file. I could get this to work by explicitly checking the uploaded file's subfolder/S3 name, but I'd much rather automatically add the client identifier as an attribute to the SNS message.
Is this possible? My other thought is using a Lambda function as a middle man to add the attribute and pass it along to the SNS Topic, but again I'd like to do it without the Lambda function if possible.
The Event Message Structure sent from S3 to SNS includes a field:
"userIdentity":{
"principalId":"Amazon-customer-ID-of-the-user-who-caused-the-event"
},
However, this also depends upon the credentials that were used when the object was uploaded:
If users have their individual AWS credentials, then the Access Key will be provided
If you are using a pre-signed URL to permit the upload, then the Access Key will belong to the one used in the pre-signed URL and your application (which generated the pre-signed URL) would be responsible for tracking the user who requested the upload
If you are generating temporary credentials for each client (e.g., by calling AssumeRole), then the Role's ID will be returned
(I didn't test all the above cases, so please do test them to confirm the definition of Amazon-customer-ID-of-the-user-who-caused-the-event.)
If your goal is to put your own client identifier in the message, then the best method would be:
Configure the event notification to trigger a Lambda function
Your Lambda function uses the above identifier to determine which user identifier within your application triggered the notification (presumably consulting a database of application user information)
The Lambda function sends the message to SNS or to whichever system you wish to receive the message (SNS might not be required if you send directly)
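For what it's worth, a hedged sketch of that Lambda-in-the-middle approach in Python with boto3; the topic ARN and the lookup_client_id helper are placeholders for whatever user mapping your application keeps:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:uploads"  # placeholder

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        principal_id = record["userIdentity"]["principalId"]

        # Map the S3 principal to your own client identifier
        # (e.g. via a DynamoDB lookup keyed on principal_id).
        client_id = lookup_client_id(principal_id)

        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({"bucket": bucket, "key": key}),
            MessageAttributes={
                "clientId": {"DataType": "String", "StringValue": client_id}
            },
        )

def lookup_client_id(principal_id):
    # Hypothetical helper: consult your application's user database.
    return "client-" + principal_id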
You can add user-defined metadata to your files before you upload them, like below:
private final static String CLIENT_ID = "client-id";
ObjectMetadata meta = new ObjectMetadata();
meta.addUserMetadata(CLIENT_ID, "testid");
s3Client.putObject(<bucket>, <objectKey>, <inputstream of the file>, meta);
Then when downloading the S3 files:
ObjectMetadata meta = s3Client.getObjectMetadata(<bucket>, <objectKey>);
String clientId = meta.getUserMetaDataOf(CLIENT_ID);
Hope this is what you are looking for.