AWS S3 bucket for Lambda Functions

When I upload a Lambda function to AWS via Eclipse/STS, it picks an S3 bucket dynamically and uploads the deployment package to that bucket.
In some cases it picks an S3 bucket which I created for (say) media storage only.
In such cases, is it OK to change the location of the Lambda to a preferred S3 bucket?
What would happen if at one point I upload a Lambda to S3 bucket 'A', and then later upload the Lambda to another S3 bucket 'B'?
Will this create any reference issues?
Will the Lambda be stored in both buckets, with the latest version in both, or the older version in A and the latest version in B?

The Lambda function deployment file is just stored in S3 so that it is in a location the Lambda service can load it from. Once the Lambda service has loaded it from S3, the file in S3 is never used again and can safely be deleted.
It is definitely safe, and preferable, to change the S3 bucket being used to the bucket you prefer. I don't use Eclipse, but I find it ridiculous that it would just pick a bucket randomly. Surely there is a setting somewhere to tell it what bucket to use.
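For example, if you deploy outside the IDE you can point the Lambda service at whichever bucket you like. A minimal boto3 sketch, where the bucket name, key, role ARN and function name are placeholders I've made up:

    import boto3

    lambda_client = boto3.client("lambda")

    # Create a function whose deployment package lives in a bucket of your choosing.
    # All names and ARNs here are hypothetical placeholders.
    lambda_client.create_function(
        FunctionName="my-function",
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/my-lambda-exec-role",
        Handler="app.handler",
        Code={
            "S3Bucket": "my-deploy-bucket",  # the bucket you prefer
            "S3Key": "my-function.zip",
        },
    )

Once this call returns, the Lambda service has taken its own copy of the package, so, as noted above, the object in my-deploy-bucket can be deleted.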

Related

Syncing files between buckets in different accounts

I'm trying to sync one AWS bucket to another bucket across different IAM accounts.
How can I do it periodically, so that any file written to the source bucket is automatically transferred to the destination? Do I need to use a Lambda to execute the aws cli sync command?
Thanks
Option 1: AWS CLI Sync
You could run aws s3 sync on a regular basis, which will only copy new/changed files. This makes it very efficient. However, if there is a large number of files (10,000+) then it will take a long time to determine which files need to be copied. You will also need to schedule the command to run somewhere (e.g. a cron job).
Option 2: AWS Lambda function
You could create an AWS Lambda function that is triggered by Amazon S3 whenever a new object is created. The Lambda function will be passed details of the Bucket & Object via the event parameter. The Lambda function could then call CopyObject() to copy the object immediately. The advantage of this method is that the objects are copied as soon as they are created.
(Do not use an AWS Lambda function to call the AWS CLI sync command; the function above is invoked once for each object individually.)
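As a rough sketch of Option 2, a Python handler could look something like this, assuming a made-up destination bucket name and that the function's role has the cross-account permissions described under Permissions below:

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    DEST_BUCKET = "destination-bucket"  # hypothetical target bucket in the other account

    def handler(event, context):
        # S3 invokes this function once per created object; the bucket and key
        # arrive in the notification's Records list (keys are URL-encoded).
        for record in event["Records"]:
            source_bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Server-side copy; the object is never downloaded into the Lambda function.
            s3.copy_object(
                Bucket=DEST_BUCKET,
                Key=key,
                CopySource={"Bucket": source_bucket, "Key": key},
                ACL="bucket-owner-full-control",  # give the destination account full control
            )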
Option 3: Amazon S3 Replication
You can configure Amazon S3 Replication to automatically replicate newly-created objects between the buckets (including buckets in different AWS Accounts). This is the simplest option since it does not require any coding.
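If you prefer to set it up programmatically rather than in the console, a boto3 sketch might look like the following; the bucket names, account ID and role ARN are placeholders, versioning must already be enabled on both buckets, and the replication role's IAM policy is not shown:

    import boto3

    s3 = boto3.client("s3")

    # All names, ARNs and account IDs below are hypothetical.
    s3.put_bucket_replication(
        Bucket="source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Status": "Enabled",
                    "Prefix": "",  # empty prefix = replicate all objects
                    "Destination": {
                        "Bucket": "arn:aws:s3:::destination-bucket",
                        "Account": "222222222222",  # destination account ID
                        # Optionally hand ownership of replicas to the destination account:
                        "AccessControlTranslation": {"Owner": "Destination"},
                    },
                }
            ],
        },
    )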
Permissions
When copying S3 objects between accounts, you will need to use a single set of credentials that has both Read permission on the source bucket and Write permission on the target bucket. This can be done in two ways:
Use credentials (IAM User or IAM Role) from the source account that have permission to read the source bucket. Create a bucket policy on the target bucket that permits those credentials to PutObject into the bucket. When copying, specify ACL=bucket-owner-full-control so that the destination account is granted full control of the object.
OR
Use credentials from the target account that have permission to write to the target bucket. Create a bucket policy on the source bucket that permits those credentials to GetObject from the bucket.
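For the first approach, the bucket policy on the target bucket could be applied like this; the account ID, role name and bucket name are placeholders of my own:

    import json
    import boto3

    s3 = boto3.client("s3")  # credentials belonging to the target (destination) account

    # Hypothetical policy: allow a role in the source account to write into the target bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountPut",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111111111111:role/copy-role"},
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::destination-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="destination-bucket", Policy=json.dumps(policy))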

Can you trigger an AWS lambda function when a file is uploaded to a specific folder in S3?

I wish to trigger an AWS Lambda function when I upload a file to a specific folder in S3. There are multiple folders in the S3 bucket. Is this possible, and how do I do so?
Yes, you can configure Amazon S3 event notifications, filtering on object key prefixes (and/or suffixes).
See Configuring notifications with object key name filtering. A prefix could be dogs/, for example. That way, any upload to a key beginning with dogs/, e.g. dogs/alsatian.png, would trigger a notification.
Note that you probably don't actually have any folders in your S3 bucket, just objects, unless you created them using the AWS Console. There really aren't any folders in S3.
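A minimal boto3 sketch of such a notification configuration, using the dogs/ prefix from above; the bucket name and Lambda ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and function ARN. The Lambda function must separately grant
    # s3.amazonaws.com permission to invoke it (lambda add-permission).
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-dogs",
                    "Events": ["s3:ObjectCreated:*"],
                    "Filter": {
                        "Key": {
                            "FilterRules": [
                                {"Name": "prefix", "Value": "dogs/"},
                            ]
                        }
                    },
                }
            ]
        },
    )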

Recurrent auto-sync of assets from one S3 bucket to another in a separate account

The problem:
I have an old S3 bucket: bucket A and a new S3 bucket: bucket B. These buckets are in separate accounts. Up until now, I have been serving assets from bucket A. Moving forward, I want to serve assets from bucket B. I must still support pushing to bucket A. However, those assets pushed to bucket A must be retrievable from bucket B.
Possible solutions:
On every new push to bucket A (PutObject), I must sync that object from bucket A to bucket B. As I understand it, there are two ways to achieve this:
Using AWS Lambda with Amazon S3
Using DataSync <-- preferred solution
Issue with solution 2:
I have a feeling the DataSync path will be less complex. However, it's not clear to me how to accomplish this, or whether it is even possible. The examples I see in the documentation (granted, there is a lot to sift through) are not quite the same as this use case. In the console, it does not seem to allow a task across multiple AWS accounts.
The disconnect I'm seeing is that the documentation implies it is possible, yet when you navigate to DataSync Locations in the AWS Console, there is only the option to add locations from your own AWS account's S3 bucket list.

AWS S3 copy object downloads it locally?

I am interested to know whether, when copying an S3 object from one bucket to another, the object gets downloaded to the client, even temporarily?
I am using the AWS JavaScript SDK: s3.copyObject(...)
Thanks
The object is not downloaded locally; the copy is executed entirely on the AWS side.
Copies within the same region are also free:
Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free.
This AWS article explains copying between buckets:
How can I copy objects between Amazon S3 buckets?
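The JavaScript call in the question maps to the same underlying CopyObject API; purely for illustration, the boto3 equivalent (with made-up bucket and key names) is:

    import boto3

    s3 = boto3.client("s3")

    # Server-side copy: S3 moves the bytes internally; nothing is streamed through
    # the machine running this code. Bucket and key names are hypothetical.
    s3.copy_object(
        Bucket="destination-bucket",
        Key="path/to/object.jpg",
        CopySource={"Bucket": "source-bucket", "Key": "path/to/object.jpg"},
    )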

How to Sync Amazon S3 Bucket with Akamai NetStorage using python and lambda function?

I want to automate the whole process: whenever a new image or video file arrives in my S3 bucket, I want to move those files to Akamai NetStorage using Lambda and Python (boto), or whatever the best possible way is.
You can execute a Lambda function based on S3 notifications (including object creation or deletion).
See the AWS walkthrough: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
Indeed, the Lambda function can be executed automatically as your file is dropped into the S3 bucket; there is a boto3 template and a trigger configurable at Lambda creation. You can then read the content from the S3 bucket and propagate it to Akamai NetStorage using this API: https://pypi.org/project/anesto/
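As a very rough sketch of the S3 side of that function (the NetStorage upload itself is left as a placeholder, since the exact client API, e.g. the anesto package above, isn't shown here):

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Triggered by the S3 ObjectCreated notification configured on the bucket.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Lambda only has writable disk space under /tmp.
            local_path = "/tmp/" + key.split("/")[-1]
            s3.download_file(bucket, key, local_path)

            # Placeholder: push local_path to Akamai NetStorage here with your
            # NetStorage client of choice (for example the anesto package above).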