Granting write access to S3 in python cdk - amazon-web-services

I was able to follow this example[1] and let my EC2 instance read from S3.
In order to write to the same bucket, I thought changing line 57[2] from grant_read() to grant_read_write()
should work.
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
    file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
Yet the documented[3] function cannot be accessed, according to the error message:
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
What am I missing?
[1] https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance
[2] https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57
[3] https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants

This is the documentation for Asset:
An asset represents a local file or directory, which is automatically uploaded to S3 and then can be referenced within a CDK application.
The method grant_read_write isn't provided because it would be pointless here; the documentation you've linked doesn't apply to assets.

An asset is just a zip file that will be uploaded to the bootstrapped CDK S3 bucket and then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because that is a zip file. You would be better suited using a boto3 call to upload it once the bucket already exists (as sketched below), or making it part of a CodePipeline that creates the bucket with CDK and then uploads the script in the next stage.
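For instance, a minimal boto3 sketch of that upload (the bucket name, key, and local path are placeholders, not values from the question):
import boto3

# Minimal sketch: upload a local script to an already-existing bucket.
# The bucket name, key, and local path below are placeholders.
s3 = boto3.client("s3")
s3.upload_file(
    Filename="configure.sh",        # local script on disk
    Bucket="my-existing-bucket",    # the bucket must already exist
    Key="scripts/configure.sh",     # destination key in the bucket
)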
In this case, grant_read_write is a method of aws_cdk.aws_s3.Bucket constructs.
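For the original question, a minimal CDK sketch of that (the construct ID and variable name are made up; `instance` and `self` are assumed to be the ec2.Instance and Stack from the example):
from aws_cdk import aws_s3 as s3

# Sketch: create a Bucket construct and grant the instance role
# read/write on it. Names here are illustrative, not from the example.
data_bucket = s3.Bucket(self, "DataBucket")
data_bucket.grant_read_write(instance.role)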

Related

Automate Lambda deployments using S3 buckets zip bundled code

Details - I have a CircleCI job that zips my Lambda code and uploads it to S3 (we just keep updating the version of the same S3 object, e.g. code.zip; we don't change the name).
Now I have AWS CDK code where I define the body of my Lambda and make use of the S3 zip file, per https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-fromwbrbucketbucket-key-objectversion.
Issue - I want an automated deployment so that whenever a new version of code.zip is uploaded to S3, all Lambdas using it are automatically updated with the latest code.
Please suggest!
I can think of 2 solutions for this.
Have a step after you update the latest code in S3 that updates your Lambda function, like below:
aws lambda update-function-code \
    --function-name your_function_name \
    --s3-bucket your_bucket_name \
    --s3-key your_code.zip
Or create another Lambda function with an S3 ObjectCreated trigger (or whatever event suits you; you can even filter on .zip).
In that Lambda function, which is triggered by the S3 upload, you can use the same AWS CLI command (or the equivalent SDK call) to update your target Lambda function, as sketched below.
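A rough sketch of that trigger function in Python (the TARGET_FUNCTION environment variable and handler name are my own assumptions, not part of the answer above):
import os
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Fired by the S3 ObjectCreated event for code.zip.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # TARGET_FUNCTION is an assumed env var naming the function to update.
    lambda_client.update_function_code(
        FunctionName=os.environ["TARGET_FUNCTION"],
        S3Bucket=bucket,
        S3Key=key,
    )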

How to download dataset from AWS Data Exchange?

I subscribed to the free synthetic dataset.
Now I have "Revision ID", "Revision ARN", "Data set ID" and 28 CSV files which I can not download in a pack. I must manually download them one after another or I can export them all to the AWS e3 (I do not want to do that).
Is there a way to download it all in a single archive or somehow automate the process via AWS S3 CLI?
I've tried
./venv/bin/awscliv2 s3 cp s3://arn:aws:dataexchange:us-east-1::data-sets/b0b14e86c092855166507c15e045b844/revisions/6011536d595840f7bd4412fca59e0f6b/assets/7cd4a5cbedb0c5c83e37c20f668b3708 ./
fatal error: Parameter validation failed:
Invalid bucket name "arn:aws:dataexchange:us-east-1::data-sets": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"
UPD: I've found a Python snippet which uses boto3 and creates a temporary bucket in the process.
UPD 2: From https://docs.aws.amazon.com/data-exchange/latest/userguide/jobs.html#exporting-assets
There are two ways you can export assets from a published revision of a product:
To an Amazon S3 bucket that you have permissions to access.
By using a signed URL.
Therefore, I can't do this without a bucket through the AWS CLI, but I can use EXPORT_ASSET_TO_SIGNED_URL.
UPD 3: I've created a gist for downloading a dataset from AWS Data Exchange via signed URLs.
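The gist itself isn't reproduced here, but a rough boto3 sketch of that signed-URL flow looks like this (the IDs are the ones from the command above; pagination and error handling are omitted):
import time
import urllib.request
import boto3

dx = boto3.client("dataexchange", region_name="us-east-1")

DATA_SET_ID = "b0b14e86c092855166507c15e045b844"
REVISION_ID = "6011536d595840f7bd4412fca59e0f6b"

# Export every asset (CSV file) in the revision to a signed URL, then
# download it locally.
assets = dx.list_revision_assets(
    DataSetId=DATA_SET_ID, RevisionId=REVISION_ID
)["Assets"]

for asset in assets:
    job = dx.create_job(
        Type="EXPORT_ASSET_TO_SIGNED_URL",
        Details={"ExportAssetToSignedUrl": {
            "DataSetId": DATA_SET_ID,
            "RevisionId": REVISION_ID,
            "AssetId": asset["Id"],
        }},
    )
    dx.start_job(JobId=job["Id"])
    # Poll until the export job finishes.
    while dx.get_job(JobId=job["Id"])["State"] not in ("COMPLETED", "ERROR"):
        time.sleep(1)
    details = dx.get_job(JobId=job["Id"])["Details"]
    url = details["ExportAssetToSignedUrl"]["SignedUrl"]
    urllib.request.urlretrieve(url, asset["Name"])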
I don't know why you are using such a complicated aws s3 cp. A 2-line script can be something like:
# sync all files in the folder to the local directory ~/csvs/
aws s3 sync s3://bucketname/<folder>/ ~/csvs/
# then zip or tar the full folder however you want
zip -r csvs.zip ~/csvs

Storing AWS Lambda Function code directly in S3 Bucket

AWS Lambda functions have an option to use code uploaded as a file from S3. I have a successfully running Lambda function whose code is taken as a zip file from an S3 bucket. However, any time you want to update this code, you need to either manually edit the code inline within the Lambda function, or upload a new zip file to S3 and then go into the Lambda function and manually re-load the file from S3. Is there any way to link the Lambda function to a file in S3 so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs--just when it sets up the function. It is like it takes a copy of the code in your bucket and then runs the copy. So while there isn't a direct way to get the lambda function to automatically run the latest code in your bucket, you can make a small script to update the function code using SDK methods. I don't know which language you might want to use, but for example, you could write a script to call the AWS CLI to update the function code. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
update-function-code
--function-name <value>
[--zip-file <value>]
[--s3-bucket <value>]
[--s3-key <value>]
[--s3-object-version <value>]
[--publish | --no-publish]
[--dry-run | --no-dry-run]
[--revision-id <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
You could do similar things using Python or PowerShell as well, such as using
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
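For example, a minimal boto3 version (the function, bucket, and key names are placeholders):
import boto3

# Sketch: point an existing function at a new zip already sitting in S3.
lambda_client = boto3.client("lambda")
lambda_client.update_function_code(
    FunctionName="your_function_name",
    S3Bucket="your-bucket-name",
    S3Key="code.zip",
)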
You can set up an AWS CodeDeploy pipeline to get your code built and deployed on commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, I wanted to add: if you want to go a more unattended route for deploying your updated code to the Lambda, use this flow in your CodePipeline:
Source -> Code Build (npm installs, zipping, etc.) -> S3 Upload (sourcecode.zip in S3 bucket) -> Code Build (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the S3 GetObject and Lambda UpdateFunctionCode policies attached to it.

Can I disable autocreate S3 Bucket in Zappa Init?

I want to get started with the Zappa framework, but when I run zappa init I get this notification:
How do I disable the auto-created S3 bucket?
You can't. Zappa first uploads your .zip file to a bucket and from there does the deployment.
From the official repo:
Zappa will automatically package up your application, [...] upload the archive to S3, create and manage the necessary Amazon IAM policies and roles, register it as a new Lambda function, create a new API Gateway resource, create WSGI-compatible routes for it, link it to the new Lambda function, and finally delete the archive from your S3 bucket. Handy!
So your option is to dig into Zappa and circumvent this on your own, or perhaps try Chalice, which does the upload directly.
@mislav is correct that Zappa does need an S3 bucket, but one only gets auto-created if you don't specify one. Simply provide a valid bucket name at the prompt, and Zappa will use that bucket instead of creating one for you.
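For example, in zappa_settings.json the bucket is set with the s3_bucket key (the bucket name below is a placeholder):
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "my-existing-zappa-bucket"
    }
}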

Deploy Lambda function to S3 bucket subdirectory

I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not provide the ability for me to upload files to the root of an S3 bucket, only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3 from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction API call and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can be in any folder.
You can then provide the https S3 URL of the file when pointing the Lambda function at it.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped, the files should be extracted into the same directory as the unzip command and should not create a new directory for the contents. A sketch of the equivalent API call follows.
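A short boto3 sketch of that (the role ARN, runtime, and names are placeholders; the point is that S3Key can include a folder prefix):
import boto3

lambda_client = boto3.client("lambda")
# Sketch: the S3Key can include a folder prefix, so uploading under
# Application1/ is fine. All names below are placeholders.
lambda_client.create_function(
    FunctionName="my-function",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/my-lambda-role",
    Handler="app.handler",
    Code={
        "S3Bucket": "application-code-bucket",
        "S3Key": "Application1/my-function.zip",
    },
)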
I have this old script of mine that I used to automate Lambda deployments. It needs to be refactored a bit, but it's still usable.
It takes as input the Lambda name and the path of a zip file located locally on your PC, uploads it to S3, and publishes it to AWS Lambda.
You need to set AWS credentials with IAM roles that allow:
S3 upload permission
AWS Lambda update permission
You also need to modify the bucket name and the path you want your zip uploaded to (lines 36-37).
That's it.
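The original script isn't shown here, but the flow it describes is roughly this (the bucket name, path, and publish behavior are my assumptions, not the original code):
import sys
import boto3

# Rough sketch of the described flow, not the original script:
# upload a local zip to S3, then point the Lambda function at it.
BUCKET = "my-deploy-bucket"      # placeholder: set your bucket name
PREFIX = "lambda-packages/"      # placeholder: set your upload path

def deploy(function_name, zip_path):
    key = PREFIX + function_name + ".zip"
    boto3.client("s3").upload_file(zip_path, BUCKET, key)
    boto3.client("lambda").update_function_code(
        FunctionName=function_name,
        S3Bucket=BUCKET,
        S3Key=key,
        Publish=True,  # assumed: publish a new version after updating
    )

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2])  # lambda name, local zip path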