I'm having some trouble with CDK Pipelines / CodePipeline in AWS. When I run the pipeline (on git commit), the Assets stage always runs even if I haven't changed the files it is building, and every pipeline execution creates an S3 bucket for pipeline assets, so we have loads of S3 buckets. This behaviour, while odd, does seem to work, but it takes a long time to run and doesn't seem right. Is this to be expected, and if not, what might be the issue?
Update
We sometimes see the error message below in the build logs, which may be related, but it doesn't cause a failure:
Failed to store notices in the cache: Error: ENOENT: no such file or directory, open '/root/.cdk/cache/notices.json'
If you create an S3 bucket and then reference that bucket in your CodePipeline, the output will always go to that bucket, and the artifacts will be placed under sub-directories (key prefixes) of that specific bucket. That way you still get new build assets for each run, but they are placed inside the same bucket, and you only have one S3 bucket.
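For example, with the lower-level aws_codepipeline.Pipeline construct you can hand it the bucket via artifact_bucket; a minimal sketch (Python, CDK v1 import style; construct IDs are placeholders and the stages are elided). If you are on the higher-level CDK Pipelines construct, it can be pointed at a pre-built underlying Pipeline like this one, depending on your CDK version.

from aws_cdk import core as cdk  # CDK v1; in v2, import Stack from aws_cdk
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_s3 as s3


class PipelineStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # One bucket that every pipeline execution reuses for its artifacts.
        artifact_bucket = s3.Bucket(self, "PipelineArtifacts")

        # Supplying artifact_bucket stops CodePipeline from generating a
        # fresh bucket; artifacts land under key prefixes in this one.
        codepipeline.Pipeline(
            self,
            "Pipeline",
            artifact_bucket=artifact_bucket,
            # stages=[...]  # source / build / deploy stages elided
        )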
Related
I have a build pipeline that has a source of AWS Code Commit. When there is a commit, this runs a build script in AWS Code Build that builds the project, builds a docker image and pushes into ECR. The final stage deploys the docker image into an ECS cluster in a different region which fails with the following error:
Replication of artifact 'BuildArtifact' failed: Failed replicating artifact from bucket 1 in region 1 to bucket 2 in region 2: Check source and destination artifact buckets exist and pipeline role has permission to access it.
Bucket 1 does have the artifact in it, but bucket 2 is empty. I have tried giving the CodePipeline role full access to S3, but it didn't change anything. There is nothing in CloudTrail regarding the error. This question discusses a similar issue, but I believe it is no longer relevant, as the way cross-region deployments work has changed since then. I have tried re-creating the pipeline (with the same parameters) but this still gives the same error. Perhaps there is some additional permission it needs that AWS didn't create.
If anybody could tell me how to fix or debug this issue, it would be appreciated.
Thanks,
Adam
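One way to start debugging is to check which artifact bucket the pipeline expects in each region and whether your credentials can actually reach it. A rough boto3 sketch (the pipeline name is a placeholder; this is a diagnostic aid, not a confirmed fix):

import boto3

codepipeline = boto3.client("codepipeline")
s3 = boto3.client("s3")

# "my-pipeline" is a placeholder for the real pipeline name.
definition = codepipeline.get_pipeline(name="my-pipeline")["pipeline"]

# Cross-region pipelines declare one artifact store per region; a
# single-region pipeline has "artifactStore" instead.
stores = definition.get("artifactStores") or {"default": definition.get("artifactStore", {})}

for region, store in stores.items():
    bucket = store.get("location")
    print(f"{region}: expects artifact bucket {bucket}")
    try:
        s3.head_bucket(Bucket=bucket)
        print("  bucket exists and is reachable with the current credentials")
    except Exception as exc:
        print(f"  could not reach bucket: {exc}")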
I was able to follow this example [1] and let my EC2 instance read from S3.
In order to write to the same bucket, I thought changing line 57 [2] from grant_read() to grant_read_write() should work.
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
    file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
Yet the documented [3] function cannot be accessed according to the error message.
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
What am I missing?
[1] https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance
[2] https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57
[3] https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants
This is the documentation for Asset:
An asset represents a local file or directory, which is automatically
uploaded to S3 and then can be referenced within a CDK application.
The method grant_read_write isn't provided, as it is pointless. The documentation you've linked doesn't apply here.
An asset is just a zip file that is uploaded to the bootstrapped CDK S3 bucket and then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because an asset is a zip file. You would be better suited using a boto3 command to upload it once the bucket already exists, or making it part of a CodePipeline: create the bucket with CDK, then have the next step in the pipeline upload the script.
grant_read_write is for aws_cdk.aws_s3.Bucket constructs in this case.
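To illustrate, a hedged sketch of the Bucket-based approach, continuing the fragment from the question (self and instance are assumed to come from the linked example, and the construct ID is a placeholder):

from aws_cdk import aws_s3 as s3

...
# A real Bucket construct, unlike an Asset, does expose grant_read_write.
data_bucket = s3.Bucket(self, "InstanceData")

# Lets the instance role read objects from, and write objects back to, the bucket.
data_bucket.grant_read_write(instance.role)
...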
I'm using S3 as the source when creating the CodePipeline, in the "Add source stage" step.
During the "Add deploy stage" step of CodePipeline I'm including the "Object URL" of the file as the artifact name, but when I try to create the pipeline it fails with the error "File does not exist in artifact [SourceArtifact]", even though the file is available in S3.
CloudFormation deployment actions in CodePipeline expect source artifacts to be in .zip format. The reference to the file within the artifact would be the path to the script within the zip file.
Per AWS documentation:
When you use Amazon Simple Storage Service (Amazon S3) as the source repository, CodePipeline requires you to zip your source files before uploading them to an S3 bucket. The zipped file is a CodePipeline artifact that can contain an AWS CloudFormation template, a template configuration file, or both.
Therefore, to correctly reference and process a CloudFormation script, follow these steps:
Add your CloudFormation script (i.e. cf.yaml) to a .zip file (i.e. cf.zip)
Upload your zip file to S3
Set the .zip file as the path to the source S3 artifact (i.e. cf.zip)
Reference the source artifact in your deployment stage, but for the filename, reference the text file within the zip (i.e. cf.yaml)
Execute the pipeline
See Edit the artifact and upload it to an S3 Bucket
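As a rough illustration of steps 1 to 3 in Python, using zipfile and boto3 (the bucket name and file names are placeholders):

import zipfile

import boto3

# Placeholders: the S3 source action of the pipeline should point at this key.
bucket = "my-pipeline-source-bucket"
key = "cf.zip"

# Put cf.yaml inside cf.zip; arcname keeps the template at the root of the zip.
with zipfile.ZipFile("cf.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write("cf.yaml", arcname="cf.yaml")

# Upload the zip, not the raw YAML file.
boto3.client("s3").upload_file("cf.zip", bucket, key)

# In the deploy stage, the template path is then "cf.yaml" (the file inside
# the artifact), while the source action points at the key "cf.zip".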
I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not provide the ability for me to upload files to the root of an S3 bucket, but only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3, from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction AWS command and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can go in any folder.
You can then provide the S3 URL of the file when creating or updating the Lambda function.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped the files should be extracted into the same directory as the unzip command and should not create a new directory for the contents.
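A hedged Python sketch of that flow, zipping the contents of a source directory and pointing UpdateFunctionCode at a key under a folder prefix (all names are placeholders):

import os
import zipfile

import boto3

BUCKET = "application-code-bucket"
KEY = "Application1/my-function.zip"  # a folder prefix in the key is fine
FUNCTION_NAME = "my-function"
SOURCE_DIR = "lambda_src"             # directory containing the handler code

# Zip the *contents* of the directory so the files sit at the root of the
# archive, rather than inside an extra top-level directory.
with zipfile.ZipFile("package.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            archive.write(path, arcname=os.path.relpath(path, SOURCE_DIR))

# Upload the package; the bucket must be in the same region as the function.
boto3.client("s3").upload_file("package.zip", BUCKET, KEY)

# CreateFunction / UpdateFunctionCode accept an S3 key, and that key can
# include the folder prefix your organization requires.
boto3.client("lambda").update_function_code(
    FunctionName=FUNCTION_NAME,
    S3Bucket=BUCKET,
    S3Key=KEY,
)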
I have this old script of mine that I used to automate lambda deployments.
It needs to be refactored a bit, but it is still usable.
It takes the Lambda name and the local path of the zip file as input.
It uploads the zip to S3 and publishes it to AWS Lambda.
You need to set AWS credentials with an IAM role that allows:
S3 upload permission
AWS Lambda update permission
You need to modify the bucket name and the path you want your zip to be uploaded to (lines 36-37).
That's it.
I have been trying to upload a static website to s3 with the following cli command:
aws s3 sync . s3://my-website-bucket --acl public-read
It successfully uploads every file in the root directory but fails on the nested directories with the following:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
I have found references to this issue on GitHub but no clear instructions on how to solve it.
The s3 sync command recursively copies local folders to folder-like S3 objects.
Even though S3 doesn't really support folders, the sync command creates S3 objects whose keys contain the folder names.
As reported on the following Amazon support thread, "forums.aws.amazon.com/thread.jspa?threadID=235135", the issue should be solved by setting the region correctly (e.g. via aws configure or the --region option).
S3 has no concept of directories.
S3 is an object store where each object is identified by a key.
The key might be a string like "dir1/dir2/dir3/test.txt"
AWS graphical user interfaces on top of S3 interpret the "/" characters as a directory separator and present the file list as if it were in a directory structure.
However, internally, there is no concept of directory, S3 has a flat namespace.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html for more details.
This is the reason directories are not synced as such: there are no directories on S3.
Also, a related feature request is open at https://github.com/aws/aws-cli/issues/912 but has not been implemented yet.
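To illustrate the flat key model described above, a small boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-website-bucket"

# There is no mkdir: the "directories" exist only as part of the object key.
s3.put_object(Bucket=BUCKET, Key="dir1/dir2/dir3/test.txt", Body=b"hello")

# Console-style folder listings are emulated with Prefix + Delimiter.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="dir1/", Delimiter="/")
print([p["Prefix"] for p in response.get("CommonPrefixes", [])])  # ['dir1/dir2/']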