Getting an error "File does not exist in artifact [SourceArtifact]" when working on CodePipeline - amazon-web-services

I'm using S3 as the source when creating the CodePipeline in the "Add source stage" step.
During the "Add deploy stage" step I'm including the "Object URL" of the file as the artifact name, but when I try to create the pipeline it fails with the error "File does not exist in artifact [SourceArtifact]", even though the file is available in S3.

CloudFormation deployment actions in CodePipeline expect the source artifact to be in .zip format. The reference to the file within the artifact is the path to the script inside that zip file.
Per AWS documentation:
When you use Amazon Simple Storage Service (Amazon S3) as the source repository, CodePipeline requires you to zip your source files before uploading them to an S3 bucket. The zipped file is a CodePipeline artifact that can contain an AWS CloudFormation template, a template configuration file, or both.
Therefore, to correctly reference and process a CloudFormation script, follow these steps (a command-line sketch follows below):
Add your CloudFormation script (e.g. cf.yaml) to a .zip file (e.g. cf.zip)
Upload the zip file to S3
Set the .zip file as the path to the source S3 artifact (i.e. cf.zip)
Reference the source artifact in your deployment stage, but for the filename, reference the file within the zip (i.e. cf.yaml)
Execute the pipeline
See Edit the artifact and upload it to an S3 Bucket
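A minimal sketch of the packaging and upload steps with the zip tool and the AWS CLI, assuming a hypothetical source bucket named my-pipeline-source-bucket:

# Package the template so that cf.yaml sits at the root of the archive
zip cf.zip cf.yaml

# Upload the zip; this object (cf.zip) is what the source stage should point at
aws s3 cp cf.zip s3://my-pipeline-source-bucket/cf.zip

In the deploy stage, use cf.yaml as the file name; for a CloudFormation action the template path is typically given as SourceArtifact::cf.yaml, not the S3 Object URL.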

Related

AWS CDK Pipelines Creates Multiple Artifact Buckets

I'm having some trouble with CDK Pipelines / CodePipeline in AWS. When I run the pipeline (on git commit), the Assets section always runs, even if I don't change the files it is building, and every pipeline execution creates an S3 bucket with pipeline assets, so we have loads of S3 buckets. This behaviour, while odd, does seem to work, but it takes a long time to run and doesn't seem right. Is this to be expected, and if not, what may be the issue?
Update
We sometimes see the error message below in the build logs, which may be related, but it doesn't cause a failure:
Failed to store notices in the cache: Error: ENOENT: no such file or directory, open '/root/.cdk/cache/notices.json'
If you create an S3 bucket yourself and then reference that bucket in your CodePipeline, the output will always go to that bucket, and the artifacts will be subdirectories (key prefixes) within that specific bucket. That way you still get new build assets, but they are placed inside the same bucket, and you only have one S3 bucket.

How can I move one AWS resource (say, an S3 bucket) created by the Terraform files of one project into another project, without deleting the resource?

I have an S3 bucket created by one Terraform project. There is another project containing Terraform files. I want to move the Terraform code for the S3 bucket from the first project to the other. Is there any way to do this without deleting the S3 bucket and without taking a backup?
You can use the terraform state mv command to move a resource from one project to another.
Follow the steps below.
Suppose /home/terraform1 is the first project, /home/terraform2 is the second project, and BucketStorage is the name of the resource.
Go to the /home/terraform1 directory.
Run:
terraform state mv -state-out=/home/terraform2/terraform.tfstate aws_s3_bucket.BucketStorage aws_s3_bucket.BucketStorage
Copy the Terraform file for the S3 bucket from /home/terraform1 to /home/terraform2
Run terraform plan/apply in the second project and you can see the magic.
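Put together, the sequence looks roughly like this, assuming both projects use local state files and the resource is aws_s3_bucket.BucketStorage (the file name s3.tf is just an example):

# Move the resource's state entry from the first project into the second project's state file
cd /home/terraform1
terraform state mv -state-out=/home/terraform2/terraform.tfstate aws_s3_bucket.BucketStorage aws_s3_bucket.BucketStorage

# Move the matching resource block so the second project's configuration defines the bucket
cp s3.tf /home/terraform2/

# The plan in the second project should show no changes; the first project no longer tracks the bucket
cd /home/terraform2
terraform plan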

Deploy Lambda function to S3 bucket subdirectory

I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not provide the ability for me to upload files to the root of an S3 bucket, but only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3, from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction AWS command and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can be in any folder in the bucket.
You can then provide the S3 URL of the object when creating the Lambda function.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped, the files should be extracted into the same directory as the unzip command, and should not create a new directory for the contents.
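A rough AWS CLI sketch under those constraints, assuming a hypothetical function named my-function whose code lives in ./my-function-src, and the bucket/prefix from the question:

# Zip from inside the code directory so the files sit at the root of the archive
cd my-function-src && zip -r ../function.zip . && cd ..

# Upload the package under the folder (key prefix) you are allowed to write to
aws s3 cp function.zip s3://application-code-bucket/Application1/function.zip

# The S3 key passed to Lambda can include the folder prefix
aws lambda update-function-code --function-name my-function --s3-bucket application-code-bucket --s3-key Application1/function.zip

The same S3 bucket/key pair can be given to create-function via its --code parameter, so deploying from a subdirectory of the bucket is not a problem.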
I have this old script of mine that I used to automate Lambda deployments.
It needs to be refactored a bit, but it is still usable.
It takes as input the Lambda name and the path to the zip file on your local PC.
It uploads the zip to S3 and publishes it to AWS Lambda.
You need to set AWS credentials with IAM roles that allow:
S3 upload permission
AWS Lambda update permission
You need to modify the bucket name and the path you want your zip to be uploaded to (lines 36-37).
That's it.

Upload nested directories to S3 with the AWS CLI?

I have been trying to upload a static website to S3 with the following CLI command:
aws s3 sync . s3://my-website-bucket --acl public-read
It successfully uploads every file in the root directory but fails on the nested directories with the following:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
I have found references to this issue on GitHub but no clear instruction of how to solve it.
The s3 sync command recursively copies local folders to folder-like S3 objects.
Even though S3 doesn't really support folders, the sync command creates S3 objects whose keys include the folder names.
As reported on the AWS support thread "forums.aws.amazon.com/thread.jspa?threadID=235135", the issue should be solved by setting the region correctly.
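For example, a sketch assuming the bucket was created in us-east-1 (substitute your bucket's actual region):

# Pass the bucket's region explicitly so requests are signed for the right endpoint
aws s3 sync . s3://my-website-bucket --acl public-read --region us-east-1

# Or set the region once for your CLI profile
aws configure set region us-east-1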
S3 has no concept of directories.
S3 is an object store where each object is identified by a key.
The key might be a string like "dir1/dir2/dir3/test.txt"
AWS graphical user interfaces on top of S3 interpret the "/" characters as a directory separator and present the file list as if it were in a directory structure.
However, internally there is no concept of a directory; S3 has a flat namespace.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html for more details.
This is the reason directories are not synced: there are no directories on S3.
There is also an open feature request at https://github.com/aws/aws-cli/issues/912, but it has not been implemented yet.
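If you want to see those flat keys directly, the lower-level s3api commands return them verbatim (bucket name taken from the question):

# Keys come back as plain strings such as "dir1/dir2/dir3/test.txt"; there are no directory objects
aws s3api list-objects-v2 --bucket my-website-bucket --query "Contents[].Key"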

Download Log files from Amazon S3

I have a bucket on Amazon S3 where I have uploaded certain files.
The page gets some public visits.
Is there any way to get all the visits in a log file,
or can I download the log file from Amazon?
You can enable server access logging for an S3 bucket either when you create the bucket or afterwards. You need to specify a target bucket and prefix where the log files will be stored. You can refer to http://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html for the steps.
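A rough AWS CLI sketch, assuming a hypothetical website bucket my-site-bucket and a separate bucket my-log-bucket that the S3 log delivery service is allowed to write to (see the linked documentation for the required target-bucket permissions):

# Turn on server access logging; log files are delivered under the given prefix
aws s3api put-bucket-logging --bucket my-site-bucket --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"access-logs/"}}'

# Later, download the delivered log files to your machine
aws s3 sync s3://my-log-bucket/access-logs/ ./access-logs/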