jenkinsfile - copy files to s3 and make public - amazon-web-services

I am uploading a website to an S3 bucket for hosting. I upload from a Jenkins build job using this in the Jenkinsfile:
withAWS(credentials: 'aws-cred') {
    sh 'npm install'
    sh 'ng build --prod'
    s3Upload(
        file: 'dist/topic-creation',
        bucket: 'bucketName',
        acl: 'PublicRead'
    )
}
After this step I go to the S3 bucket and get the URL (I have configured the bucket for hosting). When I go to the endpoint URL I get a 403 error. When I go back to the bucket and give all the uploaded items public access, the URL brings me to my website.
I don't want to make the bucket public; I want to give the files public access. I thought adding the line acl: 'PublicRead', which can be seen above, would do this, but it does not.
Can anyone tell me how I can upload the files and give them public access from a Jenkinsfile?
Thanks

Install the S3Publisher plugin on your Jenkins instance: https://plugins.jenkins.io/s3/
To upload the local artifacts to your S3 bucket with public access, use the following command (you can also use the Jenkins Pipeline Syntax snippet generator):
def identity=awsIdentity();
s3Upload acl: 'PublicRead', bucket: 'NAME_OF_S3_BUCKET', file: 'THE_ARTIFACT_TO_BE_UPLOADED_FROM_JENKINS', path: "PATH_ON_S3_BUCKET", workingDir: '.'
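Dropped into the asker's existing withAWS block, a minimal sketch might look like this (the credentials ID, bucket name, and dist folder come from the question; the destination path is a placeholder):
withAWS(credentials: 'aws-cred') {
    sh 'npm install'
    sh 'ng build --prod'
    // Same upload as above, now with an explicit destination path and working directory
    s3Upload(
        acl: 'PublicRead',
        bucket: 'bucketName',
        file: 'dist/topic-creation',
        path: 'topic-creation/',   // placeholder prefix inside the bucket
        workingDir: '.'
    )
}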

Related

CloudFront Error: This XML file does not appear to have any style information associated with it - I'm using Vue and Vite

I have a Vue app in an S3 bucket on AWS. With my GitHub workflow I run npm run build to create the dist folder and copy it into the S3 bucket, so the compiled folder is ready for production. I also have CloudFront configured with that S3 bucket, and the index.html works well.
That page works well, and I have a router with the route /home, which also works on localhost.
But at the CloudFront URL the page returns the error from the title ("This XML file does not appear to have any style information associated with it").
How can I solve this?
This is my GitHub Action to copy the contents into the S3 bucket.
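Purely as an illustrative sketch (not the asker's actual file), a GitHub Actions workflow of that shape, with placeholder bucket, region, and secret names, might look like:
# Illustrative sketch only; bucket name, region, and secret names are placeholders.
name: deploy-to-s3
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci && npm run build
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws s3 sync dist/ s3://my-bucket --delete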

Downloading s3 bucket to local directory but files not copying?

There are many, many examples of how to download a directory of files from an s3 bucket to a local directory.
aws s3 cp s3://<bucket>/<directory> /<path>/<to>/<local>/ --recursive
However, when I run this command from the AWS CLI I've connected with, I see confirmation in the terminal like:
download: s3://mybucket/myfolder/data1.json to /my/local/dir/data1.json
download: s3://mybucket/myfolder/data2.json to /my/local/dir/data2.json
download: s3://mybucket/myfolder/data3.json to /my/local/dir/data3.json
...
But then I check /my/local/dir for the files, and my directory is empty. I've tried using the sync command instead, I've tried copying just a single file - nothing seems to work right now. In the past I did successfully run this command and downloaded the files as expected.
Why are my files not being copied now, despite seeing no errors?
For testing, you can go to your /my/local/dir folder and execute the following command:
aws s3 sync s3://mybucket/myfolder .
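As a quick sanity check (a sketch reusing the bucket and prefix from the question), you can confirm which directory the relative "." resolves to and preview the transfers with --dryrun before running the real sync:
cd /my/local/dir
pwd                                              # confirm where "." actually points
aws s3 sync s3://mybucket/myfolder . --dryrun    # preview without writing anything
aws s3 sync s3://mybucket/myfolder .             # real sync
ls -la                                           # verify the files landed here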

Text Compression when serving from a Github Trigger

I'm trying to figure out how to serve my js, css and html as compressed gzip from my Google Cloud Storage bucket. I've set up my static site properly, and also built a Cloud Build Trigger to sync the contents from the repository on push. My problem is that I don't want to have gzips of these files on my repository, but rather just serve them from the bucket.
I might be asking too much for such a simple setup, but perhaps there is a command I can add to my cloudbuild.yaml to make this work.
At the moment it is just this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://my-site.com"]
As far as I'm aware this just syncs the bucket to the repo. Is there another command that could ensure that the aforementioned files are transferred as gzip? I've seen gsutil cp used for this, but not within this specific Cloud Build pipeline setup from GitHub.
Any help would be greatly appreciated!
The gsutil setmeta command lets you add metadata to the files that overrides the default HTTP headers served, which is handy for the Content-Type and Cache-* options.
gsutil setmeta -h "Content-Encoding: gzip" gs://bucket_name/folder/*
For more info about Transcoding with gzip-uploaded files: https://cloud.google.com/storage/docs/transcoding
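Note that setmeta only changes the headers; the objects themselves still have to contain gzip-compressed bytes. One way to get that without committing .gz files to the repository is gsutil cp with the -z flag, which compresses matching files during upload and sets Content-Encoding: gzip automatically. A possible cloudbuild.yaml sketch (this swaps rsync for cp, so the -d delete behaviour is lost; the bucket name is the one from the question):
steps:
- name: gcr.io/cloud-builders/gsutil
  # -z gzips files with the listed extensions on upload and sets Content-Encoding: gzip
  args: ["-m", "cp", "-r", "-z", "js,css,html", ".", "gs://my-site.com"]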

How to disable encryption on AWS CodeBuild artifacts?

I'm using AWS CodeBuild to build an application, it is configured to push the build artifacts to an AWS S3 bucket.
On inspecting the artifacts/objects in the S3 bucket I realised that the objects have been encrypted.
Is it possible to disable the encryption on the artifacts/objects?
There is now a checkbox named "Disable artifacts encryption" under the artifacts section which allows you to disable encryption when pushing artifacts to S3.
https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectArtifacts.html
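That checkbox corresponds to the encryptionDisabled field of ProjectArtifacts in the API linked above. If you prefer the CLI, a sketch along these lines should work (the project name, bucket, and packaging value are placeholders):
# Sketch: turn off artifact encryption on an existing CodeBuild project
aws codebuild update-project \
  --name my-project \
  --artifacts '{"type": "S3", "location": "my-artifact-bucket", "packaging": "NONE", "encryptionDisabled": true}'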
I know this is an old post but I'd like to add my experience in this regard.
My requirement was to get front-end assets from a CodeCommit repository, build them, and put them in an S3 bucket. The S3 bucket is further connected with CloudFront for serving the static front-end content (written in React in my case).
I found that CloudFront is unable to serve KMS-encrypted content, as I got a KMS.UnrecognizedClientException when I hit the CloudFront URL. I tried to fix that, and disabling encryption on the AWS CodeBuild artifacts seemed to be the easiest solution.
However, I wanted to manage this using the AWS CDK. This TypeScript snippet may come in handy if you're trying to solve the same issue with the CDK.
First, add the necessary imports. For this answer they are the following:
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
Then I used the following snippet in a class that extends the CDK Stack.
Note: the same should work if your class extends a CDK Construct.
// replace these according to your requirement
const frontEndRepo = codecommit.Repository
  .fromRepositoryName(this, 'ImportedRepo', 'FrontEnd');
const frontendCodeBuild = new codebuild.Project(this, 'FrontEndCodeBuild', {
  source: codebuild.Source.codeCommit({ repository: frontEndRepo }),
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'npm install && npm run build',
        ],
      },
    },
    artifacts: {
      files: 'build/**/*'
    }
  }),
  artifacts: codebuild.Artifacts.s3({
    bucket: this.bucket, // replace with s3 bucket object
    includeBuildId: false,
    packageZip: false,
    identifier: 'frontEndAssetArtifact',
    name: 'artifacts',
    encryption: false // added this to disable the encryption on codebuild
  }),
});
Also, to ensure that a build is triggered every time I push code to the repository, I added the following snippet in the same class.
// add the following line in your imports if you're using this snippet
// import * as targets from '@aws-cdk/aws-events-targets';
frontEndRepo.onCommit('OnCommit', {
  target: new targets.CodeBuildProject(frontendCodeBuild),
});
Note: This may not be a perfect solution, but it's working well for me till now. I'll update this answer if I find a better solution using aws-cdk
Artifact encryption cannot be disabled in AWS CodeBuild

File In Storage Bucket loses "Share publicly" permission after automated build

I'm a bit new to Google Cloud and am using a storage bucket to host a static website.
I've integrated automated builds via a build trigger when my master branch gets updated. I'm successfully able to see the changes when I push to GitHub, but when a preexisting file such as index.html gets updated, the file loses the permission to "Share publicly".
I've followed the tutorial below, with the only difference being that the object permissions are now handled at the individual file level on the platform rather than at the top level for the bucket.
https://cloud.google.com/community/tutorials/automated-publishing-container-builder
This is my cloudbuild.yaml file
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://www.mysite.com"]
If you don't configure the bucket so that all objects in it are publicly readable by default, you'll need to re-apply the permission to each newly uploaded file.
If you know all your updated files need to be set as publicly readable, you can use the -a option with your rsync command and use the canned ACL named "public-read". Your cloudbuild.yaml file would look like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-a", "public-read", "-r", "-c", "-d", ".", "gs://www.mysite.com"]
If you don't want to set all objects publicly readable at once, you'll need to set permissions on a per-object basis by listing objects and applying permissions with the following command:
gsutil acl ch -u AllUsers:R gs://nameBucket/dir/namefile.ext
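If a whole directory needs the grant rather than a single file, acl ch also takes a recursive flag; a sketch reusing the same bucket placeholder:
# -r applies the AllUsers read grant to every object under the prefix; -m parallelises it
gsutil -m acl ch -r -u AllUsers:R gs://nameBucket/dir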