I'm using ember-cli-deploy and ember-deploy-s3-index.
Following this article, I managed to deploy the index to a bucket with static web hosting enabled and the assets to a separate bucket.
I want to automate the deploy process (CI), but there are two problems:
Each deploy adds an index file with a new name (test:b2907fa.html, for example), and I have to manually change the index document in my S3 configuration to match the latest deploy.
I have to add permissions to the file on each deploy.
I would like my index file to have a fixed name (overriding the existing file on deploy) and to have view permissions by default.
Is this possible?
Thanks.
Turns out you don't need to change the index document.
After deploying, run ember deploy:activate --revision test:b2907fa --environment production and it will update the index document in the S3 bucket.
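To automate this in CI, a minimal sketch could look like the script below. It assumes the deploy and activate commands behave as described above and that the revision key follows the <project>:<short git sha> pattern from the example (with test as the project name); adjust both to your setup.
#!/bin/bash
set -e
# Build and upload the new revision (index + assets) to S3
ember deploy --environment production
# Activate the revision that was just uploaded; the key is derived from the short git SHA
REVISION="test:$(git rev-parse --short HEAD)"
ember deploy:activate --revision "$REVISION" --environment production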
A simpler alternative with no add-ons/dependencies:
Deploying an Ember CLI app is as simple as syncing the contents of the dist/ folder to your server (after building with the production flag). These files can then be served statically.
Here is a script I wrote to automate my deploy process:
printf "** Depoying application**\n"
cd ~/Desktop/Project/ember_test/censored
printf "\n** Building static files **\n"
ember build --environment=production
printf "\n** Synchronizing distribution folder to frontend.censored.co.za **\n"
rsync -rv ~/Desktop/Project/ember_test/censored/dist frontend@frontend.censored.co.za:/var/www/html/censored --exclude ".*/" --exclude ".*" --delete
printf "\n** Removing production build from local repository **\n"
rm -rv ~/Desktop/Project/ember_test/censored/dist/*
printf "\n** Deployment done. **\n"
This deploys to a Linux server, whereas you want to deploy to S3.
So instead of my third command (rsync), you would use s3cmd to put your folder into S3 (it would probably be an s3cmd put or s3cmd sync command), as sketched below.
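A rough sketch of that step with s3cmd in place of rsync (the bucket name is a placeholder; adjust the flags to your needs):
printf "\n** Synchronizing distribution folder to S3 **\n"
# Sync the built files to the bucket, make them publicly readable,
# and remove objects that no longer exist locally
s3cmd sync --acl-public --delete-removed ~/Desktop/Project/ember_test/censored/dist/ s3://your-bucket-name/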
I have a .env file in my code. I copy it to S3, and I want to delete it from my GitHub repository and have Elastic Beanstalk download it when the application starts. Which directory should I use?
I see my code is in
/var/app/current/
/var/www/html/
,...
I want to use .ebextensions
commands:
  01_get_env_vars:
    command: aws s3 cp s3://test/.env DIRECTORY
Is there a better solution?
Probably the best way would be to use container_commands instead of commands.
The reason is that container_commands run in the staging folder /var/app/staging:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
Thus your code could be:
container_commands:
  01_get_env_vars:
    command: aws s3 cp s3://test/.env .
where DIRECTORY is replaced by . (the staging directory the command runs in).
I'm trying to figure out how to serve my js, css and html as compressed gzip from my Google Cloud Storage bucket. I've set up my static site properly, and also built a Cloud Build Trigger to sync the contents from the repository on push. My problem is that I don't want to have gzips of these files on my repository, but rather just serve them from the bucket.
I might be asking too much for such a simple setup, but perhaps there is a command I can add to my cloudbuild.yaml to make this work.
At the moment it is just this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://my-site.com"]
As far as I'm aware, this just syncs the bucket with the repo. Is there another command that could ensure that the aforementioned files are transferred as gzip? I've seen gsutil cp used for this, but not within this specific Cloud Build pipeline setup from GitHub.
Any help would be greatly appreciated!
The gsutil setmeta command lets you set metadata on objects, overriding the defaults the HTTP server would otherwise send, which is handy for the Content-Type and Cache-* options.
gsutil setmeta -h "Content-Encoding: gzip" gs://bucket_name/folder/*
For more info about transcoding of gzip-uploaded files, see https://cloud.google.com/storage/docs/transcoding
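Note that setmeta only changes the headers; the objects themselves still have to be stored gzip-compressed for transcoding to work. If you don't want to keep gzipped files in the repository, one option (a sketch, not tested against your pipeline; the bucket name is taken from your example) is to upload with gsutil cp -z instead of rsync, which compresses files with the listed extensions during upload and sets Content-Encoding: gzip on them automatically:
steps:
# -z gzips js/css/html during upload and stores them with Content-Encoding: gzip,
# so Cloud Storage can serve them compressed; other files are uploaded as-is
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "cp", "-r", "-z", "js,css,html", "./*", "gs://my-site.com"]
Unlike rsync -d, this does not delete objects that were removed from the repository, so you may still want a cleanup step.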
I have a problem downloading an entire folder in GCP. How should I download the whole bucket? I run this command in the GCP Cloud Shell environment:
gsutil -m cp -R gs://my-uniquename-bucket ./C:\Users\Myname\Desktop\Bucket
and I get an error message: "CommandException: Destination URL must name a directory, bucket, or bucket subdirectory for the multiple source form of the cp command. CommandException: 7 files/objects could not be transferred."
Could someone please point out the mistake in the command?
To download an entire bucket, you must install the Google Cloud SDK and then run this command:
gsutil -m cp -R gs://project-bucket-name path/to/local
where path/to/local is the path to local storage on your machine.
The error lies within the destination URL, as the error message indicates.
I run this code in GCP Shell Environment
Remember that you are running the command from the Cloud Shell and not in a local terminal or Windows command line. Thus, it throws that error because it cannot find the path you specified. If you inspect the Cloud Shell's file system/structure, it resembles a Unix environment, so you could instead specify the destination like this: ~/bucketfiles/. Even a simple gsutil -m cp -R gs://bucket-name.appspot.com ./ will work, since Cloud Shell can resolve ./ as the current directory.
A workaround is to run the command from your Windows command line instead. You would have to install the Google Cloud SDK beforehand.
Alternatively, this can also be done in Cloud Shell, albeit with an extra step:
Download the bucket objects by running gsutil -m cp -R gs://bucket-name ~/, which will download them into the home directory in Cloud Shell
Transfer the files from the ~/ (home) directory in Cloud Shell to the local machine, either through the user interface or by running gcloud alpha cloud-shell scp (see the sketch below)
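As a rough sketch of that last step (run from the local machine with the Cloud SDK installed; the file name is just a placeholder):
# Copy a file from the Cloud Shell home directory down to the local machine
gcloud alpha cloud-shell scp cloudshell:~/my-bucket-files.zip localhost:~/Downloads/my-bucket-files.zip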
Your destination path is invalid:
./C:\Users\Myname\Desktop\Bucket
Change to:
/Users/Myname/Desktop/Bucket
C: is a reserved device name. You cannot specify reserved device names in a relative path. ./C: is not valid.
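Putting that together, the corrected command would look like this (a sketch; it assumes the destination directory exists on the machine where you run it):
gsutil -m cp -R gs://my-uniquename-bucket /Users/Myname/Desktop/Bucket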
There is not a one-button solution for downloading a full bucket to your local machine through the Cloud Shell.
The best option for an environment like yours (only using the Cloud Shell interface, without gcloud installed on your local system) is to follow a series of steps:
Download the whole bucket into the Cloud Shell environment
Zip the contents of the bucket
Upload the zipped file
Download the file through the browser
Clean up:
Delete the local files (local in the context of the Cloud Shell)
Delete the zipped bucket file
Unzip the bucket locally
This has the advantage of only having to download a single file on your local machine.
This might seem like a lot of steps for a non-developer, but it's actually pretty simple:
First, run this on the Cloud Shell:
mkdir /tmp/bucket-contents/
gsutil -m cp -R gs://my-uniquename-bucket /tmp/bucket-contents/
pushd /tmp/bucket-contents/
zip -r /tmp/zipped-bucket.zip .
popd
gsutil cp /tmp/zipped-bucket.zip gs://my-uniquename-bucket/zipped-bucket.zip
Then, download the zipped file through this link: https://storage.cloud.google.com/my-uniquename-bucket/zipped-bucket.zip
Finally, clean up:
rm -rf /tmp/bucket-contents
rm /tmp/zipped-bucket.zip
gsutil rm gs://my-uniquename-bucket/zipped-bucket.zip
After these steps, you'll have a zipped-bucket.zip file in your local system that you can unzip with the tool of your choice.
Note that this might not work if you have too much data in your bucket and the Cloud Shell environment can't store all the data, but you could repeat the same steps on folders instead of buckets to have a manageable size.
I configured my Yocto project to use an auto-scaled gitlab-runner on AWS, and now that the project has grown I've noticed that the cache fails to upload every time.
Uploading cache.zip to https://build-yocto.s3.amazonaws.com/project/default
WARNING: Retrying...
Uploading cache.zip to https://build-yocto.s3.amazonaws.com/project/default
FATAL: Received: 400 Bad Request
Failed to create cache
The cache contains the sstate-cache directory to speed up rebuilds. This worked like a charm in the beginning but fails now because (at least that's my conclusion) the sstate directory has grown to > 10 GB.
I saw that S3 has an option for multipart uploads, but I can't find any gitlab-runner option to enable it.
Is there any workaround for this issue, like preprocessing the sstate-cache and uploading multiple caches?
GitLab currently does not support multipart uploads to S3, so it can only handle caches up to 5 GB. But check this issue/feature proposal on that topic before continuing to read!
Therefore I built myself a dirty workaround, but be warned: anyone running a build on that runner can simply print the AWS access key/secret key to the build log!
Basically I just replicated the pulling and pushing of the cache to and from S3 and do it manually before and after my build job.
In my gitlab runner config.toml I added the following line in the [[runners]] section:
environment = ["AWS_ACCESS_KEY_ID=<AccessKey>", "AWS_SECRET_ACCESS_KEY=<SecretKey>", "AWS_DEFAULT_REGION=<region>", "AWS_DEFAULT_OUTPUT=<json,text or table>"]
That way the environment variables are set and the AWS CLI has everything it needs.
In my Dockerfile I needed to add these packages:
# Install AWS CLI and tools
RUN apt-get update && apt-get install -y awscli tar pigz
The download script:
#!/bin/bash
mkdir <path to cache>
aws s3 cp s3://<bucket name>/cache - | pigz -dc | tar -xf - -C <path to cache>
The upload script:
#!/bin/bash
tar cf - -C <path to cache> . | pigz | aws s3 cp - s3://<bucket name>/cache --expected-size 7516192768
--expected-size is the approximate size of the cache. This is required because aws s3 cp has to pick a part size for the multipart upload when streaming from stdin; if it picked a size that is too small, the upload would exceed the maximum number of parts allowed. My example used 7 GB.
My .gitlab-ci.yaml now looks like this:
build:
  script:
    - ./download_cache.sh
    - ./build.sh
    - ./upload_cache.sh
I want to deploy a WAR file from Jenkins to the cloud.
Could you please let me know how to deploy a WAR file from my local Jenkins to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this is just adding a post-build task to run the standard command line deployment scripts.
From the referenced page, assuming you have the post-build task plugin on Jenkins and the AWS command line tools installed:
STEP 1
In the Jenkins job configuration screen, add a “Post-build action” and choose the plugin “Publish artifacts to S3 bucket”, then specify the Source (in our case, we use Maven, so the source is target/*.war) and the Destination (your S3 bucket name).
STEP 2
Then, add a “Post-build task” (if you don’t have it, this is a plugin in the Maven repo) to the same section above (“Post-build Actions”) and drag it below “Publish artifacts to S3 bucket”. This is important because we want to make sure the WAR file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”.
In the script text area, put in the path of the script to automate the deployment (described in step 3 below). For us, we put something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
$VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables will be used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>
# Get the current time and append it to the name of the .war file being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"
# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"
# Create an application version in AEB and link it to the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"
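The elastic-beanstalk-create-application-version command comes from the legacy Elastic Beanstalk API command line tools referenced in that article. If you use the current AWS CLI instead (a sketch, not the article's method; the bucket and application names are placeholders as above), the last step could look like this:
# Register the renamed WAR in S3 as a new application version
aws elasticbeanstalk create-application-version \
  --application-name "Hoiio App" \
  --version-label "$1" \
  --description "$2" \
  --source-bundle S3Bucket="<your S3 bucket>",S3Key="$new_file"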