How to deploy an ember-cli app to S3

I have an ember-cli app that is deployed in S3. It works well, and I have Travis set up to deploy changes when there is a merge into the master branch in GitHub.
But sometimes I want to test a change in the deployment environment without a commit -- perhaps because it can only be tested in that environment, like a fix to a mobile-only defect.
So I tried:
ember build --environment=production
followed by:
aws s3 cp dist/ s3://my_bucket/ --recursive
which uploaded things to my bucket. But the page didn't work: my browser told me there was a redirect loop. It wasn't a code issue, because when I pushed the changes to master, Travis successfully deployed them to S3.
Is there something clearly wrong with what I did, copying the dist folder to my bucket?
I'm using Ember-cli 1.3.1, if that matters.

I suggest you use ember-cli-deploy. It works well and supports several plugins that make delivering your code much easier.
I have been using it for a while and it has worked well for me.
http://ember-cli.github.io/ember-cli-deploy/plugins/
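A rough sketch of what that looks like with the S3 plugin (plugin names taken from the list above; the bucket, region, and credentials go in config/deploy.js, so treat this as an outline rather than a recipe):
# install ember-cli-deploy plus the build and S3 plugins
ember install ember-cli-deploy
ember install ember-cli-deploy-build
ember install ember-cli-deploy-s3
# after filling in config/deploy.js, deploy the production build
ember deploy production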

Related

Repository as dependency doesn't affect changes

My Next.js front-end app on AWS has a back-end dependency linked in package.json like this:
"api-client": "git+https://username:password#bitbucket.org/username/api_client_dev.git".
When I update my back-end repository, everything works locally (npm run dev), but when the app builds on AWS (with Amplify) it reports an error about a variable referring to something I haven't implemented yet.
My front end doesn't pick up the updated repository.
If I check my repo on Bitbucket, it is up to date.
There are no problems with branches.
I don't understand why. Any suggestions?
Thank you
The problem was in amplify.yml.
Adding npm update as a pre-build script forces Amplify to refresh the cached dependencies in node_modules, my dependency included.

Automatic scheduled git pulls on a GCP server running a flask website

I had a few questions about automatic git pulls on a remote server. I am aware there are several questions like this, but I wasn't sure what steps to take exactly, and I don't want to mess up my current setup with a mistake :/
To wit, the environment is on a Google Cloud VM. I am running a flask-based website that renders each page with the render_template() function.
The website resides inside its git folder, i.e. I never set up a bare repo and copied stuff. When I set it up a couple years ago, I just did git clone repo-url, then inside the repo directory, did flask run. Then I set up nginx to connect to the site's socket created with uwsgi inside the repo directory.
--
It has been working fine. I make changes locally to the content, push to github, then log in to the VM, and perform a git pull.
I want to do this automatically. I tried adding a cron job to do this, where the job basically ran a script, and the script did the git pull. Script content was:
cd /repo
git pull
Running the script on the server worked, but cron never managed to do the pull.
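For what it's worth, cron runs jobs with a minimal environment (different PATH and HOME, no SSH agent), which is a common reason a script works interactively but silently fails under cron. A sketch of a crontab entry that uses absolute paths and logs output so failures become visible (schedule, paths, and log file are placeholders):
# pull every 15 minutes and keep a log for debugging
*/15 * * * * cd /repo && /usr/bin/git pull >> /tmp/git-pull.log 2>&1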
--
I have been reading about web hooks, and there is a bunch of stuff about post-receive hooks, post-update hooks, and making bare repos. At this point, I am embarrassed to say I have no idea what I should be doing.
Any help is greatly appreciated.
Another option would be to consider a GitHub Action, which, from GitHub, could interact with your Google Cloud VM.
For example, actions-hub/gcloud.
- uses: actions-hub/gcloud@master
  env:
    PROJECT_ID: test
    APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
  with:
    args: cp your-file.txt gs://your-bucket/
    cli: gsutil
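For the git-pull use case specifically, the same kind of step could run a gcloud command that SSHes into the VM and pulls; the underlying command would be something like this (VM name, zone, and repo path are placeholders):
gcloud compute ssh my-vm --zone us-central1-a --command "cd /repo && git pull"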

Is there a way to push changes to AWS Beanstalk instead of uploading an entire zip file on each deploy?

I'm migrating a Play! application from Heroku to AWS Beanstalk.
Heroku is really straightforward when it comes to deploying: just push changes to a remote git repository on Heroku and the build occurs on the server side.
This is very convenient because it is not necessary to upload the whole project for each tiny change (Including all libraries!).
Basically for each change we are generating a huge 140 MB Docker zipped file that takes at least 10 minutes to upload.
Surely there must be a better way, but a long search on Google only returned options to automate the file generation with scripts and alternatives like Jenkins. This does not solve the problem; it just automates it.
Does anyone have a better solution?
You can set up an AWS CodeCommit repository, and use that as a remote for your local git repository. Next, you can set up AWS CodePipeline to build your application and deploy it to Elastic Beanstalk whenever there is a new commit to the AWS CodeCommit repository.
This way you don't have to upload everything every time. Whenever you do git push, only the changed files are uploaded to the AWS CodeCommit repository, and then AWS CodePipeline takes care of building your application and deploying it to Elastic Beanstalk.
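Hooking the local repository up to CodeCommit is just an extra remote; roughly (region and repository name are placeholders):
# add the CodeCommit repository as a remote and push to it
git remote add codecommit https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-play-app
git push codecommit master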
So I got curious about this question too and had a conversation with an AWS specialist about the different options here. Each option has its downsides, though.
The first option is to bake your application code into an AMI and carry out deployments using that baked AMI. More on that
You have to test this approach before adopting it. The downside is that you would have to maintain the AMI regularly. You might also miss out on critical patches from Beanstalk, since the AMI has been locked down.
A good read on this topic
The next approach would be to move away from Beanstalk and use CloudFormation, where you can just upload your application folder to S3. Your CloudFormation template has to take care of spinning up all the required resources, and by using AWS::CloudFormation::Init and cfn-signal it is possible to install and set up software. Changes within the resource Metadata can be detected by making use of the proper CloudFormation signal, and you can also run user-specified actions when a change is detected in the template specification.
(AWS::CloudFormation::Init)
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html (set of helper scripts that can be used with CloudFormation)
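For context, those helper scripts are typically invoked from the instance's UserData, roughly like this (stack, resource, and region names are placeholders):
# apply the AWS::CloudFormation::Init metadata, then signal the result back to the stack
/opt/aws/bin/cfn-init -v --stack my-stack --resource WebServerInstance --region us-east-1
/opt/aws/bin/cfn-signal -e $? --stack my-stack --resource WebServerInstance --region us-east-1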
Although these are not exactly a solution to what you asked for, they can be a good alternative. At least I have made sure that you are not missing out on any available options with Beanstalk.
Another piece of advice I got from them was to consider splitting the application into multiple components and sub-components. This would reduce your application size considerably.
Hope this helped.
Short answer: No.
Long Answer: I ended up packaging the app with activator and not using Docker.
Create a folder named "dist" in the root of the project.
Include a file named Procfile with the following line:
web: ./bin/YOUR_APP_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
Make sure to replace YOUR_APP_NAME with the name of your app as configured in build.sbt.
Package the Play app with the following command:
activator clean dist
That will generate a zip file inside the target/universal/ folder of the project.
Deploy that zip file to AWS Elastic Beanstalk.
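If you want to script that last step instead of using the console, here is a sketch with the AWS CLI (bucket, application, environment, and version label are placeholders):
# upload the bundle, register it as an application version, then point the environment at it
aws s3 cp target/universal/my-app-1.0.zip s3://my-deploy-bucket/my-app-1.0.zip
aws elasticbeanstalk create-application-version --application-name my-app --version-label v1 --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app-1.0.zip
aws elasticbeanstalk update-environment --environment-name my-app-env --version-label v1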

How do I set up a Heroku app with app.json?

I am learning about Heroku's app.json and app-setups features. I added an app.json to my repo's root directory and configured it to setup add-ons, env vars, etc.
Now I am trying to figure out the steps for someone who has cloned my repo locally from GitHub and made some edits, to deploy it to Heroku, and for the app.json to be processed.
Heroku's help article gives an example, but I feel maybe I am missing a simpler way, because (1) it uses cURL, which many of my users might not have installed, (2) it relies on the repo being at a publicly accessible URL, rather than locally, and (3) it's more verbose than typical Heroku commands.
Is there a simpler way?
It's been a while since this question was asked, but you can look at the Heroku Button docs. It is definitely simpler: it does not use curl and can be used with private repositories.

Django Deployment Advice

I have a multi-step deployment system setup, where I develop locally, have a staging app with a copy of the production db, and then the production app. I use SVN for version control.
When deploying my production app I have been just moving the urls.py and settings.py files up a directory, deleting my django app directory with rm -rf command and then doing an svn export from the repository which creates a new django app directory with my updated code. I then move my urls.py and settings.py files back into place and everything works great.
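In shell terms, the procedure described above is roughly (paths and repository URL are placeholders):
# keep the two local files, re-export the app from SVN, then restore them
mv myapp/settings.py myapp/urls.py ../
rm -rf myapp
svn export http://svn.example.com/repo/trunk/myapp myapp
mv ../settings.py ../urls.py myapp/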
My new problem is that I am now storing user uploads in a folder inside my Django app, so I can't just remove the whole app directory anymore or I would lose all of my users' files.
What do you think my best approach is now? Would svn export --force work, since it should just overwrite all of my changed files? Should I take an entirely new approach? I am open to advice.
You may want to watch this presentation by Jacob. It can help you improve your deployment process.
I use Bitbucket as my repo host, and I can simply push from my dev box and run pull/update on the stage/prod box. Actually, I don't run them manually; I use Fabric to do them for me :).
You could use rsync or something similar to back up your uploaded files and use this backup when you deploy your project.
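A minimal sketch of that backup step, assuming the uploads live in an uploads/ folder inside the app (paths are placeholders):
# copy uploaded files out of the app directory before the export, then restore or re-sync afterwards
rsync -av /path/to/myapp/uploads/ /srv/backups/myapp-uploads/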
For deployment you could try to use buildout:
http://www.buildout.org/
http://pypi.python.org/pypi/djangorecipe
http://jacobian.org/writing/django-apps-with-buildout/
For other deployment methods see this question:
Django deployment tools
You can move your files to S3 (http://aws.amazon.com/s3/), so you will never have to worry about moving them along with your project.
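As a one-off migration of the existing uploads, something like this would do; serving them from S3 afterwards is usually handled in Django settings, e.g. with a storage backend such as django-storages (bucket name and path are placeholders):
# push the current uploads folder to an S3 bucket
aws s3 sync /path/to/myapp/uploads/ s3://my-media-bucket/uploads/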