I've seen the documentation on how to deploy to AWS S3 but am still at a bit of a loss (I also used this unfinished tutorial).
Do I need to set up an EC2 account? Where do I put the package.json file? Do only the out files (and package.json) need to be uploaded to Amazon? Does all of this go into the bucket MyWebsite.com?
Note - I'm new to web programming.
As you pointed out, my tutorial is unfinished, but I hope it helped a little. Hopefully I can clarify a few things:
You're spot on with --env static. You need to add that to everything you do with DocPad so that what you see locally is the same as what you plan to deploy, i.e. you need to use docpad run --env static too.
My 'tutorial' was about pointing AWS CloudFront at a root domain. That can be a frustrating experience, particularly if you're just starting out with static sites. If you're using CloudFront, consider switching to plain S3. You can do this easily by changing the configuration in Route 53.
My tutorial misses an important step: if you would like to use www as well as your root domain, you need to create another bucket named after your domain with www. in front of it. You then configure that bucket as a website and redirect it to the bucket holding your site. This is all configuration.
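If you'd rather script that than click through the console, here's a minimal sketch using Python and boto3 (example.com is a placeholder for your own domain):

import boto3

s3 = boto3.client("s3")

root_bucket = "example.com"     # placeholder: the bucket holding the actual site
www_bucket = "www.example.com"  # placeholder: the empty bucket that only redirects

# Create the www bucket (outside us-east-1, create_bucket also needs a
# CreateBucketConfiguration with your region) and configure it as a
# website that redirects every request to the root-domain bucket.
s3.create_bucket(Bucket=www_bucket)
s3.put_bucket_website(
    Bucket=www_bucket,
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": root_bucket, "Protocol": "http"}
    },
)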
You only need to upload everything in the out folder; package.json is not needed once the site has been generated.
Check out the source to my site and copy what you need. My whole deployment to S3 is automated through Grunt.
Have fun!
I finally found the answer to my question.
Basically, what I needed to do was install the DocPad plugin cleanurls and then run this on the command line:
docpad generate --env static
Then upload everything in the out folder to the AWS S3 bucket.
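For anyone else doing this by hand, the upload step can be scripted as well. Here's a minimal sketch in Python with boto3 (the bucket name is a placeholder):

import mimetypes
import os

import boto3

s3 = boto3.client("s3")
bucket = "mywebsite.com"  # placeholder: your S3 website bucket

# Walk the generated out/ folder and upload every file, setting
# Content-Type so browsers render the pages correctly.
for root, _dirs, files in os.walk("out"):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, "out").replace(os.sep, "/")
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        s3.upload_file(path, bucket, key, ExtraArgs={"ContentType": content_type})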
Here's the Stack Overflow answer from which I gathered this information.
Currently I'm trying to solve a mystery involving changing settings in an Elastic Beanstalk environment. I will try to explain the situation and what is happening.
We have a Jenkins server which we can tell to create a build (artifact) and upload it to an S3 bucket. The S3 bucket is watched by a CloudWatch event that triggers a CodePipeline, which deploys the artifact to multiple EC2 instances.
Now, the Elastic Beanstalk environment has some settings, for example the rolling-updates part. It seems that sometimes the DeploymentPolicy is 'magically' updated from Rolling to AllAtOnce. My colleagues and I don't understand how this setting gets changed, because no one touches this part of the GUI.
I've tried putting these settings into config files, but that didn't work out so well. I tried adding an env.yaml in the project root; Elastic Beanstalk reads the file, but I get errors that the syntax is incorrect. I tried following this document, but I can't get it to work:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-cfg-manifest.html
I also tried adding a file at .ebextensions/rolling_updates.config with the following content:
option_settings:
  aws:elasticbeanstalk:command:
    BatchSizeType: Percentage
    DeploymentPolicy: Immutable
But that didn't work either. (I tried some other values just to check whether the update worked.)
Has anybody done this before who can explain what I should do to fix this?
One thought that has crossed my mind is putting a config file in an S3 bucket and installing the AWS CLI on the Jenkins server. Then on every deployment we could update the Elastic Beanstalk config before uploading the artifact to S3. But I'm not sure that's the correct approach either.
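To make that idea concrete, the config update I have in mind would be a single API call, roughly like this (Python/boto3, the environment name is hypothetical):

import boto3

eb = boto3.client("elasticbeanstalk")

# Hypothetical environment name; force the deployment policy back to
# Rolling right before every deploy so a "magically" changed setting
# can't survive.
eb.update_environment(
    EnvironmentName="my-app-env",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy",
         "Value": "Rolling"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType",
         "Value": "Percentage"},
    ],
)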
Thanks all!
I am currently putting together an Angular application which I would like to deploy as a static website using Amazon S3.
What is the best way of doing this? If the website is live and I would like to deploy a new version, should I deploy the new version to a new bucket and change the DNS redirect? Or should I push it to a new folder inside the bucket and modify the index file to refer to that folder?
I would appreciate any further advice from anyone who has experience doing this.
There are multiple ways to handle this, but the best is to test your Angular app/site locally and then redeploy it to S3. That will result in a little downtime, but it should be fine. Other approaches, like deploying somewhere else and then pointing the DNS there, require extra resources and don't guarantee success (what if you missed some config in the new bucket, etc.?).
Here is a blog post on deploying an app to S3:
https://medium.com/wolox-driving-innovation/deploy-your-angularjs-app-to-aws-s3-with-ssl-3635a62533ab
Hope this helps!
I have searched high and low for an answer to this question but have been unable to find one.
I am building an Angular 2 app that I would like hosted on an S3 bucket. There will (possibly) be an EC2 backend, but that's another story. Ideally, I would like to be able to check my code into Bitbucket, and by some magic that eludes me, I would like S3, or EC2, or whatever to notice, via a hook for instance, that the source has changed. Of course the source would have to be built using webpack and the distributables deployed correctly.
Now this seems like a pretty straightforward request, but I can find no solution except something pertaining to WebDeploy, which I shall investigate right now.
Any ideas anyone?
Good news: AWS Lambda was made for exactly this.
You need to set up the following scenario and code to achieve your requirement:
1- Create a Lambda function; this function should do the following steps (a sketch of it follows after this list):
1-1- Clone your latest code from GitHub or Bitbucket.
1-2- Install Grunt or another builder for your Angular app.
1-3- Install the node modules.
1-4- Build your Angular app.
1-5- Copy the new build to your S3 bucket.
1-6- Finish.
2- Create an AWS API Gateway with one resource and one method pointing to your Lambda function.
3- Go to your GitHub or Bitbucket settings and add a webhook pointing at your API Gateway.
4- Enjoy life with AWS.
;)
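To make step 1 concrete, here is a structural sketch of such a handler in Python. The repo URL, bucket name, and build commands are all hypothetical, and note that the stock Lambda runtime does not include git or npm, so you would need to bundle them with your function yourself:

import os
import subprocess

import boto3

s3 = boto3.client("s3")
BUCKET = "my-angular-site"                     # hypothetical bucket name
REPO = "https://bitbucket.org/me/my-app.git"   # hypothetical repository


def handler(event, context):
    # /tmp is the only writable path inside Lambda.
    workdir = "/tmp/build"
    subprocess.run(["git", "clone", REPO, workdir], check=True)        # step 1-1
    subprocess.run(["npm", "install"], cwd=workdir, check=True)        # steps 1-2 / 1-3
    subprocess.run(["npm", "run", "build"], cwd=workdir, check=True)   # step 1-4

    # Step 1-5: copy the build output to S3.
    dist = os.path.join(workdir, "dist")
    for root, _dirs, files in os.walk(dist):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, dist).replace(os.sep, "/")
            s3.upload_file(path, BUCKET, key)
    return {"status": "deployed"}                                      # step 1-6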
Benefits:
1- You are only charged when there is a new build.
2- No need for any machine or server (EC2).
3- You only maintain one function on Lambda.
For more info:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
S3 isn't going to listen for Git hooks and fetch, build, and deploy your code. Bitbucket isn't going to build and deploy your code to S3. What you need is a service that sits in between Bitbucket and S3, one that is triggered by a Git hook, fetches from Git, builds, and then deploys your code to S3. You should search for Continuous Integration/Continuous Deployment services, which are designed to do this sort of thing.
AWS has CodePipeline. You could set up your own Jenkins or TeamCity server. Or you could look into a service like CodeShip. Those are just a few of the many services that could accomplish this task. I think any of them will require a bit of scripting on your part to get them to perform the actual webpack build and copy to S3.
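That scripting is usually small. As a rough illustration, assuming a webpack build that emits to dist/ and a placeholder bucket name, the CI job might run something like:

import subprocess

# Hypothetical build-and-deploy step a CI service would run:
# build the app with webpack, then mirror the output to S3.
subprocess.run(["npx", "webpack", "--mode", "production"], check=True)
subprocess.run(
    ["aws", "s3", "sync", "dist/", "s3://my-angular-site", "--delete"],
    check=True,
)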
I am developing a web app which will be hosted on AWS EB. So far, I am done with the entire back end and am currently working on the build for deployment.
One thing bothers me, however: my back end accesses data from an AWS S3 bucket, downloads the files locally (!), works its magic on them, and uploads the result to a different bucket.
Since the data is being downloaded to a local folder on my computer, which won't be available once deployed, how can or should I modify this to make it run on the AWS EB instance? I absolutely need to download the files (archives) to extract and modify them. How can I achieve this in the cloud? Or am I looking at it the wrong way?
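To illustrate what I mean, here is a rough sketch (Python/boto3, with hypothetical bucket names) of the download-modify-upload flow using a temporary directory instead of my local folder; is this the right direction?

import os
import tempfile

import boto3

s3 = boto3.client("s3")

# Hypothetical buckets and key.
src_bucket = "incoming-archives"
dst_bucket = "processed-archives"
key = "data/archive.zip"

# Use a temporary directory instead of a fixed folder on my machine;
# on an Elastic Beanstalk instance this resolves to writable space
# just as it does locally.
workdir = tempfile.mkdtemp()
local_path = os.path.join(workdir, "archive.zip")

s3.download_file(src_bucket, key, local_path)
# ... extract and modify the archive here ...
s3.upload_file(local_path, dst_bucket, key)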
Any help would be appreciated; please be gentle, I'm kind of new to the entire EE world...
I'm working on a project that runs on an AWS EC2 instance. I used CodeDeploy to deploy the app from GitHub to EC2, but I want to store public data such as stylesheets, JS, images, etc. on S3. Is it even possible to deploy the app to EC2 and S3 in one step? Or should I place all files on the EC2 instance only?
I've been reading AWS documentation about Elastic Beanstalk, CodeDeploy, CodePipeline, OpsWorks and others for two days, but I'm confused.
It sounds like you want two steps in your deployment: one where you update your static assets in S3, and another where you update your servers and dynamic content on EC2 instances.
Here are some options:
1. Since they are static, just have every EC2 host upload the S3 assets to your bucket in a BeforeInstall script. You would need to include the static content as part of the bundle you use with CodeDeploy.
2. Use a leader-election algorithm to do (1) from a single host. You could deploy something like ZooKeeper as part of your CodeDeploy deployment.
3. Upload your static assets separately from your CodeDeploy deployment. You might want to look at CodePipeline as a solution for a more complex multi-stage deployment (it can use CodeDeploy for your server deployment).
Whichever option you choose, you will want to make sure that you aren't just overwriting your static assets, or you'll end up in a situation where your old server code is trying to use new static assets. You should always make sure you can run both versions of your code side by side during a deployment.
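For example, one way to avoid overwriting is to upload each build of the static assets under a version-specific prefix and have the matching server code reference that prefix (all names here are illustrative):

import boto3

s3 = boto3.client("s3")
bucket = "my-static-assets"   # illustrative bucket name
version = "v42"               # e.g. the CodeDeploy deployment id

# Upload this build's assets under assets/<version>/ so the previous
# build's files stay untouched; old and new server code can then run
# side by side, each pointing at its own asset prefix.
s3.upload_file("build/app.js", bucket, f"assets/{version}/app.js")
s3.upload_file("build/app.css", bucket, f"assets/{version}/app.css")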
I won't complicate it. I'll put all the files on EC2, including CSS and JS, via CodeDeploy from GitHub, because there is no simple, ideal solution for this.