OpsWorks unable to deploy

I've currently got an OpsWorks stack using Chef 12, but I seem unable to deploy to it, even with the publicly available Git repo that Amazon itself provides.
The deploy says it completed successfully, but the app code does not exist on the server. The only error I see in the logs (even though it isn't labelled as an error) is a 404 Page Not Found, which I can only assume happens when it tries to fetch the repo.
According to this question, Opsworks deploy to custom layer, do I need to use https://github.com/aws/opsworks-cookbooks/tree/release-chef-11.4/deploy as well in my custom cookbook for the deploy to work? Is it just a case of referencing, say, the PHP deploy recipe as part of the Deploy phase of my layer?
Thanks
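For reference: Chef 12 stacks do not ship the built-in chef-11.4 deploy cookbooks, so the app has to be deployed by a custom recipe, with the app's settings read from the aws_opsworks_app data bag. A minimal sketch, assuming a Git app source and a hypothetical target directory:

app = search('aws_opsworks_app').first        # the app defined on the stack's Apps page
app_path = "/srv/#{app['shortname']}"         # hypothetical deploy target

directory app_path do
  recursive true
end

git app_path do                               # clone/sync the app source
  repository app['app_source']['url']
  revision app['app_source']['revision'] || 'master'
  action :sync
end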

Related

AWS CodeDeploy to deploy

Right now I am manually deploying WAR files onto a WildFly server (hosted on an EC2 instance), but I want to automate this and get rid of the manual deployments.
I build the application using Jenkins (on another EC2 instance), and after that I want to deploy to the WildFly server. Since I am also planning to use CodePipeline, can anyone please tell me how to deploy an application on a WildFly server using AWS CodeDeploy?
I am new to CodeDeploy, so I am not that familiar with its usage.
Thank you,
Ajit
Hi @Ajit, CodeDeploy is the perfect tool for deploying applications in AWS. But it has some limitations; for example, as the source repository for CodeBuild/CodeDeploy you can only choose one of these three:
AWS CodeCommit
AWS S3 bucket
GitHub
You can't provide your own custom repository, or Bitbucket. Please go through the example below for deploying applications using CodeDeploy. I can't explain all the steps here because there are a lot of them; the walkthrough covers app deployment using Tomcat, so replace those parts with your WildFly scripts.
Deploy Applications From S3 using Code Deploy.
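For the WildFly case specifically, the deployment bundle needs an appspec.yml at its root; a minimal sketch, where the WAR path, WildFly directory, and script names are all assumptions to adapt:

version: 0.0
os: linux
files:
  - source: target/myapp.war
    destination: /opt/wildfly/standalone/deployments
hooks:
  ApplicationStop:
    - location: scripts/stop_wildfly.sh    # hypothetical script wrapping your service stop
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_wildfly.sh   # hypothetical script wrapping your service start
      timeout: 300
      runas: root

Copying the WAR into standalone/deployments relies on WildFly's deployment scanner picking it up; if you manage WildFly as a system service, the hook scripts can simply stop and start that service.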

How to obtain a working template from the CloudFormation Management Console?

I am working to extend this solution, https://github.com/adieuadieu/serverless-chrome, to my needs.
I am using Serverless (on my laptop with Debian 9) to deploy it to AWS Lambda. I would like to use aws-sam-local, https://github.com/awslabs/aws-sam-local, to run it locally for development.
I would like to use aws-sam-local because I believe there is a difference between running this solution via serverless webpack serve --function run and via sam local start-api. The difference, I think, is the event object, which I want to contain POST or binary data (multipart file transfers). For that I have to allow binary transfer via API Gateway.
But correct me if I am wrong, because I am totally green in the AWS and Serverless field and this is my first time with these technologies.
The problem is that aws-sam-local needs a CloudFormation template to know how to run the serverless-chrome project. If I deploy to AWS and go to the CloudFormation console, I can copy that template after selecting the stack in the "Stacks" table and clicking the "Template" tab. Then I use cfn-flip to convert the JSON into YAML. In the end I get a template.yml, but running sam local start-api gives me this error:
2017/10/06 11:03:23 Connected to Docker 1.32
ERROR: No Serverless functions were found in your SAM template.
Please tell me what to do to make serverless-chrome run locally as it would run on AWS Lambda.
The templates Serverless uses to deploy are available in two places:
Remotely, in the S3 deployment bucket
Locally, in the .serverless/ directory
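To get at the local copy, a sketch (the generated file name below is the usual one for recent Serverless versions, but verify it in yours):

serverless package
ls .serverless/
cfn-flip .serverless/cloudformation-template-update-stack.json template.yml

One caveat: templates generated by Serverless (and the one copied from the CloudFormation console) define plain AWS::Lambda::Function resources, while sam local only discovers AWS::Serverless::Function resources, which would explain the "No Serverless functions were found" error. In that case a small hand-written SAM template may be needed; a minimal sketch, where the logical name, handler, and runtime are assumptions to match your project:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  Run:
    Type: AWS::Serverless::Function   # the resource type sam local looks for
    Properties:
      Handler: handler.run            # assumption: point at your built handler
      Runtime: nodejs6.10
      CodeUri: .
      Events:
        RunApi:
          Type: Api
          Properties:
            Path: /run
            Method: post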

Deploying an Angular 2 app built with webpack using Bitbucket

I have searched high and low for an answer to this question but have been unable to find one.
I am building an Angular 2 app that I would like hosted in an S3 bucket. There will (possibly) be an EC2 backend, but that's another story. Ideally, I would like to be able to check my code into Bitbucket and, by some magic that eludes me, have S3, or EC2, or whatever notice via a hook, for instance, that the source has changed. Of course the source would have to be built using webpack and the distributables deployed correctly.
Now this seems like a pretty straightforward request, but I can find no solution except something pertaining to WebDeploy, which I shall investigate right now.
Any ideas anyone?
Good news: AWS Lambda was created for this.
You need to set up the following scenario and code to achieve your requirement.
1-Create a Lambda function. This function should do the following steps (a shell sketch follows the links below):
1-1- Clone your latest code from GitHub or Bitbucket.
1-2- Install Grunt or another builder for your Angular app.
1-3- Install node modules.
1-4- Build your Angular app.
1-5- Copy the new build to your S3 bucket.
1-6- Finish.
2-Create an AWS API Gateway with one resource and one method pointing to your Lambda function.
3-Go to your GitHub or Bitbucket settings and add a webhook pointing at your API Gateway.
4-Enjoy life with AWS.
;)
Benefits:
1-You are only charged when there is a new build.
2-No need for any machine or server (EC2).
3-You only maintain one function on Lambda.
For more info:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
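A hedged shell sketch of steps 1-1 through 1-5 (the repository URL, build command, and bucket name are all hypothetical, and a real Lambda environment would also need git and node available):

#!/bin/sh
set -e
# 1-1: clone the latest code (hypothetical repository)
git clone --depth 1 https://bitbucket.org/your-team/your-app.git /tmp/app
cd /tmp/app
# 1-2/1-3: install the builder and node modules
npm install
# 1-4: build the Angular app (assumes a webpack build script in package.json)
npm run build
# 1-5: copy the new build to the S3 bucket (hypothetical bucket name)
aws s3 sync dist/ s3://your-angular-bucket/ --delete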
S3 isn't going to listen for Git hooks and fetch, build and deploy your code. BitBucket isn't going to build and deploy your code to S3. What you need is a service that sits in-between BitBucket and S3 that is triggered by a Git hook, fetches from Git, builds, and then deploys your code to S3. You need to search for Continuous Integration/Continuous Deployment services which are designed to do this sort of thing.
AWS has CodePipeline. You could set up your own Jenkins or TeamCity server. Or you could look into a service like CodeShip. Those are just a few of the many services out there that could accomplish this task. I think any of these services will require a bit of scripting on your part in order to get them to perform the actual webpack build and copy to S3.
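For instance, if you went the CodePipeline route with CodeBuild as the build stage, that scripting could be a buildspec.yml along these lines (the bucket name and npm scripts are assumptions):

version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run build   # assumes webpack is wired into this script
  post_build:
    commands:
      - aws s3 sync dist/ s3://your-angular-bucket --delete   # hypothetical bucket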

GitHub to AWS CodeDeploy

Can anybody please guide me on how to easily deploy code from GitHub to AWS using AWS CodeDeploy? I have tried my best to deploy my code, but it is not working; it gives an error every time it launches a deploy revision.
You mentioned the deployment is failing, so I'm assuming the GitHub part that automatically kicks off the deployment is working. Now, for the deployment failure: do you see the instance being deployed to marked as failed? Can you also look at the instance, check whether the CodeDeploy agent is running fine (see the commands below), and paste the log here if possible?
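A minimal check on the instance, assuming a standard agent install (the log path is the agent's default on Linux):

sudo service codedeploy-agent status
tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log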
-Surya

Deploy to Elastic Beanstalk via the CLI deploy command with Dockerrun.aws.json

I am running an Elastic Beanstalk application with multiple environments. This particular application hosts Docker containers which serve a web service.
To upload and deploy a new version of the application to one of the environments, I can go through the web client, click "Upload and Deploy", and from the file option select my latest Dockerrun.aws.json file, which references the latest version of the privately hosted container. The upload and deploy works fine and without issue.
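(For reference, a minimal single-container Dockerrun.aws.json of the kind described; the image name, port, and authentication bucket are hypothetical, and the Authentication block is what points EB at private-registry credentials stored in S3:)

{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-eb-config-bucket",
    "Key": ".dockercfg"
  },
  "Image": {
    "Name": "registry.example.com/my-service:1.2.3",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}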
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the CLI's eb deploy command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs, and it fails (the CLI cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
deploy:
  artifact: Dockerrun.aws.json
The file is uploaded and actually deploys successfully to the first batch of instances, but then it always fails to deploy to the second set of instances.
The failure error is of the flavor 'container exited unexpectedly...'.
Can anyone explain, or provide a link to, the canonical approach for using the CLI to deploy single-container Docker applications?
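(A usage note: with the artifact option above in .elasticbeanstalk/config.yml, the invocation is just the usual one; the environment name here is hypothetical:)

eb deploy my-env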
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think what was happening was that EB sends something like
sudo docker kill --signal=SIGTERM $CONTAINER_ID
instead of the more common
sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM, so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it.
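One hedged way to make such a container respond to SIGTERM is to have the entrypoint run as PID 1 and trap the signal (the service command below is hypothetical):

#!/bin/sh
# Forward SIGTERM to the child so 'docker kill --signal=SIGTERM' stops the container
trap 'kill -TERM "$child" 2>/dev/null' TERM
your-service &    # hypothetical long-running web service
child=$!
wait "$child"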
The issue wasn't the deployment methodology but rather the confusing output EB generated and my misinterpretation of it.
Since you asked for a link, here is one I initially used to successfully test and deploy Docker using the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/