Serverless and AWS deploy issue

I have to update a website on AWS using serverless deploy.
This website was not created by me; it's the first time I've worked with Serverless and AWS solutions.
I have the source code, deploy files, etc., from the last person in charge.
I run a before-deploy.js script to create all the local files and check them to see if the updates went OK. Everything's fine.
But any time I try to deploy using the simple command "serverless deploy", it fails with this error:
CREATE_FAILED: MainStaticSite (AWS::S3::Bucket)
“mywebsite.com” already exists
I don't really understand this error: I know the website already exists, but I just want to update it.
I tried a more specific command:
serverless deploy -v --stage production --region eu-west-1
But this one only shows this output:
Framework Core: 3.10.1
Plugin: 6.2.0
SDK: 4.3.2
PS
And doesn't update the website.
I changed the access keys on AWS; maybe it's because of this?
It looks like Serverless doesn't want to overwrite the existing files, but I have no idea why.
If someone has an answer or a lead, I'd be grateful.
Thank you :)
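In case it helps the next person: CREATE_FAILED with "already exists" on an S3 bucket usually means CloudFormation is creating a brand-new stack rather than updating the one that already owns the bucket, for example because the stage, region, service name, or AWS account changed (S3 bucket names are global, so a fresh stack can never re-create "mywebsite.com"). The changed keys could matter if they point at a different account. One way to check, reusing the stage and region from the question, is to compare what Serverless resolves against the stacks that actually exist:

# What stack/stage/region does this service resolve to?
serverless info --stage production --region eu-west-1

# Which stacks already exist in that region? (Serverless stack names are usually <service>-<stage>)
aws cloudformation list-stacks --region eu-west-1 --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE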

Related

Created a pipeline using AWS Copilot; the original push worked, but when I make changes to the code and push them to GitHub, they don't show up

Would appreciate any help with this:
I've followed the guide for AWS Copilot here: https://aws.github.io/copilot-cli/docs/getting-started/first-app-tutorial/ and then the guide for creating a pipeline and connecting it to GitHub here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/. That all appears to have worked, and I can view the React app I'm working on at the URL indicated in AWS.
My problem is that when I make changes to my code and then push them to the tracked GitHub branch, the changes don't appear when viewing the app at the URL. However, when I push to GitHub, the pipeline does register that a change has occurred: it indicates that a change has been made and goes through the flow of creating a new build. But whatever I try, the changes don't seem to actually show up.
I assume that I'm missing something simple here, and that for some reason Docker is building the app based on the original code. But I can't figure out why that would be. Maybe something is weird with my Dockerfile?
My Dockerfile looks like this:
# Base image with Node 16.14
FROM node:16.14
# Work out of /app inside the container
WORKDIR /app
# Make locally installed node binaries available on PATH
ENV PATH /app/node_modules/.bin:$PATH
# Copy the manifests first so the install layer is cached between builds
COPY package.json ./
COPY package-lock.json ./
RUN npm i
# Copy the rest of the source
COPY . ./
# Start the app
CMD ["npm", "run", "server"]
My understanding of how this should work is that I push new code to GitHub, that is sent to the AWS pipeline, and a new image is generated based on that code, which is then used to create a container that is hosted on ECS. But clearly I am missing something.
copilot deploy does work. I'm unsure if:
1. the problem is that my pipeline is successfully building (as it does not throw an error in the console) and then just not hosting it at the same URL as copilot deploy, or
2. the pipeline is hitting an error that just doesn't show up in the pipeline console. Digging into the logs I find this:
echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
Which seems to point towards the second option. Any suggestions on how to resolve whatever is going on in the container, if that is the problem?
The error suggests that I check the build logs, but these are the build logs. Are there more granular build logs I can examine?
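One way to chase down a manifest validation error like that (a suggestion, not verified against this setup; the service and environment names below are placeholders you can look up with copilot svc ls and copilot env ls) is to have the Copilot CLI render the CloudFormation template locally, since that is the step the pipeline's build stage is performing:

copilot svc package --name <service_name> --env <env_name>

If the manifest is invalid, this should fail on your machine with a more readable message than the pipeline log.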
When running containers in ECS, unless your container is already crashing because of an error, it often won't pick up code changes from your new image unless you force a new deployment. You can do this from the command line using the AWS CLI with the following:
aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment --profile <aws_profile_name>
Note that the profile is optional if you're using your default aws cli configuration profile.
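If you don't know the cluster and service names that Copilot created, the AWS CLI can list them (Copilot-generated names typically embed the app, environment, and service names):

aws ecs list-clusters
aws ecs list-services --cluster <cluster_name>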

AWS cloudformation: How to run cfn-nag locally in Windows

I have a CloudFormation template with all the resources and details for the project.
I have cfn-lint set up locally and it runs perfectly fine. However, when I push code changes, the build fails at the deployment stage due to cfn-nag reporting some simple issues that could be fixed.
I'm using a Windows machine and I need a way to run cfn-nag locally, so that I can check it just like cfn-lint and fix issues locally instead of waiting 40 minutes for the build to reach the deployment stage.
I referred to several posts online and found the two below helpful:
https://stelligent.com/2018/03/23/validating-aws-cloudformation-templates-with-cfn_nag-and-mu/
https://github.com/stelligent/cfn_nag
What is the difference between cfn-nag and cfn-lint, and why is lint not failing on what cfn-nag is complaining about?
The above links have some instructions involving Ruby and Brew, but I'm using Node.js and felt lost. Please help.
cfn-nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure, e.g.:
- IAM rules that are too permissive (wildcards)
- Security group rules that are too permissive (wildcards)
- Access logs that aren't enabled
- Encryption that isn't enabled
cfn-lint scans the AWS CloudFormation template by processing a collection of rules, where every rule handles a specific check or validation of the template. It validates against the AWS CloudFormation resource specification.
This collection of rules can be extended with custom rules using the --append-rules argument.
Examples: whitespace, alignment (YAML), type checks, valid values for resource properties, and other best practices.
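To see the difference in practice, both tools can be pointed at the same template. A sketch, where template.yml is a placeholder for your file and the docker invocation assumes the image pulled in the steps below (%cd% is the current directory in cmd.exe; use ${PWD} in PowerShell):

# Spec/syntax validation:
cfn-lint template.yml
# Security findings like the ones listed above:
docker run --rm -v "%cd%":/templates stelligent/cfn-nag /templates/template.yml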
Those two links you provided above have all the information needed, just not directly for a Node.js developer using a Windows machine.
Step 1: Pull the docker image stelligent/cfn-nag.
Step 2: Add a script to your package.json for cfn-nag, e.g.:
"scripts": {
  "cfn:nag": "cfn-nag"
}
If you're using docker-compose.yml, add the cfn-nag image details like below:
cfn-nag:
  image: "stelligent/cfn-nag"
  volumes:
    - ./path_of_cfn_file_to_copy:/path_to_copy_to
  command: ${COMMAND:-/path_to_copy_to/cfn_file}
Then just set the script in package.json to run via docker-compose:
"cfn:nag": "docker-compose run --rm cfn-nag"

Error when deploying to codedeploy in AWS ec2

Error: Script at specified location: /scripts/execute-deploy.sh
failed with error Errno::ENOENT with message No such file or directory -
/opt/codedeploy-agent/deployment-root/0e164065-68f3-4cac-b540-6b70eaea7b0d/d-RSJV81S50/deployment-archive/scripts/execute-deploy.sh
Project on Github
I am trying to upload projects to an AWS EC2 instance, build them, and deploy them.
Right now, you can see the structure in the picture below.
I checked that the .zip file is saved without error in S3.
The error shown above occurs when it's building in CodeDeploy.
I tried googling, I tried to create a CodeDeploy application, I tried searching. Nothing has worked so far.
It says it could not locate the file, but the file actually exists in that directory.
This is my appspec.yml:
I really want to find a solution. Any help will be appreciated. I've been trying to solve this by myself for 4 days now.
Have you tried to manually execute your deployment in CodeDeploy applications via the AWS Console?
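A common cause of this Errno::ENOENT is a mismatch between the path in the appspec.yml hooks section and where the script actually sits inside the deployment archive: CodeDeploy resolves hook locations relative to the root of the unpacked .zip, and appspec.yml itself must live at that root. For comparison, a minimal appspec.yml with hypothetical destination and hook paths:

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/execute-deploy.sh
      timeout: 300
      runas: ec2-user

If the .zip was created with the project folder nested one level down (project/scripts/... instead of scripts/... at the root), the agent won't find the script even though it exists on disk.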

How to delete AWS ECR repositories which contain images using Ansible

I want to delete an AWS ECR repository using Ansible.
My Ansible version is 2.4.1.0, and it "should" support this, as you can look up here: http://docs.ansible.com/ansible/latest/ecs_ecr_module
However, it doesn't work as intended, because my repository still contains docker images.
Here's the code snippet:
- name: destroy-ecr-repos
  ecs_ecr: name=jenkins-app state=absent
The resulting error message is:
...
The error was: RepositoryNotEmptyException: An error occurred (RepositoryNotEmptyException) when calling the DeleteRepository operation: The repository with name 'jenkins-app' in registry with id 'xyz' cannot be deleted because it still contains images
...
In the AWS Console it works perfectly fine. There's just a warning text which reminds you that there are still images left in the repository. But you're still able to force the deletion.
And now my question(s):
Is it somehow possible to force the deletion of the repository, including its images?
... OR ...
Can I delete the images with another tool separately before deleting the repository?
Maybe there simply is no implementation on the Ansible side and I have to use the 'shell' module instead (and maybe open a feature request for that).
I'm very grateful for any advice.
First things first: thanks to @vikas027.
Solution from their answer:
https://docs.aws.amazon.com/cli/latest/reference/ecr/delete-repository.html#examples
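For reference, the forced variant from that page, which deletes the repository even though it still contains images (using the repository name from this question):

aws ecr delete-repository --repository-name jenkins-app --force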
History:
OK, I've now figured out that there is currently no Ansible functionality which supports the implicit deletion of images when deleting ECR repositories.
BUT
I've implemented a workaround that, despite its ugliness, works for me.
I simply delete the images via the shell module, using the AWS CLI, before actually removing the ECR repo.
Here's the snippet to do so:
- name: Delete remaining images in our repositories
  shell: |
    aws ecr list-images --repository-name jenkins-app --query 'imageIds[*].imageDigest' --output text | tr '\t' '\n' | while read imageDigest; do aws ecr batch-delete-image --repository-name jenkins-app --image-ids imageDigest=$imageDigest; done

- name: destroy-ecr-repo jenkins-app
  ecs_ecr: name=jenkins-app state=absent
Hope that helps someone who faces this issue before Ansible implements a way to delete images via a built-in module.
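A later note, worth verifying against your Ansible version: newer Ansible releases ship ecs_ecr in the community.aws collection with a force_absent option which, if available to you, deletes the images together with the repository and makes the shell workaround above unnecessary:

- name: destroy-ecr-repo jenkins-app
  community.aws.ecs_ecr:
    name: jenkins-app
    state: absent
    force_absent: true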

AWS Elastic Beanstalk - Starting SWF Background Workers

I have been trying to find the best way to run background jobs using PHP on AWS Elastic Beanstalk, and after many hours of searching on Google and SO, I believe that one good solution is using SWF and activity workers.
I found this example buried in the aws-sdk-for-php: https://github.com/amazonwebservices/aws-sdk-for-php/tree/master/_samples/AmazonSimpleWorkflow/cron
The read-me file says:
To run this sample, you need to execute three scripts from the command line in separate terminal/console windows
and
Note that the start_cron_example_workflow.php script will exit quickly
while the decider and activity worker scripts keep running until you
manually terminate them.
The decider and activity worker will loop "forever", and trying to run these in EB is what I'm having trouble doing.
In my .ebextensions directory I have a file that executes these files:
container_commands:
  01background_task:
    command: "php -f start_cron_example_activity_workers.php"
  02background_task:
    command: "php -f start_cron_example_workflow_workers.php"
But I get the following error messages:
ERROR
Failed to deploy application version.
ERROR
Some instances have not responded to commands. Responses were not received from [i-a5417ed4].
Any way I can do this using config files? How can I make this work in AWS EB without introducing a single point of failure?
Thank you.
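For what it's worth, the error is consistent with how container_commands work: they run synchronously during deployment, so a command that loops forever prevents the deployment from ever completing, and Elastic Beanstalk eventually reports that instances did not respond. One common workaround (a sketch, untested here; the log paths are arbitrary placeholders) is to background the workers so the deployment can finish:

container_commands:
  01background_task:
    command: "nohup php -f start_cron_example_activity_workers.php > /var/log/activity_worker.log 2>&1 &"
  02background_task:
    command: "nohup php -f start_cron_example_workflow_workers.php > /var/log/workflow_worker.log 2>&1 &"

This still leaves a single point of failure, since the workers die with the instance; a dedicated worker environment or a process supervisor would be more robust.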
You might consider using a service like IronWorker — this is specifically designed for what you are trying to do and will probably work better than putting together your own solution on a micro instance.
I have not used Iron.io yet, but was evaluating it as I am looking to move my stuff over to AWS so I need to have cron jobs handled as well.
Have you taken a look at the Fat Controller? It can daemonise anything. There's documentation and examples on the website: http://fat-controller.sourceforge.net