How to see test results in CodeDeploy - amazon-web-services

I want to have a test stage in CodePipeline. To do that, I created a CodeDeploy action as a stage of the pipeline. The appspec.yml is:
version: 0.0
os: linux
files:
  - source: test
    destination: /mycodedeploy/test
hooks:
  AfterInstall:
    - location: test/run_test.sh
      timeout: 3600
The deployment completes successfully, except I do not see the test output of test/run_test.sh in the AWS console.
Where can I see a test result like the following?
"Ran 1 test in 0.000s
OK"

You won't be able to see the logs from your script in the AWS console unless you configure your instance to publish the logs to CloudWatch.
You should be able to see the logs on the host here: /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log. If you don't publish them to CloudWatch, you'll have to manually look on the host. Here's more information on CodeDeploy agent logging.
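If you do want the script output in the AWS console, one option (a sketch, assuming the unified CloudWatch agent is installed and the instance profile allows logs:CreateLogStream and logs:PutLogEvents; the log group name here is a placeholder) is to have the agent ship the deployment log file:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log",
            "log_group_name": "codedeploy-deployments",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

The script output captured in that file then becomes searchable in the CloudWatch Logs console.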

Related

before install: CodeDeploy agent was not able to receive the lifecycle event. Check the CodeDeploy agent logs on your host and make sure the agent is

I have set up a pipeline, but I get the following error during deployment:
before install CodeDeploy agent was not able to receive the lifecycle event. Check the CodeDeploy agent logs on your host and make sure the agent is running and can connect to the CodeDeploy server.
The CodeDeploy agent is running, but I do not know what the problem is. I checked the CodeDeploy logs:
[ec2-user@ip-172-31-255-11 ~]$ sudo cat /var/log/aws/codedeploy-agent/codedeploy-agent.log
2022-09-27 00:00:02 INFO [codedeploy-agent(3694)]: [Aws::CodeDeployCommand::Client 200 45.14352 0 retries] poll_host_command(host_identifier:"arn:aws:ec2:us-east-1:632547665100:instance/i-01d3b4303d7c9c948")
2022-09-27 00:00:03 INFO [codedeploy-agent(3694)]: Version file found in /opt/codedeploy-agent/.version with agent version OFFICIAL_1.4.0-2218_rpm.
2022-09-27 00:00:03 INFO [codedeploy-agent(3694)]: Version file found in /opt/codedeploy-agent/.version with agent version OFFICIAL_1.4.0-2218_rpm.
I was also unlucky enough to hit this problem today.
Please use this guide and look at the CodeDeploy agent logs on your compute platform instance (probably EC2).
In my case, it turned out that I did not have an AppSpec file added to the project.
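For reference, a minimal AppSpec for an EC2/on-premises deployment looks like this (a sketch; the destination path and script name are placeholders, not taken from the question):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp        # placeholder destination
hooks:
  ApplicationStart:
    - location: scripts/start.sh   # placeholder script in your bundle
      timeout: 300
```

The file must be named appspec.yml and sit at the root of the deployment bundle, or the deployment fails before any hook runs.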

Google Cloud Build shows no logs

I have a Google Cloud Trigger that triggers cloud build on Github push.
The problem is that Cloud Build shows no logs. I followed this doc but cannot find any logs in either the Cloud Build log or the Logs Explorer.
This is my cloudbuild.yaml
steps:
  # install dependencies
  - name: node:16
    entrypoint: yarn
    args: []
  # create .env file
  - name: 'ubuntu'
    args: ['bash', './makeEnv.sh']
    env:
      - 'GCP_SHOPIFY_STOREFRONT_ACCESS_TOKEN=$_GCP_SHOPIFY_STOREFRONT_ACCESS_TOKEN'
      - 'GCP_SHOPIFY_DOMAIN=$_GCP_SHOPIFY_DOMAIN'
  # build code
  - name: node:16
    entrypoint: yarn
    args: ["build"]
  # deploy to gcp
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --promote']
timeout: "1600s"
options:
  logging: CLOUD_LOGGING_ONLY
The build failed, but it actually creates a subsequent App Engine build that successfully deploys a version to App Engine. That version, however, is not auto-promoted.
I do not have all the details, so I am trying to help with the information mentioned above.
As I can see, you are using CLOUD_LOGGING_ONLY and are not able to see the logs in the Logs Explorer, assuming you have all the permissions needed to access the logs.
I would suggest checking that the service account you are using for Cloud Build has at least the roles/logging.logWriter role (or the logging.logEntries.create permission), if it is not the default Cloud Build service account project-number@cloudbuild.gserviceaccount.com.
Hope this helps :)
In my case, looking at the Google Cloud Build service account (project-number@cloudbuild.gserviceaccount.com) in the Google Cloud IAM console, it was missing the role Cloud Build Service Account. I was also missing logs.
This fixed symptom of a cloud function deploy with the message:
(gcloud.functions.deploy) OperationError: code=3, message=Build failed: {
  "metrics": {},
  "error": {
    "buildpackId": "",
    "buildpackVersion": "",
    "errorType": "OK",
    "canonicalCode": "OK",
    "errorId": "",
    "errorMessage": ""
  }
}

AWS CodePipeline Action execution failed

I'm trying to hook my GitHub repo with S3 so every time there's a commit, AWS CodePipeline will deploy the ./<path>/public folder to a specified S3 bucket.
So far in my pipeline, the Source works (hooked to GitHub and picks up new commits) but the Deploy failed because: Action execution failed
BundleType must be either YAML or JSON.
This is how I set them up:
CodePipeline
Action name: Source
Action provider: GitHub
Repository: account/repo
Branch: master
GitHub webhooks
CodeDeploy
Compute type: AWS Lambda
Service role: myRole
Deployment settings: CodeDeployDefault.LambdaAllAtOnce
IAM Role: myRole
AWS Service
Choose the service that will use this role: Lambda / CodeDeploy
Select your use case: CodeDeploy
Policies: AWSCodeDeployRole
I understand that there must be a buildspec.yml file in the root folder. I've tried using a few files I could find but they don't seem to work. What did I do wrong or how should I edit the buildspec file to do what I want?
Update
Thanks to @Milan Cermak. I understand I need to do:
CodePipeline:
Stage 1: Source: hook with GitHub repo. This one is working.
Stage 2: Build: use CodeBuild to grab only the wanted folder using a buildspec.yml file in the root folder of the repo.
Stage 3: Deploy: use
Action Provider: S3
Input Artifacts: OutputArtifacts (result of stage 2).
Bucket: the bucket that hosts the static website.
CodePipeline works. However, the output contains only the files (.html), not the folders nested inside the public folder.
I've checked this and figured out how to remove the path of a nested folder with discard-paths: yes, but I'm unable to get all the sub-folders inside the ./<path>/public folder. Any suggestions?
CodeBuild uses a buildspec, but CodeDeploy uses an appspec.
Is there any appspec file?
You shouldn't use CodeDeploy, as that's a service for automation of deployments of applications, but rather CodeBuild, which executes commands and prepares the deployment artifact for further use in the pipeline.
These commands are in the buildspec.yml file (typically in the root directory of the repo, but it's configurable). For your use case, it won't be too complicated, as you're not compiling anything or running tests, etc.
Try this as a starting point:
version: 0.2
phases:
  build:
    commands:
      - ls
artifacts:
  files:
    - public/*
The phases section is required, that's why it's included (at least, thanks to the ls command, you'll see what files are present in the CodeBuild environment), but it's not interesting for your case. What is interesting is the artifacts section. That's where you define what is the output of the CodeBuild phase, i.e. what gets passed further to the next step in the pipeline.
Depending on how you want to have the files structured (for example, do you want to have the public directory also in the artifact or do you only want to have the files themselves, without the parent dir), you might want to use other configuration that's possible in the artifacts section. See the buildspec reference for details.
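For instance, to keep the nested sub-folders while dropping the leading path, something along these lines should work (a sketch; base-directory and the recursive **/* glob are standard buildspec options, but adjust base-directory to wherever your public folder actually lives):

```yaml
artifacts:
  files:
    - '**/*'              # recursive glob: include files in all sub-folders
  base-directory: public  # artifact paths become relative to this directory
```

With this, the artifact contains the contents of public/ with the sub-folder structure intact, without needing discard-paths.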
Remember to use the output artifact of the CodeBuild step as the input artifact of the Deploy to S3 step.
Buildspec is for CodeBuild as t_yamo pointed out.
You are using CodeDeploy which uses an appspec.yml file, which looks something like this for my config.
version: 0.0
os: linux
files:
  - source: /
    destination: /path/to/destination
hooks:
  BeforeInstall:
    - location: /UnzipResourceBundle.sh
  ApplicationStart:
    - location: /RestartServer.sh
      timeout: 3600
UnzipResourceBundle.sh is just a bash script which can be used to do any number of things.
#!/bin/bash
# Do something
You can find a sample for the AppSpec.yml file from Amazon Documentation here - https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-example.html#appspec-file-example-lambda
CodePipeline recently announced a deploy to S3 action: https://aws.amazon.com/about-aws/whats-new/2019/01/aws-codepipeline-now-supports-deploying-to-amazon-s3/

Logs to AWS Cloudwatch from Docker Containers

I have a few Docker containers running with docker-compose on an AWS EC2 instance. I am looking to get the logs sent to AWS CloudWatch. I was also having issues getting the logs from Docker containers to AWS CloudWatch from my Mac running Sierra, so I've moved over to EC2 instances running the Amazon AMI.
My docker-compose file:
version: '2'
services:
  scraper:
    build: ./Scraper/
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-west-1"
        awslogs-group: "permission-logs"
        awslogs-stream: "stream"
    volumes:
      - ./Scraper/spiders:/spiders
When I run docker-compose up I get the following error:
scraper_1 | WARNING: no logs are available with the 'awslogs' log driver
but the container is running. No logs appear in the AWS CloudWatch stream. I have assigned an IAM role to the EC2 instance that the Docker containers run on.
I am at a complete loss now as to what I should be doing and would appreciate any advice.
The awslogs driver works without using ECS.
You need to configure AWS credentials (the user should have an IAM role with the appropriate CloudWatch Logs permissions).
I used this tutorial, it worked for me: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
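The core of that tutorial: the awslogs driver authenticates as the Docker daemon, not as your shell user, so on a systemd host you can hand credentials to the daemon with a drop-in unit (a sketch; the key values are placeholders, and an instance profile on EC2 makes this unnecessary):

```ini
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID"
Environment="AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY"
```

Then reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker.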
I was getting the same error, but when I checked CloudWatch, I was able to see the logs there. Did you check whether the log group has been created in CloudWatch? Docker doesn't support console logging when we use custom logging drivers.
The section on limitations here says that the docker logs command is only available for the json-file and journald drivers, and that's true for built-in drivers.
When trying to get logs from a driver that doesn't support reading, nothing hangs for me; docker logs just prints this:
Error response from daemon: configured logging driver does not support reading
There are 3 main steps involved:
Create an IAM role/user
Install the CloudWatch agent
Modify the docker-compose file or docker run command
I have referred to an article here with steps to send the Docker logs to AWS CloudWatch.
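If you would rather not repeat the logging options per container, the daemon can also default every container to CloudWatch via /etc/docker/daemon.json (a sketch; the region and group values are placeholders, and awslogs-create-group spares you creating the log group by hand — restart Docker afterwards):

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-west-1",
    "awslogs-group": "permission-logs",
    "awslogs-create-group": "true"
  }
}
```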
The AWS logs driver you are using, awslogs, is for use with EC2 Container Service (ECS). It will not work on plain EC2. See the documentation.
I would recommend creating a single node ECS cluster. Be sure the EC2 instance(s) in that cluster have a role, and the role provides permissions to write to Cloudwatch logs.
From there anything in your container that logs to stdout will be captured by the awslogs driver and streamed to Cloudwatch logs.

AWS can't deregister EC2 from ELB during deploy

I'm using CodeDeploy to deploy new code to my EC2 instances.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: v1
    destination: /somewhere/v1
hooks:
  BeforeInstall:
    - location: script/deregister_from_elb.sh
      timeout: 400
But I'm getting the following error:
LifecycleEvent - BeforeInstall
Script - v1/script/deregister_from_elb.sh
[stderr]Running AWS CLI with region: eu-west-1
[stderr]Started deregister_from_elb.sh at 2017-03-17 11:44:30
[stderr]Checking if instance i-youshouldnotknow is part of an AutoScaling group
[stderr]/iguessishouldhidethis/common_functions.sh: line 190: aws: command not found
[stderr]Instance is not part of an ASG, trying with ELB...
[stderr]Automatically finding all the ELBs that this instance is registered to...
[stderr]/iguessishouldhidethis/common_functions.sh: line 552: aws: command not found
[stderr][FATAL] Couldn't find any. Must have at least one load balancer to deregister from.
Any ideas why this is happening? I suspect that the message "aws: command not found" could be the issue, but I have the AWS CLI installed:
~$ aws --version
aws-cli/1.11.63 Python/2.7.6 Linux/3.13.0-95-generic botocore/1.5.26
Thanks very much for your help