After installing the CloudWatch agent on an Amazon Linux 2 EC2 instance, I ran cloudwatch-agent-ctl status.
The command shows the status as stopped, along with the following message:
cwagent-otel-collector will not be started as it has not been configured yet
I'm not sure if the above message is what's keeping the CWAgent from starting. Any pointers on how to find out why my CWAgent won't start?
Before you can start your CloudWatch agent, you must configure it. From the docs:
Before running the CloudWatch agent on any servers, you must create a CloudWatch agent configuration file.
You can follow the docs on how to set up the configuration file before running the agent.
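As a minimal sketch (paths as on an Amazon Linux 2 install; config.json is where the wizard writes by default), you can generate a config with the bundled wizard and then start the agent with it:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# fetch-config validates the file, installs it, and -s starts the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 \
    -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s

The cwagent-otel-collector message refers to a separate, optional collector; once the main agent has a configuration, it should start regardless of that message.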
Related
I have a Python script that runs on an EC2 server. What is the easiest way for me to see print statements from that script? I tried viewing the system log but I don't see anything there, and I can't find anything in CloudWatch. Thanks!
Standard output from arbitrary applications running on EC2 doesn't appear in CloudWatch Logs.
You can install the CloudWatch Logs Agent, configure it to collect logs from given locations, and then configure your app to log to one of those locations.
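A rough sketch, assuming the older CloudWatch Logs agent (awslogs) and a hypothetical /var/log/myapp.log (the log group and stream names are placeholders):

# add a hypothetical collection entry to the awslogs config
sudo tee -a /etc/awslogs/awslogs.conf >/dev/null <<'EOF'
[/var/log/myapp.log]
file = /var/log/myapp.log
log_group_name = /myapp/application
log_stream_name = {instance_id}
EOF
sudo service awslogs restart
# send the script's print output to the collected file
# (make sure the script's user can write to this path)
python myscript.py >> /var/log/myapp.log 2>&1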
It is possible to send application logs from an EC2 instance to CloudWatch directly. To do that, follow these steps:
1) Create an IAM role with the relevant permissions and attach it to the Linux instance.
2) Install the CloudWatch agent on the instance.
3) Prepare the configuration file on the instance (see the sketch after the reference link below).
4) Start the CloudWatch agent service on the instance.
5) Monitor the logs using the CloudWatch web console.
For your reference:
http://medium.com/tensult/to-send-linux-logs-to-aws-cloudwatch-17b3ea5f4863
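A minimal sketch of step 3 for the unified CloudWatch agent, again with a hypothetical /var/log/myapp.log and placeholder group/stream names:

# write a logs-only agent config (step 3)
sudo tee /opt/aws/amazon-cloudwatch-agent/bin/config.json >/dev/null <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp.log",
            "log_group_name": "myapp",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF
# load the config and start the agent (step 4)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s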
After deploying Spinnaker on EC2, Clouddriver doesn't start. I tried the same on a local machine and the result is the same. I'm trying to run 1.6.1 on Ubuntu 16.04.
I am using S3 as storage and AWS as the cloud provider.
After deployment the Spinnaker UI is accessible, but when creating a new application the window hangs and an error message appears in the browser's console regarding localhost:8084/credentials and port 7002.
I tried to send a curl request to localhost:7002 from the server, but the connection is refused. Nothing is listening on port 7002, though all the other services' ports are. Clouddriver starts and then enters a failed state (after about 30 seconds).
For deployment I followed the guide on the official website.
Also, I can't find logs for the services under /var/log/spinnaker/<service>/; there are logs only under /var/log/spinnaker/halyard/.
All policies/roles/users have been created in AWS properly as described in the official setup guide; I double-checked. Still facing the issue.
Am I missing anything?
Here is the error from the browser console when trying to create a new application:
GET http://localhost:8084/credentials?expand=true 500 () angular.js:14525 Possibly unhandled rejection: {"data":{"error":"Internal Server Error","exception":"com.google.common.util.concurrent.UncheckedExecutionException","message":"retrofit.RetrofitError: Failed to connect to localhost/127.0.0.1:7002","status":500,"timestamp":1523484058259},"status":500,"config":{"method":"GET","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"http://localhost:8084/credentials","cache":true,"params":{"expand":true},"timeout":65000,"headers":{"X-RateLimit-App":"deck","Accept":"application/json, text/plain, */*"},"withCredentials":true},"statusText":""} undefined
I ran some more tests later. Here are the results.
1) Deployed Spinnaker without S3 storage or any cloud provider: Clouddriver works.
2) Added S3 as persistent storage: Clouddriver still works. I opened the UI, created a dummy project, and saw that files had been created in the S3 bucket under the front50 folder. Everything fine.
3) Added the AWS configuration: created a user in AWS, and ran this command with the appropriate changes
hal config provider aws edit --access-key-id ${ACCESS_KEY_ID} \
  --secret-access-key
and ran this command with the appropriate changes
hal config provider aws account add $AWS_ACCOUNT_NAME \
  --account-id ${ACCOUNT_ID} \
  --assume-role role/spinnakerManaged
After checking the AWS config with hal config provider aws, the value of defaultAssumeRole=0.
After hal deploy apply, Clouddriver again doesn't start and I cannot create an application from the UI; the window loads infinitely.
This is the option for dev Spinnaker: the localdebian install type never worked for me, as it contains extra dependencies.
Please use a Kubernetes cluster for the installation, or use Minnaker for a quick PoC of OSS Spinnaker. It runs on a K3s cluster.
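If you'd rather keep debugging the localdebian install first, a rough sketch of what to check (this assumes the services are systemd-managed; the unit name and port are taken from the question):

sudo systemctl status clouddriver        # is the service crash-looping?
sudo journalctl -u clouddriver -n 100    # last 100 lines of service output
curl -s http://localhost:7002/health     # Clouddriver's health endpoint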
As part of CloudWatch log cleanup, I stopped the agent on all servers and am deleting the log groups.
Command used:
sudo service awslogs stop
But the logs still appear in the console even after stopping the agent.
Is this expected behaviour?
How can I delete all the log groups in this case?
The logs aren't going to be automatically deleted just because you stopped sending new log data; they stay until their retention policy expires, and the default retention is "never expire". You'll have to delete the log groups manually, either in the web console or with the AWS CLI.
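A sketch of the CLI route (destructive, deletes every log group; assumes your credentials and region are already configured):

# list every log group name, then delete each one
aws logs describe-log-groups --query 'logGroups[].logGroupName' --output text |
  tr '\t' '\n' |
  while read -r group; do
    aws logs delete-log-group --log-group-name "$group"
  done

If you'd rather keep the groups but expire old data, aws logs put-retention-policy is the non-destructive alternative.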
I can see the logs in the AWS Console under CodeDeploy when I select the deployment and then click choose events, but they appear to be truncated. If I SSH into the instance, where are those CodeDeploy deployment logs located?
I see logs in /var/log/aws/codedeploy-agent, but the logs there don't match what's in CodeDeploy.
I'm running on Amazon Linux.
I've figured it out. The deployment logs are found in:
/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
Each deployment also keeps its logs in:
/opt/codedeploy-agent/deployment-root/88f9d1cf-4ee4-4b0c-9458-b1d41b8d4b48/d-TTUV9E8BG/logs/script.log where 88f9d1cf-4ee4-4b0c-9458-b1d41b8d4b48/d-TTUV9E8BG is different for each deployment.
On Windows this appears to be:
C:\ProgramData\Amazon\CodeDeploy\<DEPLOYMENT-GROUP-ID>\<DEPLOYMENT-ID>\logs\scripts.log
Source: https://github.com/aws/aws-codedeploy-agent/issues/8
Linux Deployment Logs (Not the same as original answer):
/var/log/aws/codedeploy-agent/codedeploy-agent.log
Linux Script Logs:
/opt/codedeploy-agent/deployment-root/deployment-group-ID/deployment-ID/logs/scripts.log
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-view-logs.html
If you've found this question and you're looking for Windows logs, they are next to the userdata logs, in
C:\ProgramData\Amazon\CodeDeploy\log\
C:\ProgramData\Amazon\CodeDeploy\deployment-logs\codedeploy-agent-deployments.log
The \log\ folder contains the logs for the agent itself, showing that it's running and checking for updates. The deployment-logs folder contains the output of the deployment scripts; that's probably the one you want.
(ProgramData is a hidden folder which requires administrative permissions to access.)
Log in to your EC2 instance with the command
ssh -i {location of your KeyPair.pem file} ec2-user@10.xxx.xx.xxx (your instance IP here)
Go to the location below; you will find the logs there:
/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
Use the command
cat codedeploy-agent-deployments.log
to open the log file on the command line itself, if your EC2 instance is a Linux instance and you are working on Linux.
Copy and paste it somewhere on your local machine so you can explore the logs further without any hassle.
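Instead of copy/paste, you can also pull the file down with scp, along the same lines:

scp -i {location of your KeyPair.pem file} \
  ec2-user@10.xxx.xx.xxx:/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log .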
I am trying to set up AWS CodeDeploy for my PHP web app. I have created a CodeDeploy app and a deployment group in the AWS console. I have created the necessary revision bundle with the appspec YAML file. The revision bundle is stored on Amazon S3.
When I click the 'Deploy this revision' button in the AWS console it gives me a 'no hosts succeeded' error. I went through the Technical FAQ and could not find any answers. How can I counter this error?
UPDATE: I now understand that this error has something to do with the Minimum Healthy Hosts count. But I still don't understand how AWS calculates the health of a host.
Basically what it's saying is "the CodeDeploy service on your EC2 instance is not running"...
As for why a deployment failed: host health is fairly simple. A host is healthy if it succeeded in deploying the last deployment to it. A host is unhealthy if it failed. A host is unknown if it was skipped and had no previous deployment.
There are other aspects of host health that affect the order hosts are deployed to in the next deployment, but those aren't going to affect your deployment failing with "No hosts succeeded".
A host can fail its individual deployment if any of its lifecycle events fail. A lifecycle event can fail due to a service-side timeout waiting for the agent to respond, or because the host agent reports an error executing the command. You can check the host agent log for more detail on exactly why the host agent reported a failure.
If you are hitting the service-side timeouts, you should check that the host agent is running and is able to poll for commands correctly. You might have accidentally restricted access in your VPC configuration, or not granted the instance appropriate permissions to poll for commands in its instance profile.
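A quick sketch for checking both, assuming Amazon Linux and the agent log path mentioned elsewhere in this thread:

# is the agent process alive?
sudo service codedeploy-agent status
# any polling or credential errors in the agent log?
tail -n 50 /var/log/aws/codedeploy-agent/codedeploy-agent.log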
This error message means the CodeDeploy agent is not running on the EC2 instances targeted by your deployment group.
1) Download the latest version of the CodeDeploy agent from S3 (choose your region)
PS> Read-S3Object -BucketName aws-codedeploy-eu-west-1 -Key latest/codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
2) Install the CodeDeploy agent
cmd> c:\temp\codedeploy-agent.msi /quiet /l c:\temp\host-agent-install-log.txt
3) Start the CodeDeploy agent service
PS> Start-Service -Name codedeployagent
AWS CodeDeploy guide: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-install-windows
I just ran into this issue myself. My solution was to run:
ntpdate-debian
If you are running CentOS it's something like
ntpdate pool.ntp.org
For me the clock was off, and that was causing issues with the CodeDeploy agent.
Now, if this doesn't solve your problem, first make sure the problem really is that your CodeDeploy agent is not registering. I have had this issue before, and it was because one of my instances was in a failed state from a botched deployment, so be sure to double-check (ELB status, tests, etc.).
Then you should enable logging for your CodeDeploy agent by setting log_aws_wire and verbose to true in /etc/codedeploy-agent/conf/codedeployagent.yml, and then restarting the CodeDeploy agent. Tail the logs and you should see the reason for your problems.
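A sketch of that (in codedeployagent.yml the keys are prefixed with a colon; check your file's current values before editing blindly):

# flip both flags, restart the agent, and watch the log
sudo sed -i 's/:log_aws_wire: false/:log_aws_wire: true/; s/:verbose: false/:verbose: true/' \
    /etc/codedeploy-agent/conf/codedeployagent.yml
sudo service codedeploy-agent restart
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log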