I can see the logs in the AWS Console under CodeDeploy when I select the deployment and then view its events, but they appear to be truncated. If I SSH into the instance, where are those CodeDeploy deployment logs located?
I see logs in /var/log/aws/codedeploy-agent, but the logs there don't match what's in CodeDeploy.
I'm running on Amazon Linux.
I've figured it out. The deployment logs are found in:
/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
Each deployment also keeps its logs in:
/opt/codedeploy-agent/deployment-root/88f9d1cf-4ee4-4b0c-9458-b1d41b8d4b48/d-TTUV9E8BG/logs/script.log, where 88f9d1cf-4ee4-4b0c-9458-b1d41b8d4b48/d-TTUV9E8BG (the deployment-group ID followed by the deployment ID) is different for each deployment.
On Windows this appears to be:
C:\ProgramData\Amazon\CodeDeploy\<DEPLOYMENT-GROUP-ID>\<DEPLOYMENT-ID>\logs\scripts.log
Source: https://github.com/aws/aws-codedeploy-agent/issues/8
Linux Deployment Logs (not the same as in the original answer):
/var/log/aws/codedeploy-agent/codedeploy-agent.log
Linux Script Logs:
/opt/codedeploy-agent/deployment-root/deployment-group-ID/deployment-ID/logs/scripts.log
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-view-logs.html
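Since the deployment-group ID and deployment ID in that path change with every deployment, a quick way to inspect the logs on the instance is plain shell; the glob below just mirrors the directory layout above:

# Consolidated log of all deployments handled by the agent
sudo tail -n 100 /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log

# Most recently modified per-deployment script log, whatever its IDs are
# (run the glob as root, since deployment-root is usually root-only)
sudo sh -c 'ls -t /opt/codedeploy-agent/deployment-root/*/*/logs/scripts.log | head -n 1'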
If you've found this question and you're looking for the Windows logs, they are next to the user-data logs, in
C:\ProgramData\Amazon\CodeDeploy\log\
C:\ProgramData\Amazon\CodeDeploy\deployment-logs\codedeploy-agent-deployments.log
The \log\ folder contains the logs for the agent itself, showing that it's running and checking for updates. The deployment-logs file contains the output of the deployment scripts; that's probably the one you want.
(ProgramData is a hidden folder that requires administrative permissions.)
Log in to your EC2 instance with the command
ssh -i {path to your key pair .pem file} ec2-user@10.xxx.xx.xxx (your instance IP here)
then go to the location below, where you will find the logs:
/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
Use the command
cat codedeploy-agent-deployments.log
to open the log file in the command line itself, if your EC2 instance is a Linux instance and you are working on Linux.
Copy it and paste it somewhere on your local machine so you can explore the logs further without any hassle.
I have a Python script that runs on an EC2 instance. What is the easiest way for me to see print statements from that script? I tried viewing the system log but I don't see anything there, and I can't find anything in CloudWatch. Thanks!
Standard output from arbitrary applications running on EC2 doesn't appear in CloudWatch Logs.
You can install the CloudWatch Logs Agent, configure it to collect logs from given locations, and then configure your app to log to one of those locations.
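As a rough sketch of that setup on Amazon Linux, using the classic awslogs agent (the log file path, group name, and script name here are placeholders, not values from the question):

sudo yum install -y awslogs

# Append a hypothetical entry telling the agent to ship the script's output file
sudo tee -a /etc/awslogs/awslogs.conf <<'EOF'
[/home/ec2-user/myscript.log]
file = /home/ec2-user/myscript.log
log_group_name = my-python-script
log_stream_name = {instance_id}
EOF

sudo service awslogs start

# Redirect the script's print output into the file the agent is watching
python myscript.py >> /home/ec2-user/myscript.log 2>&1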
It is possible to send the logs of an application running on EC2 directly to CloudWatch. To do that, you need to follow these steps:
1. Create an IAM role with the relevant permissions and attach it to the Linux instance.
2. Install the CloudWatch agent on the instance.
3. Prepare the configuration file on the instance.
4. Start the CloudWatch agent service on the instance.
5. Monitor the logs using the CloudWatch web console.
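In shell form, steps 2 through 4 might look roughly like this on Amazon Linux 2 (a sketch, not the exact setup from the linked article; the config path is the agent's default, while the log file path and group name are placeholders):

sudo yum install -y amazon-cloudwatch-agent

# Minimal agent config collecting a single log file
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/config.json <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp.log",
            "log_group_name": "myapp-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF

# Load the config and start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s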
For reference:
http://medium.com/tensult/to-send-linux-logs-to-aws-cloudwatch-17b3ea5f4863
I have read the AWS docs on Elastic Beanstalk logging and the CloudWatch agent, and it seems the CloudWatch agent should be reporting memory usage (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html), but this doesn't seem to be happening for me.
When I go into CloudWatch -> Metrics -> EC2, I can't see anything related to memory. CPU, network, etc. are collected, but not memory.
The platform version I am using is "PHP 7.2 running on 64bit Amazon Linux/2.8.7".
All my googling seems to indicate that you need to run custom (Perl) scripts to get that info, but the article linked above seems to contradict that.
In my .ebextensions folder I have a .config file that turns on the logs. I am also able to send custom application logs without issue.
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
Am I missing an argument somewhere?
Edit: After a bit more research, I don't think the "enable log streaming" option I have set actually uses the CloudWatch agent; /usr/bin/aws logs... is running on the server, so I guess that option enables log pushing via the AWS CLI?
I have done some googling and cannot find an example of how to install the CloudWatch agent using .ebextensions. I could try it myself, but if no one else is doing it that way, am I thinking about it wrong?
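For concreteness, I imagine something like the following (untested) sketch. It assumes the amazon-cloudwatch-agent package is available via yum (on older Amazon Linux 1 platforms you may need to download the agent RPM instead), and the memory-metrics config is just a placeholder:

files:
  "/opt/aws/amazon-cloudwatch-agent/etc/config.json":
    mode: "000644"
    owner: root
    group: root
    content: |
      {
        "metrics": {
          "metrics_collected": {
            "mem": {
              "measurement": ["mem_used_percent"]
            }
          }
        }
      }

commands:
  01-install-agent:
    command: yum install -y amazon-cloudwatch-agent
  02-start-agent:
    command: /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s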
I'm about to install and use Amazon Inspector. We have many EC2 instances behind an ELB, and some EC2 instances are launched via Auto Scaling.
My question: does Amazon Inspector do its work locally or globally? That is, is the monitoring done only on the instance it is installed on, or can it be configured to include all the instances in the infrastructure?
If Inspector has to be applied to every EC2 instance, can Auto Scaling be configured to launch new instances with Inspector already installed on them, and if so, how can I do that?
I asked a similar question on the Amazon forum but got no response.
In the end I used the following feature to customise the EC2 instances that my application gets deployed to:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Basically, at the root of your .war file you need a folder named '.ebextensions', and in there a .config file containing some commands to install the Inspector agent.
So my file 'inspector-agent.config' looks like this:
# Errors get logged to /var/log/cfn-init.log. See also /var/log/eb-tools.log
commands:
  # Download the agent installation script
  "01-agent-repository":
    command: sudo wget https://inspector-agent.amazonaws.com/linux/latest/install
  # Run the installation script
  "02-run-installation-script":
    command: sudo bash install
I've found the answer: you have to install the Amazon Inspector agent on each EC2 instance in order to inspect them all with Amazon Inspector.
As for Auto Scaling, I applied Amazon Inspector to the main EC2 servers and took an image of them (after inspecting all the instances and fixing all the issues). Then I configured Auto Scaling to launch from the new (inspected) AMIs.
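Alternatively, instead of baking the agent into an AMI, the same install commands from the .ebextensions answer above could go into the Auto Scaling launch configuration's user data, so each new instance installs the agent at boot (a sketch, untested):

#!/bin/bash
# User data for the Auto Scaling launch configuration:
# install the Amazon Inspector agent on every instance as it launches
wget https://inspector-agent.amazonaws.com/linux/latest/install
sudo bash install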
I'm an application developer with very limited knowledge of infrastructure. At my last job we frequently deployed Java web services (built as WAR files) to Elastic Beanstalk, and much of the infrastructure had already been set up before I ever started there, so I got to focus primarily on the code and not how things were tied together. One feature of Elastic Beanstalk that often came in handy was the button to "Request Logs," where you can select either the "Last 100 Lines" or the "Full Logs." What I'm used to seeing when clicking this button is to directly view the logs generated by my web service.
Now, at the new job, the infrastructure requirements are a little different, as we have to Dockerize everything before deploying it. I've been trying to stand up a Spring Boot web app inside a Docker container in Elastic Beanstalk, and have been running into trouble with that. And I also noticed a bizarre difference in behavior when I went to "Request Logs." Now when I choose one of those options, instead of dropping me into the relevant log file directly, it downloads a ZIP file containing the entire /var/log directory, with quite a number of disparate and irrelevant log files in there. I understand that there's no way for Amazon to know, necessarily, that I don't care about X log file but do care about Y log file, but was surprised that the behavior is different from what I was used to. I'm assuming this means the EB configuration at the last job was set up in a specific way to filter the one relevant log file, or something like that.
Is there some way to configure an Elastic Beanstalk application to only return one specific log file when you "Request Logs," rather than a ZIP file of the /var/log directory? Is this done with ebextensions or something like that? How can I do this?
Not too sure about the Beanstalk console, but using the EB CLI, if you enable CloudWatch log streaming (note that storing logs in CloudWatch costs extra) for your Beanstalk instances, you can run:
eb logs --stream --log-group <CloudWatch logGroup name>
The above command gives you the logs for your instance specific to the file/log group you specified. For the above command to work, you first need to enable CloudWatch log streaming:
eb logs --cloudwatch-logs enable
As an aside, to determine which log groups your environment presently has, perform:
aws logs describe-log-groups --region <region> | grep <beanstalk environment name>
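To put it together for the Docker-on-Beanstalk case above, the flow might look like this; the region, environment name, and Docker log-group path here are assumptions, not values from the question:

# One-time: turn on CloudWatch Logs streaming for the environment
eb logs --cloudwatch-logs enable

# Find the log groups Beanstalk created for the environment
aws logs describe-log-groups --region us-east-1 | grep my-env

# Stream only the application log you care about
eb logs --stream --log-group /aws/elasticbeanstalk/my-env/var/log/eb-docker/containers/eb-current-app/stdouterr.log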
I am using the AWS SDK for Java. I create and run an EC2 instance with a user-data script which gets a .jar from an S3 bucket and runs it. When I run the instance it shows as running, but nothing happens. The .jar should create a SimpleDB table and an SQS queue. How do I see what's wrong without connecting to the instance through SSH, or is that the only way to see the logs?
Kind regards,
Snafu
Some of the user-data output may be found in the system log (in the EC2 dashboard, right-click the instance and choose System Log).
You could add a piece of Java code / shell script and/or a cron job to upload your logs to S3, but it's best to SSH in and see what's there, at least the first time you run your code.
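As a sketch of that idea, assuming the AWS CLI is installed on the instance and its IAM role can read and write a (hypothetical) bucket named my-bucket:

#!/bin/bash
# Capture all user-data output in a log file
exec > /var/log/user-data.log 2>&1

# Fetch and run the jar; its stdout/stderr land in the log too
aws s3 cp s3://my-bucket/app.jar /home/ec2-user/app.jar
java -jar /home/ec2-user/app.jar

# Once the jar exits, push the log back to S3 so it can be read without SSH
aws s3 cp /var/log/user-data.log s3://my-bucket/logs/user-data.log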
You can use the MindTerm Java applet to connect directly from the EC2 dashboard (there's a button labeled 'Connect' at the top; it's easy and you don't need to download an SSH client). I would highly recommend getting used to working with SSH, because it's the easiest way to see what's inside.