Log print statements from a script running on an EC2 server? - amazon-web-services

I have a Python script that runs on an EC2 server. What is the easiest way for me to see the print statements from that script? I tried viewing the system log, but I don't see anything there, and I can't find anything in CloudWatch. Thanks!

Standard output from arbitrary applications running on EC2 doesn't appear in CloudWatch Logs.
You can install the CloudWatch Logs agent, configure it to collect logs from given locations, and then configure your app to log to one of those locations.
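For example, rather than relying on bare print statements, the script can write to a file under a path the agent watches. A minimal sketch in Python, assuming the agent is configured to collect the (hypothetical) file /var/log/myscript/app.log:

    import logging

    # Log to a file that the CloudWatch Logs agent is configured to collect.
    # /var/log/myscript/app.log is a hypothetical path; match it to your agent config.
    logging.basicConfig(
        filename="/var/log/myscript/app.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    logging.info("script started")  # appears in CloudWatch once the agent ships the file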

It is possible to send logs from an application running on EC2 to CloudWatch directly. To do so, follow these steps:
1. Create an IAM role with the relevant permissions and attach it to the Linux instance.
2. Install the CloudWatch agent on the instance.
3. Prepare the configuration file on the instance (a minimal sketch follows this list).
4. Start the CloudWatch agent service on the instance.
5. Monitor the logs using the CloudWatch web console.
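A minimal agent configuration that ships one log file could look like the sketch below; the file path and log group name are placeholders to adapt. The unified agent typically reads its config from /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json:

    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/myscript/app.log",
                "log_group_name": "my-script-logs",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }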
For your reference:
http://medium.com/tensult/to-send-linux-logs-to-aws-cloudwatch-17b3ea5f4863

Related

Stream stdout logs from Go EC2 Environment on AWS

I have a Go application running in an EC2 instance. Log streaming is enabled, and I am able to view the default eb-activity.log, nginx/access.log, and nginx/error.log in the CloudWatch console.
However, the application is logging to stdout, but I cannot see these logs within CloudWatch. I know they are being stored, because I can download the full logging bundle and see them.
The question is: how can I get these logs into CloudWatch? I have a .NET application whose stdout logs appear in CloudWatch fine.

If you are using AWS to autoscale spot instances of your application, how do you handle logging?

Looking into adding autoscaling for a portion of our application using Amazon SQS (Simple Queue Service), which would launch EC2 on-demand or spot instances based on queue backlog.
One question I had is: how do you deal with collecting logs from autoscaled instances? New instances are spun up from an image, but then they are shut down when complete. Currently, if there is an issue with one of our services that causes it to crash, we have a system to automatically restart the service, but the logs and core dump files are there to review. If we switch to an autoscaling system, where new instances are spun up, how do you get logs and core dump files when there is a failure? Particularly if the instance is spun down.
Good practice is to ship these logs and aggregate them somewhere else; there are many services, such as DataDog and Rapid7, which will do this for you at a cost.
AWS, however, provides CloudWatch Logs, which gives you a central place to store and view logs. It also lets you give users access to logs in the AWS console without them having to SSH onto a server.
Shipping your logs to CloudWatch Logs requires installing the CloudWatch agent on your server and specifying in the config which logs to ship.
You could install the CloudWatch agent once and create an AMI of that server to use in your autoscaling group, or install and configure the CloudWatch agent in user data every time a server is spun up.
All the information you need to get started can be found here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
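For the user-data route, a boot script along these lines would do it. This is a sketch assuming Amazon Linux 2 (the package name and agent-ctl path follow the install guide linked above; the config file location is one you must have placed yourself, e.g. baked into the AMI):

    #!/bin/bash
    # Install the CloudWatch agent (Amazon Linux 2; other distros use the
    # .deb/.rpm downloads from the install guide linked above).
    yum install -y amazon-cloudwatch-agent

    # Start the agent with a config file baked into the AMI or fetched here.
    # The config path is hypothetical; point it at your own agent config.
    /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
      -a fetch-config -m ec2 \
      -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s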

Monitor an AWS RDS DB instance with Zabbix

Currently, I'm trying to monitor an AWS RDS DB instance (MySQL/MariaDB) with Zabbix, but I'm experiencing several problems:
I have the script placed in the externalscripts folder on the Zabbix server, and the template (https://github.com/datorama/zabbix_rds_template) with my AWS access data properly filled in. The host is already added as well, but Zabbix doesn't retrieve any data from the AWS RDS instance (all graphs display no data).
How could I check whether the Zabbix server is able to reach the RDS instance, so I can start ruling things out?
Does anyone know the correct way to add an AWS RDS host in Zabbix?
Any suggestion or advice is always welcome.
Thanks in advance.
Kind regards,
KV.
Have you checked the Zabbix log? Maybe you are not using a user with enough permissions. Try running the script in the Zabbix server's shell.
Is your Zabbix server in AWS? If yes, give it a role with read access to CloudWatch and RDS.
Also, I don't like using credentials in a script; I prefer to configure them with the AWS CLI.
http://docs.aws.amazon.com/es_es/cli/latest/userguide/cli-chap-getting-started.html
Here is another example of monitoring AWS CloudWatch metrics with Zabbix that I use:
https://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
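To illustrate the point about keeping credentials out of scripts: once the AWS CLI (or an instance role) is configured, an external script can rely on the default credential chain. A sketch in Python with boto3; the region and RDS instance identifier are placeholders:

    import boto3
    from datetime import datetime, timedelta

    # boto3 picks up credentials from the AWS CLI config / instance role,
    # so nothing sensitive is hard-coded in the script itself.
    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    # Fetch average CPU for a hypothetical RDS instance over the last hour.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-rds-instance"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in stats["Datapoints"]:
        print(point["Timestamp"], point["Average"])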

AWS ECS container logs design pattern

I have a classic Scala app; it produces three different logs in the following locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it is working fine; I can get those logs via a Docker volume mount.
Now the app/container will be deployed in AWS ECS with an auto scaling group. In this case, multiple containers may run on one single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS Logs agent inside my application container.
Is there any better way to get those application logs from the container to CloudWatch Logs?
Help is very much appreciated.
When using Docker, the recommended approach is not to log to files, but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem and (depending on the logging driver in use) allows you to view the logs using the docker logs / docker container logs subcommands.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
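Applied to the three log paths from the question, the NGINX-style symlink trick would look something like this in the Dockerfile. This is a sketch; it assumes the Scala app keeps writing to those exact paths inside the container:

    # Redirect the app's file logs to the container's stdout/stderr,
    # mirroring what the official NGINX image does for access/error logs.
    RUN mkdir -p /var/log/myapp/log1 /var/log/myapp/log2 /var/log/myapp/log3 \
        && ln -sf /dev/stdout /var/log/myapp/log1/mylog.log \
        && ln -sf /dev/stdout /var/log/myapp/log2/another.log \
        && ln -sf /dev/stderr /var/log/myapp/log3/anotherone.log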
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you have modified your image to log to stdout/stderr, you can configure the awslogs logging driver.
More information about logging in Docker can be found in the "logging" section of the documentation.
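On ECS specifically, you set the logging driver per container in the task definition rather than running an agent in the container. A sketch of the relevant container-definition fragment; the log group, region, and stream prefix are placeholders:

    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/myapp",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "myapp"
        }
    }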
You don't need a log agent if you can change the code.
You can directly publish custom metric data to CloudWatch, as this page describes: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
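The linked page shows the Java SDK; the equivalent call in Python with boto3 looks roughly like this (the namespace and metric name are made up for illustration):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish a single custom data point; namespace and metric name are
    # illustrative, not anything the question prescribes.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[
            {
                "MetricName": "ProcessedRecords",
                "Value": 42.0,
                "Unit": "Count",
            }
        ],
    )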

Getting CloudWatch EC2 server health monitoring into Elasticsearch

I have an AWS account with several EC2 servers and an Elasticsearch domain set up to receive the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 control panel, I see specific metrics and graphs for things like CPU, memory load, storage use, etc. Is there some way I can pipe this information into my Elasticsearch domain as well?
Set up Logstash and use this plugin: https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way and use the AWS Logs agent to put your syslogs into CloudWatch and stop using Elasticsearch.
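A minimal Logstash pipeline using that plugin might look like the sketch below; the namespace and metric follow the plugin's README example, while the tag filter, region, and Elasticsearch endpoint are placeholders to adapt:

    input {
      cloudwatch {
        namespace => "AWS/EC2"
        metrics   => [ "CPUUtilization" ]
        filters   => { "tag:Monitoring" => "Yes" }   # placeholder tag filter
        region    => "us-east-1"                     # placeholder region
      }
    }
    output {
      elasticsearch {
        hosts => [ "https://my-es-domain.example:443" ]  # placeholder endpoint
      }
    }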