What I'm trying to do is monitor a log file through the CloudWatch Logs agent.
I have installed the CloudWatch Logs agent on my EC2 Linux instance (the EC2 instance has an instance profile and an IAM role that are connected).
The installation was successful, but when I run sudo service awslogs status
I get the status message dead but pid file exists.
In my error log file (/var/log/awslogs.log) I have only this line, repeated over and over again: 'AccessKeyId'.
How can I fix the CloudWatch Logs agent and make it work?
This means that your awslogs agent requires your AWS access key and secret. These can be provided in /etc/awslogs/awscli.conf in the following format:
[plugins]
cwlogs = cwlogs
[default]
region = YOUR_INSTANCE_REGION (e.g. us-east-1)
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Restart the service after making this change:
sudo service awslogs restart
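If you want to sanity-check that the keys and region you put in awscli.conf can actually reach CloudWatch Logs, a minimal boto3 sketch along these lines can help (the region here is just the example value from above; swap in your instance's region):
import boto3

# boto3 resolves credentials through the usual chain (environment variables,
# ~/.aws/credentials, instance profile). Export the same keys/region you put
# in awscli.conf before running this, or rely on the instance profile.
logs = boto3.client("logs", region_name="us-east-1")

# If this call succeeds, the credentials and region are usable; an access or
# credential error here points at the same problem the agent is hitting.
for group in logs.describe_log_groups(limit=5).get("logGroups", []):
    print(group["logGroupName"])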
Hope this helps!!!
Hi, I need to transfer a file to an EC2 machine via the SSM agent. I have successfully installed the SSM agent on my EC2 instances, and from the UI I am able to start a session via Session Manager and log in to the shell of that EC2 machine.
Now I am trying to automate this via boto3, using the code below:
import boto3

ssm_client = boto3.client('ssm', 'us-west-2')
resp = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
    Parameters={'commands': ['echo "hello world" >> /tmp/test.txt']},
    InstanceIds=['i-xxxxx'],
)
The above works fine and I am able to create a file called test.txt on the remote machine, but this is via the echo command.
Instead, I need to send a file from my local machine to this remote EC2 machine via the SSM agent, hence I did the following:
I modified /etc/ssh/ssh_config with a proxy as below:
# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Then, in the above code, I tried to start a session with the line below, and that also succeeds:
response = ssm_client.start_session(Target='i-04843lr540028e96a')
Now I am not sure how to use this session response, or how to use this AWS SSM session to send a file.
Environment description:
Source: pod running in an EKS cluster
Destination: EC2 machine (which has the SSM agent running)
File to be transferred: an important private key which will be used by some process on the EC2 machine, and it will be different for each machine
Solution tried:
I can push the file to S3 from the source, and then the SSM boto3 library can pull it from S3 and store it on the remote EC2 machine.
But I don't want to do the above because I don't want to store the private key in S3, so I want to send the file directly from memory to the remote EC2 machine.
Basically I want to achieve the scp approach mentioned in this AWS document: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh
If you have SSH over SSM set up, you can just use normal scp, like so:
scp file.txt ec2-user@i-04843lr540028e96a:
If it isn't working, make sure you have:
Session Manager plugin installed locally
Your key pair on the instance and locally (you will need to define it in your ssh config, or via the -i switch)
SSM agent on the instance (installed by default on Amazon Linux 2)
An instance role attached to the instance that allows Session Manager (it needs to be there at boot, so if you just attached it, reboot)
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
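Since you are already automating this from Python with boto3, one option (a sketch, not the only way) is to shell out to scp and let it go through the SSH-over-SSM ProxyCommand you already added to your ssh config; the instance ID, key path, and destination path below are placeholders:
import subprocess

instance_id = "i-04843lr540028e96a"          # placeholder: your target instance
local_file = "/path/to/private.key"          # placeholder: file to transfer
remote_path = "/home/ec2-user/private.key"   # placeholder: destination on the instance

# scp tunnels through the AWS-StartSSHSession ProxyCommand from ssh_config,
# so only the SSM agent is needed on the instance, not an open SSH port.
subprocess.run(
    ["scp", "-i", "/path/to/keypair.pem",    # key pair registered on the instance
     local_file, f"ec2-user@{instance_id}:{remote_path}"],
    check=True,
)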
If you need more detail, give me more info on your setup, and I'll try and help.
I am trying to enable remote Airflow logs. To do that, I followed these steps:
Install apache-airflow:
pip install apache-airflow[crypto,postgres,ssh,s3,log]==1.10.10
airflow.cfg file:
remote_logging = True
remote_base_log_folder = s3://bucket-name/logs
encrypt_s3_logs = False
logging_level = INFO
fab_logging_level = WARN
remote_log_conn_id = MyS3Conn
I have Airflow running in a docker container in an AWS ECS Blue/Green deploy.
I read that if Airflow is hosted on an EC2 server, you should create the connection leaving everything blank in the configuration apart from the connection type, which should stay as S3.
The S3Hook will default to boto, which will default to the role of the EC2 server you are running Airflow on. Assuming this role has rights to S3, your task will be able to access the bucket.
So I applied this, but I don't know if it works as intended when using Docker.
If I run a DAG I see the logs being created in the /usr/local/airflow/logs folder in the Docker container, but there are no new files in the specified folder in S3.
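To narrow it down, one check I can run from inside the running Airflow container is a small boto3 sketch like the one below (the bucket name is the one from my remote_base_log_folder; the key is just a throwaway test object):
import boto3

# If the container picks up the instance role, this prints the assumed role ARN.
print(boto3.client("sts").get_caller_identity()["Arn"])

# And a test write to the same bucket the remote logging config points at.
s3 = boto3.client("s3")
s3.put_object(Bucket="bucket-name", Key="logs/_connection_test", Body=b"ok")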
After awslogs-agent-setup.py completes, the CloudWatch Logs agent runs as the root user. It finds its credentials under /root/.aws/credentials.
On the machines, other services are already using /root/.aws/credentials, and those credentials should not be shared.
There is a /var/awslogs/etc/aws.conf file and a first idea was to add the aws_access_key_id and aws_secret_access_key to this file and then run sudo service awslogs restart.
Unfortunately this did not do the trick. It still finds the /root/.aws/credentials first.
While this won't solve the problem of applications accessing all the credentials in the file, you CAN set application-specific credentials in that file, so that the application can have its own identity and credentials. It looks like the log agent can use credentials from an [AmazonCloudWatchAgent] section.
Source: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html
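In other words, rather than letting the agent pick up the shared [default] keys, you might give it its own profile in /root/.aws/credentials; a rough sketch of what that section could look like (the key values are placeholders):
[AmazonCloudWatchAgent]
aws_access_key_id = AGENT_ONLY_ACCESS_KEY_ID
aws_secret_access_key = AGENT_ONLY_SECRET_ACCESS_KEY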
I have a few docker containers running with docker-compose on an AWS EC2 instance. I am looking to get the logs sent to AWS CloudWatch. I was also having issues getting the logs from docker containers to AWS CloudWatch from my Mac running Sierra so I've moved over to EC2 instances running Amazon AMI.
My docker-compose file:
version: '2'
services:
  scraper:
    build: ./Scraper/
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-west-1"
        awslogs-group: "permission-logs"
        awslogs-stream: "stream"
    volumes:
      - ./Scraper/spiders:/spiders
When I run docker-compose up I get the following error:
scraper_1 | WARNING: no logs are available with the 'awslogs' log driver
but the container is running. No logs appear in the AWS CloudWatch stream. I have assigned an IAM role to the EC2 instance that the Docker containers run on.
I am at a complete loss now as to what I should be doing and would appreciate any advice.
The awslogs driver works without using ECS.
You need to configure the AWS credentials (the user should have the appropriate IAM permissions for CloudWatch Logs).
I used this tutorial and it worked for me: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
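The gist of that tutorial, roughly, is that the awslogs driver runs inside the Docker daemon, so the daemon itself needs AWS credentials in its environment (or an instance role it can use). On a systemd-based host that might look something like this sketch (a drop-in such as /etc/systemd/system/docker.service.d/aws-credentials.conf, with placeholder key values), followed by a daemon reload and a Docker restart:
[Service]
Environment="AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID"
Environment="AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY"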
I was getting the same error, but when I checked CloudWatch, I was able to see the logs there. Did you check whether the log group has been created in CloudWatch? Docker doesn't support console logging when we use custom logging drivers.
The section on limitations here says that the docker logs command is only available for the json-file and journald drivers, and that's true for the built-in drivers.
When trying to get logs from a driver that doesn't support reading, nothing hangs for me; docker logs prints this:
Error response from daemon: configured logging driver does not support reading
There are 3 main steps involved:
Create an IAM role/User
Install the CloudWatch agent
Modify docker-compose file or docker run command
I have referenced an article here with steps to send the Docker logs to AWS CloudWatch.
The AWS logs driver you are using, awslogs, is for use with EC2 Container Service (ECS). It will not work on plain EC2. See the documentation.
I would recommend creating a single-node ECS cluster. Be sure the EC2 instance(s) in that cluster have a role, and that the role provides permissions to write to CloudWatch Logs.
From there, anything in your container that logs to stdout will be captured by the awslogs driver and streamed to CloudWatch Logs.
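For comparison, in an ECS task definition the logging section of your compose file maps to a logConfiguration block on the container definition, roughly like this sketch (group and region are taken from your compose file; the stream prefix is a placeholder):
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "permission-logs",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "scraper"
    }
}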
I just followed these instructions (Link) to get the AWS CloudWatch Logs agent installed on my EC2 instance.
I updated my repositories: sudo yum update -y
I installed the awslogs package: sudo yum install -y awslogs
I edited /etc/awslogs/awscli.conf, confirming that my AZ is us-west-2b on the EC2 page
I left the default configuration of the /etc/awslogs/awslogs.conf file as is, confirming that the default path indeed has logs being written to it
I checked the /var/log/awslogs.log file and it is repeatedly showing the error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://logs.us-west-2b.amazonaws.com/"
I do not see any newly created log group or log stream in the CloudWatch console as expected. What am I missing here?
Should I be pointing at some endpoint other than https://logs.us-west-2b.amazonaws.com/? If so, where is that configured?
Thanks in advance,
Graham
The awscli.conf expects the region and not the AZ.
Specify the region as us-west-2.
Here is the documentation from the reference page:
Edit the /etc/awslogs/awscli.conf file and in the [default] section, specify the region where you want to view log data and add your credentials.
region = us-east-1
aws_access_key_id = <YOUR ACCESS KEY>
aws_secret_access_key = <YOUR SECRET KEY>
The error
EndpointConnectionError: Could not connect to the endpoint URL: "https://logs.us-west-2b.amazonaws.com/"
could be attributed to the wrong specification of the region.
The correct endpoint for the cloudwatch logs service in US-WEST-2 is
logs.us-west-2.amazonaws.com.
Please refer to the following documentation for AWS service endpoints:
http://docs.aws.amazon.com/general/latest/gr/rande.html#cwl_region
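As a quick sanity check after the change, a small boto3 sketch like this (note us-west-2, the region, not the AZ) should resolve the correct endpoint and list your log groups:
import boto3

# With the region (not the AZ), boto3 resolves the same endpoint the agent needs.
logs = boto3.client("logs", region_name="us-west-2")
print(logs.meta.endpoint_url)        # https://logs.us-west-2.amazonaws.com
# Fails fast here if credentials or permissions are still wrong.
logs.describe_log_groups(limit=1)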