Where is aws cloudwatch Logs state_file?

I'm trying to set up the AWS CloudWatch Logs agent on my Linux instance. The docs say to put something like this in the config file:
[general]
state_file = <value>
logging_config_file = <value>
use_gzip_http_content_encoding = [true | false]
where, according to the docs, state_file "Specifies where the state file is stored". I don't see any mention of this state_file anywhere else. Can anyone help me figure out what this file is and where I might be able to find it? I installed the agent using yum install -y awslogs

The file is where the awslogs agent keeps its current state, i.e. how it knows which log messages it has already sent. To find it, look at the state_file location configured in your /etc/awslogs/awslogs.conf file, and then look there.
Looking on one of my servers it appears the default state file location was /var/lib/awslogs/agent-state. Looking at that file it appears to be a SQLite database file.
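For example, something like this should locate and inspect it (the paths are the defaults from my install, and the database layout is an internal detail of the agent, so treat this as a read-only peek):
grep state_file /etc/awslogs/awslogs.conf             # where is the agent keeping state?
sudo sqlite3 /var/lib/awslogs/agent-state ".tables"   # list the tables inside
sudo sqlite3 /var/lib/awslogs/agent-state ".schema"   # and their schemas
(The sqlite3 CLI may need to be installed separately, e.g. yum install -y sqlite.)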

Related

AWS EMR step doesn't find jar imported from s3

I am attempting to run a Spark application on AWS EMR in client mode. I have set up a bootstrap action to import needed files and the jar from S3, and I have a step to run a single Spark job.
However when the step executes, the jar I have imported isn't found. Here is the stderr output:
19/12/01 13:42:05 WARN DependencyUtils: Local jar /mnt/var/lib/hadoop/steps/s-2HLX7KPZCA07B/~/myApplicationDirectory does not exist, skipping.
I am able to successfully import the jar and other needed files for the application from my S3 bucket to the master instance; I simply import them to /home/ec2-user/myApplicationDirectory/myJar.jar via a bootstrap action.
However, I don't understand why the step is looking for the jar at /mnt/var/lib/hadoop/...etc.
Here are the relevant parts of the CLI configuration:
--steps '[{"Args":["spark-submit",
"--deploy-mode","client",
"--num-executors","1",
“--driver-java-options","-Xss4M",
"--conf","spark.driver.maxResultSize=20g",
"--class”,”myApplicationClass”,
“~/myApplicationDirectory”,
“myJar.jar",
…
application specific arguments and paths to folders here
…],
”Type":"CUSTOM_JAR",
thanks for any help,
It looks like it doesn't understand the ~ as referring to the home directory. Try changing "~/myApplicationDirectory" to "/home/ec2-user/myApplicationDirectory".
A little warning: in the sample in your question, straight quotation marks " are mixed with "smart" ones “. Make sure the "smart" quotation marks don't end up in your configuration file, or you will get very confusing error messages.
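With both fixes applied (straight quotes everywhere and an absolute path instead of ~), the relevant part would look something like this; the elided application arguments stay as in your question:
--steps '[{"Args":["spark-submit",
"--deploy-mode","client",
"--num-executors","1",
"--driver-java-options","-Xss4M",
"--conf","spark.driver.maxResultSize=20g",
"--class","myApplicationClass",
"/home/ec2-user/myApplicationDirectory",
"myJar.jar",
…
application specific arguments and paths to folders here
…],
"Type":"CUSTOM_JAR",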

AWS cloudwatch terminal output logs

I'm currently doing my internship, and we were tasked to set up a hawkbit service on AWS ECS.
Hawkbit is used for software update roll-outs. We have hit two bumps that we're currently stuck on.
First, if we run the Docker image on our local server, the hawkbit service starts automatically through a shell script, using the following command in our Dockerfile: CMD ["/hawkbit.sh"]
If we run the image in a cluster on ECS, the service doesn't start automatically.
Secondly, when hawkbit is running it writes its output to the terminal. I can put this output into a log file; however, I'm not able to see the log in CloudWatch.
I used the following to create the file and redirect the output into it:
> /var/log/hawkbit/hawkbit 2>&1
and I've edited the awslogs.conf file as follows:
[/var/log/hawkbit/hawkbit]
file = /var/log/hawkbit/hawkbit.*
log_group_name = /var/log/hawkbit/hawkbit
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
Any ideas would be very appreciated.
Things to check regarding the awslogs agent (example commands below):
ensure that the service is running
check the /var/log/awslogs.log file for errors
make sure the instance has a role attached with permissions sufficient for the agent to work; see the CloudWatch Logs agent documentation for the required permissions.
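For example, on Amazon Linux those checks might look like this (a sketch; on systemd distributions substitute systemctl status awslogs):
sudo service awslogs status            # is the agent running?
sudo tail -n 50 /var/log/awslogs.log   # recent agent errors, e.g. auth or parsing problems
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/   # which IAM role (if any) is attached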

AWS CLI save output as log file

I'm new to the AWS CLI (and programming), but I've looked through documentation and posted questions and can't find this addressed; I must be missing something basic.
How do I save the output? I'd like to run AWS S3 Sync to backup my data overnight, and I'd like to see a log report in the morning of what happened.
At this point, I can run AWS from a command prompt:
aws s3 sync "my local directory" s3://mybucket
I've set output format to Text in the config. But I'm only seeing the text in the command prompt. How can I export it as a log file?
Is this not possible, what am I missing?
Many thanks in advance,
Matthew
aws s3 sync "my local directory" s3://mybucket --debug 2> "local path\logname.txt"
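A note on the redirections: --debug writes a verbose trace to stderr, which 2> captures. If you only want the normal sync report, redirecting stdout is enough; for example (the log path is a placeholder):
aws s3 sync "my local directory" s3://mybucket > "C:\logs\sync.log" 2>&1
Dropping the 2>&1 keeps errors on the console while the report goes to the file.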
Not only did I figure out adding > filename to the end of the command, but I also figured out that when saving this as a batch file, it won't run as a scheduled task in Windows Server 2008 R2 or Windows 7 if it contains drive mappings. UNC paths are required.
Thanks!
Matthew
This worked perfectly for me:
aws cloudformation describe-stack-events --stack-name "stack name" --debug 2> "C:\Users\ravi\Desktop\CICDWORKFolder\RedshiftFolder\logname.txt"

A sane way to set up CloudWatch logs (awslogs-agent)

tl;dr The configuration of the CloudWatch agent is #$%^. Any straightforward way?
I wanted one place to store the logs, so I used Amazon CloudWatch Logs Agent. At first it seemed like I'd just add a Resource saying something like "create a log group, then a log stream and send this file, thank you" - all declarative and neat, but...
According to this doc I had to set up a JSON configuration that created a Bash script that downloaded a Python script that set up the service that used a generated config in yet-another-language somewhere else.
I'd think logging is something frequently used, so there must be a declarative configuration way, not this 4-language crazy combo. Am I missing something, or is ops world so painful?
Thanks for ideas!
"Agent" is just an aws-cli plugin and a bunch of scripts. You can install the plugin with pip install awscli-cwlogs on most systems (assuming you already installed awscli itself). NOTE: I think Amazon Linux is not "most systems" and might require a different approach.
Then you'll need two configs: awscli config with the following content (also add credentials if needed and replace us-east-1 with your region):
[plugins]
cwlogs = cwlogs
[default]
region = us-east-1
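If the instance doesn't have an IAM role attached, static credentials can go in the same file; a sketch with placeholder values (a role is preferable where possible):
[default]
region = us-east-1
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx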
and logging config with something like this (adjust to your needs according to the docs):
[general]
state_file = push-state
[logstream-cfn-init.log]
datetime_format = %Y-%m-%d %H:%M:%S,%f
file = /var/log/cfn-init.log
file_fingerprint_lines = 1-3
multi_line_start_pattern = {datetime_format}
log_group_name = ec2-logs
log_stream_name = {hostname}-{instance_id}/cfn-init.log
initial_position = start_of_file
encoding = utf_8
buffer_duration = 5000
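You can sanity-check the logging config by running the same command the unit below will run, in the foreground (it streams continuously; stop it with Ctrl-C once you've confirmed there are no configuration errors):
AWS_CONFIG_FILE=/etc/aws/config /usr/local/bin/aws logs push --config-file /etc/aws/cwlogs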
after that, to start the daemon automatically you can create a systemd unit like this (change config paths to where you actually put them):
[Unit]
Description=CloudWatch logging daemon
[Service]
ExecStart=/usr/local/bin/aws logs push --config-file /etc/aws/cwlogs
Environment=AWS_CONFIG_FILE=/etc/aws/config
Restart=always
Type=simple
[Install]
WantedBy=multi-user.target
After that you can systemctl enable and systemctl start as usual. That's assuming your instance is running a distribution that uses systemd (which is most of them nowadays; if not, consult your distribution's documentation to learn how to run daemons).
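Assuming you saved the unit as /etc/systemd/system/cwlogs.service (the name is my own label, not anything the package creates):
sudo systemctl daemon-reload
sudo systemctl enable cwlogs.service
sudo systemctl start cwlogs.service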
The official setup script also adds a config for logrotate. I skipped that part because it wasn't required in my case, but if your logs are rotated you might want to do something with it. Consult the setup script and the logrotate documentation for details (essentially you just need to restart the daemon whenever files are rotated).
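If you do need it, a minimal logrotate hook might look like this, reusing the hypothetical cwlogs.service name from above (the schedule is only an example):
/var/log/cfn-init.log {
    weekly
    rotate 4
    postrotate
        systemctl restart cwlogs.service
    endscript
}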
You've linked documentation particular to CloudFormation, so a bunch of the complexity is probably associated with that context.
Here's the stand-alone documentation for the Cloudwatch Logs Agent:
Quick Start
Agent Reference
If you're on Amazon Linux, you can install the 'awslogs' system package via yum. Once that's done, you can enable the logs plugin for the AWS CLI by making sure you have the following section in the CLI's config file:
[plugins]
cwlogs = cwlogs
E.g., the system package should create a file under /etc/awslogs/awscli.conf. You can use that file by setting the...
AWS_CONFIG_FILE=/etc/awslogs/awscli.conf
...environment variable.
Once that's all done, you can:
$ aws logs push help
and
$ cat /path/to/some/file | aws logs push [options]
The agent also comes with helpers to keep various log files in sync.

Amazon ec2-get-console-output returns "File not found"

I just set up a free instance of an Amazon EC2 server. I'm trying to figure out how to SSH into it. I downloaded the command line tools for EC2 and, following what was written at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#EC2_LaunchInstance_Linux, I ran:
.ec2$ ec2-get-console-output [instance id]
File not found: ''
.ec2$
where [instance id] refers to the ID Amazon lists in my list of instances. Can anyone tell me what's going on?
Edit: I might add it seems to be doing this for any binary I try to run from the command line tools... even if I call them directly.
I had the same problem until I specified the key and cert in my .bash_profile file:
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=$(ls $EC2_HOME/key.pem)
export EC2_CERT=$(ls $EC2_HOME/cert.pem)
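Note the $( ) command substitution: with plain single quotes the variables would contain the literal string ls ... and the tools would fail. Since each pattern names a single file anyway, a simpler equivalent, assuming your files really are called key.pem and cert.pem, is:
export EC2_PRIVATE_KEY=$EC2_HOME/key.pem
export EC2_CERT=$EC2_HOME/cert.pem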