tl;dr: Configuring the CloudWatch agent is #$%^. Is there any straightforward way?
I wanted one place to store the logs, so I used the Amazon CloudWatch Logs agent. At first it seemed like I'd just add a Resource saying something like "create a log group, then a log stream, and send this file, thank you" - all declarative and neat, but...
According to this doc I had to set up a JSON configuration that created a Bash script that downloaded a Python script that set up the service that used a generated config in yet another language somewhere else.
I'd think logging is something frequently used, so there must be a declarative way to configure it, not this four-language crazy combo. Am I missing something, or is the ops world really this painful?
Thanks for ideas!
"Agent" is just an aws-cli plugin and a bunch of scripts. You can install the plugin with pip install awscli-cwlogs on most systems (assuming you already installed awscli itself). NOTE: I think Amazon Linux is not "most systems" and might require a different approach.
Then you'll need two configs: awscli config with the following content (also add credentials if needed and replace us-east-1 with your region):
[plugins]
cwlogs = cwlogs
[default]
region = us-east-1
and logging config with something like this (adjust to your needs according to the docs):
[general]
state_file = push-state
[logstream-cfn-init.log]
datetime_format = %Y-%m-%d %H:%M:%S,%f
file = /var/log/cfn-init.log
file_fingerprint_lines = 1-3
multi_line_start_pattern = {datetime_format}
log_group_name = ec2-logs
log_stream_name = {hostname}-{instance_id}/cfn-init.log
initial_position = start_of_file
encoding = utf_8
buffer_duration = 5000
after that, to start the daemon automatically you can create a systemd unit like this (change config paths to where you actually put them):
[Unit]
Description=CloudWatch logging daemon
[Service]
ExecStart=/usr/local/bin/aws logs push --config-file /etc/aws/cwlogs
Environment=AWS_CONFIG_FILE=/etc/aws/config
Restart=always
Type=simple
[Install]
WantedBy=multi-user.target
after that you can systemctl enable and systemctl start as usual. That's assuming your instance is running a distribution that uses systemd (which most do nowadays); if not, consult your distribution's documentation to learn how to run daemons.
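For example, assuming you saved the unit above as /etc/systemd/system/cwlogs.service (the unit name is my own choice, not anything official):

```shell
# Make systemd pick up the new unit, then enable it at boot and start it now.
sudo systemctl daemon-reload
sudo systemctl enable cwlogs.service
sudo systemctl start cwlogs.service

# Verify the daemon came up:
systemctl status cwlogs.service
```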
The official setup script also adds a config for logrotate. I skipped that part because it wasn't required in my case, but if your logs are rotated you might want to do something with it. Consult the setup script and the logrotate documentation for details (essentially you just need to restart the daemon whenever the files are rotated).
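A sketch of what such a hook could look like; the log path matches the config above, and the unit name cwlogs is my own assumption, so adjust both:

```
/var/log/cfn-init.log {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        systemctl restart cwlogs.service
    endscript
}
```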
You've linked documentation specific to CloudFormation, so a bunch of the complexity is probably associated with that context.
Here's the stand-alone documentation for the Cloudwatch Logs Agent:
Quick Start
Agent Reference
If you're on Amazon Linux, you can install the 'awslogs' system package via yum. Once that's done, you can enable the logs plugin for the AWS CLI by making sure you have the following section in the CLI's config file:
[plugins]
cwlogs = cwlogs
E.g., the system package should create a file under /etc/awslogs/awscli.conf. You can use that file by setting the...
AWS_CONFIG_FILE=/etc/awslogs/awscli.conf
...environment variable.
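Putting the Amazon Linux steps together as one sketch:

```shell
# Install the agent package; it ships a ready-made CLI config.
sudo yum install -y awslogs

# Point the AWS CLI at that config for this shell session.
export AWS_CONFIG_FILE=/etc/awslogs/awscli.conf

# The [plugins] section should already be present in the file:
grep -A1 '\[plugins\]' /etc/awslogs/awscli.conf
```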
Once that's all done, you can:
$ aws logs push help
and
$ cat /path/to/some/file | aws logs push [options]
The agent also comes with helpers to keep various log files in sync.
I am trying to create a couple of OS policy assignments to configure a Windows VM (Windows Server 2022) using VM Manager: run some scripts with PowerShell and install some security agents. I am following the official Google documentation to set up the OS policies. VM Manager is already enabled; nevertheless, I am having difficulty creating the appropriate .yaml file required for the policy assignment, since I haven't found any detailed examples.
Related topics I have found:
Google documentation offers a very simple example of installing an .msi file - Example OS policies.
An example of a fixed policy assignment in Terraform registry - google_os_config_os_policy_assignment, from where I managed to better comprehend the required structure for the .yaml file even though it is in a .json format.
Few examples provided at GCP GitHub repository (OSPolicyAssignments).
OS Policy resources in JSON representation - REST Resource, from where you can navigate to sample cases based on the selected resource.
But it is still not very clear how to create the desired .yaml file (i.e. copy some files, run a PowerShell script to perform an installation or an authentication). According to the Google documentation, pkg, repository, exec, and file are the supported resource types.
Are there any more detailed examples I could use to understand what is needed? Have you already tried something similar?
Update: Adding an additional source.
You need to follow these steps:
Ensure that the OS Config agent is installed in your VM by running the below command in PowerShell:
Get-Service google_osconfig_agent
you should see an output like this:
Status Name DisplayName
------ ---- -----------
Running google_osconfig... Google OSConfig Agent
if the agent is not installed, refer to this tutorial.
Set the metadata values to enable the OS Config agent with this Cloud Shell command:
gcloud compute instances add-metadata $YOUR_VM_NAME \
--metadata=enable-osconfig=TRUE
Generate an OS policy and an OS policy assignment yaml file. As an example, I am generating an OS policy that installs an .msi file retrieved from a GCS bucket, and an OS policy assignment to run it on all Windows VMs:
# An OS policy assignment to install a Windows MSI downloaded from a Google Cloud Storage bucket
# on all VMs running Windows Server OS.
osPolicies:
- id: install-msi-policy
  mode: ENFORCEMENT
  resourceGroups:
  - resources:
    - id: install-msi
      pkg:
        desiredState: INSTALLED
        msi:
          source:
            gcs:
              bucket: <your_bucket_name>
              object: chrome.msi
              generation: 1656698823636455
instanceFilter:
  inventories:
  - osShortName: windows
rollout:
  disruptionBudget:
    fixed: 10
  minWaitDuration: 300s
Note: Every file has its own generation number, you can get it with the command gsutil stat gs://<your_bucket_name>/<your_file_name>.
Apply the policies created in the previous step with this Cloud Shell command:
gcloud compute os-config os-policy-assignments create $POLICY_NAME --location=$YOUR_ZONE --file=/<your-file-path>/<your_file_name.yaml> --async
Refer to the Examples of OS policy assignments for more scenarios, and check out this example of a PowerShell script.
Down below you can find the .yaml file that worked in my case. It copies a file and executes a PowerShell command, so as to configure and deploy a sample agent (TrendMicro) - again, this is specifically for a Windows VM.
.yaml file:
id: trendmicro-windows-policy
mode: ENFORCEMENT
resourceGroups:
- resources:
  - id: copy-exe-file
    file:
      path: C:/Program Files/TrendMicro_Windows.ps1
      state: CONTENTS_MATCH
      permissions: '755'
      file:
        gcs:
          bucket: [your_bucket_name]
          generation: [your_generation_number]
          object: Windows/TrendMicro/TrendMicro_Windows.ps1
  - id: validate-running
    exec:
      validate:
        interpreter: POWERSHELL
        script: |
          $service = Get-Service -Name 'ds_agent'
          if ($service.Status -eq 'Running') {exit 100} else {exit 101}
      enforce:
        interpreter: POWERSHELL
        script: |
          Start-Process PowerShell -ArgumentList '-ExecutionPolicy Unrestricted','-File "C:\Program Files\TrendMicro_Windows.ps1"' -Verb RunAs
To elaborate a bit more, this .yaml file:
copy-exe-file: It copies the necessary installation script from GCS to a specified location on the VM. The generation number can easily be found under "VERSION HISTORY" when you select the object in GCS.
validate-running: This stage contains two different steps. The validate step checks whether the specific agent is up and running on the VM. If not, it proceeds with the enforce step, which executes the "TrendMicro_Windows.ps1" file with PowerShell. This .ps1 file downloads, configures, and installs the agent. Note 1: this command is executed as Administrator, and the full path of the file is specified. Note 2: instead of Start-Process PowerShell, Start-Process pwsh can also be used; this was vital in one of my cases.
Essentially, a PowerShell command can be run directly in the enforce step; nonetheless, I found it much easier to put it in a .ps1 file first and then just run that file. There are some restrictions with the .yaml file anyway.
PS: Passing osconfig-log-level: debug as a key-value pair in metadata - either directly to a VM or applied to all of them (Compute Engine > Settings > Metadata > EDIT > ADD ITEM) - provides some additional information and may help you deal with errors.
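The same metadata key can also be set from the command line; here is a sketch for a single VM ($YOUR_VM_NAME is a placeholder):

```shell
# Turn on debug-level logging for the OS Config agent on one instance.
gcloud compute instances add-metadata $YOUR_VM_NAME \
    --metadata=osconfig-log-level=debug
```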
I'm trying to configure the AWS Cloudwatch agent to run on vanilla Ubuntu 18.04, outside of AWS. Every time I run it, I get this error:
# /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c "file:/path/to/cloudwatch/cloudwatch.json" -s
/opt/aws/amazon-cloudwatch-agent/bin/config-downloader --output-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --download-source file:/path/to/cloudwatch/cloudwatch.json --mode onPrem --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
Got Home directory: /root
I! Set home dir Linux: /root
Unable to determine aws-region.
Please make sure the credentials and region set correctly on your hosts.
Refer to http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Fail to fetch the config!
Running the program under strace -f reveals that it is trying to read /root/.aws/credentials and then exiting. Per the guide, here are the contents of /root/.aws/credentials:
[AmazonCloudWatchAgent]
aws_access_key_id = key
aws_secret_access_key = secret
region = us-west-2
If I run aws configure get region, it is able to retrieve the region correctly. However, the Cloudwatch Agent is unable to read it. Here's the contents of common-config.toml (which also gets read, per strace).
## Configuration for shared credential.
## Default credential strategy will be used if it is absent here:
## Instance role is used for EC2 case by default.
## AmazonCloudWatchAgent profile is used for onPremise case by default.
[credentials]
shared_credential_profile = "AmazonCloudWatchAgent"
shared_credential_file = "/root/.aws/credentials"
## Configuration for proxy.
## System-wide environment-variable will be read if it is absent here.
## i.e. HTTP_PROXY/http_proxy; HTTPS_PROXY/https_proxy; NO_PROXY/no_proxy
## Note: system-wide environment-variable is not accessible when using ssm run-command.
## Absent in both here and environment-variable means no proxy will be used.
# [proxy]
# http_proxy = "{http_url}"
# https_proxy = "{https_url}"
# no_proxy = "{domain}"
Here are other things I have tried:
enclosing region (and all values) in the configuration in double quotes, per https://forums.aws.amazon.com/thread.jspa?threadID=291589. This did not make a difference.
adding /home/myuser/.aws/config, /home/myuser/.aws/credentials, and /root/.aws/config and populating them with the appropriate values. Per strace these files are not being read.
searching for the source code for the CloudWatch Agent (it is not open source)
setting AWS_REGION=us-west-2 explicitly in the program environment (same error)
changing [AmazonCloudWatchAgent] to [profile AmazonCloudWatchAgent] everywhere and all permutations of the above (no difference)
adding a [default] section in all config files (makes no difference)
invoking the config-downloader program directly, setting AWS_REGION etc. (same error)
becoming a non-root user and then invoking the program using sudo instead of invoking the program as the root user without sudo.
I get the same error no matter what I try. I installed the CloudWatch agent by downloading the "latest" deb on March 23, 2020, per these instructions: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/download-cloudwatch-agent-commandline.html
The AWS config defaults to C:\Users\Administrator instead of the user you installed the CloudWatch agent as, so you may need to move the .aws folder to the CloudWatch user. Or... more straightforwardly:
aws configure --profile AmazonCloudWatchAgent
as described here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#install-CloudWatch-Agent-iam_user-first
You can also specify the region using common-config.toml as described here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#CloudWatch-Agent-profile-instance-first
On a server running Windows Server, this file is in the C:\ProgramData\Amazon\AmazonCloudWatchAgent directory. The default common-config.toml is as follows:
# This common-config is used to configure items used for both ssm and cloudwatch access
## Configuration for shared credential.
## Default credential strategy will be used if it is absent here:
## Instance role is used for EC2 case by default.
## AmazonCloudWatchAgent profile is used for onPremise case by default.
# [credentials]
# shared_credential_profile = "{profile_name}"
# shared_credential_file= "{file_name}"
## Configuration for proxy.
## System-wide environment-variable will be read if it is absent here.
## i.e. HTTP_PROXY/http_proxy; HTTPS_PROXY/https_proxy; NO_PROXY/no_proxy
## Note: system-wide environment-variable is not accessible when using ssm run-command.
## Absent in both here and environment-variable means no proxy will be used.
# [proxy]
# http_proxy = "{http_url}"
# https_proxy = "{https_url}"
# no_proxy = "{domain}"
You can also update the common-config.toml with a new location if needed.
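For example, an uncommented on-premises section might look like this (the profile name mirrors the default mentioned above; the Windows path is an assumption for illustration):

```toml
[credentials]
    shared_credential_profile = "AmazonCloudWatchAgent"
    shared_credential_file = "C:\\Users\\Administrator\\.aws\\credentials"
```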
I was using an incorrect "secret" with an invalid character that caused the INI file parser to break. The CloudWatch agent incorrectly reported this as a "missing region," when a parse error or "invalid secret" error would have been more accurate.
You should create a new file named config in the same folder as credentials, and add the region there:
[default]
region = your-region
see more here
You have to uncomment the # [credentials] in the /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml config file as well
Set the AWS_REGION environment variable.
On Linux, macOS, or Unix, use:
export AWS_REGION=your_aws_region
Does anyone know of a way to persist configurations done using "gcloud init" commands inside cloudshell, so they don't vanish each time you disconnect?
I figured out how to persist python pip installs using the --user
example: pip install --user pandas
But, when I create a new configuration using gcloud init, use it for a bit, close cloudshell (or cloudshell times out on me), then reconnect later, the configurations are gone.
Not a big deal, I bounce between projects/etc so it's nice to have the configs saved so I can simply run
gcloud config configurations activate config-name
Thanks...Rich Murnane
Google Cloud Shell only persists data in your $HOME directory. Commands like gcloud init modify environment variables and store configuration files in /tmp, which is deleted when the VM is restarted. The VM is terminated after being idle for 20 or 60 minutes, depending on which document you read.
Google Cloud Shell is a Docker container. You can modify the docker image to customize to fit your needs. This method will allow you to install packages, tools, etc that are not located in your $HOME directory.
You can also store your files and configuration scripts on Google Cloud Storage. Modify .bashrc to download your cloud files and run your configuration script.
Either method will allow you to create a persistent environment.
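A minimal sketch of the .bashrc approach; the bucket and script names here are hypothetical:

```shell
# Append to ~/.bashrc: fetch a personal setup script from Cloud Storage
# and run it at the start of every Cloud Shell session.
gsutil -q cp gs://my-config-bucket/bootstrap.sh "$HOME/bootstrap.sh"
bash "$HOME/bootstrap.sh"
```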
This StackOverflow answer covers in detail what gcloud init does and how to basically emulate the same thing via script or command line.
gcloud init details
this isn't exactly what I wanted, but since my account (userid) isn't changing, I'm simply going to do the command
gcloud config set project second-project-name
good enough, thanks...Rich
I'm currently doing my internship, and we were tasked with setting up a hawkbit service on AWS ECS.
Hawkbit is used for software update roll-outs. We have hit 2 bumps that we're currently stuck on.
First, if we run the Docker image on our local server, the hawkbit service starts automatically by using an sh file and running the following command in our Dockerfile: CMD ["/hawkbit.sh"]
If we run the image in a cluster on ECS, the service doesn't start automatically.
Secondly, when hawkbit is running it writes its output to the terminal. I can put this output into a log file; however, I'm not able to check the log in CloudWatch.
I used the following to create the file and put the input into the file:
2>&1 > /var/log/hawkbit/hawkbit
and I've edited the awslog.conf file as following:
[/var/log/hawkbit/hawkbit]
file = /var/log/hawkbit/hawkbit.*
log_group_name = /var/log/hawkbit/hawkbit
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
Any ideas would be very appreciated!
Things to check regarding awslogs agent:
ensure that the service is running
check /var/log/awslogs.log file for errors
make sure instance has role attached with permissions sufficient for agent to work, read about required permissions here.
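The first two checks can be done like this (paths are the awslogs agent defaults on Amazon Linux):

```shell
# Is the agent service running?
sudo service awslogs status

# Any errors reported by the agent itself?
sudo tail -n 50 /var/log/awslogs.log
```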
I'm trying to set up aws cloudwatch log service on my linux instance. In the config file they say to put something like this:
[general]
state_file = <value>
logging_config_file = <value>
use_gzip_http_content_encoding = [true | false]
Where state_file specifies where the state file is stored, according to the docs. I don't see any mention of this state_file anywhere else. Can anyone help me figure out what this file is and where I might be able to find it? I installed the agent using yum install -y awslogs.
The file is where AWS logs keeps its current state, i.e. how it knows what log messages it has already sent. To find it, you need to look at the state_file location configured in your /etc/awslogs/awslogs.conf file, and then look there.
Looking on one of my servers it appears the default state file location was /var/lib/awslogs/agent-state. Looking at that file it appears to be a SQLite database file.
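Since it is SQLite, you can peek inside it directly; the path below is the default observed above, and the schema is an internal detail of the agent, so treat this as exploration only:

```shell
# List the tables the agent keeps in its state database.
sqlite3 /var/lib/awslogs/agent-state ".tables"

# Show how offsets are tracked per log file.
sqlite3 /var/lib/awslogs/agent-state ".schema"
```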