Getting auth failure on salt-cloud command

I'm using SaltStack and want to provision new EC2 instances with the salt-cloud command, but I'm getting an auth failure:
[root@salt:~] # salt-cloud -p base_ec2_public ops.example.com
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '3a5e33e2-d1a9-44fa-983c-26691d4f8ee7'}
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '163079c6-2b79-4301-80c8-77ba0d7c896d'}
[ERROR ] There was a profile error: string indices must be integers, not str
This is my /etc/salt/cloud.providers.d/aws.conf file
----
my-ec2-us-east-public-ips:
  # Set up the location of the salt master
  #
  minion:
    master: salt.example.com

  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-cloud command is run inside the EC2
  #     public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: public_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: "REDACTED"
  key: "REDACTED"

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default

  # Optionally configure default region
  # Use salt-cloud --list-locations <provider> to obtain valid regions
  #
  location: us-east-1
  availability_zone: us-east-1a
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::REDACTED:user/bluethundr'

  driver: ec2

my-ec2-us-east-private-ips:
  # Set up the location of the salt master
  #
  minion:
    master: salt.example.com

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-master is also hosted with EC2
  #     public_ips - The salt-master is hosted outside of EC2
  #
  ssh_interface: private_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: "REDACTED"
  key: "REDACTED"

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key

  # This one should NOT be specified if VPC was not configured in AWS to be
  # the default. It might cause an error message which says that network
  # interfaces and instance-level security groups may not be specified
  # on the same request.
  #
  securitygroup: default

  # Optionally configure default region
  #
  location: us-east-1
  availability_zone: us-east-1a

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL         -> ec2-user
  # CentOS       -> ec2-user
  # Ubuntu       -> ubuntu
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::REDACTED:user/bluethundr'

  driver: ec2
And this is my /etc/salt/cloud.profiles.d/aws_profiles.conf:
base_ec2:
  provider: my-ec2-us-east-public-ips
  image: ami-869a9cee
  size: t2.micro
  ssh_username: ec2-user

base_ec2_private:
  provider: my-ec2-us-east-private-ips
  image: ami-869a9cee
  size: t2.micro
  ssh_username: ec2-user

base_ec2_public:
  provider: my-ec2-us-east-public-ips
  image: ami-e565ba8c
  size: t2.micro
  ssh_username: ec2-user

base_ec2_db:
  provider: my-ec2-us-east-public-ips
  image: ami-e565ba8c
  size: m1.xlarge
  ssh_username: ec2-user
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdi, tags: {"Environment": "production"} }
  # optionally add tags to profile:
  tag: {'Environment': 'production', 'Role': 'database'}
  # force grains to sync after install
  sync_after_install: grains

base_ec2_vpc:
  provider: my-ec2-us-east-public-ips
  image: ami-a73264ce
  size: m1.xlarge
  ssh_username: ec2-user
  script: /etc/salt/cloud.deploy.d/user_data.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      # auto assign public ip (not EIP)
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af413
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  tag: {'Environment': 'production', 'Role': 'database'}
  sync_after_install: grains
Here's some debug output of the command I'm trying to get working:
[root@salt:~] # salt-cloud -p base_ec2_public ops.example.com -l debug
[DEBUG ] Reading configuration from /etc/salt/cloud
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: salt.example.com
[DEBUG ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/aws.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/aws.conf
[DEBUG ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/aws_profiles.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/aws_profiles.conf
[DEBUG ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO ] salt-cloud starting
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Using AWS endpoint: ec2.us-east-1.amazonaws.com
[DEBUG ] AWS Request: https://ec2.us-east-1.amazonaws.com/?Action=DescribeInstances&Version=2014-10-01
[DEBUG ] AWS Response Status Code: 401
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '0f483305-6cb2-4c09-ae2f-ec804fd3beea'}
[DEBUG ] Failed to execute 'ec2.list_nodes()' while querying for running nodes: An error occurred while listing nodes: AWS was not able to validate the provided access credentials
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 2383, in run_parallel_map_providers_query
    cloud.clouds[data['fun']]()
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3496, in list_nodes
    nodes = list_nodes_full(get_location())
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3346, in list_nodes_full
    return _list_nodes_full(location)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3436, in _list_nodes_full
    instances['error']['Errors']['Error']['Message']
SaltCloudSystemExit: An error occurred while listing nodes: AWS was not able to validate the provided access credentials
[DEBUG ] Generating minion keys for 'ops.jokefire.com'
[DEBUG ] LazyLoaded cloud.fire_event
[DEBUG ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event - data = {'profile': 'base_ec2_public', 'event': 'starting create', '_stamp': '2016-09-13T19:24:13.555913', 'name': 'ops.jokefire.com', 'provider': 'my-ec2-us-east-public-ips:ec2'}
[INFO ] Creating Cloud VM ops.jokefire.com in us-east-1
[DEBUG ] Using AWS endpoint: ec2.us-east-1.amazonaws.com
[DEBUG ] AWS Request: https://ec2.us-east-1.amazonaws.com/?Action=DescribeAvailabilityZones&Filter.0.Name=region-name&Filter.0.Value.0=us-east-1&Version=2014-10-01
[DEBUG ] AWS Response Status Code: 401
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': 'e9912cf2-2e9b-496f-b607-4b9bae8b8938'}
[ERROR ] There was a profile error: string indices must be integers, not str
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1454, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1284, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 2512, in create
    data, vm_ = request_instance(vm_, location)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1742, in request_instance
    az_ = get_availability_zone(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1094, in get_availability_zone
    zones = _list_availability_zones(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1242, in _list_availability_zones
    ret[zone['zoneName']] = zone['zoneState']
TypeError: string indices must be integers, not str
Can someone take a stab and let me know why I'm getting auth failures? The redacted AWS keys were taken straight from the AWS interface and copied into the cloud.providers file.

It seems the EC2 credentials are not being accepted. You may need to check the key/ID pair of the EC2 credentials, and their policy.
For the credentials, replace the "REDACTED" strings with your real key/ID.
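One quick way to test whether the key pair itself is valid, independent of salt-cloud, is to call the EC2 API directly with the AWS CLI (a sketch, assuming the CLI is installed; substitute the same id/key values from the provider config):
export AWS_ACCESS_KEY_ID="REDACTED"
export AWS_SECRET_ACCESS_KEY="REDACTED"
# Confirms the keys are recognized by AWS at all
aws sts get-caller-identity
# Confirms they can make the same EC2 call salt-cloud is failing on
aws ec2 describe-instances --region us-east-1
If these fail with the same AuthFailure, the problem is the credentials themselves (inactive key, stray whitespace from the copy/paste, or an account-level issue) rather than the Salt configuration.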

Related

AWS role attached to EC2 throws error when used for statSd

I have used the following in my config file for statsd.
It worked fine when provided with an access ID and access key, but iamRole fails. Any insight would be helpful.
{
  backends: [ "aws-cloudwatch-statsd-backend" ],
  cloudwatch:
  {
    iamRole: 'role_attached_to_EC2_with_CloudWatchAgentServerPolicy',
    region: 'US_EAST_1'
  }
}
I am being shown the following error:
node node_modules/statsd/stats.js localConfig.js
4 Jul 21:55:17 - [2783] reading config file: localConfig.js
4 Jul 21:55:17 - server is up INFO
/home/ubuntu/webapp-backend/node_modules/awssum/lib/amazon/amazon.js:67
        throw MARK + 'accessKeyID is required';
        ^
amazon: accessKeyID is required

How to stream Tomcat catalina.out logs to CloudWatch?

I want to stream Tomcat catalina.out logs to CloudWatch.
This is the config I follow:
https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/configuration-files/aws-provided/instance-configuration/logs-streamtocloudwatch-linux.config
But I don't see catalina.out in the CloudWatch console.
This is the error I have in awslogs.log. How can I solve it?
2020-05-22 18:15:55,450 - cwlogs.push.batch - WARNING - 3374 - Thread-29 - CreateLogGroup failed with exception An error occurred (AccessDeniedException) when calling the CreateLogGroup operation: User: arn:aws:sts::610232524349:assumed-role/aws-elasticbeanstalk-ec2-role/i-099300c0bfd4b6a28 is not authorized to perform: logs:CreateLogGroup on resource: arn:aws:logs:eu-central-1:610232524349:log-group:/aws/elasticbeanstalk/************/var/log/tomcat8/catalina.out:log-stream:
With the sample provided you are not exporting catalina.out; you are streaming the following files to CloudWatch:
/var/log/dmesg
/var/log/messages
To stream catalina.out you have to add the file to the configuration, with the location of the log, at the end of the content section (lines 61-71 of the sample provided).
It should be something like this, replacing /path/to/catalina.log with the actual path to the log:
[/path/to/catalina.log]
log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/path/to/catalina.log"]]}`
log_stream_name = {instance_id}
file = /path/to/catalina.log
Steps to publish tomcat logs (catalina.out) to the CloudWatch stream
Create a new policy for EC2 to use AWS CloudWatch, providing access to create log groups, log streams and publish logs
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "logs:*",
      "Resource": "*"
    }
  ]
}
Attach the policy to newly created or existing IAM role of the EC2 instance
Connect to the instance using SSH, using the PEM/PPK file.
Locate the AWS CloudWatch Logs Agent configuration file
[ec2-user@elasticbeanstalk]$ sudo su
[root@elasticbeanstalk]# find / -name "*awslogs.conf"
/etc/awslogs/awslogs.conf
Edit the configuration file and add the entry for a log stream for tomcat logs. I have used catalina.out
[ec2-user@elasticbeanstalk]$ cat /etc/awslogs/awslogs.conf
[general]
state_file = /var/lib/awslogs/agent-state

[tomcatLogs]
log_group_name = tomcatLogs
log_stream_name = catalinaLogs
time_zone = LOCAL
file = /[your-path-to]/tomcat8/catalina.out
[ec2-user@elasticbeanstalk]$
Restart the awslogs service
[ec2-user@elasticbeanstalk]$ sudo service awslogs restart
Revisit the CloudWatch log groups page, where you can see the new group is created with the name “tomcatLogs” and a log stream with the name “catalinaLogs”
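If the group doesn't show up, a quick way to check from the command line (a sketch, assuming the AWS CLI is installed and configured with the right region):
# Should list the tomcatLogs group once the agent has shipped at least one event
aws logs describe-log-groups --log-group-name-prefix tomcatLogs
# And the catalinaLogs stream inside it
aws logs describe-log-streams --log-group-name tomcatLogs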
I feel your pain! I've detailed in a new Medium blog how this all works, with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use, though the article explains how to determine the right folder/file(s) to stream.
packages:
  yum:
    awslogs: []

option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90

files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat/localhost.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/localhost.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/localhost.*

      [/var/log/tomcat/catalina.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/catalina.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/catalina.*

      [/var/log/tomcat/localhost_access_log.txt]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/access_log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/access_log.*

commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
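For context, Elastic Beanstalk only picks up a file like this if it is deployed inside the .ebextensions directory of your application source bundle. A sketch of the layout and deploy, assuming the EB CLI and a hypothetical file name:
mkdir -p .ebextensions
cp logs.config .ebextensions/cloudwatch-logs.config   # any name works, but the .config extension is required
eb deploy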

refresh EC2 Instance Tags failed: SharedCredsLoad

I have been struggling to get basic metrics from the CloudWatch agent. I've been getting this error, and I have no idea what it means, nor can I find resources online that say much about it:
refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
I followed the instructions here and have read through the documentation carefully. Again, the goal is just to read in some basic metrics from my EC2 instance to CloudWatch. Here are the steps I have followed:
Followed instructions here "To create the IAM role necessary to run the CloudWatch agent on EC2 instances" and then assigned it to my instance.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
ami id is ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190628 (ami-0cfee17793b08a293)
Install the .deb with command sudo dpkg --install --skip-same-version ./amazon-cloudwatch-agent.deb
note: --install and --skip-same-version are just -i -E as done in the docs
generated a config.json with the wizard, located here /opt/aws/amazon-cloudwatch-agent/bin/config.json. I pasted the contents under the error message below.
modify the /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml file to point to new credentials of cwagent (since not using root user) with the following:
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# tail -n 4 common-config.toml
#### BEGIN ANSIBLE MANAGED BLOCK ####
[credentials]
shared_credential_file = "/home/cwagent/.aws/credentials"
#### END ANSIBLE MANAGED BLOCK ####
fetch config and start agent with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
Here's the error I'm seeing in the logs now, which I assume is why I can't see any metrics:
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/logs# tail -n 20 amazon-cloudwatch-agent.log
2019/10/29 22:41:08 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json ...
2019/10/29 22:41:08 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json ...
2019/10/29 22:41:08 I! Detected runAsUser: cwagent
2019/10/29 22:41:08 I! Change ownership to cwagent:cwagent
2019/10/29 22:41:08 I! Set HOME: /home/cwagent
2019-10-29T22:41:08Z I! will use file based credentials provider
2019-10-29T22:41:08Z I! cloudwatch: get unique roll up list []
2019-10-29T22:41:08Z I! Starting AmazonCloudWatchAgent (version 1.230621.0)
2019-10-29T22:41:08Z I! Loaded outputs: cloudwatch
2019-10-29T22:41:08Z I! cloudwatch: publish with ForceFlushInterval: 1m0s, Publish Jitter: 37s
2019-10-29T22:41:08Z I! Loaded inputs: disk mem
2019-10-29T22:41:08Z I! Tags enabled: host=ip-172-31-71-5
2019-10-29T22:41:08Z I! Agent Config: Interval:10s, Quiet:false, Hostname:"ip-172-31-71-5", Flush Interval:1s
2019-10-29T22:41:08Z I! will use file based credentials provider
2019-10-29T22:41:08Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:42:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:43:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:46:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:49:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:52:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
and the config.json I used
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cat config.json
{
  "agent": {
    "metrics_collection_interval": 10,
    "run_as_user": "cwagent"
  },
  "metrics": {
    "namespace": "TestNamespace",
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
EDITS
I got it working after I removed the credentials modification:
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# tail -n 4 common-config.toml
#### BEGIN ANSIBLE MANAGED BLOCK ####
#[credentials]
#shared_credential_file = "/home/cwagent/.aws/credentials"
#### END ANSIBLE MANAGED BLOCK ####
and after I went ahead and copied the config file to the default location it checks (even though the docs say you can pass the file name as I did).
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cp config.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cd ../etc/
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# chown cwagent:cwagent amazon-cloudwatch-agent.json
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# ls -l
total 16
drwxr-xr-x 2 cwagent cwagent 4096 Oct 30 22:05 amazon-cloudwatch-agent.d
-rwxr-xr-x 1 cwagent cwagent 611 Oct 30 22:11 amazon-cloudwatch-agent.json
-rw-rw-r-- 1 cwagent cwagent 1144 Oct 30 22:05 amazon-cloudwatch-agent.toml
-rw-r--r-- 1 cwagent cwagent 1073 Oct 30 22:05 common-config.toml
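If the agent still misbehaves after a change like this, the bundled control script can confirm whether it is actually running (same install path as the .deb above):
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status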
The error appears to be related to accessing tags that are associated with Amazon EC2 instances.
The installation instructions you linked suggest creating an IAM Role with the CloudWatchAgentServerPolicy policy attached. This policy includes permission to describe tags:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
    }
  ]
}
It would appear that the CloudWatch Agent on that server is not receiving such permissions, and is therefore unable to list the tags.
Therefore:
Confirm that an IAM Role has been created and that it includes the CloudWatchAgentServerPolicy policy
Confirm that this IAM Role has been assigned to the Amazon EC2 instance that is running the CloudWatch Agent
If it is still failing, check whether there are any credentials stored locally on the instance that the Agent could be using instead of the IAM Role assigned to the instance
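A sketch of how to check the last two points from the instance itself (the metadata endpoint is standard on EC2; the credential paths are the locations this question's setup used):
# Prints the name of the IAM role attached to the instance; a 404 means no role is attached
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Look for local credential files the agent might be using instead of the role
ls -l /root/.aws/credentials /home/cwagent/.aws/credentials 2>/dev/null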

Elastic Beanstalk not streaming to CloudWatch

I've had an Elastic Beanstalk instance streaming my logs to Cloud Watch for about a year. This week the logs stopped streaming. This may have been because I 'rebuilt' the environment in Beanstalk. No configuration changes were made at the same time.
I've double checked that my Beanstalk role has the correct permissions in IAM (it has CloudWatchFullAccess).
I also tried deleting all of my existing group logs. I then went into the Beanstalk 'Instance log streaming to CloudWatch Logs' area, changed my log retention period and restarted the App Server. Sure enough my log groups were recreated (with the new retention period), so I'm pretty sure the permissions look OK. Despite this, no log messages are appearing in the log groups.
I have requested the recent logs though Beanstalk and I can see messages are being written to the logs on the App Server OK.
My platform is Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.6.2
I'm not sure where to go from here. I have no error messages to work off, or any good ideas for what to check next.
Edit: Here is my custom config for CloudWatch, as defined here
files:
  "/etc/awslogs/config/company_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/company.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/company.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/company.*
It's probably because it's not allowed to. Make sure the role used by the EC2 instances is allowed to access:
logs:CreateLogGroup
logs:CreateLogStream
logs:GetLogEvents
logs:PutLogEvents
logs:DescribeLogGroups
logs:DescribeLogStreams
logs:PutRetentionPolicy
With Elastic Beanstalk the IAM role used by the EC2 instances is probably aws-elasticbeanstalk-ec2-role. Give that role access to a new policy: ec2-cloudwatch-logs-stream (if someone knows a better name let me know) using JSON (as suggested here):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:GetLogEvents",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutRetentionPolicy"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
And it should work after restarting the awslogs service with sudo service awslogs restart. If not, check the logs; you can find them at /var/log/awslogs.log.
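Putting that together, a minimal restart-and-verify loop on the instance might look like:
sudo service awslogs restart
# Watch for AccessDenied or configuration errors as the agent starts shipping
sudo tail -f /var/log/awslogs.log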
You might have upgraded your AWS AMI platform; the Tomcat location would therefore be different (e.g. /var/log/tomcat instead of /var/log/tomcat8). See the check after the excerpt below.
I've detailed in a new Medium blog how this all works, with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use, though the article explains how to determine the right folder/file(s) to stream.
packages:
  yum:
    awslogs: []

option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90

files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat/localhost.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/localhost.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/localhost.*

      [/var/log/tomcat/catalina.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/catalina.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/catalina.*

      [/var/log/tomcat/localhost_access_log.txt]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/access_log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/access_log.*

commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
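To confirm which Tomcat log directory your platform actually uses before committing the excerpt above (per the note about /var/log/tomcat vs /var/log/tomcat8), a quick check on the instance:
# Lists whichever tomcat log directories exist on this AMI
ls -d /var/log/tomcat*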

EC2ResponseError: 401 Unauthorized using Saltstack boto_vpc module

I'm trying to create a vpc using Saltstack and boto_vpc module. This is my state:
vpc_create:
  module.run:
    - name: boto_vpc.create
    - cidr_block: '10.0.0.0/24'
    - vpc_name: 'myVpc'
    - region: 'us-east-1'
    - key: 'ADJJDNEJFJGNFKFKFKIW'
    - keyid: 'SJDJNFNEJUWLLLCLCLENNRBFLGSLSLKEMFUHE'
The keys that I'm using are correct, but I get this error:
[INFO ] Running state [boto_vpc.create] at time 14:25:35.839797
[INFO ] Executing state module.run for boto_vpc.create
[ERROR ] EC2ResponseError: 401 Unauthorized
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>7cb74939-afda-4722-a31e-2855c5cbe16b</RequestID></Response>
[ERROR ] {'ret': False}
[INFO ] Completed state [boto_vpc.create] at time 14:25:35.882840
[DEBUG ] File /var/cache/salt/minion/accumulator/49944656 does not exist, no need to cleanup.
[DEBUG ] LazyLoaded highstate.output
[DEBUG ] LazyLoaded nested.output
local:
----------
          ID: vpc_create
    Function: module.run
        Name: boto_vpc.create
      Result: False
     Comment: Module function boto_vpc.create executed
     Started: 14:25:35.839797
    Duration: 43.043 ms
     Changes:
              ----------
              ret:
                  False
Saltstack version:
Salt: 2015.5.0
Python: 2.6.9 (unknown, Apr 1 2015, 18:16:00)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.4.6
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 3.2.5
Mako: Not Installed
I tried with aws ec2 create-vpc --cidr-block 10.0.0.0/16 and it works fine!
From reading the salt reference, it looks like keyid represents the access key and key represents the secret key. Have you accidentally transposed them?
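If they are transposed, swapping the two values should fix it; a sketch using the question's placeholder values:
vpc_create:
  module.run:
    - name: boto_vpc.create
    - cidr_block: '10.0.0.0/24'
    - vpc_name: 'myVpc'
    - region: 'us-east-1'
    # keyid is the AWS access key ID (the shorter value)
    - keyid: 'ADJJDNEJFJGNFKFKFKIW'
    # key is the AWS secret access key (the longer value)
    - key: 'SJDJNFNEJUWLLLCLCLENNRBFLGSLSLKEMFUHE'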