Amazon MQ unable to publish logs to CloudWatch - amazon-web-services

I've tested a variation with wide policy access, and got to the same point: the log group is created, but the log stream isn't.
I followed https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-configuring-cloudwatch-logs.html and the expected result is to see those messages in CloudWatch, but nothing is coming in.
The goal is to have audit and general MQ logs in CloudWatch.
Has anyone managed to stream MQ logs to CloudWatch? How could I debug this further?

I managed to create the Amazon MQ broker with logging enabled, publishing log messages to CloudWatch, using version 1.43.2 of Terraform's AWS provider -- my project is locked to an older provider version, so if you're using a newer one you should be fine.
https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md#1430-november-07-2018
This is the policy that I didn't get right the first time, and that MQ needs in order to post to CloudWatch:
data "aws_iam_policy_document" "mq-log-publishing-policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents",
]
resources = ["arn:aws:logs:*:*:log-group:/aws/amazonmq/*"]
principals {
identifiers = ["mq.amazonaws.com"]
type = "Service"
}
}
}
resource "aws_cloudwatch_log_resource_policy" "mq-log-publishing-policy" {
policy_document = "${data.aws_iam_policy_document.mq-log-publishing-policy.json}"
policy_name = "mq-log-publishing-policy"
}
Make sure this policy has been applied correctly, otherwise nothing will show up in CloudWatch. I checked using the AWS CLI:
aws --profile my-testing-profile-name --region my-profile-region logs describe-resource-policies
and you should see the policy in the output.
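If the policy is there but the log streams still don't appear, it can also help to look at the log groups and streams directly. A small sketch (the log group path is only an example; substitute your broker's actual group name):

aws --profile my-testing-profile-name --region my-profile-region logs describe-log-groups \
  --log-group-name-prefix /aws/amazonmq/
aws --profile my-testing-profile-name --region my-profile-region logs describe-log-streams \
  --log-group-name /aws/amazonmq/broker/<your-broker-id>/general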

Or, if you're using the AWS CLI, you can try:
aws --region [your-region] logs put-resource-policy --policy-name AmazonMQ-logs \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "mq.amazonaws.com"
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"
    }
  ]
}'

Install the AWS CLI for Windows and configure your credentials: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
Create a JSON file in "C:\Users\YOUR-USER\" containing your policy, for example C:\Users\YOUR-USER\policy.json. You can simply copy the one below and paste it into your .json file:
{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"Service": "mq.amazonaws.com"},"Action":["logs:CreateLogStream","logs:PutLogEvents"],"Resource" : "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"}]}
Open your CMD and simply type:
aws --region eu-central-1 logs put-resource-policy --policy-name amazonmq_to_cloudwatch --policy-document file://policy.json
Well done! This will create an AWS resource policy, which sometimes cannot be created in the IAM console.

Related

Multiple AWS IoT Devices

I am creating an application where I need to connect multiple devices to AWS IoT, but I noticed that only the last device connected stays connected. I found that I was using the same certificate for all devices, and after I created a certificate for each one the problem was solved. However, there will be many devices, and it would be unproductive to keep registering them one by one. Is there a solution that lets multiple devices stay connected to AWS IoT simultaneously without having to register the certificates one by one?
This mainly comes from: https://iot-device-management.workshop.aws/en/provisioning-options/bulk-provisioning.html.
There are other options (just-in-time provisioning, etc.) described at the link above.
Create a bulk thing registration task
To create a bulk registration task, a role is required that grants permission to access the input file. This role has already been created by CloudFormation, and its name was copied during the setup of the workshop to the shell variable $ARN_IOT_PROVISIONING_ROLE.
aws iot start-thing-registration-task \
--template-body file://~/templateBody.json \
--input-file-bucket $S3_BUCKET \
--input-file-key bulk.json --role-arn $ARN_IOT_PROVISIONING_ROLE
When successful the command returns a taskId. The output looks similar to:
{
  "taskId": "aaaf0a94-b5a9-4bd6-a1f5-cf188322a111"
}
Provisioning templates
https://docs.aws.amazon.com/iot/latest/developerguide/provision-template.html
A provisioning template is a JSON document that uses parameters to describe the resources your device must use to interact with AWS IoT. A template contains two sections: Parameters and Resources. There are two types of provisioning templates in AWS IoT. One is used for just-in-time provisioning (JITP) and bulk registration and the second is used for fleet provisioning.
Script to create a provisioning template
https://github.com/aws-samples/aws-iot-device-management-workshop/blob/master/bin/mk-bulk.sh
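As an illustration only (not the exact template that script generates), a minimal bulk-registration template with the two sections mentioned above could be written out like this; the parameter names ThingName and SerialNumber are just examples:

cat > ~/templateBody.json << 'EOL'
{
  "Parameters": {
    "ThingName": { "Type": "String" },
    "SerialNumber": { "Type": "String" }
  },
  "Resources": {
    "thing": {
      "Type": "AWS::IoT::Thing",
      "Properties": {
        "ThingName": { "Ref": "ThingName" },
        "AttributePayload": { "serialNumber": { "Ref": "SerialNumber" } }
      }
    }
  }
}
EOL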
Create bucket
# Outside us-east-1, CreateBucket needs an explicit LocationConstraint
aws s3api create-bucket \
  --bucket bulk-iot-test \
  --region ap-northeast-1 \
  --create-bucket-configuration LocationConstraint=ap-northeast-1
Upload bulk.json (if using cloudshell) and copy to S3
Upload bulk.json via the UI
aws s3 cp bulk.json s3://bulk-iot-test
aws s3 ls s3://bulk-iot-test
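For reference, bulk.json is newline-delimited JSON: one object per line, one line per thing, with keys matching the Parameters of your template. Illustrative content matching the example template above:

cat > bulk.json << 'EOL'
{"ThingName": "device-0001", "SerialNumber": "0001"}
{"ThingName": "device-0002", "SerialNumber": "0002"}
EOL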
Create the role to register the things
From the CloudFormation template… This is incomplete and needs further refinement.
"DMWSIoTServiceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": [ "iot.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/service-role/AWSIoTThingsRegistration",
"arn:aws:iam::aws:policy/service-role/AWSIoTLogging",
"arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
],
"Path": "/"
}
},
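If you are not working from CloudFormation, a rough CLI equivalent would be to create the role with an IoT trust policy and attach the managed policies; the role name below is made up:

# Create a role that AWS IoT can assume
aws iam create-role --role-name iot-bulk-registration-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "iot.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'
# Attach the managed policies the workshop role uses
aws iam attach-role-policy --role-name iot-bulk-registration-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSIoTThingsRegistration
aws iam attach-role-policy --role-name iot-bulk-registration-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess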
Start the thing registration task
aws iot start-thing-registration-task \
--template-body file://~/templateBody.json \
--input-file-bucket bulk-iot-test \
--input-file-key bulk.json --role-arn "arn:aws:sts::ACCOUNTID:assumed-role/ROLE/USER#DOMAIN.com"

AWS IAM policies for pushing logs to S3

I have an Ubuntu EC2 instance with the CloudWatch agent running. The agent is able to push the logs to CloudWatch as expected, but I am unable to export the logs to S3.
The instance role has SSMManagedInstanceCore and CloudWatchAgentServerPolicy attached, as described in the documentation.
At this point, I am not sure what policy needs to be assigned.
I also added a policy to allow writing to the S3 bucket.
All of this is being done in Terraform.
Can someone help me solve this, please?
Thanks.
You can add an inline policy to your instance role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
Depending on the bucket setup, other permissions may be required, e.g. for KMS encryption.
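If you want to apply it from the command line rather than the console, one way (a sketch; the role and policy names are examples, and in Terraform the equivalent is an aws_iam_role_policy resource) is:

# Attach the statement above as an inline policy on the instance role
aws iam put-role-policy \
  --role-name my-ec2-instance-role \
  --policy-name allow-s3-put-logs \
  --policy-document file://s3-put-policy.json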
UPDATE
If you want to automatically export your logs from CloudWatch Logs to S3, you have to set up a subscription filter with Amazon Kinesis Data Firehose. This is fully independent of your instance role and of the instance itself.
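A rough sketch of that setup with the CLI, assuming the Firehose delivery stream and a role that CloudWatch Logs can assume to write to it already exist (all names and ARNs below are placeholders):

# Stream everything from the log group to an existing Firehose delivery stream
aws logs put-subscription-filter \
  --log-group-name my-ec2-log-group \
  --filter-name export-to-s3 \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:us-east-1:123456789012:deliverystream/my-delivery-stream \
  --role-arn arn:aws:iam::123456789012:role/cwlogs-to-firehose-role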

Using aws cli to add send message permission for SQS

I'm trying to set up an existing SQS Queue as a subscriber to an SNS topic. In the AWS console in the permissions tab, I can set the policy document to
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:7670234568007:stdsourcequeue/SQSDefaultPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:7670234568007:stdsourcequeue",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:us-east-1:7670234568007:new_posts"
        }
      }
    }
  ]
}
How can I do this using the AWS CLI?
This is a practical example of how to do it using set-queue-attributes:
cat > /tmp/sqs_policy << EOL
{
"Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"arn:aws:sqs:us-east-1:7670234568007:stdsourcequeue\/SQSDefaultPolicy\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"sqs:SendMessage\",\"Resource\":\"arn:aws:sqs:us-east-1:7670234568007:stdsourcequeue\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"arn:aws:sns:us-east-1:7670234568007:new_posts\"}}}]}"
}
EOL
aws sqs set-queue-attributes \
--queue-url https://<your-queue-url> \
--attributes file:///tmp/sqs_policy
Above, I create the /tmp/sqs_policy file with the policy, which is required for the set-queue-attributes command.
The policy must be stringified from JSON (escaped into a single string) before it can be used in the CLI command.
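You can confirm the policy was applied with get-queue-attributes:

aws sqs get-queue-attributes \
  --queue-url https://<your-queue-url> \
  --attribute-names Policy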
Queue permissions are generally managed via the add-permission CLI command for SQS.
However, as you're using your own custom policy, the AWS documentation states the following:
AddPermission generates a policy for you. You can use SetQueueAttributes to upload your policy.
This is accessible via the set-queue-attributes command.
Your policy will need to be converted into a JSON file, under the key Policy.
As a word of caution, doing this will replace the policy attached to your SQS queue, so make sure to validate it beforehand.
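Once the queue policy is in place, the subscription itself can also be created from the CLI; the topic and queue ARNs below are the example values from the question:

# Subscribe the queue to the SNS topic (the policy above is what allows SNS to deliver)
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:7670234568007:new_posts \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:7670234568007:stdsourcequeue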

AWS Kinesis agent not authorized to perform: firehose:PutRecordBatch

I have set up an Amazon Linux AMI with the Kinesis Agent installed and configured to send the logs over to Firehose. The EC2 instance has an IAM role attached with the KinesisFirehoseFullAccess permission. However, I am receiving an inadequate-permissions error while the data is being sent over.
I know that I have provided the highest level of IAM Kinesis permissions, but I am facing a blank wall now. I will, of course, trim the permissions down later, but I first need to get this proof of concept working.
From AWS Firehose, I did a test send to the S3 bucket. This worked OK.
I created logs via the Fake Log Generator and then ran the service. The service is up and running.
User: arn:aws:sts::1245678012:assumed-role/FirstTech-EC2-KinesisFireHose/i-0bdf3adc7a4d97afa is not authorized to perform: firehose:PutRecordBatch on resource: arn:aws:firehose:ap-southeast-1:1245678012:deliverystream/firsttech-ingestion-weblogs (Service: AmazonKinesisFirehose; Status Code: 400; Error Code: AccessDeniedException;
localhost (Agent.MetricsEmitter RUNNING) com.amazon.kinesis.streaming.agent. Agent: Progress: 900 records parsed (220430 bytes), and 0 records sent successfully to destinations. Uptime: 840058ms
I got this working for the AWS Kinesis agent sending data to a Kinesis data stream, in case anyone else here has issues.
I had the same issue after attaching the correct IAM role and policy permissions to an EC2 instance that needed to send records to a Kinesis data stream.
I just removed the references to AWS Firehose in the config file. You do not need to use keys embedded in the EC2 instance itself; the IAM role is sufficient.
Make sure you create the Kinesis Firehose delivery stream from the console, to ensure that all the required IAM access is set up by default.
Next, on your instance, make sure your agent.json is correct:
{
  "cloudwatch.emitMetrics": true,
  "flows": [
    {
      "filePattern": "/var/log/pathtolog",
      "deliveryStream": "kinesisstreamname"
    }
  ]
}
Make sure the EC2 instance has the necessary permissions to send the data to Kinesis:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:us-east-1:accountid:deliverystream/deliverstreamname"
    }
  ]
}
Also make sure your Kinesis agent can collect data from any directory on the instance - the easiest way to do this is by adding the agent user to the sudo group:
sudo usermod -aG sudo aws-kinesis-agent-user
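After changing agent.json or the instance role, restart the agent and watch its log to confirm records are going out (the service name and log path below are the defaults on Amazon Linux, stated here as an assumption):

sudo service aws-kinesis-agent restart
# Successful batches show up as "records sent successfully to destinations"
tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log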

Why does an AWS bucket policy using NotPrincipal with a specific user not work with the AWS CLI when no profile is specified?

I have this AWS S3 Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "OnlyS3AdminCanPerformOperationPolicy",
"Effect": "Deny",
"NotPrincipal": {
"AWS": "arn:aws:iam::<account-id>:user/s3-admin"
},
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::my-bucket-name",
"arn:aws:s3:::my-bucket-name/*"
]
}
]
}
Side note: the IAM user s3-admin has the AdministratorAccess policy attached.
At first I thought the bucket policy didn't work. It was probably because of the way I tested the operation.
aws s3 rm s3://my-bucket-name/file.csv
Caused:
delete failed: s3://test-cb-delete/buckets.csv An error occurred (AccessDenied)
but if I used --profile default, as in
aws s3 --profile default rm s3://my-bucket-name/file.csv
it worked.
I verified that I have only one set of credentials configured for the AWS client. Also, I am able to list the contents of the bucket even when I don't use the --profile default argument.
Why is the AWS CLI behaving this way?
Take a look at the credential provider precedence chain and use that to determine what is different about the two sets of credentials you're authenticating with.
STS has a handy API that tells you who you are. It's similar to the UNIX-like command whoami, except for AWS principals. To see which credential is which, do this:
aws sts get-caller-identity
aws sts --profile default get-caller-identity
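Both calls return your account, user id, and ARN; the Arn field is usually the quickest thing to compare. To print just the ARNs:

aws sts get-caller-identity --query Arn --output text
aws sts --profile default get-caller-identity --query Arn --output text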