AWS IAM Policies for pushing logs to S3

I have an Ubuntu EC2 instance with the CloudWatch agent running. The agent is able to push logs to CloudWatch as expected, but I am unable to export the logs to S3.
The instance role has the AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy policies attached, as described in the documentation.
At this point, I am not sure what policy needs to be assigned.
I also added a log policy to write to the S3 bucket.
All of this is being done in Terraform.
Can someone help me solve this, please?
Thanks.

You can add an inline policy to your instance role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::/<your-bucket-name>/*"
}
]
}
Depending on the bucket setup, other permissions may be required, e.g. for KMS encryption.
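For example, if the bucket uses SSE-KMS encryption, the role typically also needs permission to use the key. A hedged sketch of the extra statement (the key ARN is a placeholder, and kms:Decrypt may additionally be needed for multipart uploads):
{
"Sid": "AllowKmsForS3Writes",
"Effect": "Allow",
"Action": "kms:GenerateDataKey",
"Resource": "arn:aws:kms:us-east-1:<account-id>:key/<key-id>"
}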
UPDATE
If you want to automatically export your logs from CloudWatch Logs to S3, you have to set up a subscription filter with Amazon Kinesis Data Firehose. This is fully independent of your instance role and the instance itself.
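For illustration, a minimal boto3 sketch of creating such a subscription filter; the log group name, delivery stream ARN, and role ARN are placeholder assumptions, and the role must allow CloudWatch Logs to call firehose:PutRecord on the stream:
import boto3

logs = boto3.client('logs', region_name='us-east-1')
logs.put_subscription_filter(
    logGroupName='/my/log-group',  # placeholder log group
    filterName='to-firehose',
    filterPattern='',  # empty pattern forwards every event
    destinationArn='arn:aws:firehose:us-east-1:111122223333:deliverystream/my-stream',
    roleArn='arn:aws:iam::111122223333:role/CWLtoFirehoseRole'
)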


SageMaker Studio domain creation fails due to KMS permissions

Question
Please help me understand the cause of and solution to the problem.
Problem
SageMaker Studio domain creation fails due to KMS permissions. The IAM role specified to SageMaker, arn:aws:iam::316725000538:role/SageMaker, has the KMS permissions required, as specified in https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html.
Domain creation failed
Unable to create Amazon EFS for domain 'd-1dq5c9rpkswy' because you don't have permissions to use the KMS key 'arn:aws:kms:us-east-2:316725000538:key/1e2dbf9d-daa0-408d-a290-1633b615c54f'. See https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html for required permissions for CreateDomain action.
The error message points to the required IAM permissions.
IAM Permission for CreateDomain action
The reference Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference lists the IAM permissions required for the CreateDomain action, and those permissions have been attached to the IAM role.
I had the same problem when trying to use the aws/s3 key. I created my own Customer Managed Key (CMK) and it worked just fine.
I think it's related to the AWS-assigned key policy on the aws/s3 key, specifically this part:
"Condition": {
"StringEquals": {
"kms:CallerAccount": "120455730103",
"kms:ViaService": "s3.us-east-1.amazonaws.com"
}
I don't think SageMaker meets the kms:ViaService condition.
Apart from AmazonSageMakerFullAccess, you need to create a new policy and attach it to your user; a boto3 sketch of creating and attaching it follows the JSON.
Create a new policy with the JSON below:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"sagemaker:CreateUserProfile",
"sagemaker:CreateModel",
"sagemaker:CreateLabelingJob",
"sagemaker:CreateFlowDefinition",
"sagemaker:CreateDomain",
"sagemaker:CreateAutoMLJob",
"sagemaker:CreateProcessingJob",
"sagemaker:CreateTrainingJob",
"sagemaker:CreateNotebookInstance",
"sagemaker:CreateCompilationJob",
"sagemaker:CreateImage",
"sagemaker:CreateMonitoringSchedule",
"sagemaker:RenderUiTemplate",
"sagemaker:UpdateImage",
"sagemaker:CreateHyperParameterTuningJob"
],
"Resource": "*"
}
]
}
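If you prefer to script this, a hedged boto3 sketch of creating the policy and attaching it to a user; the policy name and user name are placeholder assumptions, and the document is abbreviated to the failing action (use the full action list above in practice):
import json
import boto3

iam = boto3.client('iam')

policy_document = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'VisualEditor0',
            'Effect': 'Allow',
            # Abbreviated; include the full action list from the JSON above.
            'Action': ['sagemaker:CreateDomain'],
            'Resource': '*'
        }
    ]
}

created = iam.create_policy(
    PolicyName='SageMakerCreateActions',  # placeholder name
    PolicyDocument=json.dumps(policy_document)
)
iam.attach_user_policy(
    UserName='my-user',  # placeholder user
    PolicyArn=created['Policy']['Arn']
)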

How to make an IAM Role for a Django Application?

I want to make an IAM Role for my Django app. How can I do this, both from the AWS side and the Django side? Also, I have heard that this is best practice, but I don't really understand why it is important. Could someone explain? Thanks!
Update for Marcin:
session = boto3.Session(
aws_access_key_id=my_key,
aws_secret_access_key=my_secret
)
s3 = session.resource('s3')
Update 2 for Marcin:
client = boto3.client(
'ses',
region_name='us-west-2',
aws_access_key_id=my_key,
aws_secret_access_key=my_secret
)
client.send_raw_email(RawMessage=raw_message)
The default instance role that EB is using is aws-elasticbeanstalk-ec2-role. One way to customize it is by adding inline policies to it in the IAM console.
Since you require S3, SES and SNS, you can add permissions for them in the inline policy. It's not clear which actions you require (read-only for S3? publishing messages for SNS only?), or whether you have specific resources in mind (e.g. only one given bucket or a single SNS topic), so you can start by adding full access to the services. But please note that giving full access is bad practice and does not follow the grant-least-privilege rule.
Nevertheless, an example of an inline policy with full access to S3, SES and SNS is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"sns:*",
"ses:*",
"s3:*"
],
"Resource": "*"
}
]
}
The following should be enough:
s3 = boto3.resource('s3')
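The same applies to the SES client from the question; a sketch, assuming the instance role also carries the needed SES permissions:
import boto3

# With an instance role attached, boto3 resolves credentials automatically;
# no access keys are needed in code.
client = boto3.client('ses', region_name='us-west-2')

# Minimal raw message with headers; build yours as in the question.
raw_message = {'Data': 'From: sender@example.com\nTo: recipient@example.com\nSubject: test\n\nHello'}
client.send_raw_email(RawMessage=raw_message)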

AWS Kinesis agent not authorized to perform: firehose:PutRecordBatch

I have set up an Amazon Linux AMI with the Kinesis Agent installed and configured to send the logs over to Firehose. The EC2 instance has an IAM role attached with the KinesisFirehoseFullAccess permission. However, I am receiving the inadequate-permissions error below while the data is being sent over.
I know that I have provided the highest level of IAM Kinesis permissions, but I am facing a blank wall now. I will, of course, trim the permissions down later, but I first need to get this proof of concept working.
From the AWS Firehose console, I did a test send to the S3 bucket, which worked OK.
I created logs via the Fake Log Generator and then ran the service. The service is up and running.
User: arn:aws:sts::1245678012:assumed-role/FirstTech-EC2-KinesisFireHose/i-0bdf3adc7a4d97afa is not authorized to perform: firehose:PutRecordBatch on resource: arn:aws:firehose:ap-southeast-1:1245678012:deliverystream/firsttech-ingestion-weblogs (Service: AmazonKinesisFirehose; Status Code: 400; Error Code: AccessDeniedException;
localhost (Agent.MetricsEmitter RUNNING) com.amazon.kinesis.streaming.agent. Agent: Progress: 900 records parsed (220430 bytes), and 0 records sent successfully to destinations. Uptime: 840058ms
I got this working for the AWS Kinesis agent sending data to a Kinesis data stream, in case anyone else here has issues.
I had the same issues after attaching the correct IAM role and policy permissions to an EC2 instance that needed to send records to a Kinesis data stream.
I just removed the references to AWS Firehose in the config file. You do not need to use keys embedded in the EC2 instance itself; the IAM role is sufficient.
Make sure you create the Kinesis Firehose delivery stream from the console, to ensure that all required IAM access is instantiated by default.
Next, on your instance, ensure your agent.json is correct:
{
"cloudwatch.emitMetrics": true,
"flows": [
{
"filePattern": "/var/log/pathtolog",
"deliveryStream": "kinesisstreamname" }
]
}
Make sure the EC2 instance has the necessary permissions to send the data to Kinesis:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "cloudwatch:PutMetricData",
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"firehose:PutRecord",
"firehose:PutRecordBatch"
],
"Resource": "arn:aws:firehose:us-east-1:accountid:deliverystream/deliverstreamname"
}
]
}
Also make sure your Kinesis agent can collect data from any directory on the instance - the easiest way to do this is by adding the agent user to the sudo group:
sudo usermod -aG sudo aws-kinesis-agent-user
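To verify the instance role independently of the agent, a quick test sketch run from the instance itself; the region and delivery stream name are taken from the error message in the question:
import boto3

firehose = boto3.client('firehose', region_name='ap-southeast-1')
response = firehose.put_record_batch(
    DeliveryStreamName='firsttech-ingestion-weblogs',
    Records=[{'Data': b'test log line\n'}]
)
print(response['FailedPutCount'])  # 0 means the role can call PutRecordBatch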

AWS security group rules deployment (lambda->SQS)

On AWS we've implemented functionality where an AWS Lambda function pushes messages to an SQS queue.
However, during this implementation I had to manually grant permissions for the Lambda function to add messages to the particular queue, and this approach with manual clicks is not so good for prod deployment.
Any suggestions on how to automate the process of adding permissions between AWS services (mainly Lambda and SQS) and create a "good" deployment package for a prod environment?
Each Lambda function has an attached execution role, which you can specify permissions for in the IAM dashboard. If you give the Lambda function's role permission to push to an SQS queue, you're good to go. For example, attach this JSON as a policy on the role (adapted from the queue-policy examples at http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSExamples.html; note that an identity policy attached to a role takes no Principal element, unlike the queue policies in that reference):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Queue1_SendMessage",
"Effect": "Allow",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:us-east-1:444455556666:queue1"
}
]
}
You can use asterisks to give permission to multiple queues, like:
"Resource": "arn:aws:sqs:us-east-1:444455556666:production-*"
to give sqs:SendMessage permission to all queues whose names start with production-.
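For the automation part, a hedged boto3 sketch that attaches the statement above as an inline policy to the function's execution role; the role and policy names here are placeholder assumptions:
import json
import boto3

iam = boto3.client('iam')

send_policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'Queue1_SendMessage',
            'Effect': 'Allow',
            'Action': 'sqs:SendMessage',
            'Resource': 'arn:aws:sqs:us-east-1:444455556666:queue1'
        }
    ]
}

# Inline policy on the Lambda execution role; names are placeholders.
iam.put_role_policy(
    RoleName='my-lambda-execution-role',
    PolicyName='AllowSendToQueue1',
    PolicyDocument=json.dumps(send_policy)
)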

AWS IAM user permission to a specific region for CloudWatch

Here is what i want. I have a IAM user for whom i want to give read only access to a us-east-1 and that too only read metrics for particular ec2 instance. I have 3 instances runnning in us-east-1 but i want this user to have access to metrics of only 1 ec2 server.
I have written policy like below. which is giving access to all the metrics in all the region. I tried putting that instanceid in below code but it didn't work.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:Describe*",
"cloudwatch:Get*",
"cloudwatch:List*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
I don't understand what I am missing here.
In short, this is not possible. According to the CloudWatch docs:
You can't use IAM to control access to CloudWatch data for specific resources. For example, you can't give a user access to CloudWatch data for only a specific set of instances or a specific LoadBalancer. Permissions granted using IAM cover all the cloud resources you use with CloudWatch.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingIAM.html