AWS Beanstalk ELB Logging Terraform - elb_account_id hardcoded

I am trying to enable logging on the load balancers created by AWS Elastic Beanstalk using Terraform, following the article below:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
The article describes hard-coding the 'elb-account-id' in the S3 bucket policy so that the ELB has access to write logs to the bucket. Is this acceptable from a security standpoint, and what is this account ID?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::elb-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*"
    }
  ]
}
Is there a way to replace this elb-account-id with my own account ID?
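For what it's worth, the elb-account-id is not something you choose: per the linked AWS docs it is the ID of an AWS-owned account that Elastic Load Balancing uses in each region to write the log files, so you cannot swap in your own account ID. In Terraform, the `aws_elb_service_account` data source returns the right ID for the current region, so nothing needs hard-coding. As a rough Python sketch of assembling the same policy (the us-east-1 ID below is taken from the AWS docs, so verify it for your region; the bucket, prefix, and account values are placeholders):

```python
import json

# AWS-owned accounts that Classic ELB uses to write access logs, keyed by
# region. The us-east-1 value comes from the AWS docs; verify for your region.
ELB_ACCOUNT_IDS = {
    "us-east-1": "127311923021",
}

def elb_log_bucket_policy(region, bucket, prefix, your_account_id):
    """Build the S3 bucket policy that lets ELB write access logs."""
    elb_account = ELB_ACCOUNT_IDS[region]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{elb_account}:root"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}/AWSLogs/{your_account_id}/*",
        }],
    }

# Placeholder bucket name, prefix, and account ID for illustration.
policy = elb_log_bucket_policy("us-east-1", "my-log-bucket", "elb", "111111111111")
print(json.dumps(policy, indent=2))
```

Security-wise, granting s3:PutObject to that one external account, restricted to your log prefix, is the mechanism the docs themselves prescribe, so it is generally considered acceptable.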

AWS CloudWatch and EC2 Role + Policy

I've followed a great tutorial by Martin Thwaites outlining the process of logging to AWS CloudWatch using Serilog and .NET Core.
I've got the logging portion working well to text and console, but just can't figure out the best way to authenticate to AWS CloudWatch from my application. He talks about inbuilt AWS authentication by setting up an IAM policy, which is great, and supplies the JSON to do so, but I feel like something is missing. I've created the IAM policy as per the example, with a LogGroup matching my appsettings.json, but nothing comes through on the CloudWatch screen.
My application is hosted on an EC2 instance. Are there more straightforward ways to authenticate, and/or is there a step missing where the EC2 and CloudWatch services are "joined" together?
More Info:
Policy EC2CloudWatch attached to role EC2Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        ALL EC2 READ ACTIONS HERE
      ],
      "Resource": "*"
    },
    {
      "Sid": "LogStreams",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging:log-stream:*"
    },
    {
      "Sid": "LogGroups",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging"
    }
  ]
}
In order to effectively apply the permissions, you need to assign the role to the EC2 instance.
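The resource ARNs in a policy like this are easy to get subtly wrong, and the log group name must match the one in appsettings.json exactly. A short sketch of how the two ARNs relate, using the log group name from the question, with the region and account left as wildcards as in the original policy:

```python
# Sketch of the two CloudWatch Logs resource ARNs the policy needs.
# The log group name comes from the question; region/account wildcards
# are kept as in the original policy.
log_group = "cloudwatch-analytics-staging"

group_arn = f"arn:aws:logs:*:*:log-group:{log_group}"
stream_arn = f"arn:aws:logs:*:*:log-group:{log_group}:log-stream:*"

# CreateLogStream and PutLogEvents act on log streams, so they need the
# stream ARN; DescribeLogGroups acts on the group itself.
statements = [
    {
        "Sid": "LogStreams",
        "Effect": "Allow",
        "Action": ["logs:CreateLogStream", "logs:DescribeLogStreams", "logs:PutLogEvents"],
        "Resource": stream_arn,
    },
    {
        "Sid": "LogGroups",
        "Effect": "Allow",
        "Action": ["logs:DescribeLogGroups"],
        "Resource": group_arn,
    },
]
```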

How to access the Kibana URL in AWS Elasticsearch?

I followed a tutorial to create a new domain in the Elasticsearch service. I created a policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "es:ESHttpDelete", "es:ESHttpGet", "es:ESHttpHead",
        "es:ESHttpPost", "es:ESHttpPut"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Then I created a role for a Lambda to access the Elasticsearch service; later I plan to call Elasticsearch from the Lambda. Here is my role ARN:
arn:aws:iam::566879691663:role/myRole
Then, for the Elasticsearch domain, I assigned "public access" for the network configuration. For the access policy, I selected "custom access policy" and added my role above. The access policy JSON looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::566879691663:role/myRole*"
        ]
      },
      "Action": [
        "es:*"
      ],
      "Resource": "arn:aws:es:us-east-1:566879691663:domain/mydomain/*"
    }
  ]
}
Once the domain is up and running and I click on the Kibana URL that is generated, I get the following JSON response in the browser. How can I access this via the browser?
{"message": "user: anonymous is not authorized to perform this action..."}
Also, to be able to access/upload programmatically using AWS4Auth, which requires an AWS access key and secret key, how do I generate those? Do I need to create a user and assign the above policy to the user?

Access AWS Elasticsearch from AWS Beanstalk

I have an Elasticsearch Service instance on AWS and an Elastic Beanstalk one.
I want to give read-only access to Beanstalk; however, Beanstalk doesn't have a static IP address by default, and from a bit of googling it is too much trouble to add one.
I therefore gave access to the AWS account, but that doesn't seem to work. I am still getting the error:
"User: anonymous is not authorized to perform: es:ESHttpPost"
When I set it to public access everything works, so I am certain I am doing something wrong here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxx:root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:xxx:domain/xxx-elastic-search/*"
    }
  ]
}
Use an identity-based policy such as this instead of IP whitelists.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*",
      "Action": ["es:*"],
      "Effect": "Allow"
    }
  ]
}
Then attach it to the Elastic Beanstalk role. Read more here:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
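Since the question asks specifically for read-only access, a hedged sketch of what a narrower identity-based policy could look like: "read-only" is approximated here as GET/HEAD only, and the domain ARN and account ID are placeholders, not values from the question.

```python
def es_readonly_policy(domain_arn):
    """Identity-based policy granting read-only HTTP access to an
    Elasticsearch domain. Attach it to the Elastic Beanstalk EC2 instance
    role instead of whitelisting IPs; 'read-only' is approximated as
    allowing only GET and HEAD requests."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["es:ESHttpGet", "es:ESHttpHead"],
            "Resource": f"{domain_arn}/*",
        }],
    }

# Placeholder domain ARN for illustration.
policy = es_readonly_policy("arn:aws:es:eu-central-1:111111111111:domain/my-domain")
```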

How to give an IAM Role access to an Elasticsearch domain in AWS?

I have an IAM Role for my Federated Identity Pool in Cognito. I want to give this role access to my Elasticsearch domain.
I added an inline policy to give read access to my Elasticsearch domain name using the new visual editor. I've attached this policy below.
I'm confused about how to configure the access policy on the Elasticsearch domain to give access to my IAM Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "es:ListTags",
      "Resource": "arn:aws:es:us-west-2:ACCOUNT_ID:domain/DOMAIN_NAME"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "es:ESHttpPost",
      "Resource": "*"
    }
  ]
}
EDIT: I was never able to figure this out. We also tried locking things down with a VPN, but then we could not access services like Kibana.
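For reference, the domain-side access policy generally needs to name the role as a principal, mirroring the identity-based policy attached to the role. A hedged sketch of that resource-based policy; the role and domain ARNs below are hypothetical placeholders in the spirit of the question's ACCOUNT_ID/DOMAIN_NAME:

```python
def domain_access_policy(role_arn, domain_arn):
    """Resource-based access policy for an Elasticsearch domain that names
    an IAM role (e.g. the Cognito authenticated role) as principal."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": "es:ESHttp*",
            "Resource": f"{domain_arn}/*",
        }],
    }

# Hypothetical role and domain ARNs for illustration.
policy = domain_access_policy(
    "arn:aws:iam::111111111111:role/Cognito_Auth_Role",
    "arn:aws:es:us-west-2:111111111111:domain/my-domain",
)
```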

"Error While storing the document: Permission denied" on AWS

I have an EC2 instance in an Elastic Beanstalk environment (dev) which works as expected. I have also deployed the same app to a new Elastic Beanstalk environment (test). The application comes up and all the functionality works, but the upload-to-S3 functionality doesn't work in this test environment. I get an "Error While storing the document Permission denied" exception.
I have given all the permissions in the S3 bucket policy. My bucket policy details are as follows:
{
  "Version": "2012-10-17",
  "Id": "Policy150025",
  "Statement": [
    {
      "Sid": "Stmt1500252113871",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::dev/devkey"
    }
  ]
}
I am not sure why the same app works in one environment and not the other. I'd appreciate any suggestions.
* Updated *
Trust Relationship
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The bucket policy grants access to the objects, but the identity uploading the files also needs s3:PutObject permission on the bucket.
For the EC2 instance, can you confirm the AWS credentials inside the machine environment, or check whether a role attached to the instance allows putting objects into the bucket?
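One likely culprit in the bucket policy above: the Resource "arn:aws:s3:::dev/devkey" matches exactly one object key, so writes to any other key in the bucket are denied; object-level statements usually need a wildcard such as "arn:aws:s3:::dev/*". IAM's wildcard matching is its own mechanism, but shell-style matching is close enough to illustrate the difference (the uploaded key below is hypothetical):

```python
from fnmatch import fnmatchcase

# The Resource pattern in an S3 bucket policy must match the ARN of the
# object being written. A pattern without a wildcard matches one key only.
narrow = "arn:aws:s3:::dev/devkey"  # the Resource from the question
wide = "arn:aws:s3:::dev/*"         # covers every key in the bucket

uploaded = "arn:aws:s3:::dev/uploads/report.pdf"  # hypothetical upload

# Shell-style matching as a stand-in for IAM's wildcard semantics:
assert not fnmatchcase(uploaded, narrow)  # the narrow policy misses the upload
assert fnmatchcase(uploaded, wide)        # the wildcard resource covers it
```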