AWS Elasticsearch IAM question to access Kibana via Browser

I've set up my elasticsearch yml file (deployed via Serverless) as follows:
Resources:
  CRMSearch:
    Type: "AWS::Elasticsearch::Domain"
    Properties:
      ElasticsearchVersion: "7.10"
      DomainName: "crm-searchdb-${self:custom.stage}"
      ElasticsearchClusterConfig:
        DedicatedMasterEnabled: false
        InstanceCount: "1"
        ZoneAwarenessEnabled: false
        InstanceType: "t3.medium.elasticsearch"
      EBSOptions:
        EBSEnabled: true
        Iops: 0
        VolumeSize: 10
        VolumeType: "gp2"
      AccessPolicies:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              AWS: [
                "arn:aws:iam::#{AWS::AccountId}:role/crm-databases-dev-us-east-1-lambdaRole",
                '#{AWS::AccountId}',
                'arn:aws:iam::#{AWS::AccountId}:user/nicholas',
                'arn:aws:iam::#{AWS::AccountId}:user/daniel'
              ]
            Action: "es:*"
            Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}"
          - Effect: "Allow"
            Principal:
              AWS: [
                "*"
              ]
            Action: "es:*"
            Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}"
      AdvancedOptions:
        rest.action.multi.allow_explicit_index: 'true'
      AdvancedSecurityOptions:
        Enabled: true
        InternalUserDatabaseEnabled: true
        MasterUserOptions:
          MasterUserName: admin
          MasterUserPassword: fD343sfdf!3rf
      EncryptionAtRestOptions:
        Enabled: true
      NodeToNodeEncryptionOptions:
        Enabled: true
      DomainEndpointOptions:
        EnforceHTTPS: true
I'm just trying to get access to Kibana via my browser. I set up an open-permission Kibana a few months ago at a previous company, but this time I can't access Kibana via the browser no matter what I do. I always get the {"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"} error. How do I set up permissions (ideally via YAML) to accomplish this?

User: anonymous is not authorized to perform: es:ESHttpGet
The breakdown of what results in this message is:
Your browser fetches the Kibana assets/scripts.
Kibana client-side code running in your browser makes a GET request to your Elasticsearch Service domain.
Your browser is not a permitted principal and is denied access. The anonymous user in the message is your browser.
This is explained in the AWS ElasticsearchService documentation:
Because Kibana is a JavaScript application, requests originate from the user's IP address.
In terms of your next step, the answers to the following question cover the two options you have:
How to access Kibana from Amazon elasticsearch service?
(YAML solution, overall NOT advisable for several reasons) Add an extra statement to your access policy that allows actions from your device's IP address(es); a sketch follows below.
(Non-Yaml solution) Set up a proxy that will handle the requests from your browser and basically pass them on after signing them with the credentials of a trusted principal. This can either be a proxy running on additional AWS infrastructure or something running on your local machine. Again, the answers on the linked question go into more details.
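For the first option, a minimal, untested sketch of the extra statement (appended under Statement in AccessPolicies). The CIDR is a placeholder for your own public IP, and note the trailing /* on the Resource so the policy also covers Kibana paths:
- Effect: "Allow"
  Principal:
    AWS: "*"
  Action: "es:*"
  # Trailing /* covers sub-resources such as /_plugin/kibana
  Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}/*"
  Condition:
    IpAddress:
      # Placeholder CIDR: replace with your device's public IP or office range
      aws:SourceIp:
        - "203.0.113.25/32"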

Related

Cloud custodian GCP storage enable versioning check for all storage

I am trying to write a Cloud Custodian policy for GCP storage buckets, but I can't figure out how to filter on versioning across all available buckets:
policies:
  - name: check-all-bucket-versioning
    description: |
      Check all bucket versioning enabled
    resource: gcp.bucket
    filters:
      - type: value
        key: versioning
        value: true
    actions:
Any help would be really appreciated, thanks!
Your example policy is very close. It is failing because the value of versioning is an object rather than a plain boolean. When versioning is enabled for a bucket, the versioning value will be {"enabled": true}. We can filter for that by using versioning.enabled as the key:
policies:
  - name: check-all-bucket-versioning
    resource: gcp.bucket
    filters:
      - type: value
        key: versioning.enabled
        value: true
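If the goal is instead to surface buckets that are not compliant, a hedged, untested variation is to invert the comparison with the value filter's op operator; this should also catch buckets where the versioning field is missing entirely:
policies:
  - name: check-buckets-missing-versioning
    resource: gcp.bucket
    filters:
      - type: value
        key: versioning.enabled
        op: not-equal   # matches buckets where versioning is disabled or never configured
        value: true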

Create IAM account with CloudFormation

I want to create an AWS IAM user account that has various permissions with CloudFormation.
I understand there are policies that would let a user change their password and set up MFA on their account, here.
How can I force the user to set up MFA at first login, when they need to change the default password?
This is what I have:
The flow I have so far is:
The user account is created.
When the user tries to log in for the first time, they are asked to change the default password.
The user is logged in to the AWS console.
Expected behavior:
The user account is created.
When the user tries to log in for the first time, they are asked to change the default password and to set up MFA using an authenticator app.
The user is logged in to the AWS console and has permissions.
A potential flow is shown here. Is there another way?
Update:
This blog explains the flow
Again, is there a better way? Like an automatic pop-up that would prompt the user straight away?
Update2:
I might not have been explicit enough.
What we have so far is an OK customer experience.
This flow would be fluid:
1. User tries to log in.
2. Console asks for a password change.
3. Console asks for scanning the QR code and entering the codes.
4. User logs in with the new password and the code from the authenticator.
5. User is not able to deactivate MFA.
Allowing users to self-manage MFA is the way to go if you are using regular IAM. You can also try AWS SSO; it's easier to manage and free.
Allow users to log in, change their password and set up MFA, and deny everything other than these actions if MFA is not set up, as listed here.
We can create an IAM group with an inline policy and assign users to that group.
This is the CloudFormation for the policy listed in the docs.
Resources:
  MyIamGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: My-Group
  MyGroupPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Statement:
          - Action:
              - iam:GetAccountPasswordPolicy
              - iam:GetAccountSummary
              - iam:ListVirtualMFADevices
              - iam:ListUsers
            Effect: Allow
            Resource: "*"
          - Action:
              - iam:ChangePassword
              - iam:GetUser
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:user/${aws:username}
          - Action:
              - iam:CreateVirtualMFADevice
              - iam:DeleteVirtualMFADevice
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:mfa/${aws:username}
          - Action:
              - iam:DeactivateMFADevice
              - iam:EnableMFADevice
              - iam:ListMFADevices
              - iam:ResyncMFADevice
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:user/${aws:username}
          - NotAction:
              - iam:CreateVirtualMFADevice
              - iam:EnableMFADevice
              - iam:GetUser
              - iam:ListMFADevices
              - iam:ListVirtualMFADevices
              - iam:ListUsers
              - iam:ResyncMFADevice
              - sts:GetSessionToken
            Condition:
              BoolIfExists:
                aws:MultiFactorAuthPresent: "false"
            Effect: Deny
            Resource: "*"
      PolicyName: My-Group-Policy
      Groups:
        - Ref: MyIamGroup
I think this is the way to go, and one could extend it to create users with whatever permissions they need once the user has set up MFA. The policy template in the instructions is useful.
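To round this out, a minimal, hypothetical sketch of creating a user in this group that must change its password at first login (the user name and temporary password are placeholders; in practice pass the password in as a NoEcho parameter):
  MyIamUser:
    Type: AWS::IAM::User
    Properties:
      UserName: example-user           # placeholder
      Groups:
        - Ref: MyIamGroup              # Ref returns the group name
      LoginProfile:
        Password: "Temp-Passw0rd!"     # placeholder; rotate immediately
        PasswordResetRequired: true    # forces the password change at first sign-in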

S3 website with access restricted to VPC endpoint is getting 403 from inside the VPC

I would like to create an S3 bucket that is configured to work as a website, and I would like to restrict access to the S3 website to requests coming from inside a particular VPC only.
I am using Cloudformation to set up the bucket and the bucket policy.
The bucket CF has the WebsiteConfiguration enabled and has AccessControl set to PublicRead.
ContentStorageBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead
    BucketName: "bucket-name"
    WebsiteConfiguration:
      IndexDocument: index.html
      ErrorDocument: error.html
The bucket policy includes two statements: one grants full access to the bucket from the office IP, and the other grants access through a VPC endpoint. The code is as follows:
ContentStorageBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ContentStorageBucket
    PolicyDocument:
      Id: BucketPolicy
      Version: '2012-10-17'
      Statement:
        - Sid: FullAccessFromParticularIP
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            IpAddress:
              aws:SourceIp: "x.x.x.x"
        - Sid: FullAccessFromInsideVpcEndpoint
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            StringEquals:
              aws:sourceVpce: "vpce-xxxx"
To test the above policy conditions, I have done the following:
I've added a file called json.json to the S3 bucket;
I've created an EC2 instance and placed it inside the VPC referenced in the bucket policy;
I've made a curl request to the file endpoint http://bucket-name.s3-website-us-east-1.amazonaws.com/json.json from the whitelisted IP address, and the request succeeds;
I've made a curl request to the file endpoint from inside the EC2 instance (placed in the VPC), and the request fails with a 403 Access Denied.
Notes:
I have made sure that the EC2 instance is in the correct VPC.
The aws:sourceVpce is not using the value of the VPC ID, but it is using the value of the Endpoint ID of the corresponding VPC.
I have also used aws:sourceVpc with the VPC ID, instead of using the aws:sourceVpce with the endpoint ID, but this produced the same results as the one mentioned above.
Given this, I currently am not sure how to proceed in further debugging this. Do you have any suggestions about what might be the problem? Please let me know if the question is not clear or anything needs clarification. Thank you for your help!
In order for resources to use the VPC endpoint for S3, the VPC route table must point all traffic destined for S3 at the VPC endpoint. Rather than maintaining a list of all of the S3-specific CIDR blocks on your own, AWS lets you use prefix lists, which are a first-class resource in AWS.
To find the prefix list for S3, run the following command (your output should match mine, since prefix list IDs are the same region-wide across all accounts, but it's best to check). Use the region of your VPC.
aws ec2 describe-prefix-lists --region us-east-1
I get the following output:
{
    "PrefixLists": [
        {
            "Cidrs": [
                "54.231.0.0/17",
                "52.216.0.0/15"
            ],
            "PrefixListId": "pl-63a5400a",
            "PrefixListName": "com.amazonaws.us-east-1.s3"
        },
        {
            "Cidrs": [
                "52.94.0.0/22",
                "52.119.224.0/20"
            ],
            "PrefixListId": "pl-02cd2c6b",
            "PrefixListName": "com.amazonaws.us-east-1.dynamodb"
        }
    ]
}
For com.amazonaws.us-east-1.s3, the prefix list ID is pl-63a5400a, so you can then create a route in whichever route table serves the subnet in question. The Destination should be the prefix list (pl-63a5400a), and the target should be the VPC endpoint ID (vpce-XXXXXXXX), which you can find with aws ec2 describe-vpc-endpoints.
This is trivial from the console. I don't remember how to do it from the command line; I think you have to pass --cli-input-json with something like the below, but I haven't tested it, so this is left as an exercise for the reader.
{
    "DestinationPrefixListId": "pl-63a5400a",
    "GatewayId": "vpce-12345678",
    "RouteTableId": "rtb-90123456"
}
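Since the bucket and policy are already defined in CloudFormation, an alternative worth mentioning: declaring the gateway endpoint with RouteTableIds makes AWS add and maintain the prefix-list route in each listed route table for you. A minimal sketch, where Vpc and PrivateRouteTable are assumed to be defined elsewhere in the template:
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
    VpcEndpointType: Gateway
    VpcId: !Ref Vpc                    # assumed VPC resource/parameter
    RouteTableIds:
      - !Ref PrivateRouteTable         # assumed route table for the EC2 instance's subnet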

AWS ALB Ingress Controller with IRSA authorization error

I am trying to set up an AWS ALB Ingress Controller using the IRSA method instead of kube2iam. However, the documentation is somewhat lacking, so I hit a dead end.
What I did so far:
Configured the OIDC provider for my cluster
eksctl utils associate-iam-oidc-provider --cluster devops --approve
Created the proper policy by using the template
Created the IAM service account that will be used by the Ingress Controller and associated the policy
eksctl create iamserviceaccount --name alb-ingress --namespace default --cluster devops --attach-policy-arn arn:aws:iam::112233445566:policy/eks-ingressController-iam-policy-IngressControllerPolicy-1111111111 --approve
Deployed required rbac rules provided
kubectl apply -f rbac-role.yaml
Deployed the AWS Ingress Controller using this template, paying attention that the ServiceAccount matches the service account I created previously.
Everything up to here deploys fine. Now I try to deploy my Ingress resource, but I get this error (in the controller logs):
kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to get AWS tags. Error: AccessDeniedException: User: arn:aws:sts::1122334455:assumed-role/eksctl-devops-nodegroup-ng-1-work-NodeInstanceRole-J08FDJHIWPI7/i-000000000000 is not authorized to perform: tag:GetResources\n\tstatus code: 400, request id: 94d614a1-c05d-4b92-8ad6-86b450407f6a" "Controller"="alb-ingress-controller" "Request"={"Namespace":"superset","Name":"superset-ingress"}
Obviously the node doesn't have the proper permissions for the ALB creation, and I guess that if I attached my policy to the role stated in the log it would work. But that defeats the whole purpose of using the IRSA method, right?
What I would expect is for the Ingress Controller pod to need the appropriate permissions (by using the service account) to create the ALB, not the node. Am I missing something here?
I've got a similar error (not identical) when using version v1.1.8 of this controller:
kubebuilder/controller "msg"="Reconciler
error"="failed get
WAFv2 webACL for load balancer arn:aws:elasticloadbalancing:...:
AccessDeniedException: User:
arn:aws:sts:::assumed-role/eks-node-group-role/
is not authorized to perform: wafv2:GetWebACLForResource on resource:
arn:aws:wafv2:us-east-2::regional/webacl/*\n\tstatus code:
400, request id: ..."
"controller"="alb-ingress-controller"
"request"={"Namespace":"default","Name":"aws-alb-ingress"}
I'll add it because I think it can help people who search for the same error message.
The reason for the error described above was that version v1.1.7 of this controller needs new IAM permissions in the node group role's *PolicyALBIngress policy.
(!) Be aware that the new IAM permission is required even if no wafv2 annotation is used.
Solution 1
Add the following section of wafv2 allow actions to the policy:
{
    "Effect": "Allow",
    "Action": [
        "wafv2:GetWebACL",
        "wafv2:GetWebACLForResource",
        "wafv2:AssociateWebACL",
        "wafv2:DisassociateWebACL"
    ],
    "Resource": "*"
}
Solution 2
WAFV2 support can be disabled by controller flags as mentioned here.
A) If you install it via kubectl, add - --feature-gates=waf=false to the spec -> containers -> args section (see the sketch below).
B) If you install it via helm, add --set extraArgs."feature-gates"='waf=false' in helm upgrade command.
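For (A), a rough sketch of where the flag ends up in the controller Deployment (heavily abbreviated; the other args shown are the usual ones from the template and may differ in your install):
spec:
  template:
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --cluster-name=devops        # your EKS cluster name
            - --feature-gates=waf=false    # the flag from (A) above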
Notice that this requirement has already been addressed in the eksctl tool (see also here).
Additional reference.
So, in case someone runs into the same problem:
The solution is, when creating the RBAC roles, to comment out the last part of rbac-role.yaml (as provided here), which creates the service account.
Since we already created a service account with eksctl and attached the AWS policy to it, we can attach the RBAC permissions to this same service account. Then this service account can be used normally by the ingress controller pod to do its magic.
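For reference, the service account that eksctl creates carries the IRSA role annotation, so all the controller Deployment needs is serviceAccountName: alb-ingress. A rough sketch of what the eksctl-managed ServiceAccount looks like (the role ARN here is a placeholder; eksctl generates the real name):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress
  namespace: default
  annotations:
    # Placeholder ARN: eksctl fills this in with the role it created for the service account
    eks.amazonaws.com/role-arn: arn:aws:iam::112233445566:role/eksctl-devops-addon-iamserviceaccount-Role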
According to the documentation, the controller needs permissions to CRUD an ALB. You could, if you wanted, try giving just the ALB controller pod a role with permissions to create the ALB, but I have not tested it, and I am not sure it matters if your entire cluster's workers have already been given access to use the ALB driver/pod to create these objects on AWS.
I am not using the EKS 3.0 cluster creation tool; instead I have my own CFT that I use to create workers, due to my org's additional security requirements.
I have created and attached the below managed policy to workers that need to create ALBs, and it just works.
ALBPolicy:
  Type: "AWS::IAM::ManagedPolicy"
  Properties:
    Description: Allows workers to CRUD alb's
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Action:
            - "acm:DescribeCertificate"
            - "acm:ListCertificates"
            - "acm:GetCertificate"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:DeleteTags"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DescribeAccountAttributes"
            - "ec2:DescribeAddresses"
            - "ec2:DescribeInstances"
            - "ec2:DescribeInstanceStatus"
            - "ec2:DescribeInternetGateways"
            - "ec2:DescribeNetworkInterfaces"
            - "ec2:DescribeSecurityGroups"
            - "ec2:DescribeSubnets"
            - "ec2:DescribeTags"
            - "ec2:DescribeVpcs"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyNetworkInterfaceAttribute"
            - "ec2:RevokeSecurityGroupIngress"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "elasticloadbalancing:AddListenerCertificates"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateRule"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteRule"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:DescribeListenerCertificates"
            - "elasticloadbalancing:DescribeListeners"
            - "elasticloadbalancing:DescribeLoadBalancers"
            - "elasticloadbalancing:DescribeLoadBalancerAttributes"
            - "elasticloadbalancing:DescribeRules"
            - "elasticloadbalancing:DescribeSSLPolicies"
            - "elasticloadbalancing:DescribeTags"
            - "elasticloadbalancing:DescribeTargetGroups"
            - "elasticloadbalancing:DescribeTargetGroupAttributes"
            - "elasticloadbalancing:DescribeTargetHealth"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyRule"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:RemoveListenerCertificates"
            - "elasticloadbalancing:RemoveTags"
            - "elasticloadbalancing:SetIpAddressType"
            - "elasticloadbalancing:SetSecurityGroups"
            - "elasticloadbalancing:SetSubnets"
            - "elasticloadbalancing:SetWebACL"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "iam:CreateServiceLinkedRole"
            - "iam:GetServerCertificate"
            - "iam:ListServerCertificates"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "cognito-idp:DescribeUserPoolClient"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "waf-regional:GetWebACLForResource"
            - "waf-regional:GetWebACL"
            - "waf-regional:AssociateWebACL"
            - "waf-regional:DisassociateWebACL"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "tag:GetResources"
            - "tag:TagResources"
          Resource: "*"
        - Effect: "Allow"
          Action:
            - "waf:GetWebACL"
          Resource: "*"

How to enable IAM users to set the Name and other custom tags when limited by tag restricted resource-level permissions in EC2

I have been playing with configuring tag based resource permissions in EC2, using an approach similar to what is described in the answer to the following question: Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?
I have been using this in conjunction with a lambda function to auto tag EC2 instances, setting the Owner and PrincipalId based on the IAM user who called the associated ec2:RunInstances action. The approach I have been following for this is documented in the following AWS blog post: How to Automatically Tag Amazon EC2 Resources in Response to API Events
The combination of these two approaches has resulted in my restricted user permissions for EC2 looking like this, in my CloudFormation template:
LimitedEC2Policy:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: UserLimitedEC2
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action: ec2:RunInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetA}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetB}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetC}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${BasicSSHAccessSecurityGroup.GroupId}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:key-pair/${AuthorizedKeyPair}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:network-interface/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}::image/ami-*'
          Condition:
            StringLikeIfExists:
              ec2:Vpc: !Ref Vpc
              ec2:InstanceType: !Ref EC2AllowedInstanceTypes
        - Effect: Allow
          Action:
            - ec2:TerminateInstances
            - ec2:StopInstances
            - ec2:StartInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
          Condition:
            StringEquals:
              ec2:ResourceTag/Owner: !Ref UserName
    Users:
      - !Ref IAMUser
These IAM permissions restrict users to running EC2 instances within a limited set of subnets, within a single VPC and security group. Users are then only able to start/stop/terminate instances that have been tagged with their IAM user in the Owner tag.
What I'd like to be able to do is allow users to also create and delete any additional tags on their EC2 resources, such as setting the Name tag. What I can't work out is how I can do this without also enabling them to change the Owner and PrincipalId tags on resources they don't "own".
Is there a way to limit the ec2:CreateTags and ec2:DeleteTags actions to prevent users from setting certain tags?
After much sifting through the AWS EC2 documentation I found the following: Resource-Level Permissions for Tagging
This gives the example:
Use with the ForAllValues modifier to enforce specific tag keys if they are provided in the request (if tags are specified in the request, only specific tag keys are allowed; no other tags are allowed). For example, the tag keys environment or cost-center are allowed:
"ForAllValues:StringEquals": { "aws:TagKeys": ["environment","cost-center"] }
Since what I want to achieve is essentially the opposite of this (allow users to specify all tags, with the exception of specific tag keys), I have been able to prevent users from creating/deleting the Owner and PrincipalId tags by adding the following PolicyDocument statement to my user policy in my CloudFormation template:
        - Effect: Allow
          Action:
            - ec2:CreateTags
            - ec2:DeleteTags
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:*/*'
          Condition:
            "ForAllValues:StringNotEquals":
              aws:TagKeys:
                - "Owner"
                - "PrincipalId"
This permits users to create/delete any tags they wish, so long as they aren't the Owner or PrincipalId.