I've got a Node.js application trying to create an Instance Group Manager. It's running on an instance with a service account attached, using the compute-rw and cloud-platform scopes. This service account has a role with the following permissions:
includedPermissions:
- compute.autoscalers.create
- compute.autoscalers.get
- compute.disks.create
- compute.images.get
- compute.images.useReadOnly
- compute.instanceGroupManagers.create
- compute.instanceGroupManagers.get
- compute.instanceGroupManagers.use
- compute.instanceTemplates.create
- compute.instanceTemplates.get
- compute.instanceTemplates.useReadOnly
- compute.instances.create
- compute.instances.setMetadata
- compute.instances.setTags
- compute.networks.get
- compute.subnetworks.get
- compute.subnetworks.use
Looking at the audit log for resource.type="gce_instance_group_manager", I can see in the first log entry:
ProtoPayload.authorizationInfo:
- granted: true
permission: compute.instanceGroupManagers.create
resourceAttributes:
name: projects/my-project/zones/us-east1-b/instanceGroupManagers/resource-name
service: compute
type: compute.instanceGroupManagers
- granted: true
permission: compute.instanceTemplates.useReadOnly
resourceAttributes:
name: projects/my-project/global/instanceTemplates/resource-name
service: compute
type: compute.instanceTemplates
- granted: true
permission: compute.instances.create
resourceAttributes:
name: projects/my-project/zones/us-east1-b/instances/resource-name-0000
service: compute
type: compute.instances
- granted: true
permission: compute.disks.create
resourceAttributes:
name: projects/my-project/zones/us-east1-b/disks/resource-name-0000
service: compute
type: compute.disks
- granted: true
permission: compute.images.useReadOnly
resourceAttributes:
name: projects/my-project/global/images/resource-name-image
service: compute
type: compute.images
- granted: true
permission: compute.subnetworks.use
resourceAttributes:
name: projects/my-project/regions/us-east1/subnetworks/resource-name-subnet
service: compute
type: compute.subnetworks
- granted: true
permission: compute.instances.setMetadata
resourceAttributes:
name: projects/my-project/zones/us-east1-b/instances/resource-name-0000
service: compute
type: compute.instances
- granted: true
permission: compute.instances.setTags
resourceAttributes:
name: projects/my-project/zones/us-east1-b/instances/resource-name-0000
service: compute
type: compute.instances
I get a 200 OK back with status: "PENDING" in the body.
Only when looking through the audit logs do I see a log entry with status.message: INVALID_PARAMETER and no explanation, and then another log entry with:
jsonPayload.error:
- code: SERVICE_ACCOUNT_ACCESS_DENIED
detail_message: ''
location: ''
When I attach the Editor role to the service account I can create the Instance Group Manager, so there seem to be some permissions missing. The logs don't show any permission that was not granted, so what could be missing?
It turns out that the instance template attached a service account to the instances. Because of that, the iam.serviceAccountUser role is required for the service account used by the instance that creates the instance group manager.
In my case the service account is not needed, so I removed it from the instance template and the permissions above work.
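If the instances do need a service account attached, the alternative fix is to grant the account that creates the Instance Group Manager the iam.serviceAccountUser role on the service account referenced by the instance template. A rough sketch of that binding, in the YAML policy format used by gcloud (both account names below are placeholders):
# Hypothetical IAM policy on the service account the instance template attaches to the VMs
bindings:
- members:
  - serviceAccount:igm-deployer@my-project.iam.gserviceaccount.com
  role: roles/iam.serviceAccountUser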
I've set up my Elasticsearch YAML (deployed via Serverless) as follows:
Resources:
CRMSearch:
Type: "AWS::Elasticsearch::Domain"
Properties:
ElasticsearchVersion: "7.10"
DomainName: "crm-searchdb-${self:custom.stage}"
ElasticsearchClusterConfig:
DedicatedMasterEnabled: false
InstanceCount: "1"
ZoneAwarenessEnabled: false
InstanceType: "t3.medium.elasticsearch"
EBSOptions:
EBSEnabled: true
Iops: 0
VolumeSize: 10
VolumeType: "gp2"
AccessPolicies:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
AWS: [
"arn:aws:iam::#{AWS::AccountId}:role/crm-databases-dev-us-east-1-lambdaRole",
'#{AWS::AccountId}',
'arn:aws:iam::#{AWS::AccountId}:user/nicholas',
'arn:aws:iam::#{AWS::AccountId}:user/daniel'
]
Action: "es:*"
Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}"
- Effect: "Allow"
Principal:
AWS: [
"*"
]
Action: "es:*"
Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}"
AdvancedOptions:
rest.action.multi.allow_explicit_index: 'true'
AdvancedSecurityOptions:
Enabled: true
InternalUserDatabaseEnabled: true
MasterUserOptions:
MasterUserName: admin
MasterUserPassword: fD343sfdf!3rf
EncryptionAtRestOptions:
Enabled: true
NodeToNodeEncryptionOptions:
Enabled: true
DomainEndpointOptions:
EnforceHTTPS: true
I'm just trying to get access to Kibana via the browser. I set up an open-permissions Kibana a few months ago at a previous company, but I can't seem to access Kibana via the browser no matter what I do. I always get the {"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"} error. How do I set up permissions (ideally via YAML) to accomplish this?
User: anonymous is not authorized to perform: es:ESHttpGet
The breakdown of what results in this message is:
Your browser fetches the Kibana assets/scripts.
Kibana client-side code running in your browser makes a GET request to your Elasticsearch Service domain.
Your browser is not a permitted principal and is denied access. The anonymous user in the message is your browser.
This is explained in the Amazon Elasticsearch Service documentation:
Because Kibana is a JavaScript application, requests originate from the user's IP address.
In terms of your next step, the answers to the following question cover the two options you have:
How to access Kibana from Amazon elasticsearch service?
(YAML solution, overall NOT advisable for several reasons) Add an extra statement to your access policy that allows actions from your device's IP address(es); a rough sketch follows after this list.
(Non-YAML solution) Set up a proxy that handles the requests from your browser and passes them on after signing them with the credentials of a trusted principal. This can either be a proxy running on additional AWS infrastructure or something running on your local machine. Again, the answers on the linked question go into more detail.
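As a rough illustration of option 1 only (not a recommendation): an extra statement along these lines, added to the Statement list above, would allow unsigned requests from a specific IP range. The 203.0.113.0/24 CIDR is a placeholder for your own public IP address(es), and anyone on that network gets the same access, which is one of the reasons this approach is not advisable:
- Effect: "Allow"
  Principal:
    AWS: "*"
  Action: "es:ESHttp*"
  Resource: "arn:aws:es:us-east-1:#{AWS::AccountId}:domain/crm-searchdb-${self:custom.stage}/*"
  Condition:
    IpAddress:
      aws:SourceIp:
        - "203.0.113.0/24"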
I want to create an AWS IAM user that has various permissions with CloudFormation.
I understand there are policies that let a user change their password and enable MFA on their account, described here.
How can I force the user to set up MFA at first login, when they have to change the default password?
This is what I have:
The flow I have so far is:
The user account is created.
When the user tries to log in for the first time, they are asked to change the default password.
The user is logged in to the AWS console.
Expected behavior:
The user account is created.
When the user tries to log in for the first time, they are asked to change the default password and to set up MFA using an authenticator app.
The user is logged in to the AWS console and has their permissions.
A potential flow is shown here. Is there another way?
Update:
This blog post explains the flow.
Again, is there a better way? Something like an automatic pop-up that enforces it for the user straight away?
Update 2:
I might not have been explicit enough.
What we have so far is an OK customer experience.
This flow should be fluid:
1. User tries to log in.
2. Console asks for a password change.
3. Console asks to scan the QR code and enter the codes.
4. User logs in with the new password and the code from the authenticator app.
5. User is not able to deactivate MFA.
Allowing users to self-manage their MFA is the way to go if you are using regular IAM. You can also try AWS SSO; it's easier to manage and free.
The idea is to allow users to log in, change their password, and set up MFA, while denying everything other than those actions when MFA is not set up, as listed here.
We can create an IAM group with an inline policy and assign users to that group.
This is the CloudFormation for the policy listed in the docs:
Resources:
MyIamGroup:
Type: AWS::IAM::Group
Properties:
GroupName: My-Group
MyGroupPolicy:
Type: AWS::IAM::Policy
Properties:
PolicyDocument:
Statement:
- Action:
- iam:GetAccountPasswordPolicy
- iam:GetAccountSummary
- iam:ListVirtualMFADevices
- iam:ListUsers
Effect: Allow
Resource: "*"
- Action:
- iam:ChangePassword
- iam:GetUser
Effect: Allow
Resource:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- :iam::1234567891111:user/${aws:username}
- Action:
- iam:CreateVirtualMFADevice
- iam:DeleteVirtualMFADevice
Effect: Allow
Resource:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- :iam::1234567891111:mfa/${aws:username}
- Action:
- iam:DeactivateMFADevice
- iam:EnableMFADevice
- iam:ListMFADevices
- iam:ResyncMFADevice
Effect: Allow
Resource:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- :iam::1234567891111:user/${aws:username}
- NotAction:
- iam:CreateVirtualMFADevice
- iam:EnableMFADevice
- iam:GetUser
- iam:ListMFADevices
- iam:ListVirtualMFADevices
- iam:ListUsers
- iam:ResyncMFADevice
- sts:GetSessionToken
Condition:
BoolIfExists:
aws:MultiFactorAuthPresent: "false"
Effect: Deny
Resource: "*"
PolicyName: My-Group-Policy
Groups:
- Ref: MyIamGroup
I think this is the way to go; from here one can build on the same pattern and create users with whatever permissions they need once MFA is set up.
The policy template is useful.
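For completeness, a hedged sketch of creating such a user in the same template and attaching them to the group; the user name and the temporary password are placeholders (in practice you would not hard-code the password), and PasswordResetRequired forces the password change at first sign-in:
# Hypothetical user resource; membership in MyIamGroup applies the MFA policy above.
MyIamUser:
  Type: AWS::IAM::User
  Properties:
    UserName: example.user
    Groups:
      - Ref: MyIamGroup
    LoginProfile:
      Password: "TemporaryPassword123!"
      PasswordResetRequired: true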
I am experimenting with Deployment Manager, and each time I try to deploy a Cloud SQL instance with a database and two users, some of the tasks fail. Most of the time it is the users:
conf.yaml:
resources:
- name: mycloudsql
type: gcp-types/sqladmin-v1beta4:instances
properties:
name: mycloudsql-01
backendType: SECOND_GEN
instanceType: CLOUD_SQL_INSTANCE
databaseVersion: MYSQL_5_7
region: europe-west6
settings:
tier: db-f1-micro
locationPreference:
zone: europe-west6-a
activationPolicy: ALWAYS
dataDiskSizeGb: 10
- name: mydjangodb
type: gcp-types/sqladmin-v1beta4:databases
properties:
name: django-db-01
instance: $(ref.mycloudsql.name)
charset: utf8
- name: sqlroot
type: gcp-types/sqladmin-v1beta4:users
properties:
name: root
host: "%"
instance: $(ref.mycloudsql.name)
password: root
- name: sqluser
type: gcp-types/sqladmin-v1beta4:users
properties:
name: user
instance: $(ref.mycloudsql.name)
password: user
Error:
PS C:\Users\user\Desktop\Python\GCP> gcloud --project=sound-catalyst-263911 deployment-manager deployments create dm-sql-test-11 --config conf.yaml
The fingerprint of the deployment is TZ_wYom9Q64Hno6X0bpv9g==
Waiting for create [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]: errors:
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqluser
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqlroot
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
It doesn't say what that failing precondition is. Or am I missing something?
It seems the database installation is not complete by the time Deployment Manager starts to create the users, even though reference notation is used in the YAML to take care of the dependencies. That is why you receive the FAILED_PRECONDITION error.
As a workaround you can split the deployment into two parts:
Create the Cloud SQL instance and the database;
Create the users.
This does not look elegant, but it works; a sketch of the second part follows below.
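A minimal sketch of what the second config could look like, assuming the instance created by the first deployment keeps the name mycloudsql-01; because the users now live in a separate deployment, the instance is referenced by its literal name instead of $(ref.mycloudsql.name):
resources:
- name: sqlroot
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: root
    host: "%"
    instance: mycloudsql-01
    password: root
- name: sqluser
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: user
    instance: mycloudsql-01
    password: user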
Alternatively, you can consider using Terraform. Fortunately, the Cloud Shell instance comes with Terraform pre-installed. There is sample Terraform code for Cloud SQL out there, for example this one:
CloudSQL deployment with Terraform
resources:
- name: practice-service-account
type: iam.v1.serviceAccount
properties:
displayName: practice-service-account
projectId: {{ project }}
accountId: practice-service-account
- name: get-iam-policy
action: 'gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.getIamPolicy'
properties:
resource: resources-practice {# make this environment variable #}
- name: set-iam-policy
action: 'gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy'
properties:
resource: {{ project }}
policy: $(ref.get-iam-policy)
gcpIamPolicyPatch:
add:
- role: roles/viewer
members:
- user:email1@example.com
- user:email2@example.com
- user:email3@example.com
Why am I always experiencing the error below when trying to create these IAM resources?
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544014242908-57c45d47a0760-6a2ec217-9ee53506]: errors:
- code: RESOURCE_ERROR
location: /deployments/infrastructure/resources/set-iam-policy
message: '{"ResourceType":"gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/resources-practice:setIamPolicy","httpMethod":"POST"}}'
Deployment Manager acts using the [PROJECT_NUMBER]@cloudservices.gserviceaccount.com service account. The error indicates that this service account doesn't have permission to change the IAM policy on that project. Try granting it a role that contains the resourcemanager.projects.setIamPolicy permission on the project, for example roles/resourcemanager.projectIamAdmin (or an equivalent role at the organization level).
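For illustration, the binding that would need to appear in the project's IAM policy looks roughly like this (shown in the YAML format returned by gcloud projects get-iam-policy; the project number is a placeholder):
bindings:
- members:
  - serviceAccount:123456789012@cloudservices.gserviceaccount.com
  role: roles/resourcemanager.projectIamAdmin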
I have been playing with configuring tag-based resource permissions in EC2, using an approach similar to what is described in the answer to the following question: Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?
I have been using this in conjunction with a Lambda function that auto-tags EC2 instances, setting the Owner and PrincipalId tags based on the IAM user who called the associated ec2:RunInstances action. The approach I have been following for this is documented in the following AWS blog post: How to Automatically Tag Amazon EC2 Resources in Response to API Events
The combination of these two approaches has resulted in my restricted user permissions for EC2 looking like this, in my CloudFormation template:
LimitedEC2Policy:
Type: "AWS::IAM::Policy"
Properties:
PolicyName: UserLimitedEC2
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action: ec2:RunInstances
Resource:
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetA}'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetB}'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetC}'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${BasicSSHAccessSecurityGroup.GroupId}'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:key-pair/${AuthorizedKeyPair}'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:network-interface/*'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*'
- !Sub 'arn:aws:ec2:${AWS::Region}::image/ami-*'
Condition:
StringLikeIfExists:
ec2:Vpc: !Ref Vpc
ec2:InstanceType: !Ref EC2AllowedInstanceTypes
- Effect: Allow
Action:
- ec2:TerminateInstances
- ec2:StopInstances
- ec2:StartInstances
Resource:
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
Condition:
StringEquals:
ec2:ResourceTag/Owner: !Ref UserName
Users:
- !Ref IAMUser
These IAM permissions restrict users to running EC2 instances within a limited set of subnets, within a single VPC and security group. Users are then only able to start/stop/terminate instances that have been tagged with their IAM user name in the Owner tag.
What I'd like to be able to do is also allow users to create and delete any additional tags on their EC2 resources, such as setting the Name tag. What I can't work out is how to do this without also enabling them to change the Owner and PrincipalId tags on resources they don't "own".
Is there a way to limit the ec2:CreateTags and ec2:DeleteTags actions to prevent users from setting certain tags?
After much sifting through the AWS EC2 documentation I found the following: Resource-Level Permissions for Tagging
This gives the example:
Use with the ForAllValues modifier to enforce specific tag keys if they are provided in the request (if tags are specified in the request, only specific tag keys are allowed; no other tags are allowed). For example, the tag keys environment or cost-center are allowed:
"ForAllValues:StringEquals": { "aws:TagKeys": ["environment","cost-center"] }
Since what I want to achieve is essentially the opposite of this (allow users to specify all tags, with the exception of specific tag keys), I have been able to prevent users from creating/deleting the Owner and PrincipalId tags by adding the following PolicyDocument statement to the user policy in my CloudFormation template:
- Effect: Allow
Action:
- ec2:CreateTags
- ec2:DeleteTags
Resource:
- !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:*/*'
Condition:
"ForAllValues:StringNotEquals":
aws:TagKeys:
- "Owner"
- "PrincipalId"
This permits users to create/delete any tags they wish, so long as they aren't the Owner or PrincipalId.
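For what it's worth, an explicit Deny is a hedged alternative to the Allow-with-condition above; this swaps in the ForAnyValue set operator and is not taken from the linked documentation. Because an explicit deny overrides any allow, it keeps the Owner and PrincipalId tags protected even if the user later picks up broader tagging permissions from another statement:
- Effect: Deny
  Action:
    - ec2:CreateTags
    - ec2:DeleteTags
  Resource:
    - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:*/*'
  Condition:
    "ForAnyValue:StringEquals":
      aws:TagKeys:
        - "Owner"
        - "PrincipalId"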