Vault Cross Account Access - Role Not Found

I'm trying to set up the AWS auth method for cross-account access in Vault.
I've enabled the AWS auth method and configured a role and the STS entry:
vault auth enable aws
vault write auth/aws/role/dev-role auth_type=iam \
    bound_account_id=[RemoteAccountID] \
    inferred_entity_type=ec2_instance \
    inferred_aws_region=us-east-1 \
    policies=dev max_ttl=24h
vault write auth/aws/config/sts/[RemoteAccountID] \
    sts_role=arn:aws:iam::[RemoteAccountID]:role/VaultRole
I've configured this policy on the EC2 instance on which Vault runs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "iam:GetInstanceProfile",
        "iam:GetUser",
        "iam:GetRole"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::[RemoteAccountID]:role/VaultRole"
      ]
    }
  ]
}
I've also added the Vault account as a trusted entity on "arn:aws:iam::[RemoteAccountID]:role/VaultRole", the role attached to the EC2 instance in the account I'm trying to authenticate from.
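For reference, the trust relationship on that role looks roughly like this ([VaultAccountID] standing in for the account Vault runs in):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[VaultAccountID]:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}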
But when I log into the instance on the remote account and run vault login -method=aws role=dev-role, I get the error:
error authenticating: Error making API request.
URL: PUT http://11.11.11.11:8200/v1/auth/aws/login
Code: 400. Errors:
* entry for role dev-role not found
Is there some other config that needs to be set up in order to establish this sort of cross-account authentication with Vault?

I've replicated something similar and it works just fine.
Are you by any chance using Vault namespaces? If so, in which namespace are you enabling the AWS auth engine and writing the role/STS config?
Try logging into the remote EC2 instance, exporting the namespace (export VAULT_NAMESPACE=foo), and then re-running your vault login -method=aws role=dev-role command.
Or just run vault login -namespace=foo -method=aws role=dev-role (flags go before positional arguments).
If it complains about a missing header value, you might want to set one up on the master.
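As a quick sanity check, you can confirm where the auth method and role actually live; the namespace foo here is just a stand-in for whatever you use:
# List namespaces (Vault Enterprise), then look for the auth mount and the role
vault namespace list
VAULT_NAMESPACE=foo vault auth list
VAULT_NAMESPACE=foo vault read auth/aws/role/dev-role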

Related

CLIENT_ERROR: authorization failed for primary source and source version

I opened a free AWS account to learn and created an Administrator user group and user in IAM for myself.
I am following a tutorial "Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman."
I am getting the error CLIENT_ERROR: authorization failed for primary source and source version in the DOWNLOAD_SOURCE phase of the Build transition in CodePipeline.
I followed the directions in an earlier post, "AWS CodeBuild failed CLIENT_ERROR: authorization failed for primary source and source version", with no success.
I added and attached a policy for connection permissions to my service role as directed, like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codestar-connections:UseConnection",
      "Resource": "insert connection ARN here"
    }
  ]
}
Later, I changed the Action above to
"codepipeline:GetPipelineState"
I added and attached a policy for GitPull like so:
{
  "Action": [
    "codecommit:GitPull"
  ],
  "Resource": "*",
  "Effect": "Allow"
},
I have disconnected and reconnected my connection to GitHub and also tried creating a new personal access token with no success.
I have tried changing my S3 bucket to public with an Allow policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*"
    }
  ]
}
I also tried updating the Node.js version in the source code to 16.18.0.
I am stuck. The resources I have found keep pointing me to the same AWS page I mentioned. I don't know what else to do. I would appreciate any help.
My repo is located at https://github.com/venushofler/my-aws-codepipeline-codebuild-with-postman.git
The answer to the above was to add a default set of access permissions to the users, groups, and roles in my account. I found documentation at https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html which states, in part: "To add a default set of CodeBuild access permissions to an IAM group or IAM user, choose Policy Type, AWS Managed, and then do the following:
To add full access permissions to CodeBuild, select the box named AWSCodeBuildAdminAccess, choose Policy Actions, and then choose Attach."
This worked to allow the Build and Deploy stage to succeed.
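The same attachment can be done from the CLI; the group name here is hypothetical:
# Attach the AWS-managed CodeBuild admin policy to an IAM group
aws iam attach-group-policy \
    --group-name MyCodeBuildAdmins \
    --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess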

AWS Glue IAM role can't connect to AWS OpenSearch

I have a Glue job that pushes data into AWS OpenSearch. Everything works perfectly when I have an "open" permission on OpenSearch, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:<region>:<accountId>:domain/<domain>/*"
    }
  ]
}
This works without issue. The problem is that I want to lock my OpenSearch domain down to only the role running the Glue job.
I attempted to do that, starting with a basic policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
        ]
      },
      "Action": [
        "*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
This disables all other access to OpenSearch, which I want; however, it also blocks Glue, even though the job is running with the AWSGluePowerUser role set.
An error occurred while calling o805.pyWriteDynamicFrame. Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
I assume this is because the Glue job can no longer see the OpenSearch cluster. Keep in mind everything works when using the "default" access policy for OpenSearch.
I have my Glue job configured to use the IAM role AWSGluePowerUser, which also has the AmazonOpenSearchServiceFullAccess policy attached.
I'm not sure where I've gone wrong here?
Edit: Here is where/how I've set the role for the Glue job; I assume this is all I needed to do?
(Screenshot: the IAM Role setting under the Glue job details.)
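For reference, the role attached to the job can also be confirmed from the CLI; the job name here is a placeholder:
# Print the IAM role the Glue job runs with
aws glue get-job --job-name my-opensearch-job --query 'Job.Role'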
I believe this is not possible, because the AWS Glue Elasticsearch connector is based on an open-source Elasticsearch Spark library that does not sign requests using AWS Signature Version 4, which is required for enforcing domain access policies.
If you take a look at the key concepts for fine-grained access control in OpenSearch, you'll see:
If you choose IAM for your master user, all requests to the cluster must be signed using AWS Signature Version 4.
If you visit the Elasticsearch Connector for AWS Glue AWS Marketplace page, you'll notice that the connector itself is based on an open-source implementation:
For more details about this open-source Elasticsearch spark connector, please refer to this open-source connector online reference
Under the hood, AWS Glue uses this library to index data from Spark dataframes to the Elasticsearch endpoint. Since this open-source library (maintained by the Elasticsearch community) does not support signing requests with AWS Signature Version 4, it will only work with the "open permission" you've referenced. This is hinted at in the big picture on fine-grained access control:
In general, if you enable fine-grained access control, we recommend using a domain access policy that doesn't require signed requests.
Note that you can always fall back to using a master user based on username/password:
1. Create a master user (username/password) for the OpenSearch domain's fine-grained access control configuration.
2. Store the username/password in an AWS Secrets Manager secret as described here (see the sketch below).
3. Attach the secret to the AWS Glue connector as described here.
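As a rough sketch, creating that secret from the CLI might look like the following; the secret name and key names are assumptions, so check the connector's documentation for the exact keys it expects:
# Store the OpenSearch master user credentials for the Glue connector
aws secretsmanager create-secret \
    --name glue-opensearch-master-user \
    --secret-string '{"es.net.http.auth.user":"master","es.net.http.auth.pass":"REPLACE_ME"}'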
Hope this helps!
I usually take a "deny everyone except" approach in these situations:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "es:*",
      "Resource": [
        "*"
      ],
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
          ]
        }
      }
    }
  ]
}

Pulling image from AWS ECR repository without AWS credentials

I need to pull Docker images from on premises. However, I don't have access to AWS keys to be able to perform such an action against a private repository. How can I pull ECR images without AWS authentication? I've noticed the ECR public repository option; however, I still need some level of restriction to protect the repo's contents.
Yes, you can authenticate temporarily. As the documentation points out:
You can use temporary security credentials to make programmatic requests for AWS resources using the AWS CLI or AWS API (using the AWS SDKs). The temporary credentials provide the same permissions as long-term security credentials, such as IAM user credentials.
Reference : https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
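For example, once temporary credentials are in place (say, from an assumed role), the usual ECR login flow applies; the account ID and region here are placeholders:
# Exchange AWS credentials for a short-lived Docker registry token
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin [AccountID].dkr.ecr.us-east-1.amazonaws.com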
Also, if there is no way to succeed with authentication this way or any other way, you can use public registries + registry policies. You can ALLOW certain IPs/services/users to reach your registry. An example registry policy is below:
{
  "Version": "2012-10-17",
  "Id": "ECRPolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "ecr:*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "1.2.3.4/32",
            "2.3.4.5/32"
          ]
        },
        "IpAddress": {
          "aws:SourceIp": "0.0.0.0/0"
        }
      }
    }
  ]
}
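Applying such a policy to a repository from the CLI could look like this; the repository name and policy file name are placeholders:
# Attach the policy document to an ECR repository
aws ecr set-repository-policy \
    --repository-name my-repo \
    --policy-text file://registry-policy.json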

Databricks AWS account setup - AWS storage with error - Missing permissions: PUT, LIST, DELETE

I have created a PREMIUM trial Databricks account with AWS and set up the AWS account with user access keys.
For configuring AWS storage, I followed the instructions in the URL below and set up the bucket policy as shown:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Grant Databricks Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::98765432101:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-databricks-user-bucket/*",
        "arn:aws:s3:::my-databricks-user-bucket"
      ]
    }
  ]
}
https://docs.databricks.com/administration-guide/account-settings/aws-storage.html
But I am getting the error below.
The provided S3 bucket is valid, but have insufficient permissions to
launch a Databricks deployment. Please double check your settings
according to the tutorial. Missing permissions: PUT, LIST, DELETE
The bucket policy I used already includes the PUT, LIST, and DELETE actions, yet I am still facing the above error.
Note: as trial and error, I changed the Action as below, which allows all actions, but I am still getting the same error.
"Action": "*"
The above error was caused by a mistake I made when setting up the Databricks account with AWS.
As part of setting up AWS account details in Databricks, a cross-account role should be created (the alternative is an access key). When creating the role, the AWS account ID to use is the Databricks AWS account ID, whose value is 414351767826.
The mistake I made was giving my own AWS account ID instead of the Databricks one. Following the steps in the URL below exactly as written works as expected.
I made the same mistake when setting up AWS storage; following the documentation as written works perfectly.
https://docs.databricks.com/administration-guide/account-settings/aws-accounts.html
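For illustration, the trust relationship on that cross-account role references the Databricks account, roughly like this (per the Databricks docs, the external ID is your own Databricks account ID, shown here as a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::414351767826:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "[DatabricksAccountID]"
        }
      }
    }
  ]
}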

Unable to download AWS CodeDeploy Agent Install file

I am trying to download the AWS CodeDeploy agent install file on my Amazon Linux instance. I followed the instructions in http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html for Amazon Linux and have created the appropriate instance profile, service role, etc. Everything is at the latest version (Amazon Linux, CLI packages; it is a brand new instance, and I have tried this with at least 3 more brand new instances with the same result). All instances have full outbound internet access.
But the following statement for downloading the install file from S3 always fails:
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
with the error:
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Can anyone help me with this error?
I figured out the problem. According to the CodeDeploy documentation for the IAM instance profile,
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
the following permissions need to be given to your IAM instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
But I had limited the resource to my code bucket, since I don't want my instances to access other buckets directly. It turns out I also needed to grant additional permission on the aws-codedeploy-us-east-1/* S3 resource to be able to download the agent. This is not made very clear in the document on setting up the IAM instance profile for CodeDeploy.
A more restrictive policy that works:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::aws-codedeploy-us-east-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-south-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-central-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-west-1/*",
        "arn:aws:s3:::aws-codedeploy-sa-east-1/*"
      ]
    }
  ]
}
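With that policy attached, the original download should now succeed, and the agent can be installed as the CodeDeploy user guide describes (region assumed to be us-east-1):
# Re-run the download that previously returned 403, then install the agent
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
sudo ./install auto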