Pulling images from an AWS ECR repository without AWS credentials

I need to pull Docker images from an on-premises environment. However, I don't have access to AWS keys that would let me do this against a private repository. How can I pull ECR images without AWS authentication? I've looked at ECR public repositories, but I still need some level of restriction to protect the repository's contents.

Yes, you can authenticate with temporary credentials. As the documentation puts it:
You can use temporary security credentials to make programmatic requests for AWS resources using the AWS CLI or AWS API (using the AWS SDKs). The temporary credentials provide the same permissions as long-term security credentials, such as IAM user credentials.
Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
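As a rough sketch (the region, account ID, repository, and role name below are placeholders, not values from the question), pulling with temporary credentials could look like this:
# Assume a role that is allowed to pull from the repository (hypothetical role name)
aws sts assume-role \
  --role-arn arn:aws:iam::<accountId>:role/EcrPullRole \
  --role-session-name onprem-pull
# Export the AccessKeyId/SecretAccessKey/SessionToken returned above
export AWS_ACCESS_KEY_ID=<from assume-role output>
export AWS_SECRET_ACCESS_KEY=<from assume-role output>
export AWS_SESSION_TOKEN=<from assume-role output>
# Exchange the temporary credentials for a Docker login token and pull
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <accountId>.dkr.ecr.<region>.amazonaws.com
docker pull <accountId>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>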
Also, if you cannot make authentication work this way or any other way, you can use a public registry together with a registry policy. You can ALLOW certain IPs/services/users to reach your registry. An example registry policy is below:
{
  "Version": "2012-10-17",
  "Id": "ECRPolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "ecr:*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "1.2.3.4/32",
            "2.3.4.5/32"
          ]
        },
        "IpAddress": {
          "aws:SourceIp": "0.0.0.0/0"
        }
      }
    }
  ]
}

Related

AWS Glue IAM role can't connect to AWS OpenSearch

I have a Glue job that pushes data into AWS OpenSearch. Everything works perfectly when I have an "open" access policy on OpenSearch, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:<region>:<accountId>:domain/<domain>/*"
    }
  ]
}
This works without issue. The problem is that I want to restrict my OpenSearch domain to only the role running the Glue job.
I attempted to do that, starting with something basic:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
        ]
      },
      "Action": [
        "*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
This disables all access to OpenSearch, which is what I want, but it also blocks Glue, even though the job is running with the AWSGluePowerUser role set.
An error occurred while calling o805.pyWriteDynamicFrame. Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
I assume this is because the Glue job can no longer see the OpenSearch cluster. Keep in mind that everything works when using the "default" access policy for OpenSearch.
I have my Glue job configured to use the IAM role AWSGluePowerUser, which also has the AmazonOpenSearchServiceFullAccess policy attached.
I'm not sure where I've gone wrong here?
Edit: Here is where/how I've set the role for the Glue job; I assume this is all I needed to do?
[Screenshot of the Glue job details page showing the IAM role]
I believe this is not possible, because the AWS Glue Elasticsearch connector is based on an open-source Elasticsearch Spark library that does not sign requests using AWS Signature Version 4, which is required for enforcing domain access policies.
If you take a look at the key concepts for fine-grained access control in OpenSearch, you'll see:
If you choose IAM for your master user, all requests to the cluster must be signed using AWS Signature Version 4.
If you visit the Elasticsearch Connector for AWS Glue AWS Marketplace page, you'll notice that the connector itself is based on an open-source implementation:
For more details about this open-source Elasticsearch spark connector, please refer to this open-source connector online reference
Under the hood, AWS Glue is using this library to index data from Spark dataframes to the Elasticsearch endpoint. Since this open-source library (maintained by the Elasticsearch community) does not support signing requests with AWS Signature Version 4, it will only work with the "open" permission you've referenced. This is hinted at in the big picture on fine-grained access control:
In general, if you enable fine-grained access control, we recommend using a domain access policy that doesn't require signed requests.
Note that you can always fall back to using a master user based on username/password:
1. Create a master user (username/password) for the OpenSearch domain's fine-grained access control configuration.
2. Store the username/password in an AWS Secrets Manager secret as described here (see the sketch after this list).
3. Attach the secret to the AWS Glue connector as described here.
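For step 2, a minimal CLI sketch, assuming the connector reads basic-auth credentials from keys like the ones below (the secret name and key names are placeholders; check what your connector version actually expects):
# Store the master user's credentials in Secrets Manager
aws secretsmanager create-secret \
  --name glue/opensearch-master-user \
  --secret-string '{"es.net.http.auth.user":"master-user","es.net.http.auth.pass":"REPLACE_ME"}'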
Hope this helps!
I usually take a "deny everyone except" approach in these situations:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "es:*",
      "Resource": [
        "*"
      ],
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
          ]
        }
      }
    }
  ]
}

S3 Bucket Policy - Allow EC2 Access

My CI pipeline deposits intermediate artifacts in an S3 bucket that I don't want to be accessible to the public, but that does need to be accessible to myself and certain IP addresses.
Currently I have the following bucket permission policy in place:
{
  "Version": "2012-10-17",
  "Id": "CIBucketPolicy",
  "Statement": [
    {
      "Sid": "IpAllowList",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ci-bucket-name",
        "arn:aws:s3:::ci-bucket-name/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip-address1",
            "ip-address2",
            "ip-address3"
          ]
        }
      }
    }
  ]
}
One of the artifacts that my CI deposits here is a raw disk image, which is used in later stages to build an Amazon Machine Image using aws ec2 import-snapshot. The problem here is that the aws ec2 import-snapshot stage keeps failing with the error message:
ClientError: Disk validation failed [We do not have access to the given resource. Reason 403 Forbidden]
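For reference, the failing stage runs something along these lines (the bucket name matches the policy above, but the key and description are placeholders, not my exact values):
# Ask EC2 to build a snapshot from the raw disk image the CI stage uploaded
aws ec2 import-snapshot \
  --description "CI raw disk image" \
  --disk-container "Format=RAW,UserBucket={S3Bucket=ci-bucket-name,S3Key=images/disk.raw}"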
I'm fairly confident that something about my bucket permission policy is blocking EC2 from accessing the bucket, even though they're in the same region, but I haven't figured out how to overcome this without simply removing the IP address condition.
Do I need to add something special to this policy for EC2 to have access? Perhaps I'm missing an S3 Action that EC2 needs to be able to perform in order to access the image file? Any advice would be greatly appreciated!

Restrict Amazon S3 access to single HTTPS host

I want to proxy an Amazon S3 bucket through our reverse proxy (Nginx).
For higher security, I want to forbid read access to the bucket for anything except the HTTPS host on which I run the proxy.
Is there a way to configure Amazon S3 for this task?
Please provide the configuration.
I considered adding a password to the S3 bucket name, but that is not a solution, because we also need signed uploads to the bucket, so the bucket name will be publicly available.
If your reverse proxy has a Public IP address, then you would add this policy to the Amazon S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.22/32"}
      }
    }
  ]
}
This grants permissions to GetObject if the request is coming from the specific IP address. Amazon S3 is private by default, so this is granting access only in that particular situation. You will also want to grant access to IAM Users/Groups (via IAM, not a Bucket Policy) so that bucket content can be updated.
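As a quick sanity check (assuming the example bucket name above and a hypothetical object key), compare a request made from the proxy host with one made from anywhere else:
# From the reverse proxy (source IP 54.240.143.22): expect HTTP 200
curl -I https://examplebucket.s3.amazonaws.com/some-object.txt
# From any other machine: expect HTTP 403 (Access Denied)
curl -I https://examplebucket.s3.amazonaws.com/some-object.txt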
See: Bucket Policy Examples - Amazon Simple Storage Service

Kubernetes pull private external amazon ECR images

I have an Amazon account with a K8S cluster which is able to pull images from the same account's ECR repository.
But my company has another account with another ECR repository. How can I pull images from this "external" ECR repository?
I'm also a Rancher user, and I used to do this by installing a special container (https://github.com/rancher/rancher-ecr-credentials) which does the job.
Is there something equivalent for Kubernetes?
Thanks for your precious help
Since you already have this set up for pulling images from the same account, you can do this at the IAM policy level or with ECR repository permissions: in your other AWS account, set up a policy specifying the AWS account number (where the k8s cluster is) that will be able to pull images.
For example, grant pull permissions in the ECR Permissions tab:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "k8s-aws-permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::aws_account_number:root"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
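On the Kubernetes side, one approach (a sketch, not the only option; the secret name, namespace, and region below are placeholders, and it assumes the cluster's nodes are allowed to call ECR with their own credentials) is to turn an ECR login token into an image pull secret:
# Exchange credentials for a registry token (valid for about 12 hours) and store it as a pull secret
TOKEN=$(aws ecr get-login-password --region <region>)
kubectl create secret docker-registry ecr-other-account \
  --docker-server=<other_account_id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$TOKEN" \
  --namespace=default
# Then reference it from pods via imagePullSecrets, e.g. by patching the default service account
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"ecr-other-account"}]}'
Because the token expires, this is usually refreshed on a schedule, similar to what the Rancher credentials container does.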

Amazon S3 Bucket Policy: How to lock down access to only your EC2 Instances

I am looking to lock down an S3 bucket for security purposes - I'm storing deployment images in the bucket.
What I want to do is create a bucket policy that supports anonymous downloads over http only from EC2 instances in my account.
Is there a way to do this?
An example of a policy that I'm trying to use (it won't allow itself to be applied):
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket name]",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:ec2:us-east-1:[my account id]:instance/*"
        }
      }
    }
  ]
}
Just to clarify how this is normally done: you create an IAM policy, attach it to a new or existing role, and assign the role to the EC2 instance. You can also provide access through bucket policies, but that is less precise.
Details below:
S3 buckets deny access by default to everyone except the owner. So you create your bucket and upload the data. You can verify with a browser that the files are not accessible by trying https://s3.amazonaws.com/MyBucketName/file.ext. It should come back with the error code "Access Denied" in the XML. If you get an error code of "NoSuchBucket", you have the URL wrong.
Create an IAM policy based on arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess. It starts out looking like the snippet below. Take a look at the "Resource" key and note that it is set to a wildcard. You just modify this to be the ARN of your bucket. You need one entry for the bucket and one for its contents, so it becomes: "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
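As a sketch, the customized policy could be created from the CLI like this (the policy name is a placeholder, and the managed policy's exact action list may vary between versions):
# Write the scoped-down read-only policy to a file
cat > s3-readonly-mybucket.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
    }
  ]
}
EOF
# Register it as a customer-managed policy
aws iam create-policy --policy-name S3ReadOnlyMyBucket \
  --policy-document file://s3-readonly-mybucket.json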
Now that you have a policy, what you want to do is attach to your instances an IAM role that automatically grants them this policy, all without any authentication keys having to be on the instance. So go to Roles, create a new role, make it an Amazon EC2 role, attach the policy you just created, and your role is ready.
Finally, you create your instance and add the IAM role you just created. If the machine already has its own role, you just have to merge the two roles into a new one for the machine. If the machine is already running, it won't get the new role until you restart.
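The same steps from the CLI might look roughly like this (role, policy, and profile names continue the placeholders above; ec2-trust.json is assumed to contain the standard EC2 trust relationship):
# Create the role, attach the bucket policy, and wrap the role in an instance profile
aws iam create-role --role-name S3ReadOnlyMyBucketRole \
  --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name S3ReadOnlyMyBucketRole \
  --policy-arn arn:aws:iam::<accountId>:policy/S3ReadOnlyMyBucket
aws iam create-instance-profile --instance-profile-name S3ReadOnlyMyBucketProfile
aws iam add-role-to-instance-profile --instance-profile-name S3ReadOnlyMyBucketProfile \
  --role-name S3ReadOnlyMyBucketRole
# Associate the profile with the instance (at launch, or with an existing instance)
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=S3ReadOnlyMyBucketProfile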
Now you should be good to go. The machine has the rights to access the S3 bucket. You can use the following command to copy files to your instance. Note that you have to specify the region:
aws s3 cp --region us-east-1 s3://MyBucketName/MyFileName.tgz /home/ubuntu
Please note: "security through obscurity" is only a thing in the movies. Either something is provably secure, or it is insecure.
I used something like this (use aws:sourceVpce with {VPCe_ENDPOINT} instead of aws:sourceVpc if you are restricting to a VPC endpoint):
{
  "Version": "2012-10-17",
  "Id": "Allow only My VPC",
  "Statement": [
    {
      "Sid": "Allow only My VPC",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::{BUCKET_NAME}",
        "arn:aws:s3:::{BUCKET_NAME}/*"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "{VPC_ID}"
        }
      }
    }
  ]
}