How to pull a Docker image from AWS ECR anonymously?

I understand we need to log in to ECR to pull a Docker image from AWS ECR. How can I make it anonymous? Since we keep code, data and infrastructure (all open source) separate, we see no need for the infrastructure part to be private.
I was able to find a way to create a repository permission with Principal "*", but I'm not sure how to make it anonymous so that anyone who wants to pull the image does not need IAM user access.
Below is the policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublic",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
I'm also not sure how to create an anonymous IAM user.
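(For reference, a repository policy like the one above would typically be applied with something along these lines; the repository name and file name are placeholders:)
aws ecr set-repository-policy \
  --repository-name my-open-source-images \
  --policy-text file://public-read-policy.json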

If you read the FAQ:
Q: Can Amazon ECR host public container images?
Amazon ECR currently supports private images. However, using IAM resource-based permissions, you can configure policies for each repository to allow access to IAM users, roles, or other AWS accounts.
The only workaround I can think of is to put up an EC2 machine, use NGINX to proxy_pass to the ECR URL, and use the EC2 IP as the Docker registry address.

Starting 1 Dec 2020, you can use Amazon ECR Public to pull container images anonymously.
Links to How To & Launch Announcement
Anyone who pulls images anonymously gets 500 GB of free data bandwidth each month, after which they can sign up for or sign in to an AWS account. Simply authenticating with an AWS account increases free data bandwidth to 5 TB each month when pulling images from the internet. And finally, workloads running in AWS will get unlimited data bandwidth from any region when pulling from ECR Public.
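For example, pulling a public image is just a plain docker pull against the public.ecr.aws registry, with no prior login (the Amazon Linux repository below is one example from the public gallery; substitute your own alias/repository):
# No docker login or AWS credentials required for public images
docker pull public.ecr.aws/amazonlinux/amazonlinux:latest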

Is it possible to pull an image from ECR on an EC2 instance without using docker login?

I currently have a private ECR repo and an EC2 instance. If I want to pull the image from the private ECR on my local machine, I have to set up my AWS credentials using aws configure and perform a docker login.
Now I want to pull the image from the EC2 instance. When I try to run the docker command directly, it tells me to authenticate first. Is it possible to attach an IAM role to my EC2 instance and skip the docker login or aws ecr login workflow?
At the moment, I can only run aws configure inside the EC2 instance, and that seems to need an extra IAM user, which I am trying to avoid.
You don't have to run aws configure on the EC2 machine; in fact, that would be bad security practice. You should attach an IAM role which allows the EC2 instance to fetch images and, more importantly, to obtain the authorization token for the ECR registry. For example, you can create a policy with the following permissions to have read-only access to ECR images:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:GetAuthorizationToken",
        "ecr:ListImages"
      ],
      "Resource": "*"
    }
  ]
}
Using this policy, create a new IAM service role and attach it to the EC2 instance.
Now, even with this role attached, you will still have to authenticate the Docker CLI using an authorization token.
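With AWS CLI v2 on the instance, that token exchange is a single pipeline; the credentials come from the attached role, so no aws configure is needed (region and account ID below are placeholders):
# Exchange the role's temporary credentials for a 12-hour ECR authorization token
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com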
In addition to the other answers posted here stating that you should use the EC2 IAM role instead of configuring credentials with aws configure, I also suggest installing the Amazon ECR Credential Helper on your EC2 instance. Then you won't have to perform a docker login.
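Once the helper binary is installed, pointing Docker at it is a small change to ~/.docker/config.json, roughly like this (this routes every registry through the helper; you can also scope it per registry with a credHelpers entry instead):
{
  "credsStore": "ecr-login"
}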
I have to setup my AWS credential by using aws configure and perform a docker login.
You don't have to. If your code runs on EC2, you should use an instance IAM role instead of the regular setup of AWS credentials via aws configure.
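If the instance was launched without a role, one way to attach one after the fact is via the CLI, something like the following (the instance ID and instance profile name are placeholders):
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=my-ecr-readonly-profile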

AWS Firehose delivery to Cross Account Elasticsearch in VPC

I have an Elasticsearch domain inside a VPC running in account A.
I want to deliver logs from Firehose in account B to the Elasticsearch domain in account A.
Is it possible?
When I try to create the delivery stream from the AWS CLI, I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when pointed at the Elasticsearch domain in account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch domain in account A from an EC2 instance in account B.
Adding my complete Terraform code (I got the same exception with the AWS CLI and with Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided I'm guessing it's a role named 'devops'). At a minimum you will need firehose:CreateDeliveryStream.
I suggest adding the below permissions to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
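One way to attach these permissions as an inline policy to the role used for creating the stream is via the CLI, roughly like this (the role name follows the 'devops' guess above; the policy name and file path are placeholders):
aws iam put-role-policy \
  --role-name devops \
  --policy-name firehose-delivery-access \
  --policy-document file://firehose-policy.json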
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/

Connecting to AWS Elasticsearch with the Node.js AWS SDK

What is the best approach for using AWS Elasticsearch with Node.js? I am using AWS ECS EC2 instances to run my Docker containers, and I use the IAM role to access other AWS resources like S3 buckets and DynamoDB from Node.js.
Can we use the same approach for accessing the AWS Elasticsearch endpoint too?
I added an inline policy to the existing role with the Elasticsearch endpoint ARN, but the Node.js SDK is not able to connect to ES. When the AWS key and ID are added as environment variables in the task definition it starts working, but I don't want to use that method as it will conflict with the other AWS resources. (It looks like the dev team has configured the program so that it looks for environment variables.)
It is surely not the best method, but you can also use an IP-based restriction. We currently use this and it works fine. Just set an Elastic IP on your EC2 instance (if you haven't already) and add the IP address to the access policy like this:
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"XXX.XXX.XXX.XXX",
]
}
}
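Embedded in a full domain access policy, it would look roughly like this (region, account ID, and domain name are placeholders, and es:* is only for illustration; scope the actions down as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:<REGION>:<ACCOUNT_ID>:domain/<DOMAIN_NAME>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "XXX.XXX.XXX.XXX"
          ]
        }
      }
    }
  ]
}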
For anybody else stumbling across this, here are a few things I learnt while I was stuck on something similar:
Your EC2 instance's role ARN can be added to the access policy for your Elasticsearch domain along with the permissions you want the role to have. For example, for an EC2 instance running with a role "aws-ec2" that needs permission to make HTTP GET requests to ES, you could have the following in your ES domain access policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<ACCOUNT_ID>:role/aws-ec2"
        ]
      },
      "Action": "es:ESHttpGet",
      "Resource": "arn:aws:es:<REGION>:<ACCOUNT_ID>:domain/<DOMAIN_NAME>/*"
    }
  ]
}
Any requests made by an EC2 instance running with the role "aws-ec2" in your account will then have access to Elasticsearch.
Note that if you have trouble getting credentials, try the following:
AWS.config.getCredentials(function(err) {
  if (err) console.log(err.stack);
  // credentials not loaded
  else {
    // credentials are loaded and can be accessed via
    // AWS.config.credentials.accessKeyId, AWS.config.credentials.secretAccessKey, etc.
  }
});
This will usually pull the credentials in like magic. I have a theory about how it works (tl;dr: I think it pulls them from the EC2 instance metadata by making a request to a fixed IP), but it's unproven so I won't embarrass myself until I know more. Note that this should work even if you don't have credentials stored in your environment or in the shared credentials file.
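If it helps, here is a rough sketch of making a signed GET request from Node.js once those role credentials are loaded, in the style of the request-signing samples in the AWS docs for the v2 JavaScript SDK (AWS.HttpRequest, AWS.Signers.V4, AWS.HttpClient); the region, domain endpoint, and index path are placeholders:
var AWS = require('aws-sdk');

var region = 'us-east-1';                                   // placeholder region
var domain = 'search-my-domain.us-east-1.es.amazonaws.com'; // placeholder ES endpoint

var endpoint = new AWS.Endpoint(domain);
var request = new AWS.HttpRequest(endpoint, region);
request.method = 'GET';
request.path += 'my-index/_search';                         // placeholder index
request.headers['host'] = domain;

// Load credentials (from the instance/task role here) and sign the request with SigV4
AWS.config.getCredentials(function(err) {
  if (err) { console.log(err.stack); return; }

  var signer = new AWS.Signers.V4(request, 'es');
  signer.addAuthorization(AWS.config.credentials, new Date());

  var client = new AWS.HttpClient();
  client.handleRequest(request, null, function(response) {
    var body = '';
    response.on('data', function(chunk) { body += chunk; });
    response.on('end', function() { console.log(body); });
  }, function(error) {
    console.log('Error: ' + error);
  });
});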

Connecting Docker to a cloud provider, Amazon AWS

Context: I was going through Link to Amazon Web Services to create Swarms, in order to connect to my provider.
The role was created successfully.
Then, while creating the policy to associate with the role, a problem occurred.
Problem:
An error occurred: Cannot exceed quota for PolicySize: 5120
As suggested there, this is what I need to add to the policy:
https://docs.docker.com/docker-for-aws/iam-permissions/
I did some research and people seem to like this solution:
https://github.com/docker/machine/issues/1655
How can I create the policy using the best method?
Given that the Docker documentation appears to be wrong (it doesn't work in my case), what is the best method?
You are looking at the wrong instructions for connecting Docker Cloud to AWS. Follow these instructions instead: https://docs.docker.com/docker-cloud/infrastructure/link-aws/
It's the following 3 steps:
Create an AWS policy for Docker Cloud
Create a Docker Cloud role and attach the policy from step 1
Attach the AWS role/account to Docker Cloud
The policy in step 1 above is pretty simple. It should be allowed to perform EC2 instance related actions (your screenshot of the policy looks like it doesn't grant EC2 permissions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
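For reference, a policy like this can also be created from a JSON file via the CLI (the policy name and file path are placeholders):
aws iam create-policy \
  --policy-name dockercloud-policy \
  --policy-document file://dockercloud-policy.json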
The role must have the permissions to implement the policy.
For a detailed post on the deployment via docker-cloud see: https://blog.geografia.com.au/how-we-are-using-docker-cloud-for-automated-testing-and-deployments-of-applications-bb87ec3173e7

AWS Elastic Beanstalk deployment failed

When I try to deploy a Java web app to an Elastic Beanstalk Tomcat container, it fails with the following error:
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
Please note the following points:
Deployment is automated via Jenkins running on an EC2 server.
This error is not a continuous issue. Sometimes the deployment succeeds and sometimes it fails with the above error.
I had this exact problem. From what I could tell it was completely random, but it turned out to be linked to IAM roles. Everything worked perfectly until I added .ebextensions with a database migration script; after that I couldn't get my Bamboo builder to work again. However, I managed to figure it out (no thanks to Amazon's non-existent documentation on which permissions are needed for EB).
I based my IAM policy on this Gist: https://gist.github.com/magnetikonline/5034bdbb049181a96ac9
However, I had to make some modifications. This specific issue was caused by a too-restrictive policy on S3 gets, so I simply replaced the statement provided with:
{
  "Action": [
    "s3:Get*"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-*/*"
  ]
},
This allows users with the policy to perform all kinds of Get operations on the bucket, since I couldn't be bothered to find out which specific one was required.
Uploading to Beanstalk involves sending a zipped artifact to S3 along with modifying the CloudFormation templates (this part is hands-off).
Most likely the IAM role attached to the Jenkins runner (or its access credentials) does not have access to the relevant S3 buckets. Verify this via IAM. See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.html
This is an edge case, but I wanted to capture it here for posterity. This error message can sometimes be returned as a generic error. I spent many weeks working through this error with AWS to find out that it was related to Security Token Service (STS) credentials expiring. When you generate STS credentials, the maximum duration of the session is 36 hours. If you generate a 36-hour key, some services used by Elastic Beanstalk don't respect this session length and consider the session expired. To work around this, we no longer allow STS credentials with a session length longer than 2 hours.
I have also struggled with this and, as in Rick's case, it turned out to be a permissions problem, but his solution didn't work for me.
I have fixed the error:
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
Adding "s3:Get*" alone wasn't enough; I also needed "s3:List*".
The interesting thing is that I was getting this issue for just one EB environment out of three. It turned out that the other environments deployed to all nodes at once, while the problematic one had Rolling updates enabled (which, obviously, perform other actions, such as adding new instances).
Here is the final IAM policy that works: gist: IAM policy to allow Continuous Integration user to deploy to AWS Elastic Beanstalk
I had the same issue. Based on what I gathered from AWS support, an IAM user requires full access to S3 to perform some actions like deployment. This is because EB uses CloudFormation, which uses S3 to store templates. You need to attach the managed policy "AWSElasticBeanstalkFullAccess" to the IAM user performing the deployment, or create a policy like the following and attach it to the user.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Ideally Amazon would offer a way to restrict the Resource to specific buckets, but it doesn't look like that's doable right now.
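If you go with the managed policy mentioned above, attaching it to the deploying IAM user from the CLI looks something like this (the user name is a placeholder):
aws iam attach-user-policy \
  --user-name ci-deployer \
  --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess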