I use GCR to host my team's private Docker images. I have a Docker image that I want to make publicly visible so that multiple projects can use it, I can share it with customers, etc.
How do I make a docker image public within Google's Container Registry?
1) Create an empty cloud project to house the public registry
2) Push something to the registry (so the bucket gets created)
docker push gcr.io/{PROJECT_NAME}/{IMAGE_NAME}
3) Make all future objects in the bucket public:
gsutil defacl ch -u AllUsers:R gs://artifacts.{PROJECT_NAME}.appspot.com
4) Make all current objects in the bucket public (e.g., the image you just pushed):
gsutil acl ch -r -u AllUsers:R gs://artifacts.{PROJECT_NAME}.appspot.com
5) Make the bucket itself public (not handled by -r!):
gsutil acl ch -u AllUsers:R gs://artifacts.{PROJECT_NAME}.appspot.com
6) Upload more images if desired.
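Once the ACLs are in place, anyone should be able to pull the image anonymously, which is a quick way to verify the setup (same placeholders as above):
docker pull gcr.io/{PROJECT_NAME}/{IMAGE_NAME}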
Thanks to How do you make many files public in Google Cloud Storage? for providing some of the breadcrumbs here.
Google Container Registry (GCR) now allows you to do this easily through the Console: click "Settings" on the left panel and change the Visibility to Public.
Related
I have a system built on top of Google's services; however, AWS seems to have a terrific set of video utilities (https://aws.amazon.com/elastictranscoder/ and https://aws.amazon.com/mediaconvert/). Is it possible to send my users' video from GCP to AWS and back again?
You can do it if you use Google Cloud Storage and Amazon S3 to store and exchange data between clouds.
Have a look at the gsutil command line documentation:
The gsutil tool lets you access Cloud Storage from the command line. It can also be used to access and work with other cloud storage services that use HMAC authentication, like Amazon S3. For example, after you add your Amazon S3 credentials to the .boto configuration file for gsutil, you can start using gsutil to manage objects in your Amazon S3 buckets.
To do this, follow the Setting Up Credentials to Access Protected Data guide, then open your ~/.boto file and find these lines:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
#aws_access_key_id = <your aws access key ID>
#aws_secret_access_key = <your aws secret access key>
Uncomment them and fill in the aws_access_key_id and aws_secret_access_key settings with your S3 credentials.
After that, you'll be able to copy from S3 to GCS or vice versa:
gsutil cp -R s3://my-aws-bucket gs://my-gcp-bucket
If you have a large number of files to transfer, you might want to use the top-level gsutil -m option (see gsutil help options) to perform a parallel (multi-threaded/multi-processing) copy:
gsutil -m cp -R s3://my-aws-bucket gs://my-gcp-bucket
For more information, check the gsutil cp documentation.
Also, you can use the gsutil rsync command to synchronize data between S3 and GCS:
gsutil rsync -d -r s3://my-aws-bucket gs://my-gcp-bucket
For more information, check the gsutil rsync documentation.
Can't I use the Amazon ECR registry the same way as I do when creating a private registry?
Whitelist the private registry by adding it to the daemon.json file and restarting the Docker service (a sketch of that daemon.json entry is shown after the commands below):
docker push <ecr/registry/ip>/<image_name>
docker pull <ecr/registry/ip>/<image_name>
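For context, the daemon.json whitelist for a plain private registry looks roughly like this (a sketch; the registry address and port are placeholders):
# write the insecure-registries whitelist and restart Docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["<registry/ip>:<port>"]
}
EOF
sudo systemctl restart docker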
We need to use the AWS CLI for that, but I don't want to do so and would rather handle it via the private registry method.
Any leads?
You will need to use the AWS CLI:
aws ecr get-login --registry-ids 012345678910 023456789012
This command outputs one or more docker login commands, including a user, a password, and the registry URLs for the registries you requested. You can then eval the output or run the command(s) manually, after which you can use docker pull and docker push.
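For example, with AWS CLI v1 you can evaluate the emitted login command directly and then pull (the region, account ID, and image name below are placeholders):
eval $(aws ecr get-login --no-include-email --region us-east-1 --registry-ids 012345678910)
docker pull 012345678910.dkr.ecr.us-east-1.amazonaws.com/<image_name>:latest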
More info here
I would like to use gsutil as a command in Ansible (2.5.X).
On the managed server I have already set up Cloud access (service account).
When I use gsutil on the machine, it works without problems.
But when I create a playbook on my management machine and try to run an SDK command, I have no access to the cloud and get permission denied errors.
I suspect that the SSH connection and environment are handled in a specific way by Ansible. Could someone help me with how to use SDK commands in Ansible?
- name: use ansible command
command: >
gsutil list gs://project.something.com
I know that there is a gs_storage module, but I do not know where to look for gs_access_key in an already configured setup. In .config/gcloud? I'm still learning the Cloud, so some of these things are new to me. The Cloud access was set up using a .json key, but I deleted this key from the managed machine afterwards (it shouldn't be exposed).
gsutil list would at least require the Viewer role assigned to the instance service account, or roles/storage.objectViewer in case it should also be able to get files from a bucket. Providing Credentials as Module Parameters shows how to authenticate with the gcp_compute_instance module; also see the Cloud Storage IAM Roles and Cloud Storage Authentication (the scopes) documentation.
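One quick way to narrow this down is to run the same checks through Ansible's command module that you would run interactively; if no account shows up, the user Ansible connects as has no active gcloud credentials (a diagnostic sketch, assuming gcloud and gsutil are on that user's PATH):
gcloud auth list
gcloud config list account
gsutil list gs://project.something.com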
I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM roles are the best way to delegate S3 permissions to Docker containers.
Create a role from IAM -> Roles -> Create Role -> choose the service that will use this role (select EC2) -> Next -> select your S3 policies, and the role will be created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace Role.
This then worked successfully in the Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
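For reference, a rough CLI equivalent of those console steps (the role, profile, policy, and instance ID below are placeholders, and ec2-trust-policy.json is assumed to contain the standard EC2 trust policy):
aws iam create-role --role-name docker-s3-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name docker-s3-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-instance-profile --instance-profile-name docker-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name docker-s3-profile --role-name docker-s3-role
aws ec2 associate-iam-instance-profile --instance-id <instance-id> --iam-instance-profile Name=docker-s3-profile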
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your, or a custom IAM user's, CLI credentials to mimic ECS behavior for local testing).
# Dockerfile base for local testing
FROM ubuntu:latest

# on the local system, install the AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# run the container with your host ~/.aws bind-mounted to /root/.aws
# (use $HOME rather than "~", which is not expanded inside quotes)
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws <image_name>
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
General Role Propagation
The C# SDK, for example, documents an ordered list of locations it checks to obtain credentials. Not every SDK behaves exactly like this, but many do, so research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
Plain text credentials fed into either the target system or environment variables.
CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
Task Execution Role used to deploy the docker task.
The running task will use the Task Role.
When the running task has no permissions for the current action, it will attempt to fall back to the EC2 instance role.
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup (such as configuring ECS), it's often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known endpoints in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass in the details through args, which I see as being fine and the way I would personally do it. I think you can over-engineer certain processes, and this is one of them.
Example docker run with the credentials passed as environment variables:
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234 <image_name>
That said, I can see why some companies want to hide this away and fetch the credentials from a private API or something. This is why AWS created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role attached, the credentials are retrieved from the instance's private metadata address, which only the instance itself can access, meaning you never have to store your credentials in the image itself.
Personally, I think it's overkill for what you are trying to do; if someone compromises your running container they can print the credentials out and still get access to those details. Passing them in as args is safe enough, as long as you protect yourself as you should anyway.
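If the credentials really are needed at build time rather than run time, the equivalent pattern uses build args; note that values passed this way remain visible in docker history, which is exactly the concern raised in the question (a sketch reusing the Dockerfile from the question):
# Dockerfile additions: build args are exposed to RUN as environment variables
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN aws s3 cp s3://filepath_on_s3 /tmp/
# build command (values are placeholders)
docker build --build-arg AWS_ACCESS_KEY_ID=<key-id> --build-arg AWS_SECRET_ACCESS_KEY=<secret> .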
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
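If you want those credentials available during docker build without baking them into the image, BuildKit secret mounts are one option (a sketch; assumes Docker 18.09+ with DOCKER_BUILDKIT=1 and the AWS CLI installed in the image):
# syntax=docker/dockerfile:1
FROM library/ubuntu:16.04
# the credentials file is mounted only for this RUN step and never stored in a layer
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://filepath_on_s3 /tmp/
# build with:
DOCKER_BUILDKIT=1 docker build --secret id=aws,src=$HOME/.aws/credentials .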
I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in the Google Container Registry in the same project as the VM. However, the VM starts but does not pull and run the Docker image. I SSH'd into the VM and docker image ls returns an empty list.
Pulling the image manually doesn't work either:
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
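If docker-credential-gcr is installed on the instance, the non-manual route is typically just (the image path is a placeholder):
docker-credential-gcr configure-docker
docker pull gcr.io/<project-id>/<image>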
To pull the image from the gcr.io container registry you can use the gcloud sdk, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically, your problem is that you are specifying neither the project you are working in nor the image you are trying to pull, unless (very unlikely) your project ID is project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, gets:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
If, for some reason, you don't have permission to install the gcloud SDK (very advisable for working with Google Cloud), you can see your uploaded images in the Google Cloud Console by navigating to "Container Registry -> Images".