gsutil permission in Ansible - google-cloud-platform

I would like to use gsutil as a command in Ansible (2.5.x).
On the managed server I have already set up Cloud access (a service account).
When I use gsutil on the machine directly, it works without problems,
but when I create a playbook on my management machine and try to
run an SDK command, I get no access to the cloud and permission denied
errors.
I suspect that the SSH connection and environment are handled in
a specific way by Ansible. Could someone show me how to use SDK commands in Ansible?
- name: use ansible command
  command: >
    gsutil list gs://project.something.com
I know that there is a gs_storage module, but I do not know
where to look for gs_access_key in an already configured setup.
In .config/gcloud? I'm still learning the Cloud, so some of these
things are new to me. The Cloud access was set up using a .json key,
but I deleted that key from the managed machine afterwards (it shouldn't be left exposed).
Best Regards
Kamil

gsutil list would at least require the Viewer role assigned to the instance's service account, or roles/storage.objectViewer in case it should also be able to get files from a bucket. Providing Credentials as Module Parameters shows how to authenticate with the gcp_compute_instance module; also see the Cloud Storage IAM Roles and Cloud Storage Authentication (the scopes).
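If the role still needs to be granted, a minimal sketch with gcloud could look like this (the project id and service account name below are placeholders, not values from the question):
# grant read access to Cloud Storage objects to the account gsutil runs as
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"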

GCP - Impersonate Service Account from Local Machine

I have used AWS in the past, so I might be trying to compare the behavior with GCP. Bear with me and provide the steps to do things the correct way.
What I did:
I created a service account in GCP with the Storage Object Viewer role.
I also created a key pair and downloaded the JSON file to my local machine.
If I have gcloud/gsutil installed on my local machine, how can I assume/impersonate the service account and work on GCP resources?
Where should I keep the downloaded JSON file? I already referred to this - https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating
I downloaded the file as key.json and kept it in my home directory.
Then I did this:
export GOOGLE_APPLICATION_CREDENTIALS="~/key.json"
and executed this command:
gsutil cp dummy.txt gs://my-bucket/
Ideally, it should NOT work, but I am able to upload files.
You can set the path to your service account key file in an environment variable (when using the Google Cloud SDKs):
#linux
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/path/to/serviceAccountFile.json"
#windows
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\path\to\serviceAccountFile.json"
If you don't have the key file, run the following command that'll download the key:
gcloud iam service-accounts keys create ./serviceAccount.json --iam-account=svc_name@project.iam.gserviceaccount.com
You can then use activate-service-account to use the given service account, as shown below:
gcloud auth activate-service-account --key-file=serviceAccount.json
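A quick sanity check after activation (the bucket name here is a placeholder):
gcloud auth list              # the activated service account should be marked as active
gsutil ls gs://my-bucket/     # should list objects if the account has roles/storage.objectViewer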

Unable to authenticate my AWS credentials for ECR

I have installed the latest versions of AWS CLI v2 and Docker, and ran "aws configure" and entered my access key and secret key. I have also verified that the AWS config is correct and shows the right region and output format. My credentials in AWS are admin. I keep getting the following error:
'''Unable to locate credentials. You can configure credentials by running "aws configure".
Error: Cannot perform an interactive login from a non TTY device'''
Even though I have already run 'aws configure'. I am running the commands prefixed with 'sudo' as well. Any thoughts? Thank you for your time!
The aws configure command was being run as the local user, whereas the ecr command was being run as sudo.
If you run commands with sudo they will not have access to your local user's config; they will instead default to the root user's.
Instead ensure all commands are run as the same user.
If you need to point at your user's AWS config file explicitly (for example when a command must run as a different user), you can also specify its location via the AWS_CONFIG_FILE environment variable.
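A sketch of the working flow with both commands run as the same (non-root) user; the region and account id below are placeholders:
aws configure      # writes ~/.aws/credentials and ~/.aws/config for the current user
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# if docker itself requires root, add your user to the docker group rather than mixing sudo and non-sudo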

User Data script to call aws cli

I am trying to get some files from S3 on startup in an EC2 instance by using a User Data script and the command
/usr/bin/aws s3 cp ...
The log tells me that permission was denied, and I believe it is due to the AWS CLI not finding any credentials when executing the user data script.
Running the command with sudo after the instance has started works fine.
I have run aws configure both with sudo and without.
I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is more convenient for me to change the user data instead of creating a new AMI every time the script changes.
If possible, I would also like to avoid writing the credentials into the script.
How can I configure awscli in such a way that the credentials are used when running a user data script?
I suggest you remove the AWS credentials from the instance/AMI. Your user data script will be supplied with temporary credentials by the AWS metadata server when needed.
See: IAM Roles for Amazon EC2
Clear/delete AWS credentials configurations from your instance and create an AMI
Create a policy that has the minimum privileges to run your script
Create an IAM role and attach the policy you just created
Attach the IAM role when you launch the instance (very important)
Have your user data script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using a credentials file (see the sketch below)
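A minimal user data sketch under those assumptions (bucket name and paths are placeholders); no credentials appear anywhere because the attached instance role supplies temporary ones via the metadata service:
#!/bin/bash
# runs as root at boot; the AWS CLI picks up the role credentials automatically
/usr/bin/aws s3 cp s3://my-bucket/bootstrap.tar.gz /opt/app/bootstrap.tar.gz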
You can configure your EC2 instance to assume a pre-defined IAM Role whose credentials are fetched by the instance automatically upon instantiation, which in turn allows it to call aws-cli commands in your User Data script without the need to configure credentials at all.
Here's more info on IAM Roles for EC2:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
It's worth noting here that you'll need to attach the appropriate policies to the IAM Role that you assign to your instance in order for the aws-cli commands to succeed. More information on that can be found here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles
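To verify from inside a running instance that a role is actually attached, you can query the instance metadata service (IMDSv1 shown for brevity):
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# prints the attached role name; appending it to the URL returns the temporary credentials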

IAM Role + Boto3 + Docker container

As far as I know, boto3 will try to load credentials from the instance metadata service.
If I am running this code inside an EC2 instance, I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. These special variables are provided only to the process with PID 1. The script that is specified in the Dockerfile ENTRYPOINT gets PID 1.
There are many networking modes, and details might differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you run printenv as PID 1 you will see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
It also gets tricky to debug, since after SSH'ing into the container you are using a PID other than 1, meaning that services that need to get credentials might fail to do so if you run them manually.
ECS task metadata endpoint documentation
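For a manual check from inside the container, the same endpoint boto3 uses can be queried directly (169.254.170.2 is the fixed ECS credential endpoint; the relative URI comes from the environment variable above):
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
# returns a JSON document with AccessKeyId, SecretAccessKey, Token and Expiration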
Find the .aws folder at ~/.aws on your machine and move it to the Docker container's /root folder.
.aws contains the files which hold your AWS access key and secret key.
You can easily copy it to a currently running container from your local machine with
docker cp ~/.aws <container_id>:/root
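If you start the container yourself rather than through ECS, a common alternative (a sketch, assuming the image's home directory is /root and a hypothetical image name) is to mount the directory read-only at run time:
docker run --rm -v "$HOME/.aws:/root/.aws:ro" my-image aws s3 ls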

Copy files between two Google Cloud instances

I have two projects in Google Cloud and I need to copy files from an instance in one project to an instance in another project. I tried using the 'gcloud compute copy-files' command, but I'm getting this error:
gcloud compute copy-files test.tgz --project stack-complete-343 instance-IP:/home/ubuntu --zone us-central1-a
ERROR: (gcloud.compute.copy-files) Could not fetch instance: - Insufficient Permission
I was able to replicate your issue with a brand new VM instance, getting the same error. Here are a few steps that I took to correct the problem:
Make sure you are authenticated and have rights to both projects with the same account!
$ gcloud config list (if you see a service account ending in @developer.gserviceaccount.com, you need to switch to the account that is enabled on both projects. You can check that from the Developers Console > Permissions)
$ gcloud auth login (copy the link to a new window, login, copy the code and paste it back in the prompt)
$ gcloud compute scp test.tgz --project stack-complete-343 instance-IP:/home/ubuntu --zone us-central1-a (I would also use the instance name instead of the IP)
This last command should also generate your ssh keys. You should see something like this, but do not worry about entering a passphrase:
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair
Enter passphrase (empty for no passphrase):
Go to the permissions tab on the remote instance (i.e. the instance you WON'T be running gcloud compute copy-files on). Then go to service accounts and create a new one, give it a name, check the box to get a key file for it, and leave JSON selected.
Upload that key file from your personal machine, using gcloud compute copy-files and your personal account, to the local instance (i.e. the machine you're SSHing into and running the gcloud compute copy-files command on).
Then run this from the local instance via SSH: gcloud auth activate-service-account ACCOUNT --key-file KEY-FILE, replacing ACCOUNT with the email-like address that was generated and KEY-FILE with the path to the key file you uploaded from your personal machine earlier.
You should then be able to access the instance that set up the account. These steps have to be repeated on every instance you want to copy files between. If these instructions weren't clear let me know and I'll try to help out.
It's not recommended to auth your account on Compute Engine instances because that can expose your credentials to anybody with access to the machine.
Instead, you can let your service accounts use the Compute Engine API. First, stop the instance. Once stopped, you can edit the Cloud API access scopes from the console. Change the Compute Engine scope from Disabled to Read Only.
You should be able to just use the copy-files command now. This lets your service account access the Compute Engine API.
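The same scope change can also be sketched with gcloud (the instance name, zone, and service account below are placeholders; the instance must be stopped before the scopes can be changed):
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-service-account my-instance --zone us-central1-a \
    --service-account my-sa@my-project.iam.gserviceaccount.com \
    --scopes compute-ro,storage-ro
gcloud compute instances start my-instance --zone us-central1-a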
The simplest way to do this will be using the 'scp' command and a .pem file. Here's an example:
sudo scp -r -i your/path_to/key.pem your_username@ip_address_of_instance:path/to/copy/file local/destination/path
If both of them are in the same project, this is the simplest way:
gcloud compute copy-files yourFileName --project yourProjectName instance-name:~/folderInInstance --zone europe-west1-b
Obviously you should edit the zone according to your instances.
One of the approaches to get the permissions is to widen the Cloud API access scopes. You may set them to Allow full access to all Cloud APIs.
In the console, click on the instance and use the EDIT button above. Scroll to the bottom and change the Cloud API access scopes. See also this answer.