I usually use the google-cloud-storage gem to read/write files.
This gem expects a .json service account key path, or an environment variable specifying the path
I was wondering how this could work in a Cloud Run context, as the expected environment variable can't reference a static file path. A service account can be specified when deploying to Cloud Run, but how can such a tool reach that service account's credentials?
While running on Cloud Run (or Compute Engine, Kubernetes Engine, App Engine, Cloud Functions...), you don't need to specify any JSON key files (or the GOOGLE_APPLICATION_CREDENTIALS environment variable). All Google Cloud client libraries automatically get credentials (a token) from the compute platform your app is running on.
In fact, this gem's documentation says:
This library uses Service Account credentials to connect to Google Cloud services. When running on Compute Engine the credentials will be discovered automatically
So you should delete that field in the code, and it should work on Cloud Run just fine.
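For illustration, here is a minimal sketch of what that looks like (shown with the Python google-cloud-storage client purely as an illustration, since the Ruby gem discovers credentials the same way; the bucket and object names are placeholders):

# Minimal sketch (Python client for illustration; the Ruby gem behaves the same way).
from google.cloud import storage

# No key file and no GOOGLE_APPLICATION_CREDENTIALS needed on Cloud Run:
# the client picks up a token for the service's runtime service account
# from the metadata server automatically.
client = storage.Client()

bucket = client.bucket("my-bucket")           # hypothetical bucket name
blob = bucket.blob("uploads/example.txt")     # hypothetical object name
blob.upload_from_string("hello from Cloud Run")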
You need to specify a key file path (or env. variable) if:
you want to use a different identity than the default/configured identity of the platform you're running on
(e.g. in this case, the service account you configured for the Cloud Run service)
while running outside Google Cloud
This gem expects a .json service account key path, or an environment variable specifying the path
I was wondering how this could work in a cloudrun context, as the expected environment variable can't reference a static file path.
The value of the GOOGLE_CLOUD_CREDENTIALS environment variable can be: "Path to JSON file, or JSON contents". So if you can't reference a static file path, provide the entire contents of your JSON key file as the value for the environment variable.
See google-cloud-storage Authentication for full docs.
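If you do go the route of putting the JSON contents in the environment variable, here is a hedged sketch of wiring that up explicitly (shown in Python; the variable name follows the quote above, everything else is an assumption):

# Sketch: build credentials from JSON contents held in an environment
# variable instead of a file path (names here are assumptions).
import json
import os

from google.cloud import storage
from google.oauth2 import service_account

info = json.loads(os.environ["GOOGLE_CLOUD_CREDENTIALS"])  # full key file contents
creds = service_account.Credentials.from_service_account_info(info)
client = storage.Client(project=info["project_id"], credentials=creds)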
Related
I'm using Google cloud build for CI/CD for my django app, and one requirement I have is to set my GOOGLE_APPLICATION_CREDENTIALS so I can perform authenticated actions in my Docker build. For example, I need to run RUN python manage.py collectstatic --noinput which requires access to my Google cloud storage buckets.
I've generated the credentials and it works well when simply including it in my (currently private) repo as a .json file, so it gets pulled into my Docker container with the COPY . . command and setting the env variable with ENV GOOGLE_APPLICATION_CREDENTIALS=credentials.json. Ultimately, I want to grab the credential value from secret manager and create the credentials file during the build stage, so I can completely remove the credentials from the repo. I tried doing this with editing cloudbuild.yaml (referencing this doc) with various implementations of the availableSecrets config, $$SECRET syntax, and build-args in the docker build command and trying to access in Dockerfile with
ARG GOOGLE_BUILD_CREDS
RUN echo "$GOOGLE_BUILD_CREDS" >> credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS=credentials.json
with no success.
If someone could advise me how to implement this in my cloudbuild.yaml and Dockerfile, if it's possible, or if there's another better solution altogether, it would be much appreciated.
This is the relevant part of my cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
availableSecrets:
  secretManager:
    - versionName: projects/PROJECT_ID/secrets/CREDENTIALS/versions/latest
      env: 'CREDENTIALS'
If your container will run on Cloud Run, it's super easy: remove the service account key file (in most use cases, you never ever need it).
Keep in mind that a service account key file is a secret with a private key. And if you put it in your container, you simply store it in plain text. So bad for a secret!! (with dive, you can explore your container content, and steal the secret if you have access to the container directly)
But I'm sure you know that, because you want to store the secret in a secret manager. Now a question: how do you access the secret manager? Do you need a service account key file to be authenticated to access it?
In fact, no.
The solution is to use ADC (Application Default Credentials). With the client libraries, use the get-default-credentials method to let the library automatically determine the platform and the credentials to use.
On Cloud Run (as on any other Google Cloud service), there is a metadata server that allows client libraries to get credential information for the runtime service account.
On your local environment, you have 2 options:
Use your own credentials. For that, run the command gcloud auth application-default login. These are your own credentials and permissions, not exactly the same as the Cloud Run runtime environment.
Impersonate the Cloud Run runtime service account and act as it when running your container/code locally. For that, run the command gcloud auth application-default login --impersonate-service-account=<service account email>. Be sure to have the Service Account Token Creator role on the service account.
Then run your app locally and let ADC supply the credentials.
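To make the ADC flow concrete, here is a minimal sketch (in Python, purely as an illustration; the secret name is hypothetical) of reading a secret with the default credentials. The same code works on Cloud Run via the metadata server and locally via either gcloud auth application-default option above:

# Sketch: access Secret Manager via Application Default Credentials.
# No key file anywhere; names are hypothetical.
import google.auth
from google.cloud import secretmanager

# On Cloud Run the project is resolved from the metadata server;
# locally you may need to set project_id explicitly.
credentials, project_id = google.auth.default()
client = secretmanager.SecretManagerServiceClient(credentials=credentials)

name = f"projects/{project_id}/secrets/CREDENTIALS/versions/latest"
payload = client.access_secret_version(name=name).payload.data.decode("utf-8")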
I think I've worked out a fix. To solve the error I mentioned in my reply to #guillaume-blaquiere, I updated my build args in cloudbuild.yaml to include --network=cloudbuild, allowing me access to the correct service account credentials (credit to this answer).
The next issue I faced is with the django-storages library, returning this exception
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. see https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account for more details.
I then came across this suggestion to add the setting GS_QUERYSTRING_AUTH = False to my Django config, and this seems to do the trick. My only concern is that the documentation here does not go into much detail on the impacts or risks of disabling this (the bucket is public-read, as it recommends). It seems to be working as intended, however, so I will go with this configuration unless a better solution is put forward.
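For reference, this is roughly what the resulting django-storages configuration looks like (a sketch only; the bucket name and ACL value are assumptions based on the public-read setup described above):

# settings.py sketch for django-storages with Google Cloud Storage.
# Values are assumptions matching the public-read setup described above.
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_BUCKET_NAME = "my-static-bucket"   # hypothetical bucket name
GS_DEFAULT_ACL = "publicRead"         # objects are publicly readable
GS_QUERYSTRING_AUTH = False           # serve plain public URLs instead of signed URLs,
                                      # so no private key is needed to sign them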
I am currently deploying a Django application to GCP Cloud Run.
I have replaced Cloud Run Default Service Account (....compute#developer.gserviceaccount.com) with a custom one.
But I get an error message:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. ...
This bit is confusing me:
<class 'google.auth.compute_engine.credentials.Credentials'>
The error is reported in the context of Django Storages, which I use to store files.
Does this message mean that the service account used by Cloud Run is still the default one (i.e. GOOGLE_APPLICATION_CREDENTIALS is set to Compute Engine)?
Why isn't the custom service account used as the identity for the Cloud Run service?
Why is Cloud Run still checking the default one when I expected to have replaced it with the custom service account?
I am a bit new to IAM, but if somebody could explain why this is happening, it would be appreciated.
I've had to download a key for Google's Firebase service and yet another key for Pub/Sub. How am I supposed to reference both keys with the GOOGLE_APPLICATION_CREDENTIALS variable?
Normally you only use one service account that has the required permissions.
Application Default Credentials (ADC) support one and only one service account JSON key file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS.
When writing code for Google Cloud, the SDK clients support specifying a service account as a parameter. In your example, you will need to create SDK clients using the appropriate credentials (service account JSON key file). The Firebase admin client can use one credential and the Pub/Sub client can use the other credential.
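For example, here is a hedged sketch (in Python; the key file names are placeholders) of giving each client its own credentials instead of relying on GOOGLE_APPLICATION_CREDENTIALS:

# Sketch: two clients, two different service account key files
# (file names are placeholders).
import firebase_admin
from firebase_admin import credentials as firebase_credentials
from google.cloud import pubsub_v1

# Firebase Admin SDK with its own key file
firebase_app = firebase_admin.initialize_app(
    firebase_credentials.Certificate("firebase-sa.json")
)

# Pub/Sub client with a different key file
publisher = pubsub_v1.PublisherClient.from_service_account_file("pubsub-sa.json")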
I'm having exactly the same issue. I'm trying to run two different Firestore services on one machine. Each service uses a different Firestore project. As far as I can see, to explicitly authenticate by directly accessing my JSON key file, I need to do something like this:
FirestoreClient client = /* use ClientBuilderBase and its CredentialsPath property somehow? */
FirestoreDb db = FirestoreDb.Create(firestoreProjectId, client);
But as ClientBuilderBase is an abstract class, I'm stumped. Anyone who's got some sample code that does this for real would be a real help.
Cheers
Keith
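Not a .NET answer, but to sketch the idea: each client is built with its own explicit credentials and project (shown with the Python Firestore client purely as an illustration; project IDs and key file paths are placeholders). In .NET, the concrete builder classes derived from ClientBuilderBase should expose the same CredentialsPath property.

# Sketch: two Firestore projects, each with its own key file
# (project IDs and paths are placeholders).
from google.cloud import firestore
from google.oauth2 import service_account

creds_a = service_account.Credentials.from_service_account_file("project-a.json")
creds_b = service_account.Credentials.from_service_account_file("project-b.json")

db_a = firestore.Client(project="project-a", credentials=creds_a)
db_b = firestore.Client(project="project-b", credentials=creds_b)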
The Google Cloud gsutil iam get gs://testBucket command should return the bucket policy, but instead I received "Failure: GetBucketIamPolicy must be overloaded".
I verified that the storage.buckets.getIamPolicy and storage.buckets.setIamPolicy permissions are in place.
Any help or suggestion is appreciated.
That functionality only exists in the JSON API; it sounds like you've somehow managed to get gsutil to try using the XML API to make this call.
Here's the base API client class:
https://github.com/GoogleCloudPlatform/gsutil/blob/0e4bdc80f90f42edd86c3da772c22087e63b21be/gslib/cloud_api.py#L84
And here are the subclasses that implement functionality for the JSON and XML API (note that GetBucketIamPolicy is only implemented in the JSON API's client class):
https://github.com/GoogleCloudPlatform/gsutil/blob/0e4bdc80f90f42edd86c3da772c22087e63b21be/gslib/gcs_json_api.py#L334
https://github.com/GoogleCloudPlatform/gsutil/blob/0e4bdc80f90f42edd86c3da772c22087e63b21be/gslib/boto_translation.py#L160
My best guess is that you have HMAC credentials configured in your boto file, rather than OAuth2 credentials. This will force gsutil to use the XML API (since HMAC credentials only work for that API), regardless of whether the command is supposed to support the XML API. The iam command is supposed to only support the JSON API, but it looks like we didn't add a test for the edge case where only HMAC credentials were configured.
I've filed https://github.com/GoogleCloudPlatform/gsutil/issues/846 to track this bug in gsutil.
"Failure: GetBucketIamPolicy must be overloaded"
This error means that the function GetBucketIamPolicy is not implemented in the gsutil program.
This indicates that the Google Cloud SDK is not installed correctly, Python is not set up correctly, or you have external libraries with name conflicts with the Google libraries.
Note: I have not confirmed this yet: yesterday there was an internal issue mentioned about gsutil. If you are using the latest version, try going back to a release from two weeks ago.
Previous versions
Previous versions of Cloud SDK are available in the download archive in Google Cloud Storage.
#mhouglum, #John Hanley: I was able to replicate the issue on a different machine, and the solution is to issue the "gcloud config set pass_credentials_to_gsutil true" command.
Like #mhouglum said, gsutil will try to read the OAuth credentials from "gcloud auth login" first, but since pass_credentials_to_gsutil is set to false, it will read the HMAC credentials from the .boto file, which force the XML API (and that API doesn't support this command).
Thank you both for your time and efforts.
No matter what I try it seems my web service cannot access my .aws/credentials file.
I always get this error:
System.UnauthorizedAccessException: Access to the path '{PATH}' is denied.
Here is what I have tried:
Move the path from the default directory to the website root
Change the website app pool to run as my user account
Given Everyone full control of the folder and the file
Verify that when I put the same key and secret into the web.config the call works
Tried removing the region from the config
Tried removing the path from the config
Here is my config (note if I don't provide the path, even when in the default location, it says no credentials file was found)
<add key="AWSProfileName" value="default" />
<add key="AWSRegion" value="us-east-1"/>
<add key="AWSProfilesLocation" value="{PATH}" />
In the AWS toolkit I have a 'default' profile set up as well that has rights, but that does not help this work.
I have even tried the legacy format called out in the AWS docs. What am I missing? It seems I have followed everything AWS calls out in their docs.
I am using Castle Windsor DI so could that be getting in the way?
container.Register(
    Component.For<IAmazonDynamoDB>()
        .ImplementedBy<AmazonDynamoDBClient>()
        .DependsOn(Dependency.OnValue<RegionEndpoint>(RegionEndpoint.USEast1))
        .LifestylePerWebRequest());

container.Register(
    Component.For<IDynamoDBContext>()
        .ImplementedBy<DynamoDBContext>()
        .DependsOn(Dependency.OnComponent<IAmazonDynamoDB, AmazonDynamoDBClient>())
        .DependsOn(Dependency.OnValue<DynamoDBContextConfig>(
            new DynamoDBContextConfig
            {
                TableNamePrefix = configurationManager.GetRequiredAppSetting<string>(Constants.Web.AppSettings.AwsDynamoDbPrefix),
                Conversion = DynamoDBEntryConversion.V2
            }))
        .LifestylePerWebRequest());
The problem you have is that the path ~\.aws\credentials is only defined when logged in as a user.
A Windows service such as IIS is not logged in as the user that created the credentials file, therefore the path is not accessible to the Windows service. Actually, the service does not know which user's directory to look in. For example, if your user name is john, the path would be c:\users\john\.aws\credentials, and the Windows service does not know about your identity.
Note: I believe - but I am not 100% sure - that a Windows service will look in c:\.aws for credentials. I have used this path in the past, but I cannot find Amazon reference documentation to support this. I no longer store credentials on my EC2 instances, so I am out of touch on the location c:\.aws.
You have a number of choices. Create the credentials as usual, then create a directory outside of your IIS installation and setup, such as c:\.aws. Copy ~\.aws to c:\.aws and then specify the full path in your programs.
A much better and more secure method, if you are running your services on AWS, is to use an IAM role. Create a role with the desired permissions and attach the role to your EC2 instance. All AWS SDKs and tools know how to find credentials from the AWS instance metadata.
There are many more methods, such as the EC2 Parameter Store. Storing credentials on your instances or inside your program is not a good idea.
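As an illustration of both options (shown with Python's boto3 purely for brevity; the .NET SDK resolves credentials through a similar provider chain, and the paths and names here are assumptions):

# Sketch (illustrated with Python's boto3; the .NET SDK resolves
# credentials through a similar chain). Paths and names are assumptions.
import os
import boto3

# Option 1: shared credentials file at an explicit, service-readable path.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = r"c:\.aws\credentials"
session = boto3.Session(profile_name="default")
dynamodb = session.client("dynamodb", region_name="us-east-1")

# Option 2 (preferred on EC2): no credentials configured anywhere; the SDK
# automatically uses the IAM role attached to the instance via instance metadata.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")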
[Edit after thinking more about the error message]
You may have an issue where IIS does not have access rights to the location where the credentials are stored.
Open Windows Explorer and locate the folder for your credentials file. Right click this folder, select Properties and click the Security tab. From here, choose Edit then Add. The following users must be added and given at least READ permissions: IUSR & IIS_IUSRS. You may need to add "LIST FOLDER CONTENTS".