Currently we put the Google Cloud credentials in a JSON file and set it via the GOOGLE_APPLICATION_CREDENTIALS environment variable.
However, for security reasons we do not want to hardcode the private key in the JSON file. We want to put the private key in Azure Key Vault (yes, we use Azure Key Vault). Is there a way to provide the credentials to GCP programmatically, so that I can read Azure Key Vault and supply the private key via code? I tried the GoogleCredentials and Google's DefaultCredentialsProvider classes, but I could not find a proper example.
Note: the Google credentials type is a service account credential.
Any help is much appreciated.
Store the service account JSON base64 encoded in Azure Key Vault as a string.
Read the string back from Key Vault and base64-decode it back to a JSON string.
Load the JSON string using one of the SDK member methods with names similar to from_service_account_info() (Python SDK).
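As a minimal sketch of those three steps in Python, assuming the azure-identity, azure-keyvault-secrets, google-auth and google-cloud-storage packages (the vault URL and secret name below are placeholders):

import base64
import json

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from google.oauth2 import service_account
from google.cloud import storage

# Read the base64-encoded service account JSON back from Azure Key Vault.
# The vault URL and secret name are placeholders for your own values.
vault_client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
secret = vault_client.get_secret("gcp-service-account-json")

# Decode the stored string back into a JSON dict.
sa_info = json.loads(base64.b64decode(secret.value))

# Build Google credentials directly from the dict -- no key file on disk.
credentials = service_account.Credentials.from_service_account_info(sa_info)

# Pass the credentials explicitly to any Google Cloud client.
client = storage.Client(project=sa_info["project_id"], credentials=credentials)
print([b.name for b in client.list_buckets()])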
John's answer is the correct one if you want to load a secret from Azure Key Vault.
However, if you need a service account credential inside the Google Cloud environment, you don't need a service account key file at all: you can use the service account automatically exposed through the metadata server.
On almost all Google Cloud services you can customize which service account to use. In the worst case, you have to use the default service account for that context and grant it the correct permissions.
Using service account key files isn't a good practice on Google Cloud. A key file is a long-lived credential: you have to rotate it yourself, keep it secret, and so on.
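For instance, a short sketch of what that looks like with Application Default Credentials: on a Google Cloud compute service, the client library resolves the attached service account through the metadata server, with no key file anywhere:

import google.auth
from google.cloud import storage

# On Compute Engine, Cloud Run, Cloud Functions, etc., this resolves the
# attached service account via the metadata server -- no key file needed.
credentials, project_id = google.auth.default()

client = storage.Client(project=project_id, credentials=credentials)
print([b.name for b in client.list_buckets()])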
We are struggling to connect from Azure Data Factory to Amazon Marketplace Web Services.
It seems that we have all information required, however, we are getting the following error:
Parameter AWSAccessKeyId cannot have multiple values.
All data seems to be correct. However, we find it strange that an Access Key ID and Secret Access Key are needed to connect to the Marketplace Web Services. Both keys come from the AWS environment, which is currently not connected to anything.
Any help is appreciated.
Kind regards,
Jens
Yes, you need an Access Key ID and Secret Key while creating the Amazon Marketplace Web Service linked service in Azure Data Factory. There should be only one access key assigned per user in AWS Marketplace. Apart from these, other linked service properties are also required; some are mandatory and others are optional.
To allow people in your company to sign in to the AWS Marketplace Management Portal, create an IAM user for each person who needs access.
To create IAM users
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Users and then choose Create New Users.
In the numbered text boxes, enter a name for each user that you want to create.
Select the Generate an access key for each user check box and then choose Create.
You will then pass this access key in the linked service in ADF.
Also, for better security, you can save the secret key in Azure Key Vault and use an Azure Key Vault linked service to access it. Refer to Store credentials in Azure Key Vault.
I want to validate Google service account credentials, which I obtained from my Google service account in the form of a JSON key.
I tried validating the credentials by performing a list-buckets operation on Cloud Storage, and it was a success.
Now I tried a negative scenario where I removed a few of the keys from the JSON file, like:
"type": "yy",
"private_key_id": "dsfdngdhgdsafa",
"client_id": "12133423123",
But I am still able to list my buckets with no errors. However, when I change any of the other keys, such as "private_key", it fails.
Can anyone explain whether these keys are not required at all or serve a specific purpose? Why is this happening? Is there any other way I can validate the credentials?
[Update 2021-08-25]
Google Cloud publishes the public certificate for a Google Cloud service account private key. After a service account private key is deleted or invalidated, the public certificate is removed, preventing validation of data signed with the private key. The location of the certificate is specified in the service account JSON by two keys: client_x509_cert_url and private_key_id. The private_key_id is used to select the matching public certificate.
Deep Dive into Google Cloud IAM Signblob and Service Accounts
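As an illustrative sketch of that check (assuming the requests package and a placeholder key file path), you can fetch the client_x509_cert_url from the key file and see whether a certificate is still published for the private_key_id:

import json
import requests

# Load the service account key file (path is a placeholder).
with open("key.json") as f:
    sa = json.load(f)

# The URL returns a JSON object keyed by private_key_id, one entry per
# active key. If the key was deleted, its ID disappears from this mapping.
certs = requests.get(sa["client_x509_cert_url"]).json()

if sa["private_key_id"] in certs:
    print("A public certificate is still published for this private key.")
else:
    print("No certificate found: the key was deleted or invalidated.")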
[End Update]
The only items required are the private key private_key and the email address client_email. Everything else is either a comment for your usage (or for the client libraries) or provides additional information for lookups on Google's end, for example private_key_id.
I am not speaking for client libraries that may or may not implement their own common sense error checking. Study the libraries that you are using for details.
Google does not have/provide any security or validation of service account JSON key files. You are responsible for protecting the file and its contents.
You can use the service account to create a signed JWT and call the Google token endpoint. An invalid JWT will return an error.
https://www.googleapis.com/oauth2/v4/token
I wrote an article that shows how to take a service account, create a Signed JWT and exchange for an OAuth Access Token. My article includes working Python source code. You could use my code as an example to create your own validation function.
Creating OAuth Access Tokens for REST API Calls
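As a rough sketch of that validation idea (not taken from the article itself), assuming the PyJWT and requests packages and an example scope: sign a JWT with the key file and try to exchange it at the token endpoint; an invalid or revoked key returns an error:

import json
import time

import jwt       # PyJWT, used here for RS256 signing
import requests

TOKEN_URL = "https://www.googleapis.com/oauth2/v4/token"

# Load the service account key file (path is a placeholder).
with open("key.json") as f:
    sa = json.load(f)

now = int(time.time())
claims = {
    "iss": sa["client_email"],
    "scope": "https://www.googleapis.com/auth/cloud-platform",
    "aud": TOKEN_URL,
    "iat": now,
    "exp": now + 3600,
}

# Sign the JWT with the service account private key.
signed_jwt = jwt.encode(claims, sa["private_key"], algorithm="RS256")

# Exchange the signed JWT for an OAuth access token.
resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": signed_jwt,
    },
)

if resp.ok:
    print("Credentials are valid; access token issued.")
else:
    print("Validation failed:", resp.json())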
I am new to Google Cloud. I am trying to access Google Cloud Storage buckets to upload files. I use a Storage client object to access the bucket programmatically in Python. I am able to authenticate the storage object with 'key.json'. But I am unsure how the application will access the 'key.json' file securely when it runs in the cloud. Also, is there a way to authenticate the storage object using an access token in Python?
Thanks in advance!
But I am unsure how the application will access the 'key.json' file securely when it runs in the cloud.
Review the details that I wrote below. Once you have selected your environment, you might not need to use a service account JSON file at all because the metadata server is available to provide your code with credentials. This is the best case and the most secure. On my personal website, I have written many articles that show how to create, manage and store Google credentials and secrets.
Also, is there a way to authenticate the storage object using an access token in Python?
All access is via an OAuth Access Token. The following link shows details using the metadata server which I cover in more detail below.
Authenticating applications directly with access tokens
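For the second part of the question, here is a short sketch of authenticating the storage client with a bare OAuth access token in Python (the token string and project ID are placeholders):

from google.cloud import storage
from google.oauth2.credentials import Credentials

# Wrap an existing OAuth access token (however you obtained it) in a
# Credentials object. Placeholder values shown here.
credentials = Credentials(token="ya29.your-access-token")

client = storage.Client(project="your-project-id", credentials=credentials)
print([b.name for b in client.list_buckets()])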
There are three items to consider:
My code is not running in Google Cloud
My code is running in Google Cloud on a "compute" type of service with access to the metadata server
My code is running in Google Cloud without access to the metadata server.
1) My code is not running in Google Cloud
This means your code is running on your desktop or even in another cloud such as AWS. You are responsible for providing the method of authorization. There are two primary methods: 1) Service Account JSON key file; 2) Google OAuth User Authorization.
Service Account JSON key file
This is what you are using now with key.json. The credentials are stored in the file and are used to generate an OAuth Access Token. You must protect that file as it contains your Google Cloud secrets. You can specify the key.json directly in your code or via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
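A minimal sketch of both options for a Cloud Storage client (the key file path is a placeholder):

import os

from google.cloud import storage

# Option 1: point the client library at the key file explicitly.
client = storage.Client.from_service_account_json("key.json")

# Option 2: set the environment variable and let Application Default
# Credentials find the key file for you.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key.json"
client = storage.Client()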
Google OAuth User Authorization
This method requires the user to log in to Google Accounts requesting an OAuth scope for Cloud Storage. The end result is an OAuth Access Token (just like a Service Account) that authorizes access to Cloud Storage.
Getting Started with Authentication
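If it helps, here is a rough sketch of that user-authorization flow using the google-auth-oauthlib helper; the client secrets file and project ID are placeholders for the OAuth client you register in the Google Cloud console:

from google_auth_oauthlib.flow import InstalledAppFlow
from google.cloud import storage

# Run the Google Accounts sign-in flow in the user's browser and request a
# Cloud Storage scope. "client_secret.json" is the OAuth client you created.
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/devstorage.read_only"],
)
credentials = flow.run_local_server(port=0)

client = storage.Client(project="your-project-id", credentials=credentials)
print([b.name for b in client.list_buckets()])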
2) My code is running in Google Cloud on a "compute" type of service with access to the metadata server
Notice the word "metadata" server. For Google Cloud compute services, Google provides a metadata server that provides applications running on that compute service (Compute Engine, Cloud Functions, Cloud Run, etc) with credentials. If you use Google SDK Client libraries for your code, the libraries will automatically select the credentials for you. The metadata server can be disabled (denied access through role/scope removal), so you need to evaluate what you are running on.
Storing and retrieving instance metadata
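To make the metadata server concrete, here is a sketch that requests an access token from it directly (assuming the requests package; the client libraries do this for you automatically):

import requests

# The metadata server is only reachable from inside Google Cloud compute
# services. The Metadata-Flavor header is required.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()

token = resp.json()["access_token"]
print("Got an access token that expires in", resp.json()["expires_in"], "seconds")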
3) My code is running in Google Cloud without access to the metadata server.
This is a similar scenario to #1. However, now you are limited to only using a service account unless this is a web server type of service that can present the Google Accounts authorization service to the user.
I am new to working with Azure DevOps. I am trying to create a pipeline in Azure DevOps for deploying my Terraform code onto AWS. For authentication, I am aware that we can use service principals, but that would mean specifying my access and secret keys in Azure DevOps, which I do not want to do. So I wanted to check: are there any other ways of doing this?
For accessing and storing these kinds of secrets you can try Azure Key Vault.
Store all your secrets in Azure Key Vault secrets.
When you want to access secrets:
Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies, then Add new.
In the Add access policy blade, choose Select principal and select the service principal for your client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are checked (ticked).
Choose OK to save the changes.
Reference
You can use
Secure Azure DevOps Variables or Variable Groups
Azure Key Vault
If you use a service principal, then you need a password or certificate as well to authenticate. Maybe you can also try working with MSI (Managed Service Identity). In that case, Azure AD will take care of the secret storage.
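For instance, a short sketch of reading a Key Vault secret with a managed identity in Python, assuming the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name are placeholders):

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The managed identity is resolved by Azure AD for the VM/agent this code
# runs on, so no password or certificate is stored in the pipeline itself.
credential = ManagedIdentityCredential()

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=credential,
)
secret = client.get_secret("aws-secret-access-key")
print("Fetched secret", secret.name)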
If you don't want to store credentials in Azure DevOps itself, the best way is to store them in a credential store (Azure Key Vault) and access them through a service connection. I assume that you are using YAML-based pipelines. If so, use the following steps to integrate your pipeline with the key vault.
Prerequisites:
Azure Key Vault is set up and keys are securely stored
Steps:
In edit mode of the pipeline, click on the kebab menu (three dots in the upper right corner) and select Triggers
On the opened menu click on the Variables tab and then Variable Groups
Open Manage variable groups in a new tab
Click on + Variable group button to add a new variable
Give a name and a description. Switch on the Link secrets from an Azure key vault as variables toggle.
Add a new service connection and once authenticated select the key vault name
Now add variables in to the variable group
Once done, save the variable group, go back to the tab from step 2 and link the new variable group.
Once done save the pipeline
Important: You need to grant secret read permission to the service connection's service principal from your key vault.
Reference: Link secrets from an Azure key vault
Perhaps use the Azure DevOps Library > Variable groups to securely store your keys.
Alternatively, you may be able to use Project Settings > Service connections, perhaps using a credentials connection or a generic one.
Service principals are the industry standard for this case. You should create a specific service principal for Azure DevOps and limit its scope to only what's necessary.
You can write variables into your PowerShell script file and use a PowerShell task in your pipeline. Give the PowerShell file path to this task and pass just the variable names. It will work like a charm.
For a service principal connection, you need to have:
the service principal ID and the service principal key
the service principal ID is the same as the application ID
the service principal key is found under Certificates & secrets
You can use Azure Key Vault for storing all your keys and secrets. Give permission to your Azure pipeline to fetch keys from Key Vault.
The following link will guide you from scratch through developing a pipeline and fetching keys:
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/
The only method to truly avoid storing AWS credentials in Azure/Azure DevOps would be to create a self-hosted build pool inside your AWS account. These machines have the Azure DevOps agent installed and registered to your organization and to a specific agent pool. Then add the needed permissions to the IAM instance profile attached to these build servers. When you run your Terraform commands using this agent pool, Terraform will have access to the credentials on the instance. The same concept works for a container-based build pool in AWS ECS.
You can use Managed identity in your pipeline to authenticate with the Azure Key Vault.
You can read more on Managed Identity here and Azure Key Vault here
You have to create a private key for the DevOps pipeline with limited permissions on your AWS machine,
store the key in the secure library of the DevOps pipeline,
and from your AWS firewall disable SSH connections from unknown IP addresses and whitelist the DevOps agents' IP addresses. To get the list of IPs, check this link: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges
I want to deploy a node application on a google cloud compute engine micro instance from a source control repo.
As part of this deployment I want to use KMS to store database credentials rather than having them in my source control. To get the credentials from KMS I need to authenticate on the instance with GCLOUD in the first place.
Is it safe to just install the GCloud CLI as part of a startup script and let the default service account handle the authentication? Then use this to pull in the decrypted details and save them to a file?
The docs walk through development examples, but I've not found anything about how this should work in production, especially as I obviously don't want to store the GCloud credentials in source control either.
Yes, this is exactly what we recommend: use the default service account to authenticate to KMS and decrypt a file with the credentials in it. You can store the resulting data in a file, but I usually either pipe it directly to the service that needs it or put it in tmpfs so it's only stored in RAM.
You can check the encrypted credentials file into your source repository, store it in Google Cloud Storage, or elsewhere. (You create the encrypted file by using a different account, such as your personal account or another service account, which has wrap but not unwrap access on the KMS key, to encrypt the credentials file.)
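A minimal sketch of the decrypt step with the google-cloud-kms client; the project, key ring, key name and file path shown are placeholders, and on the instance the default service account credentials are picked up automatically:

from google.cloud import kms

# The client authenticates via the instance's default service account.
client = kms.KeyManagementServiceClient()

# Fully qualified name of the key used to encrypt the credentials file.
key_name = client.crypto_key_path(
    "my-project", "global", "my-key-ring", "credentials-key"
)

# Read the encrypted credentials that were checked into the repo / GCS.
with open("db-credentials.json.enc", "rb") as f:
    ciphertext = f.read()

response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})

# Keep the plaintext in memory (or tmpfs) rather than writing it to disk.
db_credentials = response.plaintext.decode("utf-8")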
If you use this method, you have a clean line of control:
Your administrative user authentication gates the ability to run code as the trusted service account.
Only that service account can decrypt the credentials.
There is no need to store a secret in cleartext anywhere.
Thank you for using Google Cloud KMS!