What would be the best way to manage cloud credentials as part of an Azure DevOps build pipeline? - google-cloud-platform

We are going to be creating build/deploy pipelines in Azure DevOps to provision infrastructure in Google Cloud Platform (GCP) using Terraform. In order to execute the Terraform provisioning script, we have to provide the GCP credentials so it can connect to our GCP account. I have a credential file (JSON) that can be referenced in the Terraform script. However, being new to build/deploy pipelines, I'm not clear on exactly what to do with the credential file. That is something we don't want to hard-code in the TF script and we don't want to make it generally available to just anybody that has access to the TF scripts. Where exactly would I put the credential file to secure it from prying eyes while making it available to the build pipeline? Would I put it on an actual build server?

I'd probably use secret build variables, or store the secrets in Azure Key Vault and pull them at deployment time. Storing secrets on the build agent is worse, because it locks you in to that particular agent.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
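A rough sketch of that approach, assuming a secret pipeline variable (or a Key Vault-backed variable group) named gcpServiceAccountKey that holds the contents of the JSON key file -- the variable name and paths are placeholders. The secret is written to a temporary file on the agent at run time and handed to Terraform through GOOGLE_APPLICATION_CREDENTIALS:

steps:
- bash: |
    # Write the key JSON to a temporary file on the agent (never echo it to the log)
    echo "$GCP_KEY_JSON" > "$AGENT_TEMPDIRECTORY/gcp-key.json"
    export GOOGLE_APPLICATION_CREDENTIALS="$AGENT_TEMPDIRECTORY/gcp-key.json"
    terraform init
    terraform apply -auto-approve
  displayName: Terraform apply with GCP credentials
  env:
    # Secret variables are not exposed to scripts automatically; map them explicitly
    GCP_KEY_JSON: $(gcpServiceAccountKey)

Azure DevOps also has a Secure Files library (downloaded in a pipeline with the DownloadSecureFile@1 task), which is another reasonable home for a credential file like this.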

Related

Gitlab CI/CD deploy to aws via aws-azure-cli authentication

When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli, authenticate via 2FA, and then my workstation is given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that would take a week.
You can configure OpenID Connect (OIDC) to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OIDC-assumed roles instead of storing long-lived credentials.
Add the identity provider for GitLab in AWS.
Configure the role and its trust policy.
Retrieve temporary credentials in your job (see the sketch below).
Follow this guide https://docs.gitlab.com/ee/ci/cloud_services/aws/ or, for a more detailed walkthrough, https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
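A minimal sketch of such a job, assuming GitLab 15.7+ (for the id_tokens keyword), an image with the AWS CLI available, and a CI/CD variable ROLE_ARN holding the ARN of the role configured above -- all names are placeholders:

deploy:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com          # your GitLab instance URL
  script:
    # Exchange the OIDC token for temporary AWS credentials
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn "${ROLE_ARN}"
      --role-session-name "gitlab-${CI_PIPELINE_ID}"
      --web-identity-token "${GITLAB_OIDC_TOKEN}"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws sts get-caller-identity      # sanity check: should print the assumed role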

Is there a way to run a GCP Cloud Function locally while authenticated as a service account?

I'm fairly new to GCP Cloud Functions.
I'm developing a cloud function within a GCP project which needs to access some other resources from the project (such as GCS, for instance). When I set up a cloud function, it gets a service account associated with it, so I'm able to give this service account the required permissions in IAM and it works just fine in production.
I'm handling the required integrations by using the GCP SDKs and identifying the resources relative to the GCP project. For instance, if I need to access a GCS bucket within that project, it looks something like this:
const bucket = await storage.bucket("bucket-name");
The problem with this is that I'm not able to access these resources if I'm running the cloud function locally for development, so, I have to deploy it every time to test it, which is a process that takes some time and makes development fairly unproductive.
So, is there any way I can run this cloud function locally whilst keeping access to the necessary project resources, so that I'm able to test it while developing? I figured that running this function as its service account could work, but I don't know how to do it and I'm also open to different approaches.
Yes, there is!
The only thing you need to do is set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of a service account JSON key file; the Google client libraries then handle the rest automatically, most of the time.
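For instance, a minimal sketch for a Node.js function, assuming a function entry point named myFunction and a service account my-function-sa in project my-project (all placeholder names), run locally with the Functions Framework:

# Download a key for the function's service account (keep this file out of version control)
gcloud iam service-accounts keys create ./sa-key.json \
  --iam-account=my-function-sa@my-project.iam.gserviceaccount.com

# Point the client libraries at the key and run the function locally
export GOOGLE_APPLICATION_CREDENTIALS="$(pwd)/sa-key.json"
npx @google-cloud/functions-framework --target=myFunction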

Can Google Cloud Repositories be shared cross projects?

I am trying to set up a continuous development system for creating an app and I would like to know if this idea is feasible within GCP:
Project A - Hosts Cloud Source Repository
Project B - Cloud Run for the app
On project B, I have the Cloud Run option of 'Continuously deploy new revisions from a source repository', which I would like to point to the CSR from project A.
My question is: can CSR be shared cross-project, or do I need to go with GitHub or Bitbucket to be able to share code between projects?
You can access your Cloud Source Repository from any project as long as your account (service or user) has the permission to access it.
However, you can't configure Cloud Build triggers on a Cloud Source Repository that is in another project (the continuous deployment option on Cloud Run simply configures a Cloud Build trigger behind the scenes for you; it's a shortcut).
But you can create a Cloud Build trigger in your Cloud Source Repository project and grant the Cloud Build service account permission to deploy the Cloud Run service to the target project.
Because the continuous deployment option on Cloud Run is just a shortcut that configures a Cloud Build trigger and deployment pipeline, you can do the same manually (it takes longer and requires more skill/experience with GCP), but it's not impossible!
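A rough sketch of the manual setup, assuming project A ("repo-project", project number 111111111111) hosts the repository and the Cloud Build trigger, and project B ("run-project") hosts the Cloud Run service -- all names and numbers are placeholders:

# Let project A's Cloud Build service account deploy Cloud Run services in project B
gcloud projects add-iam-policy-binding run-project \
  --member="serviceAccount:111111111111@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Also let it act as the Cloud Run runtime service account in project B
gcloud projects add-iam-policy-binding run-project \
  --member="serviceAccount:111111111111@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"

# The trigger's build step then deploys across projects with --project, e.g.:
#   gcloud run deploy my-service --image=IMAGE_URL --region=europe-west1 --project=run-project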

In a containerized application that runs in AWS/Azure but needs to access GCLOUD commands, what is the best way to setup gcloud authentication?

I am very new to GCP and I would greatly appreciate some help here ...
I have a Docker containerized application that runs in AWS/Azure but needs to access the gcloud SDK as well as the Google Cloud client libraries.
What is the best way to set up gcloud authentication from an application that runs outside of GCP?
In my Dockerfile, I have this (cut short for brevity)
ENV CLOUDSDK_INSTALL_DIR /usr/local/gcloud/
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:$CLOUDSDK_INSTALL_DIR/google-cloud-sdk/bin
RUN gcloud components install app-engine-java kubectl
This container is currently provisioned from an Azure App Service and AWS Fargate. When a new container instance is spawned, we would like it to be gcloud-enabled, with a service account already attached, so our application can deploy things on GCP using Deployment Manager.
I understand gcloud requires us to run gcloud auth login to authenticate to an account. How can we automate the provisioning of our container if this step has to be manual?
Also, from what I understand, for the cloud client libraries we can store the path to a service account key JSON file in an environment variable (GOOGLE_APPLICATION_CREDENTIALS). So this file either has to be stored inside the Docker image itself OR has to be mounted from external storage at the very least?
How safe is it to store this service account key file in external storage? What are the best practices around this?
There are two main means of authentication in Google Cloud Platform:
User accounts: belong to people, represent the people involved in your project, and are associated with a Google Account.
Service accounts: used by an application or an instance.
Learn more about their differences in the Google Cloud documentation.
Therefore, you are not required to use the gcloud auth login command to run gcloud commands.
You should use gcloud auth activate-service-account instead, along with the --key-file=<path-to-key-file> flag, which lets you authenticate without having to sign into a Google Account with access to your project every time you need to call an API.
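For example, a minimal sketch, assuming the key file has been mounted into the container at /secrets/gcp-key.json and the project is my-gcp-project (both placeholders):

# Authenticate the gcloud CLI with the service account key
gcloud auth activate-service-account --key-file=/secrets/gcp-key.json
gcloud config set project my-gcp-project

# Make the same key available to the Google Cloud client libraries
export GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcp-key.json

In practice this would typically run in the container's entrypoint, with the key injected at runtime from the hosting platform's secret store rather than baked into the image.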
This key should be stored securely, preferably encrypted, on the platform of your choice; the GCP documentation walks through the steps for doing this as an example.
Take a look at the documentation on storing secrets in Microsoft Azure and AWS.
On the other hand, you can deploy services to GCP programmatically, either using the Cloud Client Libraries with your programming language of choice or using Terraform, which is quite intuitive if you prefer that over using the Google Cloud SDK through the CLI.
Hope this helped.

How to manage environment specific files in AWS

I have properties files specific to dev, test, and other environments. I have to store these files in some secure place in AWS. I am using AWS native tools for build and deployment. Please let me know how to store these files in AWS.
There are many ways to deal with secrets in AWS, but one thing is clear: it depends on the service that will use and consume these secrets.
But you can explore these:
Environment variables (the simplest way)
AWS Secrets Manager
S3 (for keeping files)
One common approach is to pass your secrets as environment variables, but in the case of AWS I would recommend going with AWS Secrets Manager.
AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.
Basic Secrets Manager scenario
The most basic scenario, illustrated in the documentation, shows how you can store credentials for a database in Secrets Manager, and then use those credentials in an application that needs to access the database.
Compliance with standards
AWS Secrets Manager has undergone auditing for these standards and can be part of your solution when you need to obtain compliance certification.
You can explore the documentation on how to read and write secrets programmatically.
If you need to maintain files, not just key-value pairs, then you can store them in S3 and pull the files during deployment, but it's better to enable server-side encryption.
Still, I would prefer Secrets Manager over S3 and environment variables.
There are also examples for working with S3 in the AWS documentation.
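For example, a rough sketch of pulling configuration at deploy time with the AWS CLI, assuming a secret named myapp/dev/config and a bucket named my-config-bucket (both placeholders):

# Read a secret from AWS Secrets Manager
aws secretsmanager get-secret-value \
  --secret-id myapp/dev/config \
  --query SecretString --output text > app.properties

# Or, for whole files, upload them to S3 with server-side encryption enabled...
aws s3 cp app.properties s3://my-config-bucket/dev/app.properties --sse aws:kms

# ...and pull them back down during deployment
aws s3 cp s3://my-config-bucket/dev/app.properties ./app.properties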
Bajjuri,
As Adil said in his answer:
AWS Secrets Manager -- if you want to store data as key-value pairs.
AWS S3 -- if you want to store files securely.
Adding to his answer, you can use AWS CodeDeploy environment variables to fetch the files according to your environment. Let's say you have a CodeDeploy deployment group for the dev environment named "DEV" and a deployment group for the prod environment named "PROD"; you can use the DEPLOYMENT_GROUP_NAME variable in a bash script and call it in the lifecycle hooks of the appspec file to fetch the files or secrets accordingly (see the sketch below).
I've been using this technique in production for a long time and it works like a charm.
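A minimal sketch of that technique, assuming a hook script at scripts/fetch_config.sh and an S3 bucket named my-config-bucket (both placeholders):

# appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp
hooks:
  AfterInstall:
    - location: scripts/fetch_config.sh
      timeout: 60
      runas: root

# scripts/fetch_config.sh
#!/bin/bash
set -euo pipefail
# CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts ("DEV", "PROD", ...)
ENV_NAME=$(echo "$DEPLOYMENT_GROUP_NAME" | tr '[:upper:]' '[:lower:]')
aws s3 cp "s3://my-config-bucket/${ENV_NAME}/app.properties" /opt/myapp/app.properties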