I'm trying to incorporate Secret Manager into my projects for security, but I'm running into issues setting it up. I currently have a service account in project-b whose JSON credential keys I downloaded, and I have been using them to access my BigQuery table in my backend code.
My current setup:
I have project-a that uses Cloud Run to host my code.
I have project-b that uses BigQuery to hold some data for me.
From project-a, I'm trying to access the BigQuery table in project-b just like I've been doing with the JSON keys.
I keep running into this error:
PermissionDenied: 403 Permission 'secretmanager.versions.access' denied for resource 'projects/project-b/secrets/stockdata-secret/versions/1' (or it may not exist).
I have assigned the Secret Manager Secret Accessor and Secret Manager Viewer roles to a couple of my accounts but it still doesn't seem to work.
The client_email from the keys is set to the top service account shown in the screenshot below.
(Screenshot: permissions for the secret)
Here is the relevant part of my back-end code:
# Grabbing keys from Secret Manager, got this code from Google docs
import os

from google.cloud import secretmanager


def access_secret_version(project_id, secret_id, version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()

    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"

    # Access the secret version.
    response = client.access_secret_version(request={"name": name})
    payload = response.payload.data.decode("UTF-8")
    return payload

---

# Routing to the page
@app.route('/projects/random-page')
def random_page():
    payload = access_secret_version("project-b", "stockdata-secret", "1")

    # Authenticating service account.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload

    # old way, which worked
    google_cloud_service_account = "creds.json"
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = google_cloud_service_account
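One note for readers of the snippet above: GOOGLE_APPLICATION_CREDENTIALS is expected to hold a path to a key file, not the raw JSON contents. A minimal sketch of building credentials from the retrieved payload in memory instead, assuming the secret stores the full service-account key JSON (the helper name bigquery_client_from_secret_payload is just for illustration):

import json

from google.cloud import bigquery
from google.oauth2 import service_account


def bigquery_client_from_secret_payload(payload):
    # Parse the service-account key JSON returned by access_secret_version().
    key_info = json.loads(payload)
    credentials = service_account.Credentials.from_service_account_info(key_info)

    # Build a BigQuery client with explicit credentials instead of relying on
    # the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    return bigquery.Client(credentials=credentials, project=key_info["project_id"])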
Related
I have installed Airflow 1.10.15 on a standalone server and am trying to integrate AWS Secrets Manager with it, but the values are not coming through.
I have added backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend and backend_kwargs = {"connections_prefix": "airflow/test} under [secrets] in airflow.cfg. I have also added a role to the EC2 server that has Secrets Manager read/write access, but it is still not picking up values from Secrets Manager.
You can use the Airflow secrets backend with AWS Secrets Manager by creating a new secret in Secrets Manager and then setting the backend to the Secrets Manager backend, as you have done. Also double-check that backend_kwargs is valid JSON: as quoted above, "airflow/test is missing its closing quote, which would prevent the backend from loading its settings.
I am trying to sign a URL for GCP Storage from AWS EC2 or Lambda. I have generated a JSON file for permissions, providing my AWS account ID and the role that is given to EC2 or Lambda. When I call the sign-URL code, even with Storage Admin or Owner permission, I get: Error: The caller does not have permission.
I used the code provided by the GCP documentation.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

const options = {
  version: 'v4',
  action: 'read',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
};

// Get a v4 signed URL for reading the file
const [url] = await storage
  .bucket(bucketName)
  .file(fileName)
  .getSignedUrl(options);
Can anybody tell me what I missed? What is wrong?
*** Update
I am creating a service account, granting it Storage Admin on my project, then creating a pool in Workload Identity Pools, setting AWS as the provider with my AWS account ID, granting access to my AWS identities by matching role, downloading the JSON, and setting the environment variables GOOGLE_APPLICATION_CREDENTIALS (the path to my JSON file) and GOOGLE_CLOUD_PROJECT (my project ID). How do I correctly load that clientLibraryConfig.json file to run the functions I need?
*** Update 2
My clientLibraryConfig JSON has the following content:
{
  "type": "external_account",
  "audience": "..",
  "subject_token_type": "..",
  "service_account_impersonation_url": "..",
  "token_url": "..",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "..",
    "url": "..",
    "regional_cred_verification_url": ".."
  }
}
How can I generate an access token with the Node.js SDK from this config file to access GCP Storage from AWS EC2?
You have to set up the following permissions for the IAM service account:
Storage Object Creator: This is to create signed URLs.
Service Account Token Creator role: This role enables impersonation of service accounts to create OAuth2 access tokens, sign blobs, or sign JWTs.
Also, you can try signing the URL locally with the service account.
You can use an existing private key for a service account. The key can be in JSON or PKCS12 format.
Use the command gsutil signurl and pass the path to the private key from the previous step, along with the name of the bucket and object.
For example, if you use a key stored in the Desktop folder, the following command will generate a signed URL for users to view the object cat.jpeg for 10 minutes.
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg
If successful, the response should look like this:
URL HTTP Method Expiration Signed URL
gs://example-bucket/cat.jpeg GET 2018-10-26 15:19:52 https://storage.googleapis.
com/example-bucket/cat.jpeg?x-goog-signature=2d2a6f5055eb004b8690b9479883292ae74
50cdc15f17d7f99bc49b916f9e7429106ed7e5858ae6b4ab0bbbdb1a8ccc364dad3a0da2caebd308
87a70c5b2569d089ceb8afbde3eed4dff5116f0db5483998c175980991fe899fbd2cd8cb813b0016
5e8d56e0a8aa7b3d7a12ee1baa8400611040f05b50a1a8eab5ba223fe5375747748de950ec7a4dc5
0f8382a6ffd49941c42498d7daa703d9a414d4475154d0e7edaa92d4f2507d92c1f7e811a7cab64d
f68b5df4857589259d8d0bdb5dc752bdf07bd162d98ff2924f2e4a26fa6b3cede73ad5333c47d146
a21c2ab2d97115986a12c28ff37346d6c2ca83e5618ec8ad95632710b489b75c35697d781c38e&
x-goog-algorithm=GOOG4-RSA-SHA256&x-goog-credential=example%40example-project.
iam.gserviceaccount.com%2F20181026%2Fus%2Fstorage%2Fgoog4_request&x-goog-date=
20201026T211942Z&x-goog-expires=3600&x-goog-signedheaders=host
The signed URL is the string that starts with https://storage.googleapis.com, and it is likely to span multiple lines. Anyone can use the URL to access the associated resource (in this case, cat.jpeg) during the designated time frame (in this case, 10 minutes).
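If you prefer to do the same local test from code rather than gsutil, here is a rough sketch using the Python client library (the key path, bucket, and object names mirror the gsutil example above and are placeholders):

from datetime import timedelta

from google.cloud import storage

# Load the service account key directly (placeholder path).
client = storage.Client.from_service_account_json("Desktop/private-key.json")
blob = client.bucket("example-bucket").blob("cat.jpeg")

# Generate a V4 signed URL valid for 10 minutes.
url = blob.generate_signed_url(version="v4", expiration=timedelta(minutes=10), method="GET")
print(url)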
So if this works locally, then you can start configuring Workload Identity Federation to impersonate your service account. In this link, you will find a guide to deploy it.
To access resources from AWS using Workload Identity Federation, you will need to check that the following requirements have already been configured:
The workload identity pool has been created.
AWS has been added as an identity provider in the workload identity pool (the Google organization policy needs to allow federation from AWS).
The permissions to impersonate a service account have been granted to the external account.
I will add this guide to configure the Workload Identity Federation.
Once the previous requirements have been completed, you will need to generate the service account credential configuration. This file contains only non-sensitive metadata that instructs the library how to retrieve external subject tokens and exchange them for service account tokens. As you mentioned, the file could be a config.json, and it can be generated by running the following command:
# Generate an AWS configuration file.
gcloud iam workload-identity-pools create-cred-config \
projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
--service-account $SERVICE_ACCOUNT_EMAIL \
--aws \
--output-file /path/to/generated/config.json
Where the following variables need to be substituted:
$PROJECT_NUMBER: the Google Cloud project number.
$POOL_ID: the workload identity pool ID.
$AWS_PROVIDER_ID: the AWS provider ID.
$SERVICE_ACCOUNT_EMAIL: the email of the service account to impersonate.
Once you generate the JSON credentials configuration file for your external identity, you can store the path at the GOOGLE_APPLICATION_CREDENTIALS environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/config.json
With this, the library can automatically choose the right type of client and initialize the credentials from the configuration file. Please note that the service account will also need roles/browser when using external identities with Application Default Credentials in Node.js. Alternatively, you can pass the project ID explicitly to avoid the need to grant roles/browser to the service account, as shown in the code below:
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
    // Pass the project ID explicitly to avoid the need to grant `roles/browser` to the service account
    // or enable Cloud Resource Manager API on the project.
    projectId: 'CLOUD_RESOURCE_PROJECT_ID',
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();

  // List all buckets in a project.
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({ url });
  console.log(res.data);
}

main();
I want to create a secret in Secrets Manager that is rotated every 30 days, but without specifying the end service. Is it possible to remove the setSecret and testSecret sections from my Lambda, or will that give me errors?
Having an empty implementation has worked for me:
def set_secret(service_client, arn, token):
    """Set the secret

    This method should set the AWSPENDING secret in the service that the secret belongs to. For example, if the secret is a database
    credential, this method should take the value of the AWSPENDING secret and set the user's password to this value in the database.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version
    """
    # This is where the secret should be set in the service
    pass
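The same approach should apply to the test step; a minimal no-op stub along the same lines (in a full rotation function this is where you would normally verify that the AWSPENDING version works against the end service):

def test_secret(service_client, arn, token):
    """Test the secret

    This method should validate that the AWSPENDING secret works in the service.
    With no end service to test against, an empty implementation can be used.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version
    """
    # No end service to test against, so nothing to do here
    pass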
I've been reading this page: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster
The example there is mainly for a provisioned database. I'm new to serverless databases; is there a Terraform example for creating a serverless Aurora database cluster (SQL database) using a secret stored in Secrets Manager?
Many thanks.
I'm guessing you want to randomize the master_password?
You can do something like this:
master_password = random_password.DatabaseMasterPassword.result
The SSM parameter can be created like so:
resource "aws_ssm_parameter" "SSMDatabaseMasterPassword" {
name = "database-master-password"
type = "SecureString"
value = random_password.DatabaseMasterPassword.result
}
The random password can be defined like so:
resource "random_password" "DatabaseMasterPassword" {
length = 24
special = true
override_special = "!#$%^*()-=+_?{}|"
}
The basic example of creating serverless Aurora is:
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
engine_mode = "serverless"
database_name = "myauroradb"
enable_http_endpoint = true
master_username = "root"
master_password = "chang333eme321"
backup_retention_period = 1
skip_final_snapshot = true
scaling_configuration {
auto_pause = true
min_capacity = 1
max_capacity = 2
seconds_until_auto_pause = 300
timeout_action = "ForceApplyCapacityChange"
}
}
I'm not sure what you want to do with Secrets Manager. It's not clear from your question, so I'm not providing an example for it.
The accepted answer will just create the Aurora RDS instance with a pre-set password -- but doesn't include Secrets Manager. It's a good idea to use Secrets Manager, so that your database and the applications (Lambdas, EC2, etc) can access the password from Secrets Manager, without having to copy/paste it to multiple locations (such as application configurations).
Additionally, if you terraform the password with random_password, it will be stored in plaintext in your terraform.tfstate file, which might be a concern. To resolve that concern you'd also need to enable Secrets Manager automatic secret rotation.
Automatic rotation is a somewhat advanced configuration with Terraform. It involves:
Deploying a Lambda with access to the RDS instance and to the Secret
Configuring the rotation via the aws_secretsmanager_secret_rotation resource.
AWS provides ready-to-use Lambdas for many common rotation scenarios. The specific Lambda will vary depending on the database engine (MySQL vs. Postgres vs. SQL Server vs. Oracle, etc.), as well as whether you'll be connecting to the database with the same credentials that you're rotating.
For example, when the secret rotates, the process looks something like this:
Secrets Manager invokes the rotation Lambda and passes the name of the secret as a parameter
The Lambda will use the details within the secret (DB Host, Port, Username, Password) to connect to RDS
The Lambda will generate a new password and run the "update password" command, which can vary based on DB engine
The Lambda will update the new credentials in Secrets Manager
For all this to work you'll also need to think about the permissions the Lambda will need -- such as network connectivity to the RDS instance and IAM permissions to read/write secrets.
As mentioned, it's somewhat advanced -- but it results in Secrets Manager being the only persistent location of the password. Once set up it works quite nicely, and your apps can securely retrieve the password from Secrets Manager (one last tip -- it's OK to cache the secret in your app to reduce Secrets Manager calls, but be sure to flush that cache on connection failures so that your apps will handle an automatic rotation).
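To make the flow above concrete, here is a minimal Python sketch of a rotation handler. It only illustrates the dispatch on the rotation step using generic Secrets Manager calls; the actual password-update and test steps depend entirely on your database engine, and AWS's ready-made rotation Lambdas are far more complete.

import boto3

secrets_client = boto3.client("secretsmanager")


def lambda_handler(event, context):
    # Secrets Manager passes the secret ID, version token, and rotation step.
    arn = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Generate a new password and stage it as AWSPENDING.
        new_password = secrets_client.get_random_password(PasswordLength=32)["RandomPassword"]
        secrets_client.put_secret_value(
            SecretId=arn,
            ClientRequestToken=token,
            SecretString=new_password,
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        # This is where you would connect to the database and run the
        # engine-specific "update password" command using the AWSPENDING value.
        pass
    elif step == "testSecret":
        # This is where you would verify that the AWSPENDING credentials work.
        pass
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT.
        metadata = secrets_client.describe_secret(SecretId=arn)
        current_version = next(
            version
            for version, stages in metadata["VersionIdsToStages"].items()
            if "AWSCURRENT" in stages
        )
        secrets_client.update_secret_version_stage(
            SecretId=arn,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current_version,
        )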
I am trying to get a service account to create blobs in Google Cloud Storage
from within a Python script, but I am having issues with the credentials.
1) I create the service account for my project and then download the key file in json:
"home/user/.config/gcloud/service_admin.json"
2) I grant the service account the necessary roles (via gcloud in a subprocess):
roles/viewer, roles/storage.admin, roles/resourcemanager.projectCreator, roles/billing.user
Then I would like to access a bucket in GCS
from google.cloud import storage
import google.auth
credentials, project = google.auth.default()
client = storage.Client('myproject', credentials=credentials)
bucket = client.get_bucket('my_bucket')
Unfortunately, this results in:
google.api_core.exceptions.Forbidden: 403 GET
https://www.googleapis.com/storage/v1/b/my_bucket?projection=noAcl:
s_account#myproject.iam.gserviceaccount.com does not have
storage.buckets.get access to my_bucket
I have somewhat better luck if I set the environment variable
export GOOGLE_APPLICATION_CREDENTIALS="home/user/.config/gcloud/service_admin.json"
and rerun the script. However, I want it all to run in a single instance of the script that creates the accounts and then creates the necessary files in the buckets. How can I access my_bucket if I know where my JSON credential file is?
Try this example from the Documentation for Server to Server Authentication:
from google.cloud import storage
# Explicitly use service account credentials by specifying the private key file.
storage_client = storage.Client.from_service_account_json('service_account.json')
# Make an authenticated API request
buckets = list(storage_client.list_buckets())
print(buckets)
This way you point directly to the file containing the service account key in your code.
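If you need an explicit credentials object in the same script (as in the question), a similar sketch that loads the key and passes it to the client; the path is the one quoted in the question and is assumed to exist:

from google.cloud import storage
from google.oauth2 import service_account

# Load the service account key explicitly instead of relying on
# the GOOGLE_APPLICATION_CREDENTIALS environment variable.
credentials = service_account.Credentials.from_service_account_file(
    "home/user/.config/gcloud/service_admin.json"
)

client = storage.Client(project="myproject", credentials=credentials)
bucket = client.get_bucket("my_bucket")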