Multiple Firestore connections - google-cloud-platform

I need help with a scenario where I need to connect to multiple Firestores in different Google Cloud projects.
Right now I am using NestJS to retrieve data from my Firestore, connecting to it with a JSON key generated from a Service Account.
I am planning to have this primary Firestore store data that tells the app which database to connect to. However, I don't know how to do the switching of service accounts/JSON keys, since, from what I understand so far, one JSON key is for one Firestore. I also think it's not good practice to store those JSON key files.
What are my possible options here?

You can use Secret Manager to store your Firestore configurations. To start:
Create a secret by navigating to Cloud Console > Secret Manager. You could also click this link.
You should enable the Secret Manager API if you haven't done so.
Click Create Secret.
Fill in the Name, e.g. FIRESTORE.
For Secret value, you can either upload the JSON file or paste the secret value.
Click Create Secret.
After creating a secret, go to your project and install @google-cloud/secret-manager:
npm i @google-cloud/secret-manager
then initialize it like this:
import {SecretManagerServiceClient} from '@google-cloud/secret-manager';
const client = new SecretManagerServiceClient();
You could now use the stored configuration on your project. See code below for reference:
import { initializeApp } from "firebase/app";
import { getFirestore, doc, getDoc } from "firebase/firestore";
import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

// Must follow the expected format: projects/*/secrets/*/versions/*
// You can always use `latest` to get the most recently added version.
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest';

async function accessSecretVersion() {
  const [version] = await client.accessSecretVersion({ name });

  // Extract the payload as a string.
  const payload = version?.payload?.data?.toString();
  if (!payload) {
    throw new Error("Secret payload is empty");
  }

  // WARNING: Do not print the secret in a production environment.
  const config = JSON.parse(payload);

  const firebaseApp = initializeApp({
    apiKey: config.apiKey,
    authDomain: config.authDomain,
    databaseURL: config.databaseURL,
    projectId: config.projectId,
    storageBucket: config.storageBucket,
    messagingSenderId: config.messagingSenderId,
    appId: config.appId,
    measurementId: config.measurementId
  });

  const db = getFirestore(firebaseApp);
  const docRef = doc(db, "cities", "SF");
  const docSnap = await getDoc(docRef);

  if (docSnap.exists()) {
    console.log("Document data:", docSnap.data());
  } else {
    // docSnap.data() will be undefined in this case
    console.log("No such document!");
  }
}

accessSecretVersion();
You should also create secrets in your other projects, and make sure each project's IAM permissions are set so they can access one another. You can then easily choose/switch your Firestore by modifying the secret name here:
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest'
For convenience, you can give the secrets identical names, since they live in different projects; then you only need to change the PROJECT-ID to pick which project's Firestore to access.
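As a rough illustration of that switching, here is a minimal sketch (the helper getFirestoreForProject is a made-up name, and it assumes each project stores its config in an identically named FIRESTORE secret); it relies on the Firebase SDK's support for multiple named apps:
import { initializeApp, getApps } from "firebase/app";
import { getFirestore } from "firebase/firestore";
import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

// Hypothetical helper: fetch the config stored in the given project's
// FIRESTORE secret and initialize (or reuse) a named Firebase app for it.
async function getFirestoreForProject(projectId: string) {
  const existing = getApps().find((app) => app.name === projectId);
  if (existing) {
    return getFirestore(existing);
  }

  const [version] = await client.accessSecretVersion({
    name: `projects/${projectId}/secrets/FIRESTORE/versions/latest`,
  });
  const config = JSON.parse(version.payload!.data!.toString());

  // The second argument names the app after the project, so several
  // projects' Firestores can be open at the same time.
  const app = initializeApp(config, projectId);
  return getFirestore(app);
}
A lookup in your primary Firestore can then return the target project ID, and the app calls this helper to get a handle on the right database.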
Creating and accessing secrets
Managing Secrets
Managing Secret Versions
API Reference Documentation
You may also want to checkout Secret Manager Best Practices.

Related

FHIR works on AWS server not allowing to keep customized id as primary key

We are working with FHIR (Fast Healthcare Interoperability Resources).
We followed "FHIR Works on AWS" and deployed the CloudFormation template given by AWS in our AWS environment. The following is the template we deployed:
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific/customized IDs as the primary key in the server.
Problem: the server does not allow us to override or maintain client-specific (customized) IDs as the primary key. In fact, at runtime it generates its own IDs and ignores the ID we supply.
The FHIR spec allows you to define your own IDs when using "update as create". This is when you create a new resource in the server, but use a PUT (update) request to the ID you want to create, such as Patient/1, instead of a POST (create) request to the resource URL. The server should return a 201 Created status instead of 200 OK. For more information see https://hl7.org/fhir/http.html#upsert
Not every FHIR server supports this, but if AWS does, this is likely how it would work. The field in the CapabilityStatement for this feature is CapabilityStatement.rest.resource.updateCreate
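For illustration, an update-as-create request from a Node client might look roughly like the sketch below (the base URL, bearer token, and resource body are placeholders, not values from the AWS deployment):
// Sketch: create Patient/1 via PUT ("update as create").
const response = await fetch("https://example.com/fhir/Patient/1", {
  method: "PUT", // PUT to the target ID, not POST to /Patient
  headers: {
    "Content-Type": "application/fhir+json",
    "Authorization": "Bearer <token>", // placeholder
  },
  body: JSON.stringify({
    resourceType: "Patient",
    id: "1", // must match the ID in the URL
    name: [{ family: "Doe", given: ["Jane"] }],
  }),
});

// 201 Created means the server created the resource with our ID;
// 200 OK means an existing Patient/1 was updated instead.
console.log(response.status);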
EDIT:
This is possible by modifying the parameters passed to the DynamoDbDataService constructor in the deployment repo's src/config.ts
By default supportUpdateCreate, the second parameter, is set to false
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, false, { enableMultiTenancy });
but you can set it to true to enable this functionality
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, true, { enableMultiTenancy });

How do I store a private key using Google Cloud's Cloud KMS?

I've been trying to read the documentation and watch a few of their videos, but I'm not entirely clear on how to store a private key using GCP's Cloud KMS.
Is the idea for me to store the private key in storage, then use Cloud KMS to encrypt it? How can I make this key available as a secret to my application?
I know this is a very basic question, but I couldn't find an easy breakdown on how to do this - I'm looking for a simple explanation about the concept. Thanks!
Please read the docs for yourself: https://cloud.google.com/kms/docs ... maybe you'll come up with a more focused question. Also, I think there's a slight misunderstanding: you'd only be able to retrieve these server-side, not client-side (otherwise the client would need the RSA private key of the service account that has access to Cloud KMS, which would be a security breach in itself). So this is generally only useful for a) server-side applications and b) e.g. Google Cloud Build.
Generally one has to:
create the keyring with gcloud kms keyrings create
create the key with gcloud kms keys create
then use gcloud kms encrypt and gcloud kms decrypt
I can also provide a usage example (it assumes a key ring with a key).
Just to show that one doesn't necessarily have to set up secrets:
gcloud kms can well provide build secrets, assuming one can use a service account with the role roles/cloudkms.cryptoKeyEncrypterDecrypter. The given example decrypts all kinds of build secrets, without having to deal with base64-encoded binary files in metadata (which is rather a workaround than an actual solution).
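As a rough Node sketch of that encrypt/decrypt round trip (the project, location, and key names are placeholders, and it assumes a service account with roles/cloudkms.cryptoKeyEncrypterDecrypter):
const {KeyManagementServiceClient} = require('@google-cloud/kms');
const client = new KeyManagementServiceClient();

// Placeholder resource name: projects/*/locations/*/keyRings/*/cryptoKeys/*
const keyName = client.cryptoKeyPath(
  'my-project', 'us-central1', 'my-key-ring', 'my-crypto-key'
);

async function roundTrip() {
  // Encrypt a small secret (the API takes the plaintext as a Buffer).
  const [encrypted] = await client.encrypt({
    name: keyName,
    plaintext: Buffer.from('my-secret-password'),
  });

  // Decrypt it again; in practice encrypt and decrypt usually happen in
  // different places (e.g. at build time vs. at run time).
  const [decrypted] = await client.decrypt({
    name: keyName,
    ciphertext: encrypted.ciphertext,
  });

  console.log(decrypted.plaintext.toString()); // "my-secret-password"
}

roundTrip();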
This is a high level description of storing a private key on Google Cloud Platform (GCP):
I ended up using Google KMS, specifically its asymmetric encryption feature.
The general steps to create an asymmetric key are:
Create a key ring within your project
Create a key with ENCRYPT_DECRYPT purpose (if, like me, you're trying to do this using Terraform, there's some documentation here)
Once we've created the key, we can encrypt some data we want to secure using the public key from the asymmetric key created in the previous step.
It is important to note that with an asymmetric key there is a public-private key pair, and we never handle the private key (i.e. only GCP knows the private key).
Here's how you'd encrypt some data from your local computer:
echo -n my-secret-password | gcloud kms encrypt \
  --project my-project \
  --location us-central1 \
  --keyring my-key-ring \
  --key my-crypto-key \
  --plaintext-file - \
  --ciphertext-file - \
  | base64
This will output some ciphertext with base64 encoding, e.g.:
CiQAqD+xX4SXOSziF4a8JYvq4spfAuWhhYSNul33H85HnVtNQW4SOgDu2UZ46dQCRFl5MF6ekabviN8xq+F+2035ZJ85B+xTYXqNf4mZs0RJitnWWuXlYQh6axnnJYu3kDU=
This ciphertext then needs to be stored as a secret. Once stored as a secret, we need to do the following in our application code in order to decrypt the ciphertext into a usable format.
Here is an example of decrypting the ciphertext using the @google-cloud/kms module: https://cloud.google.com/kms/docs/hsm#kms-decrypt-symmetric-nodejs
This is what it looks like in Node.js:
//
// TODO(developer): Uncomment these variables before running the sample.
//
// const projectId = 'my-project';
// const locationId = 'us-east1';
// const keyRingId = 'my-key-ring';
// const keyId = 'my-key';
// Ciphertext must be either a Buffer object or a base-64 encoded string
// const ciphertext = Buffer.from('...');

// Imports the Cloud KMS library
const {KeyManagementServiceClient} = require('@google-cloud/kms');

// Instantiates a client
const client = new KeyManagementServiceClient();

// Build the key name
const keyName = client.cryptoKeyPath(projectId, locationId, keyRingId, keyId);

// Optional, but recommended: compute ciphertext's CRC32C.
const crc32c = require('fast-crc32c');
const ciphertextCrc32c = crc32c.calculate(ciphertext);

async function decryptSymmetric() {
  const [decryptResponse] = await client.decrypt({
    name: keyName,
    ciphertext: ciphertext,
    ciphertextCrc32c: {
      value: ciphertextCrc32c,
    },
  });

  // Optional, but recommended: perform integrity verification on decryptResponse.
  // For more details on ensuring E2E in-transit integrity to and from Cloud KMS visit:
  // https://cloud.google.com/kms/docs/data-integrity-guidelines
  if (
    crc32c.calculate(decryptResponse.plaintext) !==
    Number(decryptResponse.plaintextCrc32c.value)
  ) {
    throw new Error('Decrypt: response corrupted in-transit');
  }

  const plaintext = decryptResponse.plaintext.toString();
  console.log(`Plaintext: ${plaintext}`);
  return plaintext;
}

decryptSymmetric();

Concatenate AWS Secrets in aws-cdk for ECS container

How do you go about making a Postgres URI connection string from a Credentials.fromGeneratedSecret() call without writing the secrets out using toString()?
I think I read somewhere about making a Lambda that does that, but man, that seems kinda overkill-ish.
const dbCreds = Credentials.fromGeneratedSecret("postgres")
const username = dbCreds.username
const password = dbCreds.password
const uri = `postgresql://${username}:${password}@somerdurl/mydb?schema=public`
Pretty sure I can't do the above. However, my Hasura and API ECS containers need connection strings like the above, so I figure this is probably a solved thing?
If you want to import a secret that already exists in Secrets Manager, you can just look it up by name or ARN. Take a look at the documentation on how to get a value from AWS Secrets Manager.
Once you have the secret in your code, it is easy to pass it on as an environment variable to your application. With CDK it is even possible to pass secrets from Secrets Manager or AWS Systems Manager Parameter Store directly onto the CDK construct. One such example (as pointed out in the documentation):
taskDefinition.addContainer('container', {
  image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  memoryLimitMiB: 1024,
  environment: { // clear text, not for sensitive data
    STAGE: 'prod',
  },
  environmentFiles: [ // list of environment files hosted either on local disk or S3
    ecs.EnvironmentFile.fromAsset('./demo-env-file.env'),
    ecs.EnvironmentFile.fromBucket(s3Bucket, 'assets/demo-env-file.env'),
  ],
  secrets: { // Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.
    SECRET: ecs.Secret.fromSecretsManager(secret),
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, 'password'), // Reference a specific JSON field (requires platform version 1.4.0 or later for Fargate tasks)
    PARAMETER: ecs.Secret.fromSsmParameter(parameter),
  },
});
Overall, in this case, you would not have to do any parsing or printing of the actual secret within the CDK. You can handle all of that processing within your application using properly set environment variables, as sketched below.
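For example, a minimal sketch of assembling the URI inside the application at start-up (the DB_USER, DB_PASSWORD, and DB_HOST variable names are assumptions; they would be wired up via the secrets: and environment: blocks above):
// Build the Postgres URI at runtime from the injected environment variables,
// so the secret value never appears in your CDK code or synthesized template.
const { DB_USER, DB_PASSWORD, DB_HOST } = process.env;
if (!DB_USER || !DB_PASSWORD || !DB_HOST) {
  throw new Error("Missing database environment variables");
}
// encodeURIComponent guards against special characters in generated passwords.
const uri = `postgresql://${DB_USER}:${encodeURIComponent(DB_PASSWORD)}@${DB_HOST}/mydb?schema=public`;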
That said, from your question alone it is not entirely clear what you are trying to do; still, the resources provided should point you in the right direction.

How to get the Google Cloud project number programmatically?

I want to use Google Secret Manager in my project. To access a saved secret, it is necessary to provide a secret name, which contains the Google project number. It would be convenient to get this number programmatically to form the secret name, rather than saving it in an environment variable. I use the Node.js runtime for my project. I know there is a library, google-auth-library, which allows getting the project ID. Is it possible to get the project number somehow?
You can access secrets by project_id or project_number. The following are both valid resource IDs that point to the same secret:
projects/my-project/secrets/my-secret
projects/1234567890/secrets/my-secret
You can get metadata, including project_id and project_number from the metadata service. There are many default values. The ones you're looking for are numeric-project-id and project-id.
Here is an example using curl to access the metadata service. You would run this inside your workload, typically during initial boot:
curl "https://metadata.google.internal/computeMetadata/v1/project/project-id" \
--header "Metadata-Flavor: Google"
Note: the Metadata-Flavor: Google header is required.
To access these values from Node, you can construct your own HTTP client. Alternatively, you can use the googleapis/gcp-metadata package:
const gcpMetadata = require('gcp-metadata');

async function projectID() {
  const id = await gcpMetadata.project('project-id');
  return id;
}

// For the project number, request 'numeric-project-id' instead:
// const projectNumber = await gcpMetadata.project('numeric-project-id');
You can send a GET request to the Resource Manager API
https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID?alt=json
Not sure if the following method is useful in your case, but I'll put it here just in case:
gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)"
It should return the project number based on the project identifier (in the PROJECT_ID variable), under the assumption that the user (or service account) running the command has the relevant permissions.
If you're doing this from outside a Cloud VM, so that the metadata service is not available, you can use the Resource Manager API to convert the project ID to a project number:
const {ProjectsClient} = require('@google-cloud/resource-manager').v3;
const resourcemanagerClient = new ProjectsClient();

const projectId = 'your-project-id-123'; // TODO: replace with your project ID
// The request name must have the form "projects/{projectId}".
const [project] = await resourcemanagerClient.getProject({
  name: `projects/${projectId}`,
});
// project.name comes back as "projects/<project number>".
const projectNumber = project.name.split('/')[1];
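Putting this together for the Secret Manager use case, a small sketch (the secret name my-secret is a placeholder):
const gcpMetadata = require('gcp-metadata');
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

// Resolve the project number from the metadata service, then use it to
// build the secret's resource name instead of keeping it in an env variable.
async function readSecret() {
  const projectNumber = await gcpMetadata.project('numeric-project-id');
  const client = new SecretManagerServiceClient();
  const [version] = await client.accessSecretVersion({
    name: `projects/${projectNumber}/secrets/my-secret/versions/latest`,
  });
  return version.payload.data.toString();
}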

DynamoDB + Flutter

I am trying to create an app that uses AWS services. I already use the Cognito plugin for Flutter but can't get it to work with DynamoDB. Should I use a Lambda function and point to it, or is it possible to get data from a table directly from Flutter? If that's the case, which URL should I use?
I am new to AWS services and don't know whether it is possible to access a DynamoDB table with a URL, or whether I should just use a Lambda function.
Since this is kind of an open-ended question and you mentioned Lambdas, I would suggest checking out the Serverless Framework. They have a couple of template applications in various languages/frameworks. Serverless makes it really easy to spin up Lambdas configured to an API Gateway, and you can start with the default proxy+ resource. You can also define DynamoDB tables to be auto-created/destroyed when you deploy/destroy your serverless application. When you successfully deploy using the command 'serverless deploy', it will output the URL to access your API Gateway, which will trigger your Lambda seamlessly.
Then, once you have a basic "hello-world" type API hosted on AWS, you can just follow the docs for how to set up the DynamoDB library/SDK for your given framework/language; a minimal handler is sketched below.
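For instance, a minimal handler reading a table with the AWS SDK for JavaScript v3 might look like this (the PARENTS_TABLE environment variable is a placeholder you would define in your Serverless config):
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand } from "@aws-sdk/lib-dynamodb";
import type { APIGatewayProxyHandler } from "aws-lambda";

// The DocumentClient converts DynamoDB attribute values to plain JS objects.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: APIGatewayProxyHandler = async () => {
  // Placeholder table name, injected via the function's environment.
  const result = await ddb.send(
    new ScanCommand({ TableName: process.env.PARENTS_TABLE })
  );
  return {
    statusCode: 200,
    body: JSON.stringify(result.Items ?? []),
  };
};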
Let me know if you have any questions!
PS: I would also, later on, recommend using the API Gateway Authorizer against your Cognito User Pool, since you already have auth on the Flutter app; then all you have to do is pass through the token. The Authorizer can also be easily set up via the Serverless Framework! Then your API will be authenticated at the Gateway level, leaving AWS to do all the hard work :)
If you want to read directly from DynamoDB, it is actually pretty easy.
First, add this package to your project.
Then create the models you want to read and write, along with conversion methods.
class Parent {
  String name;
  List<Child> children;

  Parent(this.name, this.children);

  factory Parent.fromDBValue(Map<String, AttributeValue> dbValue) {
    return Parent(
      dbValue["name"]!.s!,
      dbValue["children"]!.l!.map((e) => Child.fromDBValue(e.m!)).toList(),
    );
  }

  Map<String, AttributeValue> toDBValue() {
    Map<String, AttributeValue> dbMap = Map();
    dbMap["name"] = AttributeValue(s: name);
    dbMap["children"] = AttributeValue(
        l: children.map((e) => AttributeValue(m: e.toDBValue())).toList());
    return dbMap;
  }
}
(AttributeValue comes from the package)
Then you can consume the DynamoDB API as per normal.
Create a Dynamo service:
class DynamoService {
  final service = DynamoDB(
      region: 'af-south-1',
      credentials: AwsClientCredentials(
          // NOTE: avoid shipping long-lived credentials in a client app;
          // prefer temporary credentials, e.g. from Cognito.
          accessKey: "someAccessKey",
          secretKey: "somesecretkey"));

  Future<List<Map<String, AttributeValue>>?> getAll(
      {required String tableName}) async {
    var result = await service.scan(tableName: tableName);
    return result.items;
  }

  Future<void> insertNewItem(
      Map<String, AttributeValue> dbData, String tableName) async {
    await service.putItem(item: dbData, tableName: tableName);
  }
}
Then you can convert when getting all the data from Dynamo:
Future<List<Parent>> getAllParents() async {
  List<Map<String, AttributeValue>>? parents =
      await dynamoService.getAll(tableName: "parents");
  return parents!.map((e) => Parent.fromDBValue(e)).toList();
}
You can check all the DynamoDB operations here.