ClearDB instance hosted in Cloud Foundry - cloud-foundry

What is the use of a service key in Cloud Foundry? Today I created a ClearDB service and picked up the credentials from VCAP_SERVICES, then easily connected to ClearDB through the HeidiSQL tool with those credentials.
I also tried the same thing through SSH without a service key, and it connected successfully:
https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-services.html
Please let me know the importance of a service key.

Service keys allow you to get a set of (new) credentials for an app or use case outside of running CF apps. For example, you could get a new temporary ClearDB URI and pass it to mysqlsh on your laptop, as in the sketch below.
It is preferable to generate new credentials for each use case/user rather than borrowing the credentials from an app's VCAP_SERVICES.
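A minimal sketch of that workflow, assuming a hypothetical service instance named my-cleardb (the key name is also made up):
cf create-service-key my-cleardb laptop-key
cf service-key my-cleardb laptop-key
# copy the hostname/username/password from the JSON output, then connect, e.g.:
mysqlsh --uri "mysql://<username>:<password>@<hostname>:3306/<db-name>"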

Related

GKE Secrets or Google Secret Manager

Does anyone know in which cases to choose Kubernetes Secrets instead of Google Secret Manager, and the reverse? What are the differences between the two?
With Kubernetes Secrets (K8S Secrets), you use a built-in feature of K8S. You load your sensitive values into Secret objects (a resource similar to ConfigMaps, but meant for confidential data) and you mount them on the pods that require them.
PRO
If a day you want to deploy on AWS, Azure or on prem, still on K8S, the behavior will be the same, no update to perform in your code.
CONS
The secrets are only accessible from within the K8S cluster; it's impossible to reuse them with other GCP services.
Note: with GKE this is not a problem, because the etcd component is automatically encrypted with a key from the KMS service, keeping the secrets encrypted at rest. But that is not true of every K8S installation, especially on premises, where the secrets may be kept in plain text. Be aware of this aspect of security.
Secret Manager is a vault managed by Google. You have APIs to read and write secrets, and the IAM service checks authorization.
PRO
It's a Google Cloud service and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized to.
CONS
It's a Google Cloud-specific product, so you are locked in.
You can use them together via a sync service such as External Secrets: https://external-secrets.io/
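To make the contrast concrete, here is a minimal sketch of both approaches (all names are made up). The kubectl secret lives only in the cluster; the gcloud secret can be read by any authorized GCP service:
# Kubernetes Secret: created in, and only visible to, the cluster
kubectl create secret generic db-creds --from-literal=password=s3cr3t
# Google Secret Manager: stored centrally, readable wherever IAM allows
gcloud secrets create db-password --replication-policy=automatic
echo -n "s3cr3t" | gcloud secrets versions add db-password --data-file=-
gcloud secrets versions access latest --secret=db-password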

How to set credentials to use GCP APIs from a Dataproc instance

I am trying to access some credentials stored in Google Secret Manager. To access them, credentials need to be set up on the cluster machine where the jar is running.
I have SSHed into the master instance and seen that nothing is configured for GOOGLE_APPLICATION_CREDENTIALS.
I am curious to know how to assign GOOGLE_APPLICATION_CREDENTIALS, or any other alternative that allows using GCP APIs that require credentials.
If you are running on Dataproc clusters, the default GCE service account should already be configured for you. If your clusters are running outside the GCP environment, you will want to follow the instructions to manually set up a service account that has an editor/owner role for Google Secret Manager, then download the credential key file and point GOOGLE_APPLICATION_CREDENTIALS at it.
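For the outside-GCP case, a rough sketch of those steps (project and account names are placeholders, and the narrower roles/secretmanager.secretAccessor is assumed here instead of a broad editor/owner role):
gcloud iam service-accounts create dataproc-secrets-reader
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:dataproc-secrets-reader@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
gcloud iam service-accounts keys create key.json \
  --iam-account=dataproc-secrets-reader@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json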

Do we really need to bind an Oracle service in PCF? Can't we just use the credentials mentioned in the service?

I have a question: what is the difference if I just use an Oracle/MySQL service provided by PCF without binding it? What difference will it make? I can access the DB using the credentials anyway.
There are two differences that come to mind:
When you create a service through the Cloud Foundry marketplace, that will create backing resources for the service, but in most cases it does not create credentials. The act of binding a service to your app, in most cases with most service brokers, is what actually creates service credentials for you. When you unbind, again with most brokers, the service credentials are destroyed. This makes it easy to regenerate your service credentials: just unbind/rebind the service and restart your app. The net result is that if you don't bind, there are no credentials.
Most people do not want to include credentials with the actual application (see https://12factor.net/ for details why). They want to be able to provide configuration external to the app. On Cloud Foundry this commonly amounts to binding a service.
Having said that, how do you want to provide the credentials to your application?
Service bindings are there to try to make your life as a developer easier, but you don't have to use them. If you want to pass in the configuration some other way, such as via environment variables, a config file, or a config service (Spring Cloud Config Server or Vault), those are fine options too.
If you do not want to bind a service to your app, the only thing you'll need to do is to create a service key instead. A service key is like a binding, but not associated with an application. It will also generate a set of unique credentials. You can then take the credentials from your service key and feed them to your app in the way that works best for you.
For example:
cf create-service-key service-instance key-name
cf service-key service-instance key-name
The first command creates the service key; the second displays its credentials.
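If you then want to feed those credentials to your app as an environment variable, one hypothetical way (app and variable names are made up, and the URI format depends on your driver):
cf set-env my-app DATABASE_URI "oracle://<user>:<password>@<host>:1521/<service>"
cf restage my-app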

Azure DevOps Pipelines

I am new to working with Azure DevOps. I am trying to create a pipeline using Azure DevOps to deploy my Terraform code onto AWS. For authentication, I am aware that we can use service principals, but that would mean specifying my access and secret keys in Azure DevOps, which I do not want to do. So I wanted to check whether there are other ways of doing this.
For accessing/storing these kinds of secrets, you can try Azure Key Vault:
Store all your secrets in Azure Key Vault secrets.
When you want to access secrets:
Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies, then Add new.
In the Add access policy blade, choose Select principal and select the service principal for your client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are checked (ticked).
Choose OK to save the changes.
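The same access policy can also be granted from the command line; a sketch assuming a hypothetical vault name and service principal app ID:
az keyvault set-policy --name my-vault --spn <service-principal-app-id> --secret-permissions get list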
You can use:
Secure Azure DevOps variables or variable groups
Azure Key Vault
If you use a service principal, then you also need a password or certificate to authenticate. You could also try working with MSI (Managed Service Identity); in that case, AAD takes care of the secret storage.
If you don't want to store credentials in Azure DevOps itself, the best way is to store them in a credential store (Azure Key Vault) and access it through a service connection. I assume you are using YAML-based pipelines. If so, use the following steps to integrate your pipeline with the key vault.
Prerequisites:
An Azure key vault is set up and the keys are securely stored.
Steps:
In the pipeline's edit mode, click the kebab menu (three dots in the upper right corner) and select Triggers.
In the menu that opens, click the Variables tab and then Variable groups.
Open Manage variable groups in a new tab.
Click the + Variable group button to add a new variable group.
Give it a name and a description. Switch on the Link secrets from an Azure key vault as variables toggle.
Add a new service connection and, once authenticated, select the key vault name.
Now add variables into the variable group.
Once done, save the variable group, go back to the tab from step 2, and link the new variable group.
Once done, save the pipeline.
Important: You need to grant secret read permission to the service connection's service principal from your key vault.
Reference: Link secrets from an Azure key vault
Perhaps use the Azure DevOps Library > Variable groups to securely store your keys.
Alternatively, you may be able to use Project Settings > Service connections, perhaps using a credentials connection or a generic one.
Service principals are the industry standard for this case. You should create a specific service principal for Azure DevOps and limit its scope to only what's necessary.
You can write variables into your PowerShell script file and use a PowerShell task in your pipeline. Give the PowerShell file path to this task and pass in the variable names. It will work like a charm.
For a service principal connection, you need a service principal ID and a service principal key. The service principal ID is the same as the application ID, and the service principal key is found under Certificates & secrets.
You can use Azure Key Vault to store all your keys and secrets. Give your Azure pipeline permission to fetch keys from Key Vault.
The following link will guide you from scratch through developing a pipeline and fetching keys:
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/
The only method to truly not store AWS credentials in Azure/Azure DevOps would be to create a hosted build pool inside your AWS account. These machines have the Azure DevOps agent installed and registered to your organization and to a specific agent pool. Then add the needed permissions to the IAM instance profile attached to these build servers. When running your Terraform commands using this agent pool, Terraform will have access to the credentials on the instance. The same concept works for a container-based build pool in AWS ECS.
You can use a managed identity in your pipeline to authenticate with Azure Key Vault. You can read more about managed identities and Azure Key Vault in the Azure documentation.
You have to create a private key for the DevOps pipeline with limited services on your AWS machine,
store the key in the secure library of the DevOps pipeline, and,
from your AWS firewall, disable SSH connections from unknown IP addresses and whitelist the DevOps agents' IP addresses. To get the list of the IPs, check this link: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges

Connecting to AWS RDS from Java without exposing the password

I was able to connect to RDS successfully, like any other database connection.
I use Spring Data JPA (repositories) to do CRUD operations on a Postgres DB.
Currently, I provide the DB URL and the credentials in the properties file:
spring:
  datasource:
    url: jdbc:postgresql://<rds-endpoint>:5432/<dbschema>
    username: <dbuser>
    password: <dbpassword>
However, this is not an option when connecting to production or preproduction.
What is the best practice here?
Does AWS provide any built-in mechanism to read these details from an endpoint, as in the case of accessing S3?
My intention is to not expose the password.
Several options are available to you:
Use the recently announced IAM access to Postgres RDS
Use Systems Manager Parameter Store to store the password
Use Secrets Manager to store the password and automatically rotate credentials
For 2 and 3, look up the password on application start in Spring using a PropertyPlaceholderConfigurer and the AWSSimpleSystemsManagement client (GetParameter request). Systems Manager can proxy requests to Secrets Manager to keep a single interface in your code for accessing parameters.
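For example, here is what the lookups for options 2 and 3 look like with the AWS CLI (parameter and secret names are made up):
# Option 2: Systems Manager Parameter Store
aws ssm get-parameter --name /prod/db/password --with-decryption --query Parameter.Value --output text
# Option 3: Secrets Manager
aws secretsmanager get-secret-value --secret-id prod/db --query SecretString --output text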
IAM authentication is more secure in that:
If using EC2 instance profiles, access to the database uses short lived temporary credentials.
If not on EC2 you can generate short lived authentication tokens.
The password is not stored in your configuration.
If you have a separate database team they can manage access independent of the application user.
Removing access can be done via IAM.
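For option 1, a minimal sketch of generating a short-lived IAM auth token and using it as the Postgres password (endpoint, user, and region are placeholders; IAM DB authentication must be enabled on the instance and the connection must use SSL):
TOKEN=$(aws rds generate-db-auth-token --hostname <rds-endpoint> --port 5432 --username <dbuser> --region us-east-1)
PGPASSWORD="$TOKEN" psql "host=<rds-endpoint> port=5432 dbname=<dbschema> user=<dbuser> sslmode=require"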
Another generic option I found was to use AWS Secrets Manager
(doc link)
An RDS-specific solution is to connect to the DB instance using the AWS SDK with IAM DB authentication (IAMDBAuth).