Terraform provider using dynamic IP - google-cloud-platform

I have a tricky problem: I am using Terraform to create infrastructure in the cloud, and I want to use the IP of the load balancer created by GCP as the address for the Vault provider:
provider "vault" {
address = local.vault_add
token = ""
version = "~> 2.14.0"
}
However, terraform apply fails because it won't wait until the LB IP is generated; it tries to communicate with Vault using the default value, localhost. Is there any way to solve this problem without splitting the Vault configuration from the rest?

Not really - you will almost certainly find it best to configure Vault separately.
If you think about it for a moment, you will see that you have a chicken-and-egg situation: you want the Vault provider to pull secrets from Vault to support the creation of your infrastructure, but Vault doesn't exist yet, so there's nowhere to pull the secrets from. So you need Vault to set up your infrastructure, but you need to set up your infrastructure to have Vault.
Your best approach will be to set up Vault separately; then it will be running, unsealed, populated, and available to use in your other Terraform operations.

Related

Export data from OpenSearch in private VPC and import it to local running container - aws opensearch

I'm using AWS OpenSearch in a private VPC.
I have about 10,000 entries under some index.
For local development I'm running a local OpenSearch container, and I'd like to export all the entries from the OpenSearch service into my local container.
I can get all the entries from the OpenSearch API, but the format of the response is different from the format expected by the _bulk operation.
Can someone please tell me how I should do it?
Anna,
There are different strategies you can take to accomplish this, considering the fact that your domain is running in a private VPC.
Option 1: Exporting and Importing Snapshots
From a security standpoint, this is the recommended option, as you are moving entire indices out of the service without exposing the data. Please follow the official AWS documentation on how to create custom index snapshots. Once you complete the steps, you will have an index snapshot stored in an Amazon S3 bucket. After this, you can securely download the index snapshot to your local machine, then follow the instructions in the official OpenSearch documentation on how to restore index snapshots.
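Once you have downloaded the snapshot repository contents from S3 to your machine, the restore into the local container comes down to two REST calls: register the directory as an "fs" snapshot repository, then restore the snapshot. Below is a minimal Go sketch of those two calls; the repository name, snapshot name, and path are placeholders, and it assumes the downloaded directory is mounted into the container and listed under path.repo in opensearch.yml.

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	local := "http://localhost:9200" // local OpenSearch container

	// Register the downloaded snapshot directory as an "fs" repository.
	registerBody := []byte(`{"type":"fs","settings":{"location":"/usr/share/opensearch/snapshots"}}`)
	req, err := http.NewRequest(http.MethodPut, local+"/_snapshot/local-backup", bytes.NewReader(registerBody))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	regResp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	regResp.Body.Close()

	// Restore the snapshot named "my-snapshot" (placeholder) into the container.
	resp, err := http.Post(local+"/_snapshot/local-backup/my-snapshot/_restore", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("restore status:", resp.Status)
}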
Option 2: Using VPC Endpoints
Another way for you to export the data from your OpenSearch domain is to access it via an alternate endpoint using the VPC Endpoints feature of AWS OpenSearch. It allows you to expose additional endpoints running on public or private subnets within the same VPC, a different VPC, or different AWS accounts. In this case, you are essentially creating a way to access the OpenSearch REST APIs from outside the private VPC, so you need to take care of who else will be able to do so as well. Please follow the best practices for securing endpoints if you choose this option.
Option 3: Using the ElasticDump Open Source Utility
The ElasticDump utility allows you to retrieve data from Elasticsearch/OpenSearch clusters in a format of your preference, and then import that data back into another cluster. It is a very flexible way to move data around, but it requires the utility to have access to the cluster's REST API endpoints. Run this utility on a bastion server that has ingress access to your OpenSearch domain in the private VPC. Keep in mind, though, that AWS doesn't provide any support for this utility, and you use it at your own risk.
I hope that helps with your question. Let us know if you need any more help on this. 🙂

How to retrieve ARN through AWS SDK for Go

Is there a way to retrieve the ARN of a resource through the AWS SDK for Go? I created a couple of tables in DynamoDB and I want to retrieve their ARNs.
The ARN format:
arn:aws:service:region:account-id:resource-type:resource-id
How to retrieve the account-id and region via SDK for Go?
There is no generic way to get the region from the AWS SDK. By generic, I mean simple code that returns the correct AWS region for your service in ANY environment it is deployed to.
AWS assumes the opposite process. As a client, you are expected to know where your AWS resources are deployed, and you have to inject the region into the app that connects to AWS.
Think about your code running on your local machine in Europe, accessing AWS DynamoDB deployed in the us-east-2 region, or code that needs to copy data from a DB in region1 to a DB in region2. In both of these cases, the application cannot determine the correct region without a hint.
In many cases, though, the environment where your code is deployed can provide that hint.
A few examples:
For a local environment, you can configure a default region for the AWS SDK - https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-region. Your service picks up that region if you create the client using config.LoadDefaultConfig.
Another example is running your service on EC2. EC2 provides instance metadata that includes the current region and account. The current region can be requested using client.GetMetadata, and the GetInstanceIdentityDocument API returns both the Region and the Account ID.
If you control how your service is deployed, you can try to get the current Region and Account ID from the environment; otherwise, a common practice is to set environment variables with the Region and Account ID when you deploy your code.
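To make this concrete, here is a minimal sketch using the AWS SDK for Go v2 (the table name is a placeholder): it picks up the region from the default config, asks STS for the account ID, and, for the DynamoDB case from the question, simply reads the ARN that DescribeTable already returns, so you don't have to assemble it yourself.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/sts"
)

func main() {
	ctx := context.Background()

	// Region comes from the environment / shared config, as described above.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// The account ID can be looked up from STS.
	ident, err := sts.NewFromConfig(cfg).GetCallerIdentity(ctx, &sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("region:", cfg.Region, "account:", aws.ToString(ident.Account))

	// For DynamoDB, DescribeTable returns the full table ARN directly.
	// "my-table" is a placeholder.
	out, err := dynamodb.NewFromConfig(cfg).DescribeTable(ctx, &dynamodb.DescribeTableInput{
		TableName: aws.String("my-table"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("table ARN:", aws.ToString(out.Table.TableArn))
}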

get aws credentials from ec2 metadata services in Go

How can I make the Go SDK fetch the access keys for AWS from the Instance Metadata Service (169.254.169.254) provided by AWS?
I checked the official AWS SDK for Go documentation, and there only seem to be ways of fetching the access keys from environment variables, but no credentials retriever for the Instance Metadata Service.
How is this done in Go?
I checked the official AWS SDK for Go documentation, and there only seem to be ways of fetching the access keys from environment variables, but no credentials retriever for the Instance Metadata Service.
You just missed it. The Go SDK supports the instance metadata service as well as every other common credentials provider.
From https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html:
If you have configured your instance to use IAM roles, the SDK uses these credentials for your application automatically.
You don't have to do anything to configure this. It should just work. If you're having problems, make sure that you're not manually configuring some other credentials source.
Usually you don't have to do anything more than something like:
sess := session.Must(
    session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }),
)
And with or without CLI configuration, metadata service, or environment variables, it should just work wherever you run it.
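If you do want to point the SDK at the instance metadata service explicitly rather than relying on the default provider chain, here is a minimal sketch with the v1 SDK; the ec2rolecreds provider is the piece that talks to 169.254.169.254, so this only returns credentials when run on an EC2 instance with an attached IAM role.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// A plain session is used only to build the metadata client.
	sess := session.Must(session.NewSession())

	// ec2rolecreds fetches temporary credentials for the instance's IAM role
	// from the Instance Metadata Service.
	creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{
		Client: ec2metadata.New(sess),
	})

	// Attach the explicit provider to a config and pass it to service clients,
	// e.g. s3.New(sess, cfg).
	cfg := aws.NewConfig().WithCredentials(creds)
	_ = cfg

	v, err := creds.Get()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("credentials came from:", v.ProviderName)
}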

GKE Secrets OR Google Secret manager

Does anyone know in which cases to choose Kubernetes Secrets instead of Google Secret Manager, and vice versa? What are the differences between the two?
With Kubernetes Secrets (K8s Secrets), you use a built-in feature of Kubernetes. You load your secrets into Secret objects and mount them on the pods that require them.
PRO
If one day you want to deploy on AWS, Azure, or on-prem, still on Kubernetes, the behavior will be the same; there is no update to perform in your code.
CONS
The secrets are only accessible from within the Kubernetes cluster; it is impossible to reuse them with other GCP services.
Note: with GKE this is not a problem, because the etcd component is automatically encrypted with a key from the KMS service, keeping the secrets encrypted at rest. But it's not the same for every Kubernetes installation, especially on-premise ones, where the secrets may be kept in plain text. Be aware of this aspect of security.
Secret Manager is a vault managed by Google. You have an API to read and write secrets, and the IAM service checks authorization.
PRO
It's a Google Cloud service, and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized to.
CONS
It's a Google Cloud-specific product; you are locked in.
You can use them together via this sync service: https://external-secrets.io/
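If you go the Secret Manager route, reading a secret from application code is only a few lines of Go. Below is a minimal sketch, assuming the cloud.google.com/go/secretmanager client library (the project and secret names are placeholders, and the pb import path can differ between library versions); the client authenticates via Application Default Credentials, e.g. the GKE Workload Identity service account.

package main

import (
	"context"
	"fmt"
	"log"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

func main() {
	ctx := context.Background()

	// Authenticates with Application Default Credentials.
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "my-project" and "my-secret" are placeholders.
	resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: "projects/my-project/secrets/my-secret/versions/latest",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Payload.Data))
}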

Azure DevOps Pipelines

I am new to working with Azure DevOps. I am trying to create a pipeline using Azure DevOps for deploying my Terraform code onto AWS. For authentication, I am aware that we can use service principals, but that would mean I need to specify my access and secret keys in Azure DevOps, which I do not want to do. So I wanted to check if there are any other ways of doing this?
For accessing/storing these kinds of secrets, you can try Azure Key Vault.
Store all your secrets in Azure Key Vault secrets.
When you want to access secrets:
Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies, then Add new.
In the Add access policy blade, choose Select principal and select the service principal for your client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are checked (ticked).
Choose OK to save the changes.
Reference
You can use
Secure Azure DevOps Variables or Variable Groups
Azure Key Vault
If you use a Service Principal, then you need a password / certificate as well to authenticate. Maybe you can also try to work with MSI (Managed Service Identity). In that case, the AAD will take care of the secret storage.
If you don't want to store credentials in Azure DevOps itself, the best way is to store the credentials in a credential store (Azure Key Vault) and access it through a service connection. I assume that you are using YAML-based pipelines. If so, use the following steps to integrate your pipeline with the key vault.
Prerequisites:
Azure key vault is set up and keys are securely stored
Steps:
In the pipeline's edit mode, click the kebab menu (the three dots in the upper right corner) and select Triggers
In the menu that opens, click the Variables tab and then Variable Groups
Open Manage variable groups in a new tab
Click the + Variable group button to add a new variable group
Give a name and a description. Switch on the Link secrets from an Azure key vault as variables toggle.
Add a new service connection and once authenticated select the key vault name
Now add variables in to the variable group
Once done save the variable group and go back to the previous tab in step 2 and link the new variable group.
Once done save the pipeline
Important: You need to grant secret read permission to the service connection's service principal from your key vault.
Reference: Link secrets from an Azure key vault
Perhaps use the Azure DevOps Library > Variable groups to securely store your keys.
Alternatively, you may be able to use Project Settings > Service connections, perhaps using a credentials connection or a generic one.
Service principals are the industry standard for this case. You should create a specific service principal for Azure DevOps and limit its scope to only what's necessary.
You can write the variables into your PowerShell script file and use a PowerShell task in your pipeline. Give the PowerShell file path to this task and pass in the variable names. It will work like a charm.
For a service principal connection, you need a service principal ID and a service principal key. The service principal ID is the same as the application ID, and the service principal key can be found under Certificates & secrets.
You can use Azure Key Vault for storing all your keys and secrets. Give your Azure pipeline permission to fetch keys from the Key Vault.
The following link will guide you from scratch through developing a pipeline and fetching keys:
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/
The only way to truly avoid storing AWS credentials in Azure/Azure DevOps would be to create a self-hosted build pool inside your AWS account. These machines have the Azure DevOps agent installed and registered to your organization and to a specific agent pool. Then add the needed permissions to the IAM instance profile attached to these build servers. When you run your Terraform commands using this agent pool, Terraform will have access to the credentials on the instance. The same concept works for a container-based build pool in AWS ECS.
You can use Managed identity in your pipeline to authenticate with the Azure Key Vault.
You can read more on Managed Identity here and Azure Key Vault here
You have to create a private key for the DevOps pipeline with limited permissions on your AWS machine,
store the key in the secure library of the DevOps pipeline,
and, from your AWS firewall, disable SSH connections from unknown IP addresses and whitelist the DevOps agents' IP addresses. To get the list of IPs, check this link: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges