How to get ClientId from the AWS AppConfig service - amazon-web-services

I have created an application and a configuration profile backed by an S3 bucket in the AWS AppConfig service.
While trying to fetch configuration data from S3 through AppConfig, the parameters below have to be passed, but I don't see a ClientId anywhere in the AppConfig deployment process, and it is a mandatory field.
GetConfigurationRequest request = new GetConfigurationRequest();
request.setApplication("TEST");
request.setEnvironment("test-env");
request.setConfiguration("test-s3");
request.setClientId(""); // mandatory field
request.setClientConfigurationVersion("2");
GetConfigurationResult result = appConfig.getConfiguration(request);
Please help me understand how to get the ClientId and how to configure the AppConfig service in AWS.

From the AWS documentation:
The client-id parameter in the following command is a unique, user-specified ID to identify the client for the configuration.
A unique application instance identifier called a client ID.
You can give any unique client-id in the request, which lets you identify the source of the request.
This ID also enables AWS AppConfig to deploy the configuration in intervals, as defined in the deployment strategy.
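For illustration, here is a minimal sketch of the same call using boto3 and the names from the question; any stable, unique string works as the client ID, and a random UUID is just one convenient choice:

import uuid
import boto3

appconfig = boto3.client("appconfig")

response = appconfig.get_configuration(
    Application="TEST",
    Environment="test-env",
    Configuration="test-s3",
    # Any unique, user-chosen identifier for this client instance.
    ClientId=str(uuid.uuid4()),
    ClientConfigurationVersion="2",
)

# The configuration document stored in the S3 bucket.
print(response["Content"].read().decode("utf-8"))

If each instance of your application reuses its own ClientId across polls, AppConfig can use those IDs to roll the deployment out in intervals as described above.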

Related

Secret Manager access issues

I'm trying to incorporate Secret Manager with my projects for security but running into issues setting it up. I currently have a service account in project-b where I downloaded the JSON credential keys and have been using that to access my BigQuery table in my backend code.
My current setup:
I have project-a that uses Cloud Run to host my code.
I have project-b that uses BigQuery to hold some data for me.
From project-a, I'm trying to access the BigQuery table in project-b just like I've been doing with the JSON keys.
I keep running into this error:
PermissionDenied: 403 Permission 'secretmanager.versions.access' denied for resource 'projects/project-b/secrets/stockdata-secret/versions/1' (or it may not exist).
I have assigned the Secret Manager Secret Accessor and Secret Manager Viewer roles to a couple of my accounts but it still doesn't seem to work.
The client_email from the keys is set to the top service account in the screenshot below:
(Screenshot: permissions for the secret)
Here is the relevant part of my back-end code:
# Grabbing keys from Secret Manager; got this code from the Google docs
import os
from google.cloud import secretmanager

def access_secret_version(project_id, secret_id, version_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(request={"name": name})
    payload = response.payload.data.decode("UTF-8")
    return payload

# Routing to the page
@app.route('/projects/random-page')
def random_page():
    payload = access_secret_version("project-b", "stockdata-secret", "1")
    # Authenticating service account.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload

    # old way, which worked
    google_cloud_service_account = "creds.json"
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = google_cloud_service_account

GCP Airflow connection using Secret Manager

I am trying to add an Airflow connection for GCP (the service account key should be fetched from Secret Manager), but in my Airflow UI (version 2.1.4) I couldn't find an option for adding it using Secret Manager. Is it because of a version problem?
If so, can we add the Airflow connection (using Secret Manager) via the command line (gcloud) or programmatically?
I tried via the command line, but it throws the error below:
gcloud composer environments run project_id --location europe-west2 connections add -- edw_test --conn-type=google_cloud_platform --conn-extra '{"extra__google_cloud_platform__project": "proejct", "extra__google_cloud_platform__key_secret_name": "test_edw","extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for europe-west2--902058d8-gke.
Unable to connect to the server: dial tcp 172.16.10.2:443: i/o timeout
ERROR: (gcloud.composer.environments.run) kubectl returned non-zero status code.
I have upgraded both the Composer and Airflow versions, which paved the way for creating the Airflow connection while keeping the keys in Secret Manager.
You can do this by configuring airflow to use Secret Manager as a secrets backend. For this to work, however, the service account you use to access the backend needs to have permission to access secrets.
Secrets Backend
For example, you can set the value directly in airflow.cfg:
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
Via environment variable:
export AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
Creating Connection
Then you can create a secret directly in Secret Manager. If you have configured your Airflow instance to use Secret Manager as the secrets backend, it will pick up any secrets that have the correct prefix.
The default prefixes are:
airflow-connections
airflow-variables
airflow-config
In your case, you would create a secret named airflow-connections-edw_test, and set the value to google-cloud-platform://?extra__google_cloud_platform__project=project&extra__google_cloud_platform__key_secret_name=test_edw&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform
Note that the parameters have to be URL-encoded.
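As an illustration, here is a minimal Python sketch that builds the URL-encoded connection URI and stores it as that secret; it assumes the google-cloud-secret-manager client library, a placeholder project ID of my-project, and the extra parameters from the command in the question:

from urllib.parse import urlencode
from google.cloud import secretmanager

# Connection extras taken from the question; the values are placeholders.
extras = {
    "extra__google_cloud_platform__project": "project",
    "extra__google_cloud_platform__key_secret_name": "test_edw",
    "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform",
}
# urlencode() takes care of the required URL encoding.
conn_uri = "google-cloud-platform://?" + urlencode(extras)

client = secretmanager.SecretManagerServiceClient()
parent = "projects/my-project"  # assumed project ID

secret = client.create_secret(
    request={
        "parent": parent,
        "secret_id": "airflow-connections-edw_test",
        "secret": {"replication": {"automatic": {}}},
    }
)
client.add_secret_version(
    request={"parent": secret.name, "payload": {"data": conn_uri.encode("utf-8")}}
)

With the secrets backend enabled, Airflow should then resolve the connection ID edw_test from this secret.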
More info:
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/secrets-backends/google-cloud-secret-manager-backend.html#enabling-the-secret-backend
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/connections/gcp.html

"The caller does not have permission" when signing GCP storage file URL from AWS lambda

I am trying to sign a URL for GCP Storage from AWS EC2 or Lambda. I have generated a JSON file for permissions, providing my AWS account ID and the role that is given to EC2 or Lambda. When I call the sign-URL code, even with the Storage Admin or Owner permission, I get: Error: The caller does not have permission.
I used the code provided by GCP documentation.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

const options = {
  version: 'v4',
  action: 'read',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
};

// Get a v4 signed URL for reading the file
const [url] = await storage
  .bucket(bucketName)
  .file(fileName)
  .getSignedUrl(options);
Can anybody tell me what I missed? What is wrong?
Update:
I am creating a service account, granting it Storage Admin on my project, then creating a pool in Workload Identity Pools, setting AWS as the provider with my AWS account ID, then granting access to my AWS identities matching the role, downloading the JSON, and setting the environment variables GOOGLE_APPLICATION_CREDENTIALS (the path to my JSON file) and GOOGLE_CLOUD_PROJECT (my project ID). How do I correctly load that clientLibraryConfig.json file to run the functions I need?
Update 2:
My clientLibraryConfig JSON has the following content:
{
  "type": "external_account",
  "audience": "..",
  "subject_token_type": "..",
  "service_account_impersonation_url": "..",
  "token_url": "..",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "..",
    "url": "..",
    "regional_cred_verification_url": ".."
  }
}
How can I generate an access token from this config file in the Node.js SDK to access GCP Storage from AWS EC2?
You have to set up the following permissions for the IAM service account:
Storage Object Creator: This is to create signed URLs.
Service Account Token Creator role: This role enables impersonation
of service accounts to create OAuth2 access tokens, sign blobs, or sign JWTs.
Also, you can try to sign the URL locally in GCP with the service account first.
You can use an existing private key for a service account. The key can be in JSON or PKCS12 format.
Use the command gsutil signurl and pass the path to the private key from the previous step, along with the name of the bucket and object.
For example, if you use a key stored in the folder Desktop, the following command will generate a signed URL for users to view the object cat.jpeg for 10 minutes.
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg
If successful, the response should look like this:
URL HTTP Method Expiration Signed URL
gs://example-bucket/cat.jpeg GET 2018-10-26 15:19:52 https://storage.googleapis.
com/example-bucket/cat.jpeg?x-goog-signature=2d2a6f5055eb004b8690b9479883292ae74
50cdc15f17d7f99bc49b916f9e7429106ed7e5858ae6b4ab0bbbdb1a8ccc364dad3a0da2caebd308
87a70c5b2569d089ceb8afbde3eed4dff5116f0db5483998c175980991fe899fbd2cd8cb813b0016
5e8d56e0a8aa7b3d7a12ee1baa8400611040f05b50a1a8eab5ba223fe5375747748de950ec7a4dc5
0f8382a6ffd49941c42498d7daa703d9a414d4475154d0e7edaa92d4f2507d92c1f7e811a7cab64d
f68b5df4857589259d8d0bdb5dc752bdf07bd162d98ff2924f2e4a26fa6b3cede73ad5333c47d146
a21c2ab2d97115986a12c28ff37346d6c2ca83e5618ec8ad95632710b489b75c35697d781c38e&
x-goog-algorithm=GOOG4-RSA-SHA256&x-goog-credential=example%40example-project.
iam.gserviceaccount.com%2F20181026%2Fus%2Fstorage%2Fgoog4_request&x-goog-date=
20201026T211942Z&x-goog-expires=3600&x-goog-signedheaders=host
The signed URL is the string that starts with https://storage.googleapis.com, and it is likely to span multiple lines. Anyone can use the URL to access the associated resource (in this case, cat.jpeg) during the designated time frame (in this case, 10 minutes).
So if this works locally, then you can start configuring Workload Identity Federation to impersonate your service account. In this link, you will find a guide to deploy it.
To access resources from AWS using Workload Identity Federation, you will need to check that the following requirements have already been configured:
The workload identity pool has been created.
AWS has been added as an identity provider in the workload identity
pool (The Google organization policy needs to allow federation from
AWS).
The permissions to impersonate a service account have been granted to the external account.
I will add this guide to configure the Workload Identity Federation.
Once the previous requirements have been completed, you will need to generate the service account credential configuration. This file only contains non-sensitive metadata that instructs the library on how to retrieve external subject tokens and exchange them for service account tokens. As you mentioned, the file could be a config.json, and it can be generated by running the following command:
# Generate an AWS configuration file.
gcloud iam workload-identity-pools create-cred-config \
projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
--service-account $SERVICE_ACCOUNT_EMAIL \
--aws \
--output-file /path/to/generated/config.json
Where the following variables need to be substituted:
$PROJECT_NUMBER: The Google Cloud project number.
$POOL_ID: The workload identity pool ID.
$AWS_PROVIDER_ID: The AWS provider ID.
$SERVICE_ACCOUNT_EMAIL: The email of the service account to
impersonate.
Once you generate the JSON credentials configuration file for your external identity, you can store the path at the GOOGLE_APPLICATION_CREDENTIALS environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/config.json
So, with this, the library can automatically choose the right type of client and initialize the credential from the configuration file. Please note that the service account will also need the roles/browser role when using external identities with Application Default Credentials in Node.js, or you can pass the project ID to avoid the need to grant roles/browser to the service account, as shown in the code below:
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
    // Pass the project ID explicitly to avoid the need to grant `roles/browser` to the service account
    // or enable the Cloud Resource Manager API on the project.
    projectId: 'CLOUD_RESOURCE_PROJECT_ID',
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  // List all buckets in a project.
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({ url });
  console.log(res.data);
}

main();

AWS EC2 | using Rusoto SDK: Couldn't find AWS credentials

I am trying to work with the new Instance Metadata Service Version 2 (IMDSv2) API.
It works as expected when I try to query the metadata manually as described on Retrieve instance metadata - Amazon Elastic Compute Cloud.
However, if I try to query for the instance tags, it fails with the error message:
Couldn't find AWS credentials in environment, credentials file, or IAM role
The tags query is done by the Rusoto SDK that I am using, which works when I set --http-tokens to optional, as described in Configure the instance metadata options - Amazon Elastic Compute Cloud.
I don't fully understand why setting the machine to work with IMDSv2 would affect the DescribeTags request, as I believe it's not using the same API - so I am guessing that's a side effect.
If I try and do a manual query using curl (instead of using the SDK):
https://ec2.amazonaws.com/?Action=DescribeTags&Filter.1.Name=resource-id&Filter.1.Value.1=ami-1a2b3c4d
I get:
The action DescribeTags is not valid for this web service
Thanks :)
The library that I was using (Rusoto SDK 0.47.0) doesn't support fetching the credentials needed when the host is set to work with the IMDSv2.
The workaround was to manually query for the IAM role credentials.
First, you get a session token (IMDSv2 requires a PUT request with a TTL header, X-aws-ec2-metadata-token-ttl-seconds):
PUT /latest/api/token
Next, use the token in the "X-aws-ec2-metadata-token" header and query the attached role name:
GET /latest/meta-data/iam/security-credentials/
Afterwards, use the role name from the previous query (and don't forget to set the token header), and query:
GET /latest/meta-data/iam/security-credentials/<query 2 result>
This will provide the following data:
#[derive(serde::Deserialize)]
struct SecurityCredentials {
    #[serde(rename = "AccessKeyId")]
    access_key_id: String,
    #[serde(rename = "SecretAccessKey")]
    secret_access_key: String,
    #[serde(rename = "Token")]
    token: String,
}
Then what I needed to do was to build a custom credentials provider using that data (but this part is library-specific).
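For reference, here is a minimal sketch of that manual flow in Python using the requests library; the endpoints are the standard IMDSv2 ones, and the 21600-second token TTL is an arbitrary choice:

import requests

IMDS = "http://169.254.169.254"

# Step 1: request an IMDSv2 session token (a PUT with a TTL header).
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text

headers = {"X-aws-ec2-metadata-token": token}

# Step 2: look up the name of the IAM role attached to the instance.
role = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/", headers=headers
).text.strip()

# Step 3: fetch the temporary credentials for that role.
creds = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{role}", headers=headers
).json()

print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"])

The JSON returned in the last step is what the SecurityCredentials struct above deserializes.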

Getting error "No RegionEndpoint or ServiceURL configured" on EC2 instance even if IAM roles are set up in .NET web application

I've configured IAM roles for my different services on the EC2 server with the help of the link below: IAM Role Setup. According to the AWS docs, after setting an IAM role we don't need any credentials stored in our application; it takes the credential details from the EC2 instance metadata.
However, I got an error when I removed the AWS keys from my web.config: "No RegionEndpoint or ServiceURL configured". After I added a region endpoint entry to my web.config, it started working.
<add key="AWSRegion" value="us-east-1" />
Please note that in another application on the same server, where I am accessing only AWS DynamoDB, it works without adding a region endpoint entry in the config. Any kind of help is appreciated. Thank you in advance.
The IAM role is only for fetching credentials from the metadata server, not for the region you are trying to connect to. So you still have to specify the region in the config file, even though the credentials can be omitted. Some services default to a region (like us-east-1), but many expect the region to be configured or passed when creating a client object.
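The same behavior shows up in other SDKs; for illustration, a minimal Python sketch with boto3 (the bucket name is a placeholder), where the credentials come from the instance role but the region still has to be supplied when no default is configured:

import boto3

# Credentials are resolved from the EC2 instance role via the metadata service,
# but the region is not; without a configured default it must be passed explicitly.
s3 = boto3.client("s3", region_name="us-east-1")

# Placeholder bucket name, for illustration only.
for obj in s3.list_objects_v2(Bucket="my-example-bucket").get("Contents", []):
    print(obj["Key"])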