How to send SMS in AWS Pinpoint

API URL requested: https://pinpoint.us-east-1.amazonaws.com/v1/apps/c29387d21e1744d682f6f7a0803327c8/messages
Request body:
{
  "Context": {},
  "MessageConfiguration": {
    "SMSMessage": {
      "Body": "string",
      "Substitutions": {},
      "SenderId": "string",
      "MessageType": "TRANSACTIONAL"
    }
  },
  "Addresses": {},
  "Endpoints": {"destinations": "+91xxxxxxxxxxx"}
}
We have to send SMS using the AWS Pinpoint service; has anyone here worked with its REST APIs?

You can call the Pinpoint REST API using the AWS CLI (details: https://aws.amazon.com/cli/).
Documentation with example requests is provided here: https://docs.aws.amazon.com/pinpoint/latest/apireference/welcome.html
Here are the step-by-step instructions:
1. Go to the IAM Management Console and find the user associated with the account you want to use.
2. Find the AccessKey and SecretKey associated with that IAM user.
3. Configure your AWS CLI using the credentials found in step 2.
Here is a CLI example:
aws --region=$REGION_ARG pinpoint send-messages --application-id $APP_ID --message-request "{
  \"Addresses\": {
    \"$PHONE_NUMBER\": {
      \"ChannelType\": \"SMS\"
    }
  },
  \"MessageConfiguration\": {
    \"SMSMessage\": {
      \"Body\": \"MSS Test\",
      \"MessageType\": \"TRANSACTIONAL\"
    }
  }
}"

Related

GitLab connection to GCP Workload Identity Federation returns invalid_grant

Yesterday I saw that GitLab has enabled OIDC JWT tokens for jobs in CI/CD. I know that CI_JOB_JWT_V2 is marked as an alpha feature.
I was trying to use it with Workload Identity Federation (WIF) on a GitLab runner with the gcloud CLI, but I'm getting an error. When I tried to do it through the STS API, I got the same error. What am I missing?
{
  "error": "invalid_grant",
  "error_description": "The audience in ID Token [https://gitlab.com] does not match the expected audience."
}
My GitLab JWT token after decoding looks mostly like this (of course, without the details):
{
  "namespace_id": "1111111111",
  "namespace_path": "xxxxxxx/yyyyyyyy/zzzzzzzzzzz",
  "project_id": "<project_id>",
  "project_path": "xxxxxxx/yyyyyyyy/zzzzzzzzzzz/hf_service",
  "user_id": "<user_id>",
  "user_login": "<username>",
  "user_email": "<user_email>",
  "pipeline_id": "456971569",
  "pipeline_source": "push",
  "job_id": "2019605390",
  "ref": "develop",
  "ref_type": "branch",
  "ref_protected": "true",
  "environment": "develop",
  "environment_protected": "false",
  "jti": "<jti>",
  "iss": "https://gitlab.com",
  "iat": <number>,
  "nbf": <number>,
  "exp": <number>,
  "sub": "project_path:xxxxxxx/yyyyyyyy/zzzzzzzzzzz/hf_service:ref_type:branch:ref:develop",
  "aud": "https://gitlab.com"
}
In the GCP console I have a WIF pool with one OIDC provider named gitlab, with the issuer URL taken from https://gitlab.com/.well-known/openid-configuration.
I have tried giving the Service Account access to the whole pool, but it made no difference. The config created for this SA looks like below:
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/<projectnumber>/locations/global/workloadIdentityPools/<poolname>/providers/gitlab",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/gitlab-deployer@<projectid>.iam.gserviceaccount.com:generateAccessToken",
  "credential_source": {
    "file": "gitlab_token",
    "format": {
      "type": "text"
    }
  }
}
By default, Workload Identity Federation expects the aud claim to contain the URL of the workload identity pool provider. This URL looks like this:
https://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID
But your token uses https://gitlab.com as the audience.
Either reconfigure GitLab to use the workload identity pool provider URL as the audience, or reconfigure the pool provider to accept a custom audience by running:
gcloud iam workload-identity-pools providers update-oidc ... \
--allowed-audiences=https://gitlab.com
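
To confirm which audience the runner's token actually carries before changing any configuration, you can decode it locally. Here is a minimal sketch using the PyJWT library (an assumption; any JWT decoder works). Signature verification is deliberately skipped because we only want to inspect the claims:

import jwt  # PyJWT

# Assumes the job token (CI_JOB_JWT_V2) has been written to the file
# gitlab_token, as in the credential_source config above.
with open("gitlab_token") as f:
    token = f.read().strip()

# Decode without verifying the signature; we only want to read the claims.
claims = jwt.decode(token, options={"verify_signature": False})

expected = ("https://iam.googleapis.com/projects/<projectnumber>/locations/"
            "global/workloadIdentityPools/<poolname>/providers/gitlab")
print("aud in token:", claims["aud"])
print("matches expected provider URL:", claims["aud"] == expected)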

How do I use Hashicorp Vault in Cloud Foundry

So I have a Node.js web service which I push into Cloud Foundry (PCF). I store some credentials in Vault; when a user hits my web service endpoint with credentials, I extract the stored credentials from Vault, compare them against the credentials from the request, and if they match I allow the request to be processed, otherwise I reject it.
To install Vault in PCF I use this command:
cf create-service hashicorp-vault shared foo-vault
Then I create a key using this command:
cf create-service-key foo-vault foo-vault-key
Then I bind the service to the app like this:
cf bind-service foo-ws foo-vault
Then I restage the web service:
cf restage foo-ws
When I print the environment variables, I get these values:
{
  "hashicorp-vault": [{
    "credentials": {
      "address": "http://somehost:433/",
      "auth": {
        "accessor": "kMr3iCSlekSN2d1vpPjbjzUk",
        "token": "some token"
      },
      "backends": {
        "generic": [
          "cf/7f1a12a9-4a52-4151-bc96-874380d30182/secret",
          "cf/c4073566-baee-48ae-88e9-7c7c7e0118eb/secret"
        ],
        "transit": [
          "cf/7f1a12a9-4a52-4151-bc96-874380d30182/transit",
          "cf/c4073566-baee-48ae-88e9-7c7c7e0118eb/transit"
        ]
      },
      "backends_shared": {
        "organization": "cf/8d4b992f-cca3-4876-94e0-e49170eafb67/secret",
        "space": "cf/bdace353-e813-4efb-8122-58b9bd98e3ab/secret"
      }
    },
    "label": "hashicorp-vault",
    "name": "my-vault",
    "plan": "shared",
    "provider": null,
    "syslog_drain_url": null,
    "tags": [],
    "volume_mounts": []
  }]
}
So my question is: is there a way to define the backends, token, and address?
Thanks in advance for your help.
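
For reference, a minimal sketch of how an application could consume these values at runtime; the address, token, and backend paths all come from the VCAP_SERVICES environment variable that the binding above populates. The hvac client library is an assumption here; any Vault client would do:

import json
import os

import hvac  # assumed Vault client library

# Cloud Foundry injects bound service credentials via VCAP_SERVICES.
vcap = json.loads(os.environ["VCAP_SERVICES"])
creds = vcap["hashicorp-vault"][0]["credentials"]

client = hvac.Client(url=creds["address"], token=creds["auth"]["token"])

# Read a secret from the first generic backend dedicated to this app;
# the "my-credentials" path is hypothetical.
secret = client.read(creds["backends"]["generic"][0] + "/my-credentials")
print(secret)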

How do I list deleted secrets in AWS Secrets Manager?

Looking at the man page for list-secrets, there are no special options to show deleted secrets, and it does not list them. However, the output definition includes a "DeletedDate" timestamp.
The ListSecrets API does not show any option for deleted secrets either, but again the response includes a DeletedDate.
The boto3 docs for list_secrets() are the same.
However, in the AWS console I can see deleted secrets. A quick look at the dev tools shows that my request payload to the Secrets Manager endpoint looks like this:
{
  "method": "POST",
  "path": "/",
  "headers": {
    "Content-Type": "application/x-amz-json-1.1",
    "X-Amz-Target": "secretsmanager.ListSecrets",
    "X-Amz-Date": "Fri, 27 Nov 2020 13:19:06 GMT"
  },
  "operation": "ListSecrets",
  "content": {
    "MaxResults": 100,
    "IncludeDeleted": true,
    "SortOrder": "asc"
  },
  "region": "eu-west-2"
}
Is there any way to pass "IncludeDeleted": true to the CLI?
Is this a bug? Where do I report it? (I know there is a CloudFormation bug tracker on GitHub; I assume Secrets Manager has something similar somewhere?)
Save the following file to ~/.aws/models/secretsmanager/2017-10-17/service-2.sdk-extras.json:
{
  "version": 1.0,
  "merge": {
    "shapes": {
      "ListSecretsRequest": {
        "members": {
          "IncludeDeleted": {
            "shape": "BooleanType",
            "documentation": "<p>If set, includes secrets that are disabled.</p>"
          }
        }
      }
    }
  }
}
Then you can list secrets with the CLI as follows:
aws secretsmanager list-secrets --include-deleted
or with boto3:
import boto3

def list_secrets(session, **kwargs):
    client = session.client("secretsmanager")
    for page in client.get_paginator("list_secrets").paginate(**kwargs):
        yield from page["SecretList"]

if __name__ == "__main__":
    session = boto3.Session()
    for secret in list_secrets(session, IncludeDeleted=True):
        if "DeletedDate" in secret:
            print(secret)
This uses the botocore loader mechanism to augment the service model for Secrets Manager and tell boto3 that "IncludeDeleted" is a parameter for the ListSecrets API.
If you want more detail, I've just posted a blog post explaining what else I tried and how I got to this solution; thanks to the OP, whose dev-tools experiments were a useful clue.

How to find endpoint URL of an API Gateway in AWS

I have made a few API Gateways in AWS.
At this point, I would like to curl the endpoint of each API Gateway in order to start testing, but I cannot find the endpoint URL.
For example, via the CLI, I ran:
aws --profile dev apigateway get-rest-apis --output json
I get this (with no endpoint URL):
"apiKeySource": "HEADER",
"description": "API one",
"endpointConfiguration": {
"types": [
"REGIONAL"
]
},
"createdDate": 1570655986,
"policy": "{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\\"*\\\",\\\"Action\\\":\\\"execute-api:Invoke\\\",\\\"Resource\\\":\\\"arn:aws:FOOBAR\\/*\\/*\\/*\\\",\\\"Condition\\\":{\\\"IpAddress\\\":{\\\"aws:SourceIp\\\":[\\\"0.0.0.0\\/0\\\""]}}}]}",
"id": "foobar",
"name": "foobar"
}
You have to deploy the APIs first. You will then find the HTTPS invoke URL in the deployed stage. Alternatively, you can use a custom domain name.
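
The invoke URL of a deployed stage follows the pattern https://{rest-api-id}.execute-api.{region}.amazonaws.com/{stage}. Here is a small boto3 sketch that assembles it for every API and stage; the region is an assumption, substitute your own:

import boto3

REGION = "us-east-1"  # assumption: use your API's region
client = boto3.client("apigateway", region_name=REGION)

# List every REST API, then every deployed stage, and print the invoke URL.
for api in client.get_rest_apis()["items"]:
    for stage in client.get_stages(restApiId=api["id"])["item"]:
        print(f"https://{api['id']}.execute-api.{REGION}.amazonaws.com/{stage['stageName']}")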

Backing up API Keys for recovery

I am designing and implementing a backup plan to restore my client API keys. How do I go about this?
To speed up the recovery process, I am trying to create a backup plan for taking a backup of client API keys, probably in S3 or locally. I have been scratching my head for the past 2 days on how to achieve this. Maybe some Python script which will take the values from API Gateway and dump them into a new S3 bucket, but I am not sure how to implement this.
You can get the list of all API Gateway API keys using apigateway get-api-keys. Here is the full AWS CLI command:
aws apigateway get-api-keys --include-values
Note that --include-values is required; otherwise the actual API key values will not be included in the result.
It will display the result in the format below:
"items": [
{
"id": "j90yk1111",
"value": "AAAAAAAABBBBBBBBBBBCCCCCCCCCC",
"name": "MyKey1",
"description": "My Key1",
"enabled": true,
"createdDate": 1528350587,
"lastUpdatedDate": 1528352704,
"stageKeys": []
},
{
"id": "rqi9xxxxx",
"value": "Kw6Oqo91nv5g5K7rrrrrrrrrrrrrrr",
"name": "MyKey2",
"description": "My Key 2",
"enabled": true,
"createdDate": 1528406927,
"lastUpdatedDate": 1528406927,
"stageKeys": []
},
{
"id": "lse3o7xxxx",
"value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
"name": "MyKey3",
"description": "My Key 3",
"enabled": true,
"createdDate": 1528406609,
"lastUpdatedDate": 1528406609,
"stageKeys": []
}
}
]
To get the details of a single API key, use the AWS CLI command below:
aws apigateway get-api-key --include-value --api-key lse3o7xxxx
It should display the result below:
{
  "id": "lse3o7xxxx",
  "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
  "name": "MyKey3",
  "description": "My Key 3",
  "enabled": true,
  "createdDate": 1528406609,
  "lastUpdatedDate": 1528406609,
  "stageKeys": []
}
Similar to the get-api-keys call, --include-value is required here; otherwise the actual API key value will not be included in the result.
Now you need to convert the output into a format which can be saved to S3 and later imported back into API Gateway.
You can import keys with import-api-keys:
aws apigateway import-api-keys --body <value> --format <value>
--body (blob)
The payload of the POST request to import API keys.
--format (string)
A query parameter to specify the input format of the imported API keys. Currently, only the CSV format is supported: --format csv
The simplest style has only two fields, e.g. Key,name:
Key,name
apikey1234abcdefghij0123456789,MyFirstApiKey
You can see the full details of the formats in API Gateway API Key File Format.
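
For completeness, a minimal restore sketch: it rebuilds the two-field CSV from a backed-up get-api-keys dump in S3 (the bucket and object key names are hypothetical) and feeds it to import-api-keys via boto3:

import json

import boto3

s3 = boto3.client("s3")
apigw = boto3.client("apigateway")

# Hypothetical bucket/key where the get-api-keys dump was saved.
obj = s3.get_object(Bucket="my-apikey-backup-bucket", Key="api-keys.json")
items = json.loads(obj["Body"].read())["items"]

# Build the two-field CSV format accepted by import-api-keys.
csv_body = "key,name\n" + "\n".join(f"{i['value']},{i['name']}" for i in items)

apigw.import_api_keys(body=csv_body.encode(), format="csv")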
I have implemented this in Python, using a Lambda for backing up the API keys, with boto3 calls similar to the answer above.
However, I am still looking for a way to trigger the Lambda on an "API key added/removed" event :-)
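
A sketch of what such a backup Lambda could look like; the bucket name is a placeholder, and the function assumes its role allows apigateway:GET and s3:PutObject:

import json

import boto3

BACKUP_BUCKET = "my-apikey-backup-bucket"  # placeholder bucket name

def lambda_handler(event, context):
    apigw = boto3.client("apigateway")
    s3 = boto3.client("s3")

    # Page through all API keys, including their actual values.
    items = []
    for page in apigw.get_paginator("get_api_keys").paginate(includeValues=True):
        items.extend(page["items"])

    # Store the dump as one JSON object; default=str handles datetimes.
    s3.put_object(
        Bucket=BACKUP_BUCKET,
        Key="api-keys.json",
        Body=json.dumps({"items": items}, default=str).encode(),
    )
    return {"backed_up": len(items)}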