In AWS AppConfig I created the application Testing and the configuration profile TestingFlags. I want to create a hosted configuration version using the CLI.
aws appconfig create-hosted-configuration-version --application-id APP_ID --configuration-profile-id PROF_ID --content eyJ0ZXN0IjogeyJlbmFibGVkIjogZmFsc2V9fQ== --content-type "application/json" out
As a result I got the following error:
An error occurred (BadRequestException) when calling the CreateHostedConfigurationVersion operation: Error invoking extension AppConfig Feature Flags Helper: Invalid 'Content' data
P.S. eyJ0ZXN0IjogeyJlbmFibGVkIjogZmFsc2V9fQ== is the base64 encoding of the string {"test": {"enabled": false}}
Probably the content needs to be structured in some specific way. I have read this guide: https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html#appconfig-creating-feature-flag-configuration-commandline
I would appreciate any help.
The documentation example was incorrect. The correct form is:
aws appconfig create-hosted-configuration-version --application-id APP_ID --configuration-profile-id PROF_ID --content BASE_64_CONTENT --content-type application/json output.json
where BASE_64_CONTENT is base64-encoded data of the form:
{
    "flags": {
        "flagkey": {
            "name": "flagkey"
        }
    },
    "values": {
        "flagkey": {
            "enabled": false
        }
    },
    "version": "1"
}
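For reference, here is a minimal sketch of the same call using boto3 (APP_ID and PROF_ID are placeholders). boto3 accepts the raw JSON bytes, so no base64 encoding is needed there:

import json

import boto3

client = boto3.client("appconfig")

# Feature flag document in the schema shown above.
flags_doc = {
    "flags": {"flagkey": {"name": "flagkey"}},
    "values": {"flagkey": {"enabled": False}},
    "version": "1",
}

response = client.create_hosted_configuration_version(
    ApplicationId="APP_ID",            # placeholder
    ConfigurationProfileId="PROF_ID",  # placeholder
    Content=json.dumps(flags_doc).encode("utf-8"),
    ContentType="application/json",
)
print(response["VersionNumber"])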
Thanks for the great package!
I have a problem when developing with LocalStack, using the S3 service to create a presigned POST URL.
I run LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
My settings are AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I made sure the bucket exists:
> awslocal s3api list-buckets
{
    "Buckets": [
        {
            "Name": "my-bucket",
            "CreationDate": "2021-11-16T08:43:23+00:00"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}
I try to create the presigned URL by running this in the console:
s3_client_sync.create_presigned_post(
    bucket_name=settings.S3_Bucket,
    object_name="application/test.png",
    fields={"Content-Type": "image/png"},
    conditions=[["Expires", 3600]],
)
and it returns this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I tested it using Insomnia,
and read the log in LocalStack:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing, so that I cannot create the presigned POST URL?
The problem is with your AWS configuration:
AWS_ACCESS_KEY_ID=test // should be an actual access key for the IAM user
AWS_SECRET_ACCESS_KEY=test // should be an actual secret key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // endpoint seems wrong
S3_Bucket=my-bucket // actual bucket name in the AWS S3 console
For more information, read here and set up your environment with the correct AWS credentials: Setup AWS Credentials
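As a side note, here is a minimal boto3 sketch of generating a presigned POST against LocalStack. The endpoint, test credentials, and bucket name are taken from the question; constraining the Content-Type field instead of the ["Expires", 3600] condition is my assumption:

import boto3

# Assumed LocalStack endpoint and test credentials from the question.
s3_client = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Constrain only fields that the form will actually post.
response = s3_client.generate_presigned_post(
    Bucket="my-bucket",
    Key="application/test.png",
    Fields={"Content-Type": "image/png"},
    Conditions=[{"Content-Type": "image/png"}],
    ExpiresIn=3600,
)
print(response["url"], response["fields"])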
Looking at the man page for list-secrets, there is no special option to show deleted secrets or not, and it does not list deleted secrets. However, the output definition includes a "DeletedDate" timestamp.
The ListSecrets API does not show any option for deleted secrets either, but again the response includes a DeletedDate.
The boto3 docs for list_secrets() are the same.
However, in the AWS console I can see deleted secrets. A quick look at the dev tools shows that my request payload to the Secrets Manager endpoint looks like:
{
    "method": "POST",
    "path": "/",
    "headers": {
        "Content-Type": "application/x-amz-json-1.1",
        "X-Amz-Target": "secretsmanager.ListSecrets",
        "X-Amz-Date": "Fri, 27 Nov 2020 13:19:06 GMT"
    },
    "operation": "ListSecrets",
    "content": {
        "MaxResults": 100,
        "IncludeDeleted": true,
        "SortOrder": "asc"
    },
    "region": "eu-west-2"
}
Is there any way to pass "IncludeDeleted": true to the CLI?
Is this a bug? Where do I report it? (I know there is a cloudformation bug tracker on github, I assume secretsmanager would have something similar somewhere..?)
Save the following file to ~/.aws/models/secretsmanager/2017-10-17/service-2.sdk-extras.json:
{
    "version": 1.0,
    "merge": {
        "shapes": {
            "ListSecretsRequest": {
                "members": {
                    "IncludeDeleted": {
                        "shape": "BooleanType",
                        "documentation": "<p>If set, includes secrets that are disabled.</p>"
                    }
                }
            }
        }
    }
}
Then you can list secrets with the CLI as follows:
aws secretsmanager list-secrets --include-deleted
or with boto3:
import boto3


def list_secrets(session, **kwargs):
    client = session.client("secretsmanager")
    # Extra keyword arguments (e.g. IncludeDeleted=True) are passed straight to the API.
    for page in client.get_paginator("list_secrets").paginate(**kwargs):
        yield from page["SecretList"]


if __name__ == "__main__":
    session = boto3.Session()
    for secret in list_secrets(session, IncludeDeleted=True):
        if "DeletedDate" in secret:
            print(secret)
This uses the botocore loader mechanism to augment the service model for Secrets Manager and tell boto3 that "IncludeDeleted" is a parameter for the ListSecrets API.
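If you want to verify that botocore actually picked up the sdk-extras file, a quick sketch:

import botocore.session

session = botocore.session.get_session()
model = session.get_service_model("secretsmanager")
operation = model.operation_model("ListSecrets")
# If the extras file was merged, "IncludeDeleted" appears among the input members.
print(list(operation.input_shape.members))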
If you want more detail, I've just posted a blog post explaining what else I tried and how I got to this solution – and thanks to OP, whose dev tool experiments were a useful clue.
I am designing and implementing a backup plan to restore my client API keys. How do I go about this?
To speed up the recovery process, I am trying to create a backup plan for taking a backup of the client API keys, probably in S3 or locally. I have been scratching my head for the past two days over how to achieve this. Maybe some Python script or something that takes the values from API Gateway and dumps them into a new S3 bucket? But I am not sure how to implement this.
You can get the list of all API Gateway API keys using apigateway get-api-keys. Here is the full AWS CLI command:
aws apigateway get-api-keys --include-values
Remember, --include-values is required; otherwise the actual API key values will not be included in the result.
It will display the result in the format below:
{
    "items": [
        {
            "id": "j90yk1111",
            "value": "AAAAAAAABBBBBBBBBBBCCCCCCCCCC",
            "name": "MyKey1",
            "description": "My Key1",
            "enabled": true,
            "createdDate": 1528350587,
            "lastUpdatedDate": 1528352704,
            "stageKeys": []
        },
        {
            "id": "rqi9xxxxx",
            "value": "Kw6Oqo91nv5g5K7rrrrrrrrrrrrrrr",
            "name": "MyKey2",
            "description": "My Key 2",
            "enabled": true,
            "createdDate": 1528406927,
            "lastUpdatedDate": 1528406927,
            "stageKeys": []
        },
        {
            "id": "lse3o7xxxx",
            "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
            "name": "MyKey3",
            "description": "My Key 3",
            "enabled": true,
            "createdDate": 1528406609,
            "lastUpdatedDate": 1528406609,
            "stageKeys": []
        }
    ]
}
To get the details of a single API key, use the AWS CLI command below:
aws apigateway get-api-key --include-value --api-key lse3o7xxxx
It should display the result below:
{
    "id": "lse3o7xxxx",
    "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
    "name": "MyKey3",
    "description": "My Key 3",
    "enabled": true,
    "createdDate": 1528406609,
    "lastUpdatedDate": 1528406609,
    "stageKeys": []
}
As with the get-api-keys call, --include-value is required here; otherwise the actual API key value will not be included in the result.
Now you need to convert the output into a format that can be saved to S3 and later imported back into API Gateway; a sketch of that conversion follows the format notes below.
You can import keys with import-api-keys
aws apigateway import-api-keys --body <value> --format <value>
--body (blob)
The payload of the POST request used to import API keys.
--format (string)
A query parameter to specify the input format of the imported API keys. Currently, only the CSV format is supported: --format csv
The simplest style has two fields only, e.g. Key,name:
Key,name
apikey1234abcdefghij0123456789,MyFirstApiKey
You can see the full detail of formats from API Gateway API Key File Format.
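As a rough sketch of the conversion step mentioned above, here is one way to dump all keys to the two-field CSV format and save it to S3 with boto3 (the bucket name my-api-key-backups is a placeholder):

import csv
import io

import boto3

apigw = boto3.client("apigateway")
s3 = boto3.client("s3")

# Collect all keys, including their values (equivalent to --include-values).
rows = []
for page in apigw.get_paginator("get_api_keys").paginate(includeValues=True):
    for item in page["items"]:
        rows.append((item["value"], item["name"]))

# Write the two-field CSV format accepted by import-api-keys.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Key", "name"])
writer.writerows(rows)

# "my-api-key-backups" is a placeholder bucket name.
s3.put_object(Bucket="my-api-key-backups", Key="api-keys.csv", Body=buffer.getvalue())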
I have implemented this in Python, using a Lambda to back up the API keys, with boto3 calls similar to the answer above.
However, I am still looking for a way to trigger the Lambda on an "API key added/removed" event :-)
API URL requested: https://pinpoint.us-east-1.amazonaws.com/v1/apps/c29387d21e1744d682f6f7a0803327c8/messages
Request body:
{
    "Context": {},
    "MessageConfiguration": {
        "SMSMessage": {
            "Body": "string",
            "Substitutions": {},
            "SenderId": "string",
            "MessageType": "TRANSACTIONAL"
        }
    },
    "Addresses": {},
    "Endpoints": {"destinations": "+91xxxxxxxxxxx"}
}
We have to send SMS using the AWS Pinpoint service, so I would like to hear from anyone who has worked with its REST APIs.
You can call the Pinpoint REST API using the AWS CLI; details: https://aws.amazon.com/cli/
Documentation with example requests is provided here: https://docs.aws.amazon.com/pinpoint/latest/apireference/welcome.html
Here are the step-by-step instructions:
Go to the IAM Management Console and find the user associated with the account you want to use.
Find the access key and secret key associated with that IAM user.
Configure your AWS CLI using the credentials found in step 2.
Here is a CLI example:
aws --region=$REGION_ARG pinpoint send-messages --application-id $APP_ID --message-request "{
    \"Addresses\": {
        \"$PHONE_NUMBER\": {
            \"ChannelType\": \"SMS\"
        }
    },
    \"MessageConfiguration\": {
        \"SMSMessage\": {
            \"Body\": \"MSS Test\",
            \"MessageType\": \"TRANSACTIONAL\"
        }
    }
}"
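The equivalent call with boto3, as a sketch (the application ID and phone number are placeholders):

import boto3

APP_ID = "YOUR_PINPOINT_APP_ID"  # placeholder
PHONE_NUMBER = "+91xxxxxxxxxxx"  # placeholder

client = boto3.client("pinpoint", region_name="us-east-1")

response = client.send_messages(
    ApplicationId=APP_ID,
    MessageRequest={
        "Addresses": {PHONE_NUMBER: {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {"Body": "MSS Test", "MessageType": "TRANSACTIONAL"}
        },
    },
)
print(response["MessageResponse"]["Result"][PHONE_NUMBER]["StatusCode"])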
I am trying to upload files to Amazon Web Services, but I am getting the error shown below, because of which the files are not being uploaded to the server:
{
    "data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"]</Message><RequestId>7A20103396D365B2</RequestId><HostId>xxxxxxxxxxxxxxxxxxxxx</HostId></Error>",
    "status": 403,
    "config": {
        "method": "POST",
        "transformRequest": [null],
        "transformResponse": [null],
        "url": "https://giblib-verification.s3.amazonaws.com/",
        "data": {
            "key": "#usc.edu/1466552912155.jpg",
            "AWSAccessKeyId": "xxxxxxxxxxxxxxxxx",
            "acl": "private",
            "policy": "xxxxxxxxxxxxxxxxxxxxxxx",
            "signature": "xxxxxxxxxxxxxxxxxxxx",
            "Content-Type": "image/jpeg",
            "file": "file:///storage/emulated/0/Android/data/com.ionicframework.giblibion719511/cache/1466552912155.jpg"
        },
        "_isDigested": true,
        "_chunkSize": null,
        "headers": {
            "Accept": "application/json, text/plain, */*"
        },
        "_deferred": {
            "promise": {}
        }
    },
    "statusText": "Forbidden"
}
Can anyone tell me the reason for the 403 Forbidden response? Thanks in advance.
You need to provide more details. Which client are you using? From the looks of it, there is a policy that explicitly denies this upload.
It looks like your user does not have the proper permissions for that specific S3 bucket. Use the AWS console or IAM to assign the proper permissions, including write access.
More importantly, immediately invalidate the key/secret pair and rename the bucket. Never share actual keys or passwords on public sites. Someone is likely already using your credentials as you read this.
Read the error: Invalid according to Policy: Policy Condition failed: ["starts-with", "$filename", ""].
Your policy document imposes a restriction on the upload that you are not meeting, and S3 is essentially denying the upload because you told it to.
There is no reason to include this condition in your signed policy document. According to the documentation, it means the form must include a field called filename (whose value may be anything, since the required prefix is empty). But there is no such field in your form. Remove this condition from your policy and the upload should work.
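As a sketch, if you build the policy server-side with boto3's generate_presigned_post, simply leave that condition out and constrain only the fields your form actually posts (the bucket and key here are from the question; everything else is an assumption):

import boto3

s3 = boto3.client("s3")

# No ["starts-with", "$filename", ""] condition: the policy constrains
# only fields that the form actually posts.
response = s3.generate_presigned_post(
    Bucket="giblib-verification",
    Key="#usc.edu/1466552912155.jpg",
    Fields={"acl": "private", "Content-Type": "image/jpeg"},
    Conditions=[{"acl": "private"}, {"Content-Type": "image/jpeg"}],
    ExpiresIn=3600,
)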