AWS: update Kinesis Firehose configuration programmatically - amazon-web-services

Currently I am writing a test library to test configuration settings. I would like to set only a few Firehose parameters, such as SizeInMBs and IntervalInSeconds, while all other parameters remain the same. Is there a simple way to do this?

I wrote the following method:
def set_firehose_buffering_hints(self, size_mb, interval_sec):
    # Look up the current stream so we can reuse its version id,
    # destination id, and the existing Lambda processor ARN.
    response = self._firehose_client.describe_delivery_stream(
        DeliveryStreamName=self.firehose)
    lambda_arn = (response['DeliveryStreamDescription']
                  ['Destinations'][0]['ExtendedS3DestinationDescription']
                  ['ProcessingConfiguration']['Processors'][0]
                  ['Parameters'][0]['ParameterValue'])
    # Update the buffering hints; the processor parameters are re-specified
    # so the existing Lambda processing configuration is preserved.
    response = self._firehose_client.update_destination(
        DeliveryStreamName=self.firehose,
        CurrentDeliveryStreamVersionId=response['DeliveryStreamDescription']['VersionId'],
        DestinationId=response['DeliveryStreamDescription']['Destinations'][0]['DestinationId'],
        ExtendedS3DestinationUpdate={
            'BufferingHints': {
                'IntervalInSeconds': interval_sec,
                'SizeInMBs': size_mb
            },
            'ProcessingConfiguration': {
                'Processors': [{
                    'Type': 'Lambda',
                    'Parameters': [
                        {
                            'ParameterName': 'LambdaArn',
                            'ParameterValue': lambda_arn
                        },
                        {
                            'ParameterName': 'BufferIntervalInSeconds',
                            'ParameterValue': str(interval_sec)
                        },
                        {
                            'ParameterName': 'BufferSizeInMBs',
                            'ParameterValue': str(size_mb)
                        }
                    ]
                }]
            }
        })
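For reference, a minimal sketch of how this method might sit in a test helper (the class name, stream name, and wiring here are illustrative assumptions, not part of the original code):

import boto3

class FirehoseTestHelper:
    def __init__(self, stream_name):
        self.firehose = stream_name                       # delivery stream under test
        self._firehose_client = boto3.client('firehose')  # boto3 Firehose client

    # set_firehose_buffering_hints(...) as defined above

# helper = FirehoseTestHelper('my-delivery-stream')
# helper.set_firehose_buffering_hints(size_mb=5, interval_sec=60)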

Related

Get Dimensions for USAGE_TYPE AWS Boto3 CostExplorer Client

I'm trying to get costs using the CostExplorer client in boto3, but I can't find the values to use as a Dimension filter. The documentation says we can extract those values with GetDimensionValues, but how do I use GetDimensionValues?
response = client.get_cost_and_usage(
    TimePeriod={
        'Start': str(start_time).split()[0],
        'End': str(end_time).split()[0]
    },
    Granularity='DAILY',
    Filter={
        'Dimensions': {
            'Key': 'USAGE_TYPE',
            'Values': [
                'DataTransfer-In-Bytes'
            ]
        }
    },
    Metrics=[
        'NetUnblendedCost',
    ],
    GroupBy=[
        {
            'Type': 'DIMENSION',
            'Key': 'SERVICE'
        },
    ]
)
The boto3 reference for GetDimensionValues has a lot of details on how to use that call. Here's some sample code you might use to print out possible dimension values:
response = client.get_dimension_values(
    TimePeriod={
        'Start': '2022-01-01',
        'End': '2022-06-01'
    },
    Dimension='USAGE_TYPE',
    Context='COST_AND_USAGE',
)
for dimension_value in response["DimensionValues"]:
    print(dimension_value["Value"])
Output:
APN1-Catalog-Request
APN1-DataTransfer-Out-Bytes
APN1-Requests-Tier1
APN2-Catalog-Request
APN2-DataTransfer-Out-Bytes
APN2-Requests-Tier1
APS1-Catalog-Request
APS1-DataTransfer-Out-Bytes
.....
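Note that the results can span multiple pages. A hedged sketch of collecting every value with NextPageToken (same client and time period as above):

values = []
kwargs = {
    'TimePeriod': {'Start': '2022-01-01', 'End': '2022-06-01'},
    'Dimension': 'USAGE_TYPE',
    'Context': 'COST_AND_USAGE',
}
while True:
    response = client.get_dimension_values(**kwargs)
    values.extend(dv['Value'] for dv in response['DimensionValues'])
    token = response.get('NextPageToken')
    if not token:
        break
    kwargs['NextPageToken'] = token  # fetch the next page
print(len(values), 'usage types found')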

AppFlow upsert error: ID does not exist in the destination connector

Creating an AppFlow flow from an S3 bucket to Salesforce through CDK with the upsert option.
Using an existing connection from S3 to Salesforce:
new appflow.CfnConnectorProfile(this, 'Connector', {
    "connectionMode": "Public",
    "connectorProfileName": "connection_name",
    "connectorType": "Salesforce"
})
Destination flow code:
new appflow.CfnFlow(this, 'Flow', {
    destinationFlowConfigList: [
        {
            "connectorProfileName": "connection_name",
            "connectorType": "Salesforce",
            "destinationConnectorProperties": {
                "salesforce": {
                    "errorHandlingConfig": {
                        "bucketName": "bucket-name",
                        "bucketPrefix": "subfolder",
                    },
                    "idFieldNames": [
                        "ID"
                    ],
                    "object": "object_name",
                    "writeOperationType": "UPSERT"
                }
            }
        }
    ],
    ..... other props ....
}
tasks: [
    {
        "taskType": "Filter",
        "sourceFields": [
            "ID",
            "Some other fields",
            ...
        ],
        "connectorOperator": {
            "salesforce": "PROJECTION"
        }
    },
    {
        "taskType": "Map",
        "sourceFields": [
            "ID"
        ],
        "taskProperties": [
            {
                "key": "SOURCE_DATA_TYPE",
                "value": "Text"
            },
            {
                "key": "DESTINATION_DATA_TYPE",
                "value": "Text"
            }
        ],
        "destinationField": "ID",
        "connectorOperator": {
            "salesforce": "PROJECTION"
        }
    },
    {
        .... some other mapping fields.....
    }
But the problem is: "Invalid request provided: AWS::AppFlow::Flow Create Flow request failed: [ID does not exist in the destination connector]"
Given this error, how do I fix the problem with the existing connector that results in "ID does not exist in the destination connector"?
PS: ID is defined in the flow code, but it still says ID is not found.
I think your last connector operator should be:
"connectorOperator": {
"salesforce":"NO_OP"
}
instead of:
"connectorOperator": {
"salesforce":"PROJECTION"
}
since you are mapping the field ID into itself without any transformations whatsoever.
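In context, the ID mapping task would then look something like this (a sketch of the snippet above with only the operator changed):

{
    "taskType": "Map",
    "sourceFields": [
        "ID"
    ],
    "taskProperties": [
        {
            "key": "SOURCE_DATA_TYPE",
            "value": "Text"
        },
        {
            "key": "DESTINATION_DATA_TYPE",
            "value": "Text"
        }
    ],
    "destinationField": "ID",
    "connectorOperator": {
        "salesforce": "NO_OP"
    }
}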

How to determine if an AWS S3 bucket has at least one public object?

Let's suppose that I have a bucket with many folders and objects.
This bucket's access status is "Objects can be public". If I want to know whether there is at least one public object, or to list all public objects, how should I do this? Is there any way to do this automatically?
It appears that you would need to loop through every object and call GetObjectAcl().
You'd preferably do it in a programming language, but here is an example with the AWS CLI:
aws s3api get-object-acl --bucket my-bucket --key foo.txt
{
    "Owner": {
        "DisplayName": "...",
        "ID": "..."
    },
    "Grants": [
        {
            "Grantee": {
                "DisplayName": "...",
                "ID": "...",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        }
    ]
}
I granted the READ permission by using Make Public in the S3 management console. Please note that objects could also be made public via a Bucket Policy, which would not show up in the ACL.
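As a complementary check for the policy case, the GetBucketPolicyStatus API reports whether the bucket policy makes the bucket public. A sketch with boto3 (the bucket name is illustrative):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    status = s3.get_bucket_policy_status(Bucket='my-bucket')
    print('Policy makes bucket public:', status['PolicyStatus']['IsPublic'])
except ClientError as e:
    # Buckets with no policy attached raise NoSuchBucketPolicy
    if e.response['Error']['Code'] == 'NoSuchBucketPolicy':
        print('No bucket policy attached')
    else:
        raise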
To do this with JavaScript, use the listObjects method from the AWS SDK: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property. There is an SDK for every major language (Java etc.); use the one you know.
var params = {
    Bucket: "examplebucket",
    MaxKeys: 2
};
s3.listObjectsV2(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
        // bucket isn't empty
        if (data.Contents.length != 0)
            console.log(data); // successful response
    }
    /*
    data = {
        Contents: [
            {
                ETag: "\"70ee1738b6b21e2c8a43f3a5ab0eee71\"",
                Key: "happyface.jpg",
                LastModified: <Date Representation>,
                Size: 11,
                StorageClass: "STANDARD"
            },
            {
                ETag: "\"becf17f89c30367a9a44495d62ed521a-1\"",
                Key: "test.jpg",
                LastModified: <Date Representation>,
                Size: 4192256,
                StorageClass: "STANDARD"
            }
        ],
        IsTruncated: true,
        KeyCount: 2,
        MaxKeys: 2,
        Name: "examplebucket",
        NextContinuationToken: "1w41l63U0xa8q7smH50vCxyTQqdxo69O3EmK28Bi5PcROI4wI/EyIJg==",
        Prefix: ""
    }
    */
});
Building off of John's answer, you might find this helpful:
import concurrent.futures

import boto3

BUCKETS = [
    "TODO"
]

def get_num_objs(bucket):
    num_objs = 0
    s3_client = boto3.client("s3")
    paginator = s3_client.get_paginator("list_objects_v2")
    for res in paginator.paginate(Bucket=bucket):
        if "Contents" not in res:
            print(f"""No contents in res={res}""")
            continue
        num_objs += len(res["Contents"])
    return num_objs

for BUCKET in BUCKETS:
    print(f"Analyzing bucket={BUCKET}...")
    num_objs = get_num_objs(BUCKET)
    print(f"BUCKET={BUCKET} has num_objs={num_objs}")
    # if num_objs > 10_000:
    #     raise Exception(f"num_objs={num_objs}")

    s3_client = boto3.client("s3")

    def assert_no_public_obj(res):
        if res["ResponseMetadata"]["HTTPStatusCode"] != 200:
            raise Exception(res)
        if "Contents" not in res:
            print(f"""No contents in res={res}""")
            return
        print(f"""Fetched page with {len(res["Contents"])} objs...""")
        for i, obj in enumerate(res["Contents"]):
            if i % 100 == 0:
                print(f"""Fetching {i}-th obj in page...""")
            res = s3_client.get_object_acl(Bucket=BUCKET, Key=obj["Key"])
            for grant in res["Grants"]:
                # Amazon S3 considers a bucket or object ACL public if it grants
                # any permissions to members of the predefined AllUsers or
                # AuthenticatedUsers groups.
                # https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html#access-control-block-public-access-policy-status
                uri = grant["Grantee"].get("URI")
                if not uri:
                    continue
                if "AllUsers" in uri or "AuthenticatedUsers" in uri:
                    raise Exception(f"""Grantee={grant["Grantee"]} found for {BUCKET}/{obj["Key"]}""")

    paginator = s3_client.get_paginator("list_objects_v2")
    with concurrent.futures.ThreadPoolExecutor() as executor:
        for res in paginator.paginate(Bucket=BUCKET):
            executor.submit(assert_no_public_obj, res)

How to decrypt an encrypted payload in Hyperledger?

I am currently using Hyperledger Fabric, and I am using the REST API to make a GET request like so:
curl 172.18.0.3:7050/chain/blocks/31
And the output I am getting back is:
{
    "transactions": [
        {
            "type": 2,
            "chaincodeID": "BMBQHHg2y0RnadYEaZZT8icjMvZbDPjkn5mFb+clFORxJqz8qsMs/QlalCT+A3msuc59KYM5sbZyhM3OeSplTWo91WAHTUgqIKVrm1gUzsouBIqLNvpqgimN36+s0ywF0Rx4gn27RmQYBbB+877Nh+w7A8Ezz92T1MgHcmzfRgVaDmiN0ga+jAfufNYglmeM4ZSysmSsz6xJtrcD5mTmHXZtvtw6uGCI1TCOMBaWTpLhNHfM2/5EB5jatdMjDi1GAlaXkDWcLgGjScL1yZpWcntz/N0cT90r6i9ycXZ0kk9wodBq2cFutDTdkl8S90kzd0gXig==",
            "payload": "BBYZD6S/hRILcf21zVbhMAhA+qLQvAq+KvOBuXOknPCAMjas2LI9f42AKG6r+uWP71LYEkbo1XXANuDmukZjDsFGltzoIfq+Mry5n/CNXzXgiVLX0J7z08kGfEfw2vnywgmVFX4UtKPpl8pMTmRxJWn5Q0HY1pFnA6ZaXluoLRf7f17Ko4SPahi19k2NszcJ0SHE7xRllfLXZJxaOlT2J56nqjTBKTJ86bdqn6AdQXHA6Px7yz5XpgJhccyecaLS4sYcsrqHoOlO+kk+bw5Q6qnkHfIIhLXCEgHxKoT00L8I8B2luO1RlmQd4mNfXb7GrLOJXvCNPrcpSEmQDByEGwn1j3Zy0lilwKVaNYTPNThMwQ==",
            "txid": "72bd2ab7-f769-49c9-a754-c7be0c481cf0",
            "timestamp": {
                "seconds": 1496062124,
                "nanos": 474977395
            },
            "confidentialityLevel": 1,
            "confidentialityProtocolVersion": "1.2",
            "nonce": "2YgU+0WYPuTKGsKkT1hx7McOURPTIRgG",
            "toValidators": "BJWJi5aSycSaJBaLIciUxlhZNyRsW6es2pO7ljUmqxP2SLzgUJtDtAeG8S5SMq+RQ9iX9m8+HIUocrD2J1MBTJaxPWcs/dYFNp1zi8k1ogbEuIQJDe/Gb0mbYVoBqGgFjofiE2lrZTO+RBVmUBQkAoybloOMUSfMawpOPTt/cIeNBq3M+t6gbTSl0ZVs5ofITWtonwhG8PNnlZwEmTLkC7evX1ImivMqo47ONxHXJlbbtjf+pL5kaqU5DrXWiv2L6Wt0xc11od4rbotnAQP2w2dqKTy2fj4ON6qCBp8i+t2FRi/iO0INJpI0aDjdkVCR",
            "cert": "MIICUjCCAfigAwIBAgIRAJOBK8HG3E/Pmw8fZwL4iuswCgYIKoZIzj0EAwMwMTELMAkGA1UEBhMCVVMxFDASBgNVBAoTC0h5cGVybGVkZ2VyMQwwCgYDVQQDEwN0Y2EwHhcNMTcwNTI5MTEyNzQwWhcNMTcwODI3MTEyNzQwWjBFMQswCQYDVQQGEwJVUzEUMBIGA1UEChMLSHlwZXJsZWRnZXIxIDAeBgNVBAMTF1RyYW5zYWN0aW9uIENlcnRpZmljYXRlMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6QVLJ48eCVlS1S8/BiSTU1XiWR0tZ6NGF3OZr306sTcgG/nYtcjx6/yJNwDgdYz5Boi7sA2QWUcqUkWfIPNWPKOB3DCB2TAOBgNVHQ8BAf8EBAMCB4AwDAYDVR0TAQH/BAIwADANBgNVHQ4EBgQEAQIDBDAPBgNVHSMECDAGgAQBAgMEME0GBioDBAUGBwEB/wRArYyx9l4zJL4TbxDHuGZBsJ545Jsph/D/Q/FgMTTtxPh93B+LV6AI1tyFVHWiKNS4GgvDVlmgfwFuMAca+/PaujBKBgYqAwQFBggEQPEdAS1h/9LJJmqriV+42k0bL+ghGFbHa5GiEAitiMjlduiwgfelPK/rbAq0a6NrnPXCEYe1aWCSqyqsEfHGBoIwCgYIKoZIzj0EAwMDSAAwRQIhAOidYaESZ3xyZBTgcBOm3zyXvGb4YCCt7I7+M0gZF4xzAiAgYuCf7FPGx3fnJdABlZjszA1pR6jaPtIOQN2ndfAFZA==",
            "signature": "MEUCIQCVBtfjk3yzwfOFyOojH5tynq3HrG7dFN9URXB5C6kYDAIgLPcwJBAIVlD1I4dxzczfxmywlZn1ZMSvL2djioWgqFQ="
        }
    ],
    "stateHash": "9KEsiBp4t/VZyETXMASSYtuPuf8JowktCSbX7daPt69uqDzrJvifrPIXpI5N1kOayoq6H0afM8zN/WZpWsesHQ==",
    "previousBlockHash": "v6Fo6SARD0xdE0B/jvIq22kgV5uLAKhTwLjrA4YRBskWcZOjECFbNgzlwFQhEmbar1zcAbcZVo9eo/3tx2y68g==",
    "consensusMetadata": "CCA=",
    "nonHashData": {
        "localLedgerCommitTimestamp": {
            "seconds": 1496062125,
            "nanos": 496018341
        },
        "chaincodeEvents": [
            {},
            {}
        ]
    }
}
So I had performed an invoke to transfer 10 from a to b, and I got this payload.
The payload is encrypted because CORE_SECURITY_ENABLED=true and CORE_SECURITY_PRIVACY=true are set.
I know we have to use the certificate to decrypt the payload, and then perhaps base64-decode it to get the exact payload back.
But my question is: what are the exact function calls or exact steps involved in doing so?
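No complete recipe is given here, but for the outer layer at least the payload field is base64. A minimal sketch (assuming the block above has been parsed into a dict named block) strips that layer, leaving ciphertext that still requires the transaction certificate/key material to decrypt:

import base64

# 'block' is assumed to be the parsed JSON object from /chain/blocks/31
payload_b64 = block["transactions"][0]["payload"]
ciphertext = base64.b64decode(payload_b64)
# Still unreadable: with CORE_SECURITY_PRIVACY=true these bytes remain
# encrypted until decrypted with the appropriate chain key material.
print(len(ciphertext), "bytes of ciphertext")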

How to pass a list to a nested stack parameter in AWS CloudFormation?

I'm using a nested stack to create ELB and application stacks, and I need to pass a list of subnets to the ELB and application stacks.
The main JSON has the code below:
"Mappings":{
"params":{
"Subnets": {
"dev":[
"subnet-1”,
"subnet-2”
],
"test":[
"subnet-3”,
"subnet-4”,
"subnet-5”,
"subnet-6”
],
"prod":[
"subnet-7”,
"subnet-8”,
"subnet-9”
]
}
}
},
"Parameters":{
"Environment":{
"AllowedValues":[
"prod",
"preprod",
"dev"
],
"Default":"prod",
"Description":"What environment type is it (prod, preprod, test, dev)?",
"Type":"String"
}
},
Resources:{
    "ELBStack": {
        "Type": "AWS::CloudFormation::Stack",
        "Properties": {
            "TemplateURL": {
                "Fn::Join": [
                    "",
                    [
                        "https://s3.amazonaws.com/",
                        "myS3bucket",
                        "/ELB.json"
                    ]
                ]
            },
            "Parameters": {
                "Environment": {"Ref": "Environment"},
                "ELBSHORTNAME": {"Ref": "ELBSHORTNAME"},
                "Subnets": {"Fn::FindInMap": [
                    "params",
                    "Subnets",
                    {"Ref": "Environment"}
                ]},
                "S3Bucket": {"Ref": "S3Bucket"},
            },
            "TimeoutInMinutes": "60"
        }
    }
}
Now, when I run this JSON using Lambda or CloudFormation, I get the below error under the CloudFormation Events tab:
CREATE_FAILED AWS::CloudFormation::Stack ELBStack Value of property Parameters must be an object with String (or simple type) properties
This is the Lambda function I am using:
import boto3
import time

date = time.strftime("%Y%m%d")
time = time.strftime("%H%M%S")
stackname = 'FulfillSNSELB'
client = boto3.client('cloudformation')
response = client.create_stack(
    StackName=(stackname + '-' + date + '-' + time),
    TemplateURL='https://s3.amazonaws.com/****/**/myapp.json',
    Parameters=[
        {
            'ParameterKey': 'Environment',
            'ParameterValue': 'dev',
            'UsePreviousValue': False
        }
    ]
)

def lambda_handler(event, context):
    return(response)
You can't pass a list to a nested stack. You have to pass a concatenation of items with the intrinsic function Join like this: !Join ["separator", [item1, item2, …]].
In the nested stack, the type of the parameter needs to be List<Type>.
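Applied to the template above, the parameter passed to the nested stack might look like this (a sketch; ELB.json would then declare its Subnets parameter with a list type such as List<AWS::EC2::Subnet::Id> or CommaDelimitedList):

"Subnets": {
    "Fn::Join": [
        ",",
        {"Fn::FindInMap": [
            "params",
            "Subnets",
            {"Ref": "Environment"}
        ]}
    ]
},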
Your JSON is not well-formed. Running your JSON through aws cloudformation validate-template (or even jsonlint.com) quickly reveals several basic syntax errors:
Resources:{ requires the key to be surrounded by quotes: "Resources": {
Some of your quotation marks are invalid 'smart quotes' ("subnet-1”) and need to be replaced with standard ASCII quotes: "subnet-1",
(This is the one your error message refers to.) The "Properties" object in your "ELBStack" resource has a trailing comma after its last element ("S3Bucket": {"Ref": "S3Bucket"},) that needs to be removed: "S3Bucket": {"Ref": "S3Bucket"}