boto3 DMS enable CloudWatch logs - amazon-web-services

I am writing scripts in Python that create DMS tasks using the boto3 package. Is there any way of programmatically enabling CloudWatch logging for the tasks? I can't find an option to do this with the create_replication_task function.

You can achieve this by passing ReplicationTaskSettings in your create_replication_task call. It is an optional parameter that takes the task settings as a JSON string. You need to add the following to your task settings:
"Logging": {
"EnableLogging": true
}
That way, you can enable CloudWatch logging while creating the task from Python using Boto3.
A sample request would be as follows:
import boto3

client = boto3.client('dms')
response = client.create_replication_task(
    ReplicationTaskIdentifier='string',
    SourceEndpointArn='string',
    TargetEndpointArn='string',
    ReplicationInstanceArn='string',
    MigrationType='full-load'|'cdc'|'full-load-and-cdc',
    TableMappings='string',
    ReplicationTaskSettings="{\"Logging\": {\"EnableLogging\": true}}",
)
Reference to create_replication_task API is here:
AWS SDK for Python - Boto3 - AWS DMS - Create Replication Task API
Reference to ReplicationTaskSettings parameter is here:
AWS SDK for Python - Boto3 - AWS DMS - Create Replication Task API - Replication Task Settings

Related

How can I register an OpenTelemetry Lambda extension?

I am following this repo https://github.com/open-telemetry/opentelemetry-lambda/blob/main/collector/README.md to deploy a Lambda with an OpenTelemetry extension.
I have built the repo and created a Lambda layer by uploading the file nodejs/packages/layer/build/layer.zip. Then I created a Lambda that uses this layer and added two environment variables:
AWS_LAMBDA_EXEC_WRAPPER = /opt/otel-handler
OPENTELEMETRY_COLLECTOR_CONFIG_FILE = /var/task/collector.yaml
I created a file collector.yaml under project root directory:
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: http://localhost
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp]
When I run the Lambda, I get this error:
2022-12-14T11:50:07.070+11:00 Registering OpenTelemetry
2022-12-14T11:50:07.098+11:00 Exporter "otlp" requested through environment variable is unavailable.
2022-12-14T11:50:07.122+11:00 2022-12-14T00:50:07.121Z undefined WARN Failed extracting version /var/task
It says otlp is unavailable. Am I missing anything? I am not sure what this means.
To enable OpenTelemetry in your AWS Lambda functions using custom layers, besides providing the two environment variables you described, you also need to add the custom layer to the function manually. You can do this using the AWS CLI:
aws lambda update-function-configuration --function-name Function --layers <your Lambda layer ARN>
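If you prefer to do this from Python like the rest of your tooling, a rough boto3 equivalent might look like the sketch below (the function name and layer ARN are placeholders; note that both Layers and Environment replace the existing values rather than merging with them):

import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_function_configuration(
    FunctionName='Function',                                         # placeholder
    Layers=['arn:aws:lambda:REGION:ACCOUNT:layer:my-otel-layer:1'],  # placeholder layer ARN
    Environment={
        'Variables': {
            'AWS_LAMBDA_EXEC_WRAPPER': '/opt/otel-handler',
            'OPENTELEMETRY_COLLECTOR_CONFIG_FILE': '/var/task/collector.yaml',
        }
    },
)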
You can use the AWS Console as well.
Keep in mind, though, that you don't need to create a custom Lambda layer to enable OpenTelemetry. AWS provides different pre-built layers for you to use:
AWS managed Lambda layer for ADOT Java SDK and ADOT Collector
AWS managed Lambda Layer for ADOT Java Auto-instrumentation Agent and ADOT Collector
AWS managed Lambda Layer for ADOT JavaScript SDK and ADOT Collector
AWS managed Lambda Layer for ADOT Python SDK and ADOT Collector
AWS managed Lambda Layer for ADOT Collector and ADOT Lambda .NET SDK (Manual Instrumentation)
AWS managed Lambda Layer for ADOT Collector and ADOT Lambda Go SDK (Manual Instrumentation)
I don't think it's necessarily related to the custom Lambda layer.
I use the "AWS managed Lambda Layer for ADOT JavaScript SDK and ADOT Collector" with the default collector.yaml and get the same error:
2023-01-03T13:40:58.367Z undefined WARN Failed extracting version /var/task
2023-01-03T13:40:58.373Z undefined ERROR Exporter "otlp" requested through environment variable is unavailable.

Cannot access AMAZON REDSHIFT using boto3.resource

I am currently learning about boto3 and how it can interact with AWS using both the client and resource methods. I was made to understand that it doesn't matter which one I use; I can still get access either way, except in some cases where I need client features that are not available through the resource interface, in which case I would go through the resource variable I created, i.e. from
import boto3
s3_resource = boto3.resource('s3')
Hence, if there is a need for me to access some client features, I would simply specify
s3_resource.meta.client
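For example (a minimal sketch, assuming credentials are already configured), the client behind the resource gives access to client-only operations such as list_buckets:

import boto3

s3_resource = boto3.resource('s3')

# The resource's underlying low-level client exposes client-only operations.
s3_client = s3_resource.meta.client
for bucket in s3_client.list_buckets().get('Buckets', []):
    print(bucket['Name'])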
But the main issue here is this: I tried creating clients/resources for EC2, S3, IAM, and Redshift, so I did the following:
import boto3

ec2 = boto3.resource('ec2',
                     region_name='us-west-2',
                     aws_access_key_id=KEY,
                     aws_secret_access_key=SECRET)

s3 = boto3.resource('s3',
                    region_name='us-west-2',
                    aws_access_key_id=KEY,
                    aws_secret_access_key=SECRET)

iam = boto3.client('iam',
                   region_name='us-west-2',
                   aws_access_key_id=KEY,
                   aws_secret_access_key=SECRET)

redshift = boto3.resource('redshift',
                          region_name='us-west-2',
                          aws_access_key_id=KEY,
                          aws_secret_access_key=SECRET)
But I get this error
UnknownServiceError: Unknown service: 'redshift'. Valid service names are: cloudformation, cloudwatch, dynamodb, ec2, glacier, iam, opsworks, s3, sns, sqs
During handling of the above exception, another exception occurred:
...
Consider using a boto3.client('redshift') instead of a resource for 'redshift'
Why is that? I thought I could create all of these using the commands that I specified. Please help.
I suggest that you consult the Boto3 documentation for Amazon Redshift. It does, indeed, show that there is no resource method for Redshift (or Redshift Data API, or Redshift Serverless).
Also, I recommend against using aws_access_key_id and aws_secret_access_key in your code unless there is a specific need (such as extracting them from Environment Variables). It is better to use the AWS CLI aws configure command to store AWS credentials in a configuration file, which will be automatically accessed by AWS SDKs such as boto3.
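As a minimal sketch of that approach, assuming credentials come from the default chain set up by aws configure:

import boto3

# No explicit keys: boto3 resolves credentials from the default chain
# (environment variables, ~/.aws/credentials, an instance profile, ...).
redshift = boto3.client('redshift', region_name='us-west-2')

# Example call: list the clusters in the region.
for cluster in redshift.describe_clusters().get('Clusters', []):
    print(cluster['ClusterIdentifier'], cluster['ClusterStatus'])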

Deploying a CDK Script through Service Catalog gives Authorization failures

I've created an AWS CDK script which deploys an ECR image to Fargate.
When executing the script from an EC2 VM (using cdk deploy with the AWS CLI tooling), I can add an IAM Role to the EC2 instance, thereby granting all the permissions required, and the script deploys successfully.
However, my aim is to cdk synth the script into a CloudFormation template manually, and then deploy it from AWS Service Catalog.
This is where permissions are required, but I'm unsure where exactly to add them?
An example error I get is:
"API: ec2:allocateAddress You are not authorized to perform this operation. Encoded authorization failure message: "
I've looked into the AWS CDK docs (https://docs.aws.amazon.com/cdk/api/v1/docs/aws-iam-readme.html), thinking the CDK script needs to have the permissions embedded; however, the resources I'm trying to create don't seem to have options to add IAM permissions.
Another option is, like with native CloudFormation scripts, to add Parameters which allow attaching Roles upon provisioning the Product, though I haven't found a way to implement this in CDK either.
It seems like a very obvious solution would be available for this, but I've not found it! Any ideas?
The CDK script used:
from constructs import Construct
from aws_cdk import (
    aws_ecs as ecs,
    aws_ec2 as ec2,
    aws_ecr as ecr,
    aws_ecs_patterns as ecs_patterns
)


class MyConstruct(Construct):
    def __init__(self, scope: Construct, id: str, *, repository_name="my-repo"):
        super().__init__(scope, id)
        vpc = ec2.Vpc(self, "my-vpc", max_azs=3)
        cluster = ecs.Cluster(self, "my-ecs-cluster", vpc=vpc)
        repository = ecr.Repository.from_repository_name(self, "my-ecr-repo", repository_name)
        image = ecs.ContainerImage.from_ecr_repository(repository=repository)
        fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(
            self,
            "my-fargate-instance",
            cluster=cluster,
            cpu=256,
            desired_count=1,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=image,
                container_port=3000,
            ),
            memory_limit_mib=512,
            public_load_balancer=True
        )
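For what it's worth, plain CloudFormation Parameters can be declared from CDK Python too; the hypothetical sketch below (names are made up, CDK v1 style imports, and wiring the value into a Service Catalog launch role is not shown) only illustrates how such a parameter surfaces in the synthesized template:

from aws_cdk import core  # CDK v1; in v2 use: import aws_cdk as cdk


class MyStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Shows up as a regular Parameter in the synthesized CloudFormation template.
        role_arn_param = core.CfnParameter(
            self, "ProvisioningRoleArn",
            type="String",
            description="ARN of the IAM role to attach when provisioning this product",
        )
        # role_arn_param.value_as_string can then be referenced where the role is needed.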

How to get brokers endpoint of Amazon MSK as an output

We have an AWS CloudFormation template through which we are creating the Amazon MSK (Kafka) cluster, and that part is working fine.
Now we have multiple applications in our product stack which consume the broker endpoints created by Amazon MSK. To automate the product deployment, we decided to create a Route 53 record set for the MSK broker endpoints. We are having a hard time finding out how we can get the broker endpoints of the MSK cluster as Outputs in the AWS CloudFormation template.
Looking forward to suggestions/guidance on this.
Following on @joinEffort's answer, this is how I did it using custom resources, as the CloudFormation resource for an MSK cluster (AWS::MSK::Cluster) does not expose the broker URLs:
(Option 2 below uses boto3 and calls the AWS API directly.)
The description of the classes and methods to use from CDK custom resource code can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kafka.html#getBootstrapBrokers-property
Option 1: Using custom resource:
def get_bootstrap_servers(self):
    create_params = {
        "ClusterArn": self._platform_msk_cluster_arn
    }
    get_bootstrap_brokers = custom_resources.AwsSdkCall(
        service='Kafka',
        action='getBootstrapBrokers',
        region='ap-southeast-2',
        physical_resource_id=custom_resources.PhysicalResourceId.of(f'connector-{self._environment_name}'),
        parameters=create_params
    )
    create_update_custom_plugin = custom_resources.AwsCustomResource(
        self,
        'getBootstrapBrokers',
        on_create=get_bootstrap_brokers,
        on_update=get_bootstrap_brokers,
        policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE)
    )
    return create_update_custom_plugin.get_response_field('BootstrapBrokerString')
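If you also want the broker string surfaced as a stack output (which is what the question asks for), one way, sketched here under the assumption that this runs inside a Stack, is to feed the custom resource's response field into a CfnOutput:

from aws_cdk import core  # CDK v1; in v2, CfnOutput is imported from aws_cdk directly

core.CfnOutput(
    self, 'BootstrapBrokers',
    value=self.get_bootstrap_servers(),
    description='MSK bootstrap broker string resolved via the custom resource above',
)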
Option 2: Using boto3:
import json
import boto3

client = boto3.client('kafka', region_name='ap-southeast-2')
response = client.get_bootstrap_brokers(
    ClusterArn='xxx'
)
# From here you can get the broker URLs (e.g. 'BootstrapBrokerString'):
json_response = json.loads(json.dumps(response))
Ref: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.get_bootstrap_brokers
You should be able to get it from the command below. More can be found here.
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn

Using AWS SNS when ec2 instance is deployed in us-west-1

I have a quick question about usage of AWS SNS.
I have deployed an EC2 (t2.micro, Linux) instance in us-west-1 (N. California). I have written a Python script using boto3 to send a simple text message to my phone. Later I discovered that there is no SNS service for instances deployed outside of us-east-1 (N. Virginia). Up to this point it made sense, because I see the error below when I execute my Python script, as the region is defined as "us-west-1" in aws configure (AWS CLI) and also in my Python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But to test, when I changed the "region" in aws configure and in my Python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS CLI and in my Python script, even though my instance is still in us-west-1 and I don't see a "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the AWS CLI with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so. Correct me if I am wrong. Or is it like having an instance in us-west-1, but just using the SNS service from us-east-1? Please shed some light.
Here is my Python script, if anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your SMS message.
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that the code is running on an EC2 instance in a specific region; it only matters which region you tell the SDK/CLI to use.
You could run the same code on your local computer. Obviously your local computer is not running on AWS so you would have to tell the code which AWS region to send the API calls to. What you are doing is the same thing.
Code running on an EC2 server is not limited into using the AWS API in the same region that the EC2 server is in.
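To make that concrete, the same process can talk to several regions at once simply by creating separate clients; a minimal sketch:

import boto3

# The region is a property of each client, not of the machine the code runs on.
sns_us_west_1 = boto3.client('sns', region_name='us-west-1')
sns_us_east_1 = boto3.client('sns', region_name='us-east-1')

print(sns_us_west_1.meta.region_name)  # us-west-1
print(sns_us_east_1.meta.region_name)  # us-east-1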
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.
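If you do go the topic route, a minimal sketch (the phone number is a placeholder) could look like this:

import boto3

sns = boto3.client('sns', region_name='us-east-1')

# Create (or reuse) a topic, subscribe a phone number over SMS, then publish to the topic.
topic_arn = sns.create_topic(Name='my-alerts')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='sms', Endpoint='+1XXXXXXXXXX')  # placeholder number
sns.publish(TopicArn=topic_arn, Message='Hello World!')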