boto3 - region names and service names - amazon-web-services
How can I get the list of all AWS regions (e.g. us-east-1) using boto3? (I am not trying to get the available regions for a particular service, which has been asked already.)
Also, how can I get the names of all the services that AWS provides using boto3, so that I can use them later when creating resources or clients?
I am asking because when creating sessions, resources, or clients I have to specify these values, and I don't know how to find the exact value to pass.
For the regions, the closest you can get is describe_regions:
import boto3

ec2 = boto3.client('ec2')
response = [region['RegionName'] for region in ec2.describe_regions(AllRegions=True)['Regions']]
print(response)
which gives:
['af-south-1', 'eu-north-1', 'ap-south-1', 'eu-west-3', 'eu-west-2', 'eu-south-1', 'eu-west-1', 'ap-northeast-3', 'ap-northeast-2', 'me-south-1', 'ap-northeast-1', 'sa-east-1', 'ca-central-1', 'ap-east-1', 'ap-southeast-1', 'ap-southeast-2', 'eu-central-1', 'us-east-1', 'us-east-2', 'us-west-1', 'us-west-2']
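Note that AllRegions=True also returns regions that your account has not opted in to. If you only want regions that are enabled for your account, a minimal sketch using the OptInStatus field of the same response:
import boto3

ec2 = boto3.client('ec2')
# 'opt-in-not-required' covers the default regions, 'opted-in' covers enabled opt-in regions.
enabled_regions = [
    r['RegionName']
    for r in ec2.describe_regions(AllRegions=True)['Regions']
    if r['OptInStatus'] in ('opt-in-not-required', 'opted-in')
]
print(enabled_regions)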
For services - I don't think there is any API call for that. You could scrape them from the AWS documentation, but this would not involve boto3.
To get the list of services, you can use get_available_services() on a boto3 Session:
import boto3
boto_session = boto3.session.Session()
list_of_services = boto_session.get_available_services()
print(list_of_services)
which gives you the list of service names available through boto3:
['accessanalyzer', 'account', 'acm', 'acm-pca', 'alexaforbusiness', 'amp', 'amplify', 'amplifybackend', 'apigateway', 'apigatewaymanagementapi', 'apigatewayv2', 'appconfig', 'appflow', 'appintegrations', 'application-autoscaling', 'application-insights', 'applicationcostprofiler', 'appmesh', 'apprunner', 'appstream', 'appsync', 'athena', 'auditmanager', 'autoscaling', 'autoscaling-plans', 'backup', 'batch', 'braket', 'budgets', 'ce', 'chime', 'chime-sdk-identity', 'chime-sdk-meetings', 'chime-sdk-messaging', 'cloud9', 'cloudcontrol', 'clouddirectory', 'cloudformation', 'cloudfront', 'cloudhsm', 'cloudhsmv2', 'cloudsearch', 'cloudsearchdomain', 'cloudtrail', 'cloudwatch', 'codeartifact', 'codebuild', 'codecommit', 'codedeploy', 'codeguru-reviewer', 'codeguruprofiler', 'codepipeline', 'codestar', 'codestar-connections', 'codestar-notifications', 'cognito-identity', 'cognito-idp', 'cognito-sync', 'comprehend', 'comprehendmedical', 'compute-optimizer', 'config', 'connect', 'connect-contact-lens', 'connectparticipant', 'cur', 'customer-profiles', 'databrew', 'dataexchange', 'datapipeline', 'datasync', 'dax', 'detective', 'devicefarm', 'devops-guru', 'directconnect', 'discovery', 'dlm', 'dms', 'docdb', 'ds', 'dynamodb', 'dynamodbstreams', 'ebs', 'ec2', 'ec2-instance-connect', 'ecr', 'ecr-public', 'ecs', 'efs', 'eks', 'elastic-inference', 'elasticache', 'elasticbeanstalk', 'elastictranscoder', 'elb', 'elbv2', 'emr', 'emr-containers', 'es', 'events', 'finspace', 'finspace-data', 'firehose', 'fis', 'fms', 'forecast', 'forecastquery', 'frauddetector', 'fsx', 'gamelift', 'glacier', 'globalaccelerator', 'glue', 'grafana', 'greengrass', 'greengrassv2', 'groundstation', 'guardduty', 'health', 'healthlake', 'honeycode', 'iam', 'identitystore', 'imagebuilder', 'importexport', 'inspector', 'iot', 'iot-data', 'iot-jobs-data', 'iot1click-devices', 'iot1click-projects', 'iotanalytics', 'iotdeviceadvisor', 'iotevents', 'iotevents-data', 'iotfleethub', 'iotsecuretunneling', 'iotsitewise', 'iotthingsgraph', 'iotwireless', 'ivs', 'kafka', 'kafkaconnect', 'kendra', 'kinesis', 'kinesis-video-archived-media', 'kinesis-video-media', 'kinesis-video-signaling', 'kinesisanalytics', 'kinesisanalyticsv2', 'kinesisvideo', 'kms', 'lakeformation', 'lambda', 'lex-models', 'lex-runtime', 'lexv2-models', 'lexv2-runtime', 'license-manager', 'lightsail', 'location', 'logs', 'lookoutequipment', 'lookoutmetrics', 'lookoutvision', 'machinelearning', 'macie', 'macie2', 'managedblockchain', 'marketplace-catalog', 'marketplace-entitlement', 'marketplacecommerceanalytics', 'mediaconnect', 'mediaconvert', 'medialive', 'mediapackage', 'mediapackage-vod', 'mediastore', 'mediastore-data', 'mediatailor', 'memorydb', 'meteringmarketplace', 'mgh', 'mgn', 'migrationhub-config', 'mobile', 'mq', 'mturk', 'mwaa', 'neptune', 'network-firewall', 'networkmanager', 'nimble', 'opensearch', 'opsworks', 'opsworkscm', 'organizations', 'outposts', 'panorama', 'personalize', 'personalize-events', 'personalize-runtime', 'pi', 'pinpoint', 'pinpoint-email', 'pinpoint-sms-voice', 'polly', 'pricing', 'proton', 'qldb', 'qldb-session', 'quicksight', 'ram', 'rds', 'rds-data', 'redshift', 'redshift-data', 'rekognition', 'resource-groups', 'resourcegroupstaggingapi', 'robomaker', 'route53', 'route53-recovery-cluster', 'route53-recovery-control-config', 'route53-recovery-readiness', 'route53domains', 'route53resolver', 's3', 's3control', 's3outposts', 'sagemaker', 'sagemaker-a2i-runtime', 'sagemaker-edge', 'sagemaker-featurestore-runtime', 'sagemaker-runtime', 
'savingsplans', 'schemas', 'sdb', 'secretsmanager', 'securityhub', 'serverlessrepo', 'service-quotas', 'servicecatalog', 'servicecatalog-appregistry', 'servicediscovery', 'ses', 'sesv2', 'shield', 'signer', 'sms', 'sms-voice', 'snow-device-management', 'snowball', 'sns', 'sqs', 'ssm', 'ssm-contacts', 'ssm-incidents', 'sso', 'sso-admin', 'sso-oidc', 'stepfunctions', 'storagegateway', 'sts', 'support', 'swf', 'synthetics', 'textract', 'timestream-query', 'timestream-write', 'transcribe', 'transfer', 'translate', 'voice-id', 'waf', 'waf-regional', 'wafv2', 'wellarchitected', 'wisdom', 'workdocs', 'worklink', 'workmail', 'workmailmessageflow', 'workspaces', 'xray']
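These are exactly the strings you pass to boto3.client() or boto3.resource(), together with a region name from the earlier list. For example (the bucket listing is just an illustration):
import boto3

# Any service name from the list above, plus any region name from the earlier list.
s3 = boto3.client('s3', region_name='us-east-1')
print([b['Name'] for b in s3.list_buckets()['Buckets']])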
Related
AWS Glue - Kafka Connection using SASL/SCRAM
I am trying to create an AWS Glue Streaming job that reads from Kafka (MSK) clusters using SASL/SCRAM client authentication for the connection, per https://aws.amazon.com/about-aws/whats-new/2022/05/aws-glue-supports-sasl-authentication-apache-kafka/
The connection configuration has the following properties (plus adequate subnet and security groups):
"ConnectionProperties": {
    "KAFKA_SASL_SCRAM_PASSWORD": "apassword",
    "KAFKA_BOOTSTRAP_SERVERS": "theserver:9096",
    "KAFKA_SASL_MECHANISM": "SCRAM-SHA-512",
    "KAFKA_SASL_SCRAM_USERNAME": "auser",
    "KAFKA_SSL_ENABLED": "false"
}
And the actual API method call is:
df = glue_context.create_data_frame.from_options(
    connection_type="kafka",
    connection_options={
        "connectionName": "kafka-glue-connector",
        "security.protocol": "SASL_SSL",
        "classification": "json",
        "startingOffsets": "latest",
        "topicName": "atopic",
        "inferSchema": "true",
        "typeOfData": "kafka",
        "numRetries": 1,
    }
)
When running, the logs show the client is attempting to connect to the brokers using Kerberos, and runs into:
22/10/19 18:45:54 INFO ConsumerConfig: ConsumerConfig values:
    sasl.mechanism = GSSAPI
    security.protocol = SASL_SSL
    security.providers = null
    send.buffer.bytes = 131072
    ...
org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
How can I authenticate the AWS Glue job using SASL/SCRAM? What properties do I need to set in the connection and in the method call? Thank you
When using lambda to generate elbv2 attributes (name specifically), receiving error from Lambda that name is longer than 32 characters
I am building a CloudFormation template that uses a Lambda function to generate the name of the load balancer built by the template. When the function runs, it fails with the following error:

Failed to validate attributes of ELB arn:aws-us-gov:elasticloadbalancing:us-gov-west-1:273838691273:loadbalancer/app/dev-fu-WALB-18VHO2DJ4MHK/c69c48fd3464de01. An error occurred (ValidationError) when calling the DescribeLoadBalancers operation: The load balancer name 'arn:aws-us-gov:elasticloadbalancing:us-gov-west-1:273838691273:loadbalancer/app/dev-fu-WALB-18VHO2DJ4MHK/c69c48fd3464de01' cannot be longer than '32' characters.

It is obviously pulling the ARN rather than the name of the elbv2. I opened a ticket with AWS to no avail, and also with the company that wrote the script... same results. I have attached the script and any help is greatly appreciated.

import cfn_resource
import boto3
import boto3.session
import logging

logger = logging.getLogger()
handler = cfn_resource.Resource()

# Retrieves DNSName and source security group name for the specified ELB
@handler.create
def get_elb_attribtes(event, context):
    properties = event['ResourceProperties']
    elb_name = properties['PORALBName']
    elb_template = properties['PORALBTemplate']
    elb_subnets = properties['PORALBSubnets']
    try:
        client = boto3.client('elbv2')
        elb = client.describe_load_balancers(
            Names=[
                elb_name
            ]
        )['LoadBalancers'][0]
        for az in elb['AvailabilityZones']:
            if not az['SubnetId'] in elb_subnets:
                raise Exception("ELB does not include VPC subnet '" + az['SubnetId'] + "'.")
        target_groups = client.describe_target_groups(
            LoadBalancerArn=elb['LoadBalancerArn']
        )['TargetGroups']
        target_group_arns = []
        for target_group in target_groups:
            target_group_arns.append(target_group['TargetGroupArn'])
        if elb_template == 'geoevent':
            if elb['Type'] != 'network':
                raise Exception("GeoEvent Server requires network ElasticLoadBalancer V2.")
        response_data = {}
        response_data['DNSName'] = elb['DNSName']
        response_data['TargetGroupARNs'] = target_group_arns
        msg = 'ELB {} found.'.format(elb_name)
        logger.info(msg)
        return {
            'Status': 'SUCCESS',
            'Reason': msg,
            'PhysicalResourceId': context.log_stream_name,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId'],
            'Data': response_data
        }
    except Exception, e:
        error_msg = 'Failed to validate attributes of ELB {}. {}'.format(elb_name, e)
        logger.error(error_msg)
        return {
            'Status': 'FAILED',
            'Reason': error_msg,
            'PhysicalResourceId': context.log_stream_name,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId']
        }
The error says:

An error occurred (ValidationError) when calling the DescribeLoadBalancers operation

So, looking at where it calls DescribeLoadBalancers:

elb = client.describe_load_balancers(
    Names=[
        elb_name
    ]
)['LoadBalancers'][0]

The error also said:

The load balancer name ... cannot be longer than '32' characters.

The name comes from:

properties = event['ResourceProperties']
elb_name = properties['PORALBName']

So, the information is being passed into the Lambda function via event. This is coming from whatever is triggering the Lambda function. So, you'll need to find out what is triggering the function and discover what information it is actually sending. Your problem is outside of the code listed.

Other options

In your code, you can send event to the debug logs (eg print(event)) and see whether the ELB name is being passed in a different field.

Alternatively, you could call describe_load_balancers without a Names filter to retrieve a list of all load balancers, then use the ARN (that you have) to find the load balancer of interest. Simply loop through all the results until you find the one that matches the ARN you have, then continue as normal (see the sketch below).
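A minimal sketch of that second option, assuming the value arriving in PORALBName is actually the load balancer ARN:

import boto3

def find_elb_by_arn(elb_arn):
    """Return the load balancer whose ARN matches, or None if not found."""
    client = boto3.client('elbv2')
    paginator = client.get_paginator('describe_load_balancers')
    for page in paginator.paginate():
        for lb in page['LoadBalancers']:
            if lb['LoadBalancerArn'] == elb_arn:
                return lb
    return None

elb = find_elb_by_arn('arn:aws-us-gov:elasticloadbalancing:...')  # placeholder ARN
if elb is not None:
    print(elb['LoadBalancerName'], elb['DNSName'])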
How to run a Google Cloud Build trigger via cli / rest api / cloud functions?
Is there such an option? My use case would be running a trigger for a production build (deploys to production). Ideally, that trigger doesn't need to listen to any change, since it would be invoked manually via a chatbot. I saw the video CI/CD for Hybrid and Multi-Cloud Customers (Cloud Next '18) announcing API trigger support; I'm not sure if that's what I need.
I did the same thing a few days ago. You can submit your builds using gcloud or the REST API.

gcloud:

gcloud builds submit --no-source --config=cloudbuild.yaml --async --format=json

REST API: send your cloudbuild.yaml as JSON with an auth token to this URL: https://cloudbuild.googleapis.com/v1/projects/standf-188123/builds?alt=json

Example cloudbuild.yaml:

steps:
- name: 'gcr.io/cloud-builders/docker'
  id: Docker Version
  args: ["version"]
- name: 'alpine'
  id: Hello Cloud Build
  args: ["echo", "Hello Cloud Build"]

Example rest_json_body:

{"steps": [{"args": ["version"], "id": "Docker Version", "name": "gcr.io/cloud-builders/docker"}, {"args": ["echo", "Hello Cloud Build"], "id": "Hello Cloud Build", "name": "alpine"}]}
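A minimal Python sketch of that REST call (the project ID is a placeholder, and it assumes application default credentials that are allowed to call Cloud Build):

import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = 'my-project'  # placeholder project ID

# Get an access token from application default credentials.
credentials, _ = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())

# Same build definition as rest_json_body above.
build = {
    "steps": [
        {"name": "gcr.io/cloud-builders/docker", "id": "Docker Version", "args": ["version"]},
        {"name": "alpine", "id": "Hello Cloud Build", "args": ["echo", "Hello Cloud Build"]},
    ]
}

resp = requests.post(
    f'https://cloudbuild.googleapis.com/v1/projects/{PROJECT_ID}/builds',
    headers={'Authorization': f'Bearer {credentials.token}'},
    json=build,
)
resp.raise_for_status()
print(resp.json())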
This now seems to be possible via the API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run

request.json:

{
    "projectId": "*****",
    "commitSha": "************"
}

curl request (using a gcloud command to obtain the access token):

PROJECT_ID="********"
TRIGGER_ID="*******************"
curl -X POST -T request.json \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
You can use the Google API client to create build jobs with Python:

import operator
from functools import reduce
from typing import Dict, List, Union

from google.oauth2 import service_account
from googleapiclient import discovery


class GcloudService():
    def __init__(self, service_token_path, project_id: Union[str, None]):
        self.project_id = project_id
        self.service_token_path = service_token_path
        self.credentials = service_account.Credentials.from_service_account_file(self.service_token_path)


class CloudBuildApiService(GcloudService):
    def __init__(self, *args, **kwargs):
        super(CloudBuildApiService, self).__init__(*args, **kwargs)
        scoped_credentials = self.credentials.with_scopes(['https://www.googleapis.com/auth/cloud-platform'])
        self.service = discovery.build('cloudbuild', 'v1', credentials=scoped_credentials, cache_discovery=False)

    def get(self, build_id: str) -> Dict:
        return self.service.projects().builds().get(projectId=self.project_id, id=build_id).execute()

    def create(self, image_name: str, gcs_name: str, gcs_path: str, env: Dict = None):
        args: List[str] = self._get_env(env) if env else []
        opt_params: List[str] = [
            '-t', f'gcr.io/{self.project_id}/{image_name}',
            '-f', f'./{image_name}/Dockerfile',
            f'./{image_name}'
        ]
        build_cmd: List[str] = ['build'] + args + opt_params
        body = {
            "projectId": self.project_id,
            "source": {
                'storageSource': {
                    'bucket': gcs_name,
                    'object': gcs_path,
                }
            },
            "steps": [
                {
                    "name": "gcr.io/cloud-builders/docker",
                    "args": build_cmd,
                },
            ],
            "images": [
                f'gcr.io/{self.project_id}/{image_name}'
            ],
        }
        return self.service.projects().builds().create(projectId=self.project_id, body=body).execute()

    def _get_env(self, env: Dict) -> List[str]:
        env: List[str] = [['--build-arg', f'{key}={value}'] for key, value in env.items()]
        # Flatten the list of ['--build-arg', 'key=value'] pairs into a single flat list
        return reduce(operator.iconcat, env, [])

Here is the documentation so that you can implement more functionality: https://cloud.google.com/cloud-build/docs/api
Hope this helps.
If you just want to create a function that you can invoke directly, you have two choices:
- An HTTP trigger with a standard API endpoint
- A pubsub trigger that you invoke by sending a message to a pubsub topic
The first is the more common approach, as you are effectively creating a web API that any client can call with an HTTP library of their choice.
You should be able to manually trigger a build using curl and a JSON payload. For details see: https://cloud.google.com/cloud-build/docs/running-builds/start-build-manually#running_builds. Given that, you could write a Python Cloud Function to replicate the curl call via the requests module, as sketched below.
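A minimal sketch of such a Cloud Function, assuming the trigger-run endpoint shown in the earlier answer; PROJECT_ID, TRIGGER_ID and the branch name are placeholders, and the function's service account must be allowed to call Cloud Build:

import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = 'my-project'     # placeholder
TRIGGER_ID = 'my-trigger-id'  # placeholder

def run_trigger(request):
    # Obtain an access token from the function's default service account.
    credentials, _ = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform'])
    credentials.refresh(google.auth.transport.requests.Request())

    url = f'https://cloudbuild.googleapis.com/v1/projects/{PROJECT_ID}/triggers/{TRIGGER_ID}:run'
    resp = requests.post(
        url,
        headers={'Authorization': f'Bearer {credentials.token}'},
        json={'branchName': 'master'},  # placeholder branch to build
    )
    resp.raise_for_status()
    return resp.json()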
I was searching for the same thing (Fall 2022) and, while I haven't tested it yet, I wanted to answer before I forget. It now appears to be available via:

gcloud beta builds triggers run TRIGGER
You can trigger a function via:

gcloud functions call NAME --data 'THING'

Inside your function you can do pretty much anything possible within Google's public APIs.

If you just want to directly trigger Google Cloud Build from git, then it's probably advisable to use release version tags, so your chatbot might add a release tag to your release branch in git, at which point Cloud Build will start the build. More info here: https://cloud.google.com/cloud-build/docs/running-builds/automate-builds
AWS lambda function logs are not getting displayed in cloudwatch
I have the below setup which I am trying to run.

I have a Python app which is running locally on my Linux host. I am using boto3 to connect to AWS with my user's secret key and secret key ID. My user has full access to EC2, CloudWatch, S3 and Config. My application invokes a Lambda function called mylambda. The execution role for mylambda also has all the required permissions.

Now if I call my Lambda function from the AWS console it works fine. I can see the logs of the execution in CloudWatch. But if I do it from my Linux box from my custom application, I don't see any execution logs, and I am not getting an error either. Is there anything I am missing? Any help is really appreciated.

I don't see it getting invoked, but surprisingly I am getting a response as below.

gaurav@random:~/lambda_s3$ python main.py
{u'Payload': <botocore.response.StreamingBody object at 0x7f74cb7f5550>, u'ExecutedVersion': '$LATEST', 'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': '7417534c-6263-11e8-xxx-afab1667510a', 'HTTPHeaders': {'x-amzn-requestid': '7417534c-xxx-11e8-8a24-afab1667510a', 'content-length': '4', 'x-amz-executed-version': '$LATEST', 'x-amzn-trace-id': 'root=1-5b0bdc78-7559e68acd668476bxxxx754;sampled=0', 'x-amzn-remapped-content-length': '0', 'connection': 'keep-alive', 'date': 'Mon, 28 May 2018 10:39:52 GMT', 'content-type': 'application/json'}}, u'StatusCode': 200}
{u'CreationDate': datetime.datetime(2018, 5, 27, 9, 50, 9, tzinfo=tzutc()), u'Name': 'bucketname'}
gaurav@random:~/lambda_s3$

My sample app is as below:

#!/usr/bin/python
import boto3
import json
import base64

d = {'key': 10, 'key2': 20}

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='mylambda',
    InvocationType='RequestResponse',
    #LogType='None',
    ClientContext=base64.b64encode(b'{"custom":{"foo":"bar", \
        "fuzzy":"wuzzy"}}').decode('utf-8'),
    Payload=json.dumps(d)
)
print response
Make sure that you're actually invoking the Lambda correctly. Lambda error handling can be a bit tricky: with boto3, the invoke method doesn't necessarily throw even if the invocation fails, so you have to check the StatusCode (and FunctionError) fields in the response. You mentioned that your user has full access to EC2, CloudWatch, S3, and Config. For your use case, you also need to add lambda:InvokeFunction to your user's permissions.
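A minimal sketch of that check, reusing the response object from the question's code:

import json

# The invocation "succeeded" at the API level if StatusCode is 200,
# but the function itself may still have raised: check FunctionError.
payload = json.loads(response['Payload'].read())
if response['StatusCode'] != 200 or response.get('FunctionError'):
    print('Invocation failed:', payload)
else:
    print('Lambda returned:', payload)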
when using boto3 how to create an aws instance with a custom root volume size
When creating an instance in AWS the root volume size defaults to 8GB. I am trying to create an instance using boto3 but with a different size, for example 300GB. I am currently trying something like this, without success:

block_device_mappings = []
block_device_mappings.append({
    'DeviceName': '/dev/sda1',
    'Ebs': {
        'VolumeSize': 300,
        'DeleteOnTermination': True,
        'VolumeType': 'gp2'
    }
})

Any idea of how to achieve this?
Most likely what is happening is that you're using an AMI that uses /dev/xvda instead of /dev/sda1 for its root volume. AMIs these days support one of two types of virtualization, paravirtual (PV) or hardware virtualized (HVM); PV images support /dev/sda1 for the root device name, whereas HVM images can specify either /dev/xvda or /dev/sda1 (more in the AWS documentation).

You can add an image check to determine what the AMI you're using sets its root volume to, and then use that information for your call to launch the instances (e.g. run_instances). Here's a code snippet that calls describe_images, retrieves the image's RootDeviceName, and then uses that to configure the block device mapping:

import boto3

if __name__ == '__main__':
    client = boto3.client('ec2')

    # This grabs the Debian Jessie 8.6 image (us-east-1 region)
    image_id = 'ami-49e5cb5e'
    response = client.describe_images(ImageIds=[image_id])
    device_name = response['Images'][0]['RootDeviceName']
    print(device_name)

    block_device_mappings = []
    block_device_mappings.append({
        'DeviceName': device_name,
        'Ebs': {
            'VolumeSize': 300,
            'DeleteOnTermination': True,
            'VolumeType': 'gp2'
        }
    })

    # Whatever you need to create the instances

For reference, the call to describe_images returns a dict that looks like this:

{u'Images': [{u'Architecture': 'x86_64',
              u'BlockDeviceMappings': [{u'DeviceName': '/dev/xvda',
                                        u'Ebs': {u'DeleteOnTermination': True,
                                                 u'Encrypted': False,
                                                 u'SnapshotId': 'snap-0ddda62ff076afbc8',
                                                 u'VolumeSize': 8,
                                                 u'VolumeType': 'gp2'}}],
              u'CreationDate': '2016-11-13T14:03:45.000Z',
              u'Description': 'Debian jessie amd64',
              u'EnaSupport': True,
              u'Hypervisor': 'xen',
              u'ImageId': 'ami-49e5cb5e',
              u'ImageLocation': '379101102735/debian-jessie-amd64-hvm-2016-11-13-1356-ebs',
              u'ImageType': 'machine',
              u'Name': 'debian-jessie-amd64-hvm-2016-11-13-1356-ebs',
              u'OwnerId': '379101102735',
              u'Public': True,
              u'RootDeviceName': '/dev/xvda',
              u'RootDeviceType': 'ebs',
              u'SriovNetSupport': 'simple',
              u'State': 'available',
              u'VirtualizationType': 'hvm'}],
 'ResponseMetadata': {'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8',
                                      'date': 'Mon, 19 Dec 2016 14:03:36 GMT',
                                      'server': 'AmazonEC2',
                                      'transfer-encoding': 'chunked',
                                      'vary': 'Accept-Encoding'},
                      'HTTPStatusCode': 200,
                      'RequestId': '85a22932-7014-4202-92de-4b5ee6b7f73b',
                      'RetryAttempts': 0}}
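To complete the picture, a minimal sketch of the launch call itself (the instance type and key pair name are placeholders, and block_device_mappings is the list built in the snippet above):

import boto3

client = boto3.client('ec2')
client.run_instances(
    ImageId='ami-49e5cb5e',        # the AMI whose root device name was looked up above
    InstanceType='t2.micro',       # placeholder instance type
    KeyName='my-key-pair',         # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=block_device_mappings,  # mapping with the correct root DeviceName and 300GB volume
)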