boto3 list_services() with order - amazon-web-services

I wrote AWS auto-deployment code with the boto3 library. The code fetches the full list of services and works with it.
I need to get the latest service, but there doesn't seem to be an ordering option.
(https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.list_services)
Sometimes the first element is the latest service, but sometimes an old service is placed first.
Is there an option or another way to get the latest service?
Thanks.

The list_services method does not return details of individual services. It simply lists the services and returns a list of identifiers (ARNs) for them.
To get more details of a given service, you can use describe_services. This allows you to get details of up to 10 services at a time.
So, take the list of service identifiers that you get back from list_services, and pass it to describe_services (with at most 10 service identifiers). Something like this (untested):
list_response = client.list_services(
    cluster='xyz',
    launchType='EC2'
)
desc_response = client.describe_services(
    cluster='xyz',
    services=list_response['serviceArns']
)
Note that you will have to do pagination using maxResults / nextToken if there are a lot of results.
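Putting those pieces together, one way to find the newest service is to describe every service and sort by its createdAt timestamp. A rough sketch (untested against a live account; the client is assumed to be boto3.client('ecs'), and the cluster name is a placeholder):

```python
def chunks(items, size):
    """Split a list into sublists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def latest_service(client, cluster):
    """Return the service with the newest createdAt timestamp.

    `client` is assumed to be boto3.client('ecs'); the built-in
    paginator handles maxResults/nextToken for us.
    """
    arns = []
    for page in client.get_paginator('list_services').paginate(cluster=cluster):
        arns.extend(page['serviceArns'])

    services = []
    for batch in chunks(arns, 10):  # describe_services takes at most 10 ARNs
        services.extend(
            client.describe_services(cluster=cluster, services=batch)['services']
        )
    return max(services, key=lambda s: s['createdAt'])
```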

Related

Boto3 (and AWS CLI) only returns 25 API Gateway custom domains when web GUI shows more

I can only imagine I'm doing something stupid here, but I can't figure out what. The AWS GUI shows .. well, lots of custom domains that I've set up. Eyeballing it, I'd estimate 75 or so. When I query via either the Boto3 library or the AWS CLI, it steadfastly returns 25 (no errors) and no more. I've tried the following using Boto3:
session = boto3.session.Session(region_name="eu-west-1")
apigw = session.client('apigatewayv2')
print(apigw.get_domain_names()["Items"]) # Returns 25 items
print(apigw.get_domain_names(MaxResults="1000")["Items"]) # Also returns 25 items
Similarly, I've tried the following calls using the CLI:
aws apigatewayv2 get-domain-names
aws apigatewayv2 get-domain-names --max-items=1000
To be clear, I definitely have more than 25 domain names set up in this account's API Gateway. I've double and triple checked that.
Can anyone help me understand what I'm doing wrong in calling the service?
Many boto3 client api calls have a built-in limit, and you are likely hitting this. Sometimes those limits are not documented. If you want to pull back all items, you should use a paginator. If a function has a corresponding paginator, that is a sure sign that the client call has a limit. You can use tokens to paginate through things yourself with a client, but a paginator does this for you and is usually a better way to go.
Here's the paginator syntax for this same call:
paginator = client.get_paginator('get_domain_names')
response_iterator = paginator.paginate(
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
You probably just want to call
paginator = client.get_paginator('get_domain_names')
response_iterator = paginator.paginate()
It seems wrong that MaxResults would not return up to the number you specify, but check out the discussion in this post, which may not be official, but still helps explain the behavior:
The maxResults is the maximum number of items to return for this
request. It can happen that you may get less than the specified value.
It only guarantees that you will not have more than 100 results. If
you are not getting all results then you will get a nextToken to make
another api call.
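Putting the paginator to work, collecting every custom domain name could look like this (a sketch; the client is assumed to be boto3.client('apigatewayv2')):

```python
def all_domain_names(apigw_client):
    """Gather every custom domain name across all result pages.

    `apigw_client` is assumed to be boto3.client('apigatewayv2');
    the paginator follows NextToken until the results are exhausted.
    """
    names = []
    for page in apigw_client.get_paginator('get_domain_names').paginate():
        names.extend(item['DomainName'] for item in page['Items'])
    return names
```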

List all LogGroups using cdk

I am quite new to the CDK. I'm adding a LogQueryWidget to my CloudWatch dashboard through the CDK, and I need a way to add all LogGroups ending with a given suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or to search through LogGroups?
const queryWidget = new LogQueryWidget({
    title: "Error Rate",
    logGroupNames: ['/aws/lambda/someLogGroup'],
    view: LogQueryVisualizationType.TABLE,
    queryLines: [
        'fields #message',
        'filter #message like /(?i)error/'
    ],
})
Is there any way I can make logGroupNames contain all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make the query automatically adjust when a new LogGroup is added) without using something like an AWS Lambda that periodically updates your log query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside it to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when the CDK app synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many Log Groups you can query with Log Insights (20 according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
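If your CDK app happens to be written in Python, the lookup-and-filter step described above could look like the sketch below (the helper names are illustrative, and the client is assumed to be boto3.client('logs')):

```python
def log_groups_with_suffix(names, suffix):
    """Keep only the log-group names ending with `suffix`."""
    return [name for name in names if name.endswith(suffix)]

def all_log_group_names(logs_client):
    """Collect every log group name, following pagination.

    `logs_client` is assumed to be boto3.client('logs'); the
    paginator follows nextToken across pages for us.
    """
    names = []
    for page in logs_client.get_paginator('describe_log_groups').paginate():
        names.extend(group['logGroupName'] for group in page['logGroups'])
    return names
```

The filtered result can then be passed straight to the logGroupNames property (keeping in mind the 20-log-group limit mentioned above).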
If you want to achieve this, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above) as part of the deployment. You can read data from the API call's response and act on it as you want.

Get all items in DynamoDB with API Gateway's Mapping Template

Is there a simple way to retrieve all items from a DynamoDB table using a mapping template in an API Gateway endpoint? I usually use a Lambda to process the data before returning it, but for such a simple task a Lambda seems like overkill.
I have a table that contains data with the following format:
roleAttributeName  roleHierarchyLevel  roleIsActive  roleName
"admin"            99                  true          "Admin"
"director"         90                  true          "Director"
"areaManager"      80                  false         "Area Manager"
I'm happy just getting the data; the representation doesn't matter, since I can transform it further down in my code.
I've been looking around, but all the tutorials explain how to get specific bits of data through queries and params like roles/{roleAttributeName}; I just want to hit roles/ and get all items.
All you need to do is:
create a resource (without curly braces, since we don't need a particular item)
create a GET method
use Scan instead of Query as the Action while configuring the integration request.
Now run a test; you should get the response.
To try it out in Postman, deploy the API first, then use the provided invoke URL followed by your resource name.
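For reference, the integration request's body mapping template for a plain Scan can be as small as this (the table name "roles" is an assumption here; substitute your own):

```json
{
    "TableName": "roles"
}
```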
API Gateway allows you to proxy DynamoDB as a service. Here you have an interesting tutorial on how to do it (you can ignore the part related to indexes to make it work).
To retrieve all the items from a table, use Scan as the action in API Gateway. Keep in mind that DynamoDB limits response sizes to 1 MB for both Scan and Query actions.
You can also cap the result yourself with the Limit parameter before that automatic cut-off applies.
AWS DynamoDB Scan Reference

AWS Ruby SDK filtering

I'm refactoring a Ruby framework that calls describe_instances and then filters the response for just the VPC IDs.
It seems a waste of bandwidth to pull down the data for every instance in the region and then filter out the VPC IDs this way.
When I look at the documentation for filtering server side I see posts doing things like applying filters for all instances of type xx and so on.
What I want to do is pull down all VPC ids as a unique list.
Can anyone point me at an example of how to do that?
Thanks in advance
Never mind, I discovered the describe_vpcs endpoint:
def get_vpc_ids
  ec2_object.describe_vpcs[:vpcs].each do |vpc|
    @vpc_list.push(vpc[:vpc_id])
  end
  @vpc_list.uniq!
end
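For anyone doing the same in Python, the equivalent unique-ID list with boto3 could look like this (a sketch; the client is assumed to be boto3.client('ec2')):

```python
def unique_vpc_ids(ec2_client):
    """Return a sorted, de-duplicated list of VPC IDs.

    `ec2_client` is assumed to be boto3.client('ec2'); a set
    comprehension handles the de-duplication.
    """
    response = ec2_client.describe_vpcs()
    return sorted({vpc['VpcId'] for vpc in response['Vpcs']})
```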

When using AWS SQS, is there any reason to prefer using GetQueueUrl to building a queue url from the region, account id, and name?

I have an application that uses a single SQS queue.
For the sake of flexibility I would like to configure the application using the queue name, SQS region, and AWS account id (as well as the normal AWS credentials and so forth), rather than giving a full queue url.
Does it make any sense to use GetQueueUrl to retrieve a url for the queue when I can just build it with something like the following (in ruby):
region = ENV['SQS_REGION'] # 'us-west-2'
account_id = ENV['SQS_AWS_ACCOUNT_ID'] # '773083218405'
queue_name = ENV['SQS_QUEUE_NAME'] # 'test3'
queue_url = "https://sqs.#{region}.amazonaws.com/#{account_id}/#{queue_name}"
# => https://sqs.us-west-2.amazonaws.com/773083218405/test3
Possible reasons that it might not:
Amazon might change their url format.
Others???
I don't think you have any guarantee that the URL will have that form. The official documentation presents the GetQueueUrl call as the supported method for obtaining queue URLs. So while constructing the URL as above may be a very good guess, it may also fail at any time, because Amazon can change the URL scheme (e.g. for new queues).
If Amazon changes the queue URL in a breaking way it will not be immediate and will be deprecated slowly, and will take effect moving up a version (i.e. when you upgrade your SDK).
While the documentation doesn't guarantee it, Amazon knows that it would be a massively breaking change for thousands of customers.
Furthermore, lots of customers use hard coded queue URLs which they get from the console, so those customers would not get the updated queue URL format either.
In the end, you will be safe either way. If you have lots of queues, you may be better off formatting the URLs yourself; with a small number of queues, it shouldn't make much difference either way.
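For comparison, in Python with boto3 the API-based lookup is a single call. A sketch (the client is assumed to be boto3.client('sqs'); the account ID is only needed for queues owned by another account):

```python
def resolve_queue_url(sqs_client, queue_name, account_id=None):
    """Look up a queue URL via the API instead of string formatting.

    `sqs_client` is assumed to be boto3.client('sqs');
    QueueOwnerAWSAccountId is only passed when an account_id is given.
    """
    params = {'QueueName': queue_name}
    if account_id:
        params['QueueOwnerAWSAccountId'] = account_id
    return sqs_client.get_queue_url(**params)['QueueUrl']
```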
I believe the safest way to get the URL is through the sqs.queues.named method. You can memoize the queues by name to avoid repeated calls, something like this:
# https://github.com/phstc/shoryuken/blob/master/lib/shoryuken/client.rb
class Client
  @@queues = {}

  class << self
    def queues(queue)
      @@queues[queue.to_s] ||= sqs.queues.named(queue)
    end
  end
end