I have two log groups generated by two different Lambda functions. When I subscribe one log group to my Elasticsearch Service domain, it works. However, when I add the other log group, I get the following error in the logs generated by CloudWatch:
"responseBody": "{\"took\":5,\"errors\":true,\"items\":[{\"index\":{\"_index\":\"cwl-2018.03.01\",\"_type\":\"/aws/lambda/lambda-1\",\"_id\":\"33894733850010958003644005072668130559385092091818016768\",\"status\":400,\"error\":
{\"type\":\"illegal_argument_exception\",\"reason\":\"Rejecting mapping update to [cwl-2018.03.01] as the final mapping would have more than 1 type: [/aws/lambda/lambda-1, /aws/lambda/lambda-2]\"}}}]}"
How can I resolve this, still have both log groups in my Elasticsearch Service domain, and visualize all the logs?
Thank you.
The problem is that Elasticsearch 6.0.0 made a change that allows indices to contain only a single mapping type (https://www.elastic.co/guide/en/elasticsearch/reference/6.0/removal-of-types.html). I assume you are running an Elasticsearch Service instance on version 6.0.
The default Lambda JS file, if created through the AWS console, sets the index type to the log group name. An example of the JS file is in this gist (https://gist.github.com/iMilnb/27726a5004c0d4dc3dba3de01c65c575):
Line 86: action.index._type = payload.logGroup;
I personally have a modified version of that script in use and changed that line to:
action.index._type = 'cwl';
I have logs from several different log groups streaming into the same Elasticsearch instance. It makes sense for them all to share one type, since they are all CloudWatch logs, rather than having the type be the log group name. The log group name is also set in the @log_group field, so queries can use that for filtering.
In my case, I did the following:
Deploy the modified Lambda
Reindex today's index (cwl-2018.03.07, for example) to change the type of the old documents from <log group name> to cwl (a reindex sketch follows below)
Entries from different log groups will now coexist.
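A minimal sketch of that reindex step, assuming the domain endpoint and index names below (adjust them for your setup) and assuming the domain accepts unsigned requests from your IP; otherwise the request needs to be SigV4-signed. Reindexing into a new index is one way to do it; you could also reindex back under the original name afterwards.
import json
import requests

es_endpoint = 'https://YOUR-ES-DOMAIN.eu-west-1.es.amazonaws.com'  # hypothetical domain endpoint
body = {
    'source': {'index': 'cwl-2018.03.07'},                      # today's index, old per-log-group types
    'dest': {'index': 'cwl-2018.03.07-retyped', 'type': 'cwl'}   # new index, single 'cwl' type
}
resp = requests.post(es_endpoint + '/_reindex',
                     headers={'Content-Type': 'application/json'},
                     data=json.dumps(body))
print(resp.status_code, resp.json())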
You can also modify the generated Lambda code as below to make it work with multiple CloudWatch log groups. If the Lambda function creates a different ES index for each log group, this problem is avoided. Find the Lambda function LogsToElasticsearch_<AWS-ES-DOMAIN-NAME>, then the function transform(payload), and change the index-name construction as follows.
// index name format: cwl-YYYY.MM.DD
// var indexName = [
//     'cwl-' + timestamp.getUTCFullYear(),              // year
//     ('0' + (timestamp.getUTCMonth() + 1)).slice(-2),  // month
//     ('0' + timestamp.getUTCDate()).slice(-2)          // day
// ].join('.');
var indexName = [
    'cwl-' + payload.logGroup.toLowerCase().split('/').join('-') + '-' + timestamp.getUTCFullYear(), // log group + year
    ('0' + (timestamp.getUTCMonth() + 1)).slice(-2),  // month
    ('0' + timestamp.getUTCDate()).slice(-2)          // day
].join('.');
Is it possible to forward all the CloudWatch log groups to a single index in ES? For example, having one index "rds-logs-*" to stream logs from all my available RDS instances, so that the error logs, slow-query logs, general logs, etc. of all RDS instances are pushed under the same index (rds-logs-*)?
I tried the above-mentioned code change, but it pushes only the last log group that I configured.
From AWS: by default, only one log group can stream log data into an Elasticsearch Service domain; attempting to stream two log groups at the same time results in the log data of one log group overriding the log data of the other.
Wanted to check if there is a workaround for this.
I want to enable data capture for a specific endpoint (so far, only via the console). The endpoint works fine and also logs & returns the desired results. However, no files are written to the specified S3 location.
Endpoint Configuration
The endpoint is based on a training job with a scikit-learn classifier. It has only one variant, an ml.m4.xlarge instance type. Data capture is enabled with a sampling percentage of 100%. As the data capture storage location I tried s3://<bucket-name> as well as s3://<bucket-name>/<some-other-path>. For "Capture content type" I tried leaving everything blank, setting text/csv in "CSV/Text", and application/json in "JSON".
Endpoint Invocation
The endpoint is invoked in a Lambda function with a client. Here's the call:
sagemaker_body_source = {
    "segments": segments,
    "language": language
}
payload = json.dumps(sagemaker_body_source).encode()
response = self.client.invoke_endpoint(EndpointName=endpoint_name,
                                       Body=payload,
                                       ContentType='application/json',
                                       Accept='application/json')
result = json.loads(response['Body'].read().decode())
return result["predictions"]
Internally, the endpoint uses a Flask API with an /invocation path that returns the result.
Logs
The endpoint itself works fine and the Flask API is logging input and output:
INFO:api:body: {'segments': [<strings...>], 'language': 'de'}
INFO:api:output: {'predictions': [{'text': 'some text', 'label': 'some_label'}, ....]}
Data capture can be enabled by using the SDK, as shown below:
from sagemaker.model_monitor import DataCaptureConfig

data_capture_config = DataCaptureConfig(
    enable_capture=True, sampling_percentage=100, destination_s3_uri=s3_capture_upload_path
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)
Make sure to reference your data capture config in your endpoint creation step. I've always seen this method work. Can you try this and let me know? Reference notebook
NOTE - I work for AWS SageMaker, but my opinions are my own.
So the issue seemed to be related to the IAM role. The default role (ModelEndpoint-Role) does not have permission to write files to S3. It worked via the SDK since that path uses a different role in SageMaker Studio. I did not receive any error message about this.
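In case it helps, here is a minimal sketch of granting the endpoint's execution role write access to the capture bucket. The policy name is hypothetical; the role and bucket placeholders are the ones mentioned in this question, so adjust them to your setup.
import json
import boto3

iam = boto3.client('iam')
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:PutObject'],
        'Resource': 'arn:aws:s3:::<bucket-name>/*'   # the data capture bucket
    }]
}
iam.put_role_policy(
    RoleName='ModelEndpoint-Role',        # the endpoint's execution role
    PolicyName='AllowDataCaptureWrites',  # hypothetical policy name
    PolicyDocument=json.dumps(policy)
)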
I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch dashboard through the CDK, and I need a way to add all log groups ending with a suffix to the query.
Is there a way to either loop through all existing log groups and find the ones with the correct suffix, or a way to search through log groups?
const queryWidget = new LogQueryWidget({
    title: "Error Rate",
    logGroupNames: ['/aws/lambda/someLogGroup'],
    view: LogQueryVisualizationType.TABLE,
    queryLines: [
        'fields @message',
        'filter @message like /(?i)error/'
    ],
})
Is there any way I can add it so logGroupNames contains all log groups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make the query adjust automatically when you add a new log group) without using something like an AWS Lambda function that periodically updates your log query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside it to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when CDK synthesizes, it will make an API call to fetch the log groups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack. For example:
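A minimal sketch of that idea, assuming a Python CDK (v2) app; the same approach works in TypeScript with the AWS SDK for JavaScript. The suffix '-errors' is a hypothetical example, and the widget settings mirror the question.
import boto3
from aws_cdk import aws_cloudwatch as cloudwatch

def log_group_names_with_suffix(suffix):
    """Return the names of all log groups whose name ends with the given suffix."""
    logs = boto3.client('logs')
    names = []
    for page in logs.get_paginator('describe_log_groups').paginate():
        for group in page['logGroups']:
            if group['logGroupName'].endswith(suffix):
                names.append(group['logGroupName'])
    return names

query_widget = cloudwatch.LogQueryWidget(
    title='Error Rate',
    log_group_names=log_group_names_with_suffix('-errors'),  # hypothetical suffix
    view=cloudwatch.LogQueryVisualizationType.TABLE,
    query_lines=[
        'fields @message',
        'filter @message like /(?i)error/',
    ],
)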
Finally, note that there is a limit on how many log groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
If you want to achieve this, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above) as part of the deployment. You can read data from the API call's response and act on it as you want.
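A rough sketch of such a custom resource, again assuming a Python CDK (v2) app; the log group name prefix is a hypothetical example, and note that individual response fields are read by path rather than as a whole list.
from aws_cdk import Stack
from aws_cdk import custom_resources as cr
from constructs import Construct

class DashboardStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Custom resource that calls CloudWatchLogs.describeLogGroups at deploy time
        list_groups = cr.AwsCustomResource(
            self, 'ListLogGroups',
            on_update=cr.AwsSdkCall(
                service='CloudWatchLogs',
                action='describeLogGroups',
                parameters={'logGroupNamePrefix': '/aws/lambda/'},  # hypothetical prefix
                physical_resource_id=cr.PhysicalResourceId.of('ListLogGroups'),
            ),
            policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
                resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
            ),
        )
        # Read a field from the SDK call's response and use it elsewhere in the stack
        first_group_name = list_groups.get_response_field('logGroups.0.logGroupName')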
Is there a simple way to retrieve all items from a DynamoDB table using a mapping template in an API Gateway endpoint? I usually use a Lambda to process the data before returning it, but this is such a simple task that a Lambda seems like overkill.
I have a table that contains data with the following format:
roleAttributeName   roleHierarchyLevel   roleIsActive   roleName
"admin"             99                   true           "Admin"
"director"          90                   true           "Director"
"areaManager"       80                   false          "Area Manager"
I'm happy just getting the data; the representation doesn't matter, as I can transform it further down in my code.
I've been looking around, but all tutorials explain how to get specific bits of data through queries and params like roles/{roleAttributeName}, whereas I just want to hit roles/ and get all items.
All you need to do is:
create a resource (without curly braces, since we don't need a particular item)
create a GET method
use Scan instead of Query as the Action when configuring the integration request
The integration request configuration is sketched below.
Now run a test; you should get the response.
To try it out in Postman, deploy the API first, then use the provided invoke URL followed by your resource name.
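For reference, here is a hedged boto3 sketch of that integration request: an AWS service integration that POSTs to DynamoDB's Scan action with a simple request mapping template. The API id, resource id, region, account, role ARN, and the table name "roles" are placeholders/assumptions; in the console these map to the AWS Service, HTTP method, Action, execution role, and body mapping template fields.
import boto3

apigw = boto3.client('apigateway')
apigw.put_integration(
    restApiId='a1b2c3d4e5',                 # hypothetical REST API id
    resourceId='abc123',                    # id of the /roles resource
    httpMethod='GET',
    type='AWS',                             # plain AWS service integration
    integrationHttpMethod='POST',           # DynamoDB actions are invoked via POST
    uri='arn:aws:apigateway:us-east-1:dynamodb:action/Scan',
    credentials='arn:aws:iam::123456789012:role/ApiGatewayDynamoDBScanRole',  # role allowed to call dynamodb:Scan
    requestTemplates={'application/json': '{"TableName": "roles"}'},
)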
API Gateway allows you to proxy DynamoDB as a service. Here you have an interesting tutorial on how to do it (you can ignore the part related to the index to make it work).
To retrieve all the items from a table, you can use Scan as the action in API Gateway. Keep in mind that DynamoDB limits the response size to 1 MB for both Scan and Query operations.
You can also cap the result set yourself, before that limit kicks in, by using the Limit parameter.
AWS DynamoDB Scan Reference
Trying to add tags to a CloudWatch Logs log group using the ResourceGroupsTaggingAPI service with boto3.
The code seems to execute without errors, but I don't see the tags being added.
How do I add tags to a CloudWatch Logs log group?
Code:
import boto3

log_group = []
client = boto3.client('logs')
client_api = boto3.client('resourcegroupstaggingapi')

def lambda_handler(event, context):
    paginator = client.get_paginator('describe_log_groups')
    response_iterator = paginator.paginate()
    for page in response_iterator:
        for grp in page['logGroups']:
            log_group.append(str(grp['arn']))
    client_api.tag_resources(
        ResourceARNList=log_group,
        Tags={
            'Header1': 'value1',
            'Header2': 'value2',
            'header3': 'value3'}
    )
A couple of things here.
Print the response of the tag_resources call; it may contain failure details that point you in the right direction.
You're using the newer tag_resources API, which is built on top of the older per-resource APIs, such as tag_log_group in this case. This means your Lambda needs permissions for both tag:TagResources and logs:TagLogGroup.
You are passing the ARNs returned by describe_log_groups. Those ARNs have the form arn:aws:logs:REGION:ACCOUNT:log-group:LOG_GROUP_NAME:*. Since the underlying tag_log_group API works with log group names rather than ARNs, you need to drop the trailing :* from each ARN so the correct log group name can be extracted (see the sketch after this list).
I'm not 100% sure tags on log groups even show up in the console. You may need to call the list_tags_log_group API to verify the tags are there.
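A minimal sketch of that ARN fix: strip the trailing :* before calling tag_resources (the tag keys and values are just the ones from the question).
import boto3

logs = boto3.client('logs')
tagging = boto3.client('resourcegroupstaggingapi')

log_group_arns = []
for page in logs.get_paginator('describe_log_groups').paginate():
    for group in page['logGroups']:
        arn = group['arn']
        if arn.endswith(':*'):
            arn = arn[:-2]   # arn:aws:logs:REGION:ACCOUNT:log-group:LOG_GROUP_NAME
        log_group_arns.append(arn)

# tag_resources accepts at most 20 ARNs per call, so tag in batches
for i in range(0, len(log_group_arns), 20):
    response = tagging.tag_resources(
        ResourceARNList=log_group_arns[i:i + 20],
        Tags={'Header1': 'value1', 'Header2': 'value2', 'header3': 'value3'},
    )
    print(response.get('FailedResourcesMap', {}))  # non-empty means some resources were not tagged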
I would like to use Boto3 to generate a list of EC2 instances along with their state changes (pending, running, shutting-down, terminated, etc.) between two datetimes. My understanding is that AWS Config maintains histories of EC2 instances even if the instance no longer exists. I have taken a look at this document; however, I am having difficulty understanding which functions to use to accomplish the task at hand.
Thank you
Under the assumption that you have already set up AWS Config to record the AWS::EC2::Instance resource type, this approach will suit your need.
1) Get the list of EC2 instances using the list_discovered_resources API. Ensure includeDeletedResources is set to True if you want to include deleted resources in the response.
import boto3

client = boto3.client('config')
response = client.list_discovered_resources(
    resourceType='AWS::EC2::Instance',
    limit=100,
    includeDeletedResources=True  # pass nextToken from the previous response when paginating
)
Parse the response and store the resource IDs.
2) Pass each resource ID to the get_resource_config_history API.
from datetime import datetime

response = client.get_resource_config_history(
    resourceType='AWS::EC2::Instance',
    resourceId='i-0123af12345be162h5',  # enter your EC2 instance id here
    laterTime=datetime(2018, 1, 7),     # end date; defaults to the current time
    earlierTime=datetime(2018, 1, 1),   # start date
    chronologicalOrder='Reverse',       # or 'Forward'
    limit=100                           # pass nextToken from the previous response when paginating
)
You can parse the response and get the state changes that the EC2 instance went through over that time period.
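A minimal sketch of parsing that response into (timestamp, state) pairs; for EC2 instances, the configuration field of each configuration item is a JSON string that includes a state object.
import json

for item in response.get('configurationItems', []):
    captured_at = item['configurationItemCaptureTime']
    configuration = json.loads(item['configuration']) if item.get('configuration') else {}
    state = configuration.get('state', {}).get('name', 'unknown')
    print(captured_at, item['resourceId'], state)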