Not able to get Amazon SNS logs

Below is the log stream I am getting with the CLI command:
I am also getting the log streams as below:
But while accessing a log stream I am getting the following error:
Could you please help me figure out where I am going wrong, or why this error is coming up? Thanks in advance.

Try passing the values for the log stream name and log group name as double-quoted strings; the params may not be getting passed correctly.
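For example (a sketch with hypothetical names; SNS delivery status logs typically go to a log group named like sns/<region>/<account-id>/<topic>):
> aws logs get-log-events --log-group-name "sns/us-east-1/123456789012/MyTopic" --log-stream-name "my-stream-name"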

Related

Kinesis put records not returned in response from get records request

I have a Scala app using the aws-java-sdk-kinesis to issue a series of putRecord requests to a local Kinesis stream.
The response returned after each putRecord request indicates it is successfully putting the records into the stream.
The Scala code making the PutRecordRequest:
import java.nio.ByteBuffer
import scala.util.Try
import com.amazonaws.services.kinesis.AmazonKinesis
import com.amazonaws.services.kinesis.model.{PutRecordRequest, PutRecordResult}

def putRecord(kinesisClient: AmazonKinesis, value: Array[Byte], streamName: String): Try[PutRecordResult] = Try {
  val putRecordRequest = new PutRecordRequest()
  putRecordRequest.setStreamName(streamName)
  putRecordRequest.setData(ByteBuffer.wrap(value))
  putRecordRequest.setPartitionKey("integrationKey")
  kinesisClient.putRecord(putRecordRequest)
}
To confirm this, I have a small Python app that consumes from the stream (initialStreamPosition: LATEST) and prints the records it finds by iterating through the shard iterators. Unexpectedly, however, it returns an empty set of records for each obtained shardIterator.
Trying this using the aws cli tool, I do however get records returned for the same shardIterator. I am confused. How can that be?
Running the Python consumer (with LATEST) returns:
Shard-iterators: ['AAAAAAAAAAH9AUYVAkOcqkYNhtibrC9l68FcAQKbWfBMyNGko1ypHvXlPEuQe97Ixb67xu4CKzTFFGoLVoo8KMy+Zpd+gpr9Mn4wS+PoX0VxTItLZXxalmEfufOqnFbz2PV5h+Wg5V41tST0c4X0LYRpoPmEnnKwwtqwnD0/VW3h0/zxs7Jq+YJmDvh7XYLf91H/FscDzFGiFk6aNAVjyp+FNB3WHY0d']
Records: []
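For reference, here is a minimal sketch of that kind of consumer using boto3 (the stream name and the local endpoint URL are assumptions, not the real app's config):

import boto3

# List the shards, obtain a LATEST iterator for each, and print whatever
# get_records returns for it.
kinesis = boto3.client("kinesis", endpoint_url="http://localhost:4567")

description = kinesis.describe_stream(StreamName="my-stream")["StreamDescription"]
for shard in description["Shards"]:
    shard_iterator = kinesis.get_shard_iterator(
        StreamName="my-stream",
        ShardId=shard["ShardId"],
        ShardIteratorType="LATEST",
    )["ShardIterator"]
    response = kinesis.get_records(ShardIterator=shard_iterator)
    print("Shard-iterators:", [shard_iterator])
    print("Records:", response["Records"])

Note that with LATEST, only records put after the iterator is obtained are returned, so the producer must still be writing while this runs.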
If doing the "same" with the aws cli tool however I get:
> aws kinesis get-records --shard-iterator AAAAAAAAAAH9AUYVAkOcqkYNhtibrC9l68FcAQKbWfBMyNGko1ypHvXlPEuQe97Ixb67xu4CKzTFFGoLVoo8KMy+Zpd+gpr9Mn4wS+PoX0VxTItLZXxalmEfufOqnFbz2PV5h+Wg5V41tST0c4X0LYRpoPmEnnKwwtqwnD0/VW3h0/zxs7Jq+YJmDvh7XYLf91H/FscDzFGiFk6aNAVjyp+FNB3WHY0d --endpoint-url http://localhost:4567
Returns:
{"Records":[{"SequenceNumber":"49625122979782922897342908653629584879579547704307482626","ApproximateArrivalTimestamp":1640263797.328,"Data":{"type":"Buffer","data":[123,34,116,105,109,101,115,116,97,109,112,34,58,49,54,52,48,50,54,51,55,57,55,44,34,100,116,109,34,58,49,54,52,48,50,54,51,55,57,55,44,34,101,34,58,34,101,34,44,34,116,114,97,99,107,101,114,95,118,101,114,115,105,111,110,34,58,34,118,101,114,115,105,111,110,34,44,34,117,114,108,34,58,34,104,116,116,112,115,58,47,47,116,101,115,116,46,99,111,109,34,44,34,104,99,99,34,58,102,97,108,115,101,44,34,115,99,34,58,49,44,34,99,111,110,116,101,120,116,34,58,123,34,101,116,34,58,34,101,116,34,44,34,100,101,118,34,58,34,100,101,118,34,44,34,100,119,101,108,108,34,58,49,44,34,111,105,100,34,58,49,44,34,119,105,100,34,58,49,44,34,115,116,97,116,101,34,58,123,34,108,99,34,58,123,34,99,111,100,101,34,58,34,115,111,109,101,45,99,111,100,101,34,44,34,105,100,34,58,34,115,111,109,101,45,105,100,34,125,125,125,44,34,121,117,105,100,34,58,34,102,53,101,52,57,53,98,102,45,100,98,102,100,45,52,102,53,102,45,56,99,56,98,45,53,97,56,98,50,56,57,98,52,48,49,97,34,125]},"PartitionKey":"integrationKey"},{"SequenceNumber":"49625122979782922897342908653630793805399163707871723522","ApproximateArrivalTimestamp":1640263817.338,"Data":{"type":"Buffer","data":[123,34,116,105,109,101,115,116,97,109,112,34,58,49,54,52,48,50,54,51,56,49,55,44,34,100,116,109,34,58,49,54,52,48,50,54,51,56,49,55,44,34,101,34,58,34,101,34,44,34,116,114,97,99,107,101,114,95,118,101,114,115,105,111,110,34,58,34,118,101,114,115,105,111,110,34,44,34,117,114,108,34,58,34,104,116,116,112,115,58,47,47,116,101,115,116,46,99,111,109,34,44,34,104,99,99,34,58,102,97,108,115,101,44,34,115,99,34,58,49,44,34,99,111,110,116,101,120,116,34,58,123,34,101,116,34,58,34,101,116,34,44,34,100,101,118,34,58,34,100,101,118,34,44,34,100,119,101,108,108,34,58,49,44,34,111,105,100,34,58,49,44,34,119,105,100,34,58,49,44,34,115,116,97,116,101,34,58,123,34,108,99,34,58,123,34,99,111,100,101,34,58,34,115,111,109,101,45,99,111,100,101,34,44,34,105,100,34,58,34,115,111,109,101,45,105,100,34,125,125,125,44,34,121,117,105,100,34,58,34,102,53,101,52,57,53,98,102,45,100,98,102,100,45,52,102,53,102,45,56,99,56,98,45,53,97,56,98,50,56,57,98,52,48,49,97,34,125]},"PartitionKey":"integrationKey"},{"SequenceNumber":"49625122979782922897342908653632002731218779711435964418","ApproximateArrivalTimestamp":1640263837.347,"Data":{"type":"Buffer","data":[123,34,116,105,109,101,115,116,97,109,112,34,58,49,54,52,48,50,54,51,56,51,55,44,34,100,116,109,34,58,49,54,52,48,50,54,51,56,51,55,44,34,101,34,58,34,101,34,44,34,116,114,97,99,107,101,114,95,118,101,114,115,105,111,110,34,58,34,118,101,114,115,105,111,110,34,44,34,117,114,108,34,58,34,104,116,116,112,115,58,47,47,116,101,115,116,46,99,111,109,34,44,34,104,99,99,34,58,102,97,108,115,101,44,34,115,99,34,58,49,44,34,99,111,110,116,101,120,116,34,58,123,34,101,116,34,58,34,101,116,34,44,34,100,101,118,34,58,34,100,101,118,34,44,34,100,119,101,108,108,34,58,49,44,34,111,105,100,34,58,49,44,34,119,105,100,34,58,49,44,34,115,116,97,116,101,34,58,123,34,108,99,34,58,123,34,99,111,100,101,34,58,34,115,111,109,101,45,99,111,100,101,34,44,34,105,100,34,58,34,115,111,109,101,45,1pre05,100,34,125,125,125,44,34,121,117,105,100,34,58,34,102,53,101,52,57,53,98,102,45,100,98,102,100,45,52,102,53,102,45,56,99,56,98,45,53,97,56,98,50,56,57,98,52,48,49,97,34,125]},"PartitionKey":"integrationKey"}],"NextShardIterator":"AAAAAAAAAAE+9W/bI4CsDfzvJGN3elplafFFBw81/cVB0RjojS39hpSglW0ptfsxrO6dCWKEJWu1f9BxY7O
ZJS9uUYyLn+dvozRNzKGofpHxmGD+/1WT0MVYMv8tkp8sdLdDNuVaq9iF6aBKma+e+iD079WfXzW92j9OF4DqIOCWFIBWG2sl8wn98figG4x74p4JuZ6Q5AgkE41GT2Ii2J6SkqBI1wzM","MillisBehindLatest":0}
I have used this same Python consumer in many other settings to introspect other Kinesis streams we have, and it works as expected. But for some reason, here it's not working.
Does anyone have a clue what might be going on here?
So I was finally able to identify the issue, and perhaps it will be useful for someone else with a similar problem.
In my setup, I am using a local Kinesis stream (kinesalite), which doesn't support CBOR. You have to disable CBOR explicitly; otherwise you will see the following error when trying to deserialize the received record:
Unable to unmarshall response (We expected a VALUE token but got: START_OBJECT). Response Code: 200, Response Text: OK
In my case, setting the environment variable AWS_CBOR_DISABLE=1 did the trick.
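If you would rather disable it in code, the AWS SDK for Java (v1) also recognizes an equivalent system property that can be set before the Kinesis client is created, e.g. System.setProperty("com.amazonaws.sdk.disableCbor", "true").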

Google Cloud Alert Policies - Include Error Message Within Documentation Body

I can't seem to find much information on this one. I have a requirement to include the error message body in the email alert that is sent out when a GCP alert policy is triggered.
I can see from the documentation page that certain variables relating to the policy itself can be added, but does anyone know of an easy way to parse the failure payload out of the log contents and pass it along? This may be a failure of understanding on my part.
What I have tried: I set up a Cloud Function that raises an error with a specific message whenever an HTTP request is made, and I set up a policy on the logs for this function that triggers on error severity. This works fine, but what I can't do is surface the error message in the email alert body. Does anyone know of a good workaround, or am I missing something?
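One avenue that may help (hedged; I haven't verified it against this exact setup): log-based alert policies in Cloud Monitoring support label extractors, and the policy's documentation body can then reference extracted labels via ${log.extracted_label.KEY}. A minimal sketch of a documentation body, assuming a label extractor named message has been configured on the alert policy:

Function ${resource.label.function_name} failed with: ${log.extracted_label.message}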

Amazon Connect - cannot debug error in Get Customer Input Stage

I am new to Amazon Connect and Lex and have just started creating simple projects. I have already created an entire contact flow which uses Lex and Lambda for routing. The problem is that the "Get Customer Input" stage seems to always go to the error output, and I could not figure out why. I tried to check if there's any way I can find logs for each stage in the contact flow, but could not find any.
Can anyone help me solve this issue? I need to see logs to find out the cause of the error.
EDIT: I got the contact flow logs from CloudWatch. See below. I can't find any significant error in them.
{
  "Results": "Error",
  "ContactId": "<contact-id>",
  "ContactFlowId": "<the contact flow id>",
  "ContactFlowModuleType": "GetUserInput",
  "Timestamp": "2019-07-08T08:27:01.185Z"
}
You might be getting this because Lex itself is returning an error, and that is why the flow goes down the error branch.
You can check the logs for Connect and Lex in Amazon CloudWatch.
You can also share details from the logs or a screenshot of the exact error you are getting, so that I can help.
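For example, once contact flow logging is enabled, you can pull the flow logs with the CLI (the log group name here is an assumption; Connect typically writes them to /aws/connect/<instance-alias>):
> aws logs filter-log-events --log-group-name /aws/connect/my-instance --filter-pattern Error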
This might be due to a language settings mismatch.
If you're using Lex V2, make sure you set the proper language attribute as well. The easiest way is to use the Set Voice block in your contact flow; at the very bottom of the block you can enable "set language attribute".
Original answer: https://repost.aws/questions/QUn9bLLnclQxmD_DMBgfB9_Q/amazon-connect-error-using-lex-as-customer-input

AWS EMR: Error parsing parameter: Expected: '=', received: 'EOF' for input:

I'm trying to create a cluster from inside one of my EC2 instances, typing the following command to start my cluster:
aws emr create-cluster --release-label emr-5.20.0 --instance-groups instance-groups.json --auto-terminate and so on...
I receive the following error:
Error parsing parameter '--instance-groups': Expected: '=', received: 'EOF' for input:
instance-groups.json
^
I already tried --instance-groups=instance-groups.json, but I get the same error message.
What's wrong here?
The reason this was failing is that the AWS CLI is strict about how paths to local files are provided.
So, if you want to read the file instance-groups.json (assuming it is in the same directory from which you're running the aws emr CLI command), you must provide file://instance-groups.json as the filename instead of the bare instance-groups.json.
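The command from the question would then look like:
> aws emr create-cluster --release-label emr-5.20.0 --instance-groups file://instance-groups.json --auto-terminate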
I got this same error message when importing a JSON file into AWS DynamoDB; I was trying to import it from an S3 bucket.
Error parsing parameter '--instance-groups': Expected: '=', received: 'EOF' for input: instance-groups.json
The issue was fixed when I moved the file locally and executed the command with the file:// prefix.
So, thanks!
You have to provide the value in key=value shorthand form, like:
--key Name=123456789
I had a similar error, "Expected: ',', received: 'EOF' for input:". I noticed one of my arguments contained a string with a space. Fixing the space resolved it.
This way works too:
--lifecycle-configuration file://C:/Users/MyUser/Desktop/AMZ/lifecycle.json

Lambda.FunctionError in my Elasticsearch service log

I have an AWS Lambda function that is connected to a Kinesis Firehose delivery stream. In my logs, the Lambda function executes perfectly and returns the data I want.
On my Kinesis Firehose delivery stream's dashboard, in the monitoring section, it looks like I am getting incoming bytes and incoming records, since there is data in those graphs. There is also data in the ExecuteProcessing Duration graph, but the ExecuteProcessing Success graph shows a flat line at 0, so I am guessing the processing is failing.
In the Elasticsearch logs I am getting a Lambda.FunctionError with a message that says: "The Lambda function was successfully invoked but it returned an error result."
I am new to working with AWS and I am having trouble debugging this error code. Any help is appreciated.
The first thing you must check is the return value of the function. Remember that you must return an array containing the same number of records that came in, where each record's result is 'Ok', 'Dropped', or 'ProcessingFailed', in the following structure:

import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))
        output_record = {
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(json.dumps(payload).encode('utf-8')).decode('utf-8')
        }
        output.append(output_record)
    return {'records': output}

If you forget to return this array, you will get this error message.
If the Lambda function is returning an error, then you should be able to find more information in the CloudWatch logs for the function in question. These docs describe the different ways to access the logs. If the logs don't provide enough information, you might consider altering the function to write more information to stdout or stderr.
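As a sketch of that last suggestion (standard Python logging on Lambda; anything written via print or the logging module lands in the function's CloudWatch log group):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Received %d records", len(event.get("records", [])))
    try:
        ...  # existing transformation logic goes here
    except Exception:
        logger.exception("Record transformation failed")
        raise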