In the gCloud logs, is there a way to log the request to, or the response from, the API?
For example, I notice that using the text recognition API under different lighting conditions on the same text produces a range of very different results, so it would be useful to track these things.
Yes, by writing to the Stackdriver logs in your code. Stackdriver does not log request or response bodies on its own; this is something your code will need to do. Depending on your programming language, this can be as simple as a print statement.
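For example, a minimal Python sketch of this approach, assuming the Cloud Vision text detection API and a hypothetical log name:

from google.cloud import logging as gcl  # Stackdriver / Cloud Logging client
from google.cloud import vision

log_client = gcl.Client()
logger = log_client.logger("vision-text-audit")  # hypothetical log name
vision_client = vision.ImageAnnotatorClient()

def detect_text_logged(image_bytes):
    """Run text detection and log the request context and result together."""
    response = vision_client.text_detection(image=vision.Image(content=image_bytes))
    text = response.text_annotations[0].description if response.text_annotations else ""
    # Write a structured record so results for the same text under different
    # lighting conditions can be compared later in the log viewer.
    logger.log_struct({
        "request_size_bytes": len(image_bytes),
        "detected_text": text,
    })
    return text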
I am trying to write a Lua library for Amazon SES that will allow me to send API requests. I've pored over the documentation and various examples, but I keep getting the following error:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
Somewhere along the line, one of my functions is formatting data incorrectly or otherwise causing the end result of my signing process to not match what Amazon generates on their side, so my request is being rejected. However, Amazon doesn't provide any useful information in their error response, such as the canonical request they generate, which I could compare to mine to spot any discrepancies. My best attempt at debugging this has been to use the examples they provide in their documentation (see below) as "known good" comparisons and to try to generate the same hashes with my functions... except that they don't provide all of the information necessary to do so.
In Task 3 of their documentation process, they do share an example secret key, and I've been able to use it to verify that at least part of my code is working as intended, but that key does not seem to generate the same hashes in the other tasks. Am I missing something here, or is there a better way to figure this problem out?
Below are the example keys I was able to pull out of various Task pages in their documentation:
api_key = "AKIDEXAMPLE"
api_secret = "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY"
In Amazon's Documentation for Task 1, they provide the final canonical request and a paired hash:
GET
/
Action=ListUsers&Version=2010-05-08
content-type:application/x-www-form-urlencoded; charset=utf-8
host:iam.amazonaws.com
x-amz-date:20150830T123600Z

content-type;host;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

and the paired hash of that canonical request:

f536975d06c0309214f805bb90ccff089219ecd68b2577efef23edd43b7e1a59
However, when I use the above secret to hash the above canonical request, I get a different hash:
d2da54b4842d8ca1acf1cf197827f4d75a742918af868d472e883781624a8bb5
So they must be using a different secret in some examples without actually documenting it... unless I missed something?
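For reference, the "hash the canonical request" step in SigV4 is a plain SHA-256 digest; no secret is involved at that stage (the secret only enters later, when deriving the signing key via HMAC). A minimal Python sketch that should reproduce the documented hash:

import hashlib

# The Task 1 canonical request, with the blank line after the headers
# and the empty-payload hash as the final line.
canonical_request = "\n".join([
    "GET",
    "/",
    "Action=ListUsers&Version=2010-05-08",
    "content-type:application/x-www-form-urlencoded; charset=utf-8",
    "host:iam.amazonaws.com",
    "x-amz-date:20150830T123600Z",
    "",
    "content-type;host;x-amz-date",
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
])

print(hashlib.sha256(canonical_request.encode("utf-8")).hexdigest())
# Expected: f536975d06c0309214f805bb90ccff089219ecd68b2577efef23edd43b7e1a59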
Documentation: https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
WIP Code: https://hastebin.com/mezugukefu.lua
In the app I'm working on, we have a process whereby a user can download a CSV or PDF version of their data. The generation works great, but I'm trying to get the file to download and am running into all sorts of problems. We're using API Gateway for all the requests, and the generation happens inside a Lambda on a POST request. The GET endpoint takes in a file_name parameter, constructs the path in S3, and makes the request directly there.

The problem I'm having is when I try to transform the response. I get a 500 error, and when I look at the logs, it says: Execution failed due to configuration error: Unable to transform response. So, clearly, that's where I've spent most of my time. I've tried at least 50 different iterations of templates and combinations with little success. The closest I've gotten is the following code, where the CSV downloads fine but the PDF is no longer a valid PDF:
CSV:
#set($contentDisposition = "attachment;filename=${method.request.querystring.file_name}")
$input.body
#set($context.responseOverride.header.Content-Disposition = $contentDisposition)
PDF:
#set($contentDisposition = "attachment;filename=${method.request.querystring.file_name}")
$util.base64Encode($input.body)
#set($context.responseOverride.header.Content-Disposition = $contentDisposition)
where contentHandling = CONVERT_TO_TEXT. My binaryMediaTypes has just application/pdf and that's it. My goal is to get this working without having to offload the problem to a Lambda, so we don't have that overhead at the download step. Any ideas on how to do this right?
As another note, I've tried CONVERT_TO_BINARY and just leaving it as Passthrough. I've tried it with text/csv as an additional binary media type, and I've tried different combinations of base64 encoding and decoding. I know the data is coming back correctly from S3, but the transformation is where it breaks. I'm happy to post more logs if need be. Also, I'm pretty sure this makes sense on StackOverflow, but if it would fit another StackExchange site better, please let me know.
Resources I've looked at:
https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#util-template-reference
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-workflow.html
https://docs.amazonaws.cn/en_us/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-control-service-api.html
(But they're all so confusing...)
EDIT: One idea I've had is to do CONVERT_TO_BINARY and somehow base64-encode the CSVs in the transformation, but I can't figure out how to do it right. I keep feeling like I'm misunderstanding the order of things, specifically when the "CONVERT" part happens, if that makes any sense.
EDIT 2: So, I got rid of the $util.base64Encode in the PDF template, and now I get a PDF that's empty. The actual file in S3 definitely has content, but for some reason CONVERT_TO_TEXT is not handling it right, or I'm still not understanding how this all works.
Had similar issues. One major thing is the Accept header. I was testing in Chrome, which sends an Accept header like text/html,application/xhtml+xml,.... API Gateway ignores everything except the first entry (text/html), and it will then convert any response from S3 to base64 to try to conform to text/html.
At last, after trying everything else, I tested via Postman, which defaults the Accept header to */*. Also set your content handling on the Integration Response to Passthrough. And everything was working!
One other thing is to pass the Content-Type and Content-Length headers through (add them in the Method Response first and then in the Integration Response):
Content-Length: integration.response.header.Content-Length
Content-Type: integration.response.header.Content-Type
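To illustrate the Accept header point, here's a minimal sketch of testing the download outside the browser, assuming Python requests and a placeholder endpoint URL and file name:

import requests

# Hypothetical deployed API Gateway endpoint
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/download"

resp = requests.get(
    url,
    params={"file_name": "report.pdf"},
    # A wildcard Accept lets API Gateway match the binary media type
    # instead of trying to convert the PDF body to text.
    headers={"Accept": "*/*"},
)
print(resp.headers.get("Content-Type"), resp.headers.get("Content-Length"))
with open("report.pdf", "wb") as f:
    f.write(resp.content)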
We have the CloudWatch Logs agent set up, and the streamed logs have a timestamp prepended to each line, which we can see after export:
2017-05-23T04:36:02.473Z "message"
Is there any configuration in the CloudWatch Logs agent setup that stops this timestamp from being prepended to each log entry?
Is there a way to export only the messages of CloudWatch log events? We don't want the timestamp on our exported logs.
Thanks
Assume that you are able to retrieve those logs using your Lambda function (Python 3.x). You can then use a regular expression to identify the timestamp and write a function to strip it from the event log.
^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t
The above matches a timestamp such as 2019-10-10T22:11:00.123Z followed by a tab.
Here is a simple Python function:
import re

def strip(eventLog):
    # Match a leading timestamp like 2019-10-10T22:11:00.123Z followed by a tab
    timestamp = r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t'
    return re.sub(timestamp, "", eventLog)
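For example, strip("2019-10-10T22:11:00.123Z\tTask timed out") returns "Task timed out".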
I don't think it's possible. I needed the exact same behavior you are asking for, and it looks like it's not possible unless you implement a man-in-the-middle processor that removes the timestamp from every log message, as suggested in the other answer.
Checking the CloudWatch Logs client API in the first place, you are required to send a timestamp with every log message you send to CloudWatch Logs (API reference).
The export-logs-to-S3 task API also has no parameter to control this behavior (API reference).
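As a sketch of that man-in-the-middle approach, assuming boto3 and a hypothetical log group name, you could pull the events through the API and keep only the message field:

import boto3

logs = boto3.client("logs")

def export_messages(log_group):
    """Collect only the message text of each event, dropping the timestamps."""
    paginator = logs.get_paginator("filter_log_events")
    messages = []
    for page in paginator.paginate(logGroupName=log_group):
        messages.extend(event["message"] for event in page["events"])
    return "\n".join(messages)

print(export_messages("/aws/lambda/my-function"))  # hypothetical log group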
I have a Lambda function and its logs in CloudWatch (log group and log stream). Is it possible to filter (in the CloudWatch Management Console) all logs that contain "error"? For example, logs containing "Process exited before completing request".
In Log Groups there is a "Search Events" button; you must click on it first. It then changes to "Filter Streams", and you can type your filter and select the beginning date-time.
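If you'd rather script the same search, here's a sketch with boto3 (hypothetical log group name; phrases with spaces must be double-quoted inside the filter pattern):

import boto3

logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/aws/lambda/my-function",  # hypothetical name
    filterPattern='"Process exited before completing request"',
)
for event in resp["events"]:
    print(event["message"])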
So this is kind of a side issue, but it was relevant for us. (I posted this to another answer on StackOverflow but thought it would be relevant to this conversation too)
We've noticed that tailing and searching logs gets really slow once a log group has a lot of log streams in it, such as when an AWS Lambda function has had a lot of invocations. This is because "tail"-type utilities and searches need to connect to each log stream to run. Log events expire and are deleted according to the retention policy you set on the log group itself, but the log streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs to be searched.
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), an AWS extension to CloudWatch Logs that provides a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute. That's okay if you really need the data.
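As a sketch of the kind of Insights query I mean, run here via boto3 with a hypothetical log group name:

import time
import boto3

logs = boto3.client("logs")
query = "fields @timestamp, @message | filter @message like /error/ | sort @timestamp desc | limit 20"

start = logs.start_query(
    logGroupName="/aws/lambda/my-function",  # hypothetical
    startTime=int(time.time()) - 3600,       # last hour
    endTime=int(time.time()),
    queryString=query,
)
# Poll until the query finishes, then print the matching rows
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
for row in result.get("results", []):
    print(row)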
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can run queries like the one you're asking about. You can either do a full-text search for "error", or, if your log data is structured (JSON), use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update as CloudWatch has changed since 2016:
In Log Groups there is now a "Search all" button for a full-text search; just type your search term there.
I am working on setting up an HTTP Endpoint in JitterBit. For this endpoint, we have a system that will call it and pass parameters to it through the URL, for example:
http://[server]:[server port]/EndPoint?Id={SalesForecID}&Status={updated status in SF}
Would I need to use the Text File, JSON, or XML method for this? As a follow-up, if it is JSON or XML, what would the file that is uploaded when creating the endpoint look like? I have tried the text file version with no success.
Any help would be great.
I'm just seeing your question now. You may have found a solution, but this took me a while to figure out, so I'll respond anyway.
To get the passed values, go ahead and create your HTTP Endpoint and add a new operation triggered by it. Then, in your new operation create a script with something like the following:
$SalesForceID = $jitterbit.networking.http.query.Id
$UpdatedStatus = $jitterbit.networking.http.query.Status
You can then use these variables elsewhere in your operation chain.
If you want to feed these values into another RESTful web service (i.e. an HTTP Source), you'll have to create a separate transformation operation with the HTTP Source. You'd set that source URL to be: http://mysfapp.com/call?Id=[SalesForceID]&Status=[UpdatedStatus]. I'm not sure why, but you can't have the script that extracts the parameters from the Endpoint and the HTTP Source that uses them in the same operation.
Cheers