How to fix mangled log rows with formatting in CloudWatch? - amazon-web-services

I'm looking at my logs in CloudWatch and it seems that there is some sort of formatting problem. This is what I see there:
[90m2023-02-13 21:07:21.521 [39m[1m[94mINFO[39m[22m [90m[PrepareNextTransaction dist/apps/admin/.next/server/chunks/813.js:5114 AsyncTask.handler][39m
If I run this app locally I can see the same rows properly:
2023-02-13 21:10:09.797 INFO [PrepareNextTransaction AsyncTask.execute]
It seems to me that CloudWatch doesn't understand the formatting, but I don't see a setting to fix this.
How can I do so?

The formatting problem you're encountering in CloudWatch is due to the use of ANSI escape codes in your log output.
ANSI escape codes are sequences of characters used to format text in a terminal. For example, the code \033[31m changes the text color to red.
However, not all applications that display text support ANSI escape codes, and the CloudWatch console is one that doesn't: it shows the raw codes instead of applying them.
It would be best to investigate how to properly configure your application's log output so that it does not emit ANSI codes to CloudWatch.
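If you cannot change the logger's configuration, a workaround is to strip the escape sequences before the lines are written. Below is a minimal TypeScript sketch, assuming a Node.js app like the one in the question; the stripAnsi name and the sample string are only illustrations. Many color libraries also disable colors on their own when the NO_COLOR environment variable is set or when stdout is not a TTY, so checking your logger's color options first is worthwhile.

// Minimal sketch: remove the common CSI color/formatting sequences (ESC [ ... m) from a log line.
const ANSI_PATTERN = /\u001b\[[0-9;]*m/g;

function stripAnsi(line: string): string {
  return line.replace(ANSI_PATTERN, "");
}

// Example: the colored prefix from the question becomes plain text.
const colored = "\u001b[90m2023-02-13 21:07:21.521 \u001b[39m\u001b[1m\u001b[94mINFO\u001b[39m\u001b[22m";
console.log(stripAnsi(colored)); // prints "2023-02-13 21:07:21.521 INFO"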

Related

Preserve Indentation in GCP Logs

Is there a way to preserve the indentation of lines in GCP logs?
I have a hard time reading scopes when every line is at the same level visually. Below is a screenshot of an example:
The logs actually do preserve indentation, as Srividya pointed out.
I've since edited my code, but I believe this was the result of JSON.stringify or some sort of string conversion function that did not preserve tabs when I printed an object.
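If JSON.stringify was indeed the culprit, passing an indent width as its third argument keeps the nesting visible. A small sketch (the payload object is just an illustration):

const payload = { request: { id: 42, nested: { ok: true } } };
console.log(JSON.stringify(payload));          // {"request":{"id":42,"nested":{"ok":true}}} - one flat line
console.log(JSON.stringify(payload, null, 2)); // indented, multi-line output that keeps the scopes readable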

SageMaker ANSI escape codes

I'm using a library in a SageMaker training script that includes print statements with characters like tabs. When I look in my SM cloudwatch training logs, they're filled with ANSI escape codes like #011 (in place of tabs). This makes the logs much more difficult to read.
Is there any way I can prevent this behavior? Whether that be through a modification of my Dockerfile or my train.py script?

Google Cloud Dataflow removing accents and special chars with '??'

This is going to be quite a hit-or-miss question, as I don't really know which context or piece of code to give you; it's one of those "it works locally" situations (and it does!).
The situation here is that I have several services, and there's a step where messages are put in a PubSub topic waiting for the Dataflow consumer to handle them and save them as .parquet files (I also have another consumer which sends the payload to an HTTP endpoint).
The thing is, the message in that service, prior to sending it to that PubSub topic, seems to be correct; the Stackdriver logs show all the chars as they should be.
However, when I check the final output in the .parquet file or at the HTTP endpoint, I just see, for example, h?? instead of hí, which seems pretty weird since running everything locally produces the correct output.
I can only think of a server-side encoding issue when deploying Dataflow as a job rather than running it locally.
Hope someone can shed some light on something this abstract.
The strange thing is that it works locally.
But as a workaround, the first thing that comes to mind is to handle the encoding explicitly.
Are you at some point using a function to convert your string input to bytes?
If so, you could try to force getBytes() to use UTF-8 encoding by passing the charset as an argument, as in the following example from this Stack Overflow thread:
byte[] bytes = input.getBytes(StandardCharsets.UTF_8);              // encode the string explicitly as UTF-8
String base64 = Base64.getEncoder().encodeToString(bytes);          // feed bytes to Base64
byte[] decoded = Base64.getDecoder().decode(base64);                // get bytes back from Base64
String roundTripped = new String(decoded, StandardCharsets.UTF_8);  // decode explicitly as UTF-8, not the platform default
Also:
- Have you tried setting the parquet.enable.dictionary option?
- Are your original files written in UTF-8 before conversion?
Google Cloud Dataflow (at least the Java SDK) replaces Spanish characters like 'ñ' or accented characters like 'á', 'é', etc. with the symbol �, because the default charset of the JVM installed on the service's workers is US-ASCII. So, if UTF-8 is not explicitly declared when you instantiate strings or convert them to and from byte arrays, the platform default encoding will be used.

How to fix AWS console problem (every code file became one line)

I closed my AWS console after the session ended.
When I reopened it, I found that every file in every directory had become one line, like below.
I think this is some sort of format setting problem.
How can I fix it?
{"filter":false,"title":"guess.feature","tooltip":"/work/hw-sinatra-saas-hangperson/features/guess.feature","undoManager":{"mark":-1,"position":-1,"stack":[]},"ace":{"folds":[],"scrolltop":0,"scrollleft":0,"selection":{"start":{"row":15,"column":18},"end":{"row":15,"column":18},"isBackwards":false},"options":{"guessTabSize":true,"useWrapMode":false,"wrapToView":true},"firstLineState":0},"timestamp":1556672503420,"hash":"7c932328fb87b63ff3d5362a56f39ca0bd38857a"}
The format of the text you've put above is known as JSON. It looks like your JSON has been consolidated into a one-liner. There are plenty of free tools to format your JSON; in fact, most IDEs come with a feature to do it (and if not, there's always a plugin for it!). For now, you can use an online formatter: https://jsonformatter.curiousconcept.com/
Just remember to use the copy button rather than copy-pasting it yourself, to avoid ruining the format because of the collapse/expand buttons.
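If you'd rather not paste the content into a website, re-indenting the JSON locally works too. Here is a minimal Node.js/TypeScript sketch (the file path is only a placeholder for wherever your single-line JSON ended up):

import { readFileSync } from "node:fs";

const raw = readFileSync("state.json", "utf-8");        // placeholder path to the single-line JSON
console.log(JSON.stringify(JSON.parse(raw), null, 2));  // parse and re-emit with 2-space indentation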

Yeoman causes obscure double-print output when run as External Tool from WebStorm

I am trying to wire up various yeoman generators as External Tools in JetBrains WebStorm (as well as JetBrains Rider) and am experiencing a very peculiar problem with the output.
On generators that take any kind of input, there are all sorts of cattywompus output problems, specifically duplicated output that is obtusely fragmented.
Thinking this might be a problem with the terminal encoding, I've switched the encoding to UTF-8 in the *.vmoptions file, as advised by support, by adding -Dfile.encoding=UTF-8 to the file and rebooting.
But it doesn't seem to matter what I do or how I configure it - when I configure a yeoman generator as an external tool, I get obscure output. I've captured the phenomenon in a screencast here:
VIDEO OF THE PROBLEM OCCURRING
I have also just included a screenshot, for those who would rather not watch the video.
These are the settings I'm using for the external tools, in their respective order:
For good measure, here is a repository of the exact generator I am using in the video and screenshots. The easiest way to make it available is to run:
npm install
npm link
The problem is caused by ANSI escape sequence processing in the External Tools console. The Yeoman generator uses the inquirer.js module, which in turn uses some special ANSI escape sequences to format its output, namely:
- CSI 8 D (Cursor Back)
- CSI 8 C (Cursor Forward)
- CSI 2 K (clear entire line)
These sequences are not currently supported; please follow IDEA-149959 and the linked tickets for updates.
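For illustration only, here is a short Node.js/TypeScript sketch of what such sequences look like on the wire (the prompt text is made up). A real terminal interprets them and redraws the line in place; a console that merely echoes the raw bytes shows every write, which produces the duplicated, fragmented output seen in the screenshots.

const CSI = "\u001b[";           // Control Sequence Introducer (ESC followed by "[")
const cursorBack8 = CSI + "8D";  // CSI 8 D - move the cursor back 8 columns
const clearLine = CSI + "2K";    // CSI 2 K - erase the entire current line

process.stdout.write("? Project name: my-ap");         // first, partial render of the prompt
process.stdout.write(cursorBack8 + clearLine + "\r");  // step back, clear the line, return to column 0
process.stdout.write("? Project name: my-app\n");      // re-render the full prompt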