So I have a script:
$ansi->appendString($ssh->read());
echo $ansi->getScreen();
And it shows:
������������������������������������������������������������������������������
But in PuTTY it shows:
▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒
▒0001 0002 0003 0004
Is there a way I can at least get it to the point where I can see the numbers?
So after investigating the issue further: formatting codes outputted by the host machine are interacting negatively with the ANSI function. As a result, ANSI is dropping information from the output.
Replacing ANSI with something of my own design resolves the issue.
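For anyone hitting the same wall, here is a minimal sketch of the kind of workaround I mean (in Python rather than PHP, and purely illustrative, not the actual replacement code): strip the escape sequences yourself and keep only the printable text, so the numbers survive even if the formatting is discarded.

```python
import re

# CSI sequences: ESC [ , parameter bytes, optional intermediate bytes, one final byte
CSI_RE = re.compile(r'\x1b\[[0-9;?]*[ -/]*[@-~]')

def strip_csi(raw: str) -> str:
    """Remove CSI escape sequences, keeping only the printable output."""
    return CSI_RE.sub('', raw)

# the numbers remain even though the formatting codes are thrown away
print(strip_csi('\x1b[2K\x1b[7m0001 0002 0003 0004\x1b[0m'))
```

This loses all colour and cursor information, of course, but it is often enough to make the payload readable.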
I am trying to wire up various yeoman generators as External Tools in JetBrains WebStorm (as well as JetBrains Rider) and am experiencing a very peculiar problem with the output.
On generators that take any kind of input, there is all sorts of cattywompus output - specifically, duplicated output that is badly fragmented.
Thinking this might be a problem with the terminal encoding, I've set the encoding to UTF-8 in the *.vmoptions file, as instructed by support, by adding -Dfile.encoding=UTF-8 to the file and rebooting.
But it doesn't seem to matter what I do or how I configure it - when I configure a yeoman generator as an external tool, I get garbled output. I've captured the phenomenon in a screencast here:
VIDEO OF THE PROBLEM OCCURRING
I have also just included a screenshot, for those who would rather not watch the video.
These are the settings I'm using for the external tools, in their respective order:
For good measure, here is a repository of the exact generator I am using in the video and screenshots. The easiest way to make it available is to run:
npm install
npm link
The problem is caused by ANSI escape sequence processing in the external tools console. The yo generator uses the inquirer.js module, which in turn uses some special ANSI escape sequences to format the output, namely:
CSI 8D Cursor Back
CSI 8C Cursor Forward
CSI 2K Erase entire line
These sequences are not currently supported; please follow IDEA-149959 and linked tickets for updates.
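To see why unsupported cursor-movement sequences produce duplicated, fragmented text, here is a hypothetical single-line interpreter (in Python, for illustration only) that handles just these three sequences. A console that ignores them instead of applying them will print every redraw as fresh text, which is exactly the duplication described above.

```python
import re

# matches CSI n D, CSI n C and CSI 2 K only
CSI = re.compile(r'\x1b\[(\d*)([CDK])')

def render_line(stream: str) -> str:
    """Apply CSI n D (back), CSI n C (forward), CSI 2 K (erase line) to one line."""
    buf, cur, i = [], 0, 0
    while i < len(stream):
        m = CSI.match(stream, i)
        if m:
            num, final = m.group(1), m.group(2)
            if final == 'D':                     # cursor back
                cur = max(0, cur - int(num or 1))
            elif final == 'C':                   # cursor forward
                cur = min(len(buf), cur + int(num or 1))
            elif final == 'K' and num == '2':    # erase entire line
                buf, cur = [], 0
            i = m.end()
        else:                                    # plain character: overwrite or append
            if cur < len(buf):
                buf[cur] = stream[i]
            else:
                buf.append(stream[i])
            cur += 1
            i += 1
    return ''.join(buf)

# inquirer.js-style redraw: the old prompt is erased, then the line is rewritten
print(render_line('name? foo\x1b[2Kname? foobar'))
```

With the sequences interpreted, only the final redraw survives; with them ignored, both copies of the prompt end up on screen.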
I have to maintain a small C++ / VS2015 project in my department which only checks the installed .NET Framework of a machine and prompts the user if the current version is not installed. This small application is localized by a file called Language.rc which contains some STRINGTABLES with the corresponding texts.
All this works fine if the program is compiled on my machine, but if the same code is compiled on our build machines, special characters such as the German ÄÖÜ are missing.
Unfortunately I'm not a C++ person and I have no clue what is wrong. I already searched the web but cannot find a hint on what the problem might be.
Does anybody have an idea what could be different on the build machines compared to my machine that causes the different characters?
UPDATE:
So after my TFS expert has analysed the problem on the build machines we were able to identify the culprit:
As I said before, the application that was causing the problem is only a small tool. Our automatic build contains a lot more solutions and projects. One part of the automatic build is a script that sets the version numbers of all kinds of files to the same value. This is apparently also done for so-called RC files. As far as I understand, there are different kinds of RC files in C++ (and also in Delphi) which actually hold version numbers. The RC file in my case only has texts and translations, but it is opened and also saved even though it does not have a version number.
Unfortunately, this operation also explicitly sets the encoding of the file to some old IBMxyz encoding (maybe for the Delphi RC files?). This is the actual operation where the special characters get lost... So the solution to my problem is not within the original encoding of the file but somewhere in the build process.
As a temporary fix we changed the .rc file to an .rc2 file - this way the project still compiles, but the build no longer modifies it.
I've had enough fun for today...
Windows has two ways of handling text. These are known as "Unicode" (really UTF-16) and "ANSI" (which isn't related to the ANSI standards organization, and describes any 8 bit superset of ASCII).
Your problem is clearly a case of "ANSI" disease. ASCII does not contain "Ä", some supersets of ASCII do, but not all supersets do. Different machines using different supersets will cause different results.
The "simple" fix is to prefix an L to the string in the .rc file: L"zum Beispiel", and then save this .rc file as Unicode (UTF-16). While newer versions of Windows contain more UTF-16 characters, this never affects existing characters, and Ä has been part of every Unicode version. (Even € works everywhere - I think that was added in Windows 2000)
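A quick Python illustration of the "ANSI disease" described above (cp1252 is the Windows Western code page, cp437 the original IBM PC set, cp862 DOS Hebrew): the same Ä maps to different bytes in different code pages, and is simply absent from some of them.

```python
ch = 'Ä'
print(ch.encode('cp1252'))    # b'\xc4' in the Windows Western code page
print(ch.encode('cp437'))     # b'\x8e' in the original IBM PC code page
try:
    ch.encode('cp862')        # DOS Hebrew has no Ä at all
except UnicodeEncodeError:
    print('Ä is not representable in cp862')
```

A build machine running under a different system code page than the developer's machine will therefore read or write those bytes differently, which matches the missing-ÄÖÜ symptom exactly.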
The Subversion API has a number of functions for converting from "natively-encoded" strings to strings that are encoded in UTF-8. My question is: what is this native encoding on Windows? Does it depend on locale?
"Natively encoded" strings are strings written in whatever code page the user is using. That is, they are numbers that are translated to the appropriate glyphs based on the current code page - assuming the file was saved that way and not as a UTF-8 file.
This is a candidate question for Joel's article on Unicode.
Specifically:
Eventually this OEM free-for-all got codified in the ANSI standard. In the ANSI standard, everybody agreed on what to do below 128, which was pretty much the same as ASCII, but there were lots of different ways to handle the characters from 128 and on up, depending on where you lived. These different systems were called code pages. So for example in Israel DOS used a code page called 862, while Greek users used 737. They were the same below 128 but different from 128 up, where all the funny letters resided. The national versions of MS-DOS had dozens of these code pages, handling everything from English to Icelandic and they even had a few "multilingual" code pages that could do Esperanto and Galician on the same computer! Wow! But getting, say, Hebrew and Greek on the same computer was a complete impossibility unless you wrote your own custom program that displayed everything using bitmapped graphics, because Hebrew and Greek required different code pages with different interpretations of the high numbers.
Windows 1252. Jukka Korpela has an excellent page on character encodings, with an extensive discussion of the Windows character set.
From the header svn_string.h you can see that the relevant svn_strings are just plain old const char* + a length element.
I would guess that the "natively encoded" svn strings are interpreted according to your system locale (I do not know this for sure, but this is the convention). On Windows 7 you can check your locale by selecting "Start-->Control Panel-->Region and Language-->Administrative-->Change system locale" where any value of English would probably entail the character encoding Windows 1252. However, a different system locale, for example Hebrew (Israel), would entail a different character encoding (Windows 1255 for the case of Hebrew).
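The locale dependence is easy to demonstrate in Python: the very same byte decodes to different characters under the two code pages mentioned above.

```python
raw = b'\xe0'
print(raw.decode('cp1252'))   # 'à' under an English (Windows-1252) locale
print(raw.decode('cp1255'))   # 'א' under a Hebrew (Windows-1255) locale
```

So a "natively encoded" string is only meaningful together with the code page of the machine that produced it, which is why Subversion converts everything to UTF-8 internally.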
Sadly the MSVC version of the C library does not support UTF-8 and uses legacy codepages only, but cygwin provides a UTF-8 locale as part of its emulation layer. If your svn is built on cygwin, you should be able to use UTF-8 just fine.
I've got a sequence of 28 bytes, which are supposedly encoded with a Reed-Solomon (28, 24, 5) code. The RS code uses 8-bit symbols and operates in GF(2^8). The field generator polynomial is x^8 + x^4 + x^3 + x^2 + 1. I'm looking for a simple way to decode this sequence, so I can tell if this sequence has errors.
I've tried the Python ReedSolomon module, but I'm not even sure how to configure the codec properly for my RS code (e.g. what's the first consecutive root of the field generator polynomial, what's the primitive element). I also had a look at Schifra, but I couldn't even compile it on my Mac.
I don't care too much about the platform (e.g. Python, C, Scilab) as long as it is free.
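If all you need is error detection, you can compute the syndromes directly in a few lines of Python: a (28, 24) code over GF(2^8) has 4 parity symbols, and the received word is error-free exactly when all 4 syndromes are zero. The sketch below builds the field from the stated polynomial x^8+x^4+x^3+x^2+1 (0x11D); note it assumes a first consecutive root of alpha^0 and primitive element alpha = 2, since the question doesn't pin those down.

```python
PRIM = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

# exp/log tables for GF(2^8); EXP is doubled so gf_mul needs no modulo
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) via log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(codeword, nsym=4, fcr=0):
    """Evaluate the codeword polynomial at alpha^(fcr+i) for i in 0..nsym-1.
    All-zero result means no detectable errors."""
    out = []
    for i in range(nsym):
        a = EXP[(fcr + i) % 255]
        s = 0
        for byte in codeword:          # Horner evaluation, first byte = highest degree
            s = gf_mul(s, a) ^ byte
        out.append(s)
    return out
```

For example, `syndromes([0] * 28)` is all zeros (the zero word is a valid codeword), while corrupting any byte makes at least one syndrome non-zero. If the parameters turn out to differ (e.g. fcr = 1), only the `fcr` argument changes.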
I successfully built an embedded data comms project that used Reed-Solomon error correction a few years ago. I just had a look at it to refresh my memory, and I found that I used a fairly lightweight, GPL-licensed, C-language subsystem published by a well-known guy named Phil Karn to do the encoding and decoding. It's only a few hundred lines of code, but it's pretty intense stuff. However, I found I didn't need to understand the math to use the code.
Googling Phil Karn Reed Solomon got me this document.
Which looks like a decent place to start. Hope this helps.
I'm trying to get libcurl to download a webpage that is encoded in UTF-8, which is working fine, except for the fact that it converts it to ASCII and mangles some of the characters. Is there an easy way to get it to keep the content in UTF-8?
libcurl doesn't translate/convert the data at all so there's actually nothing particular you need to do. Just get it.
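In other words, what arrives in your write callback is the server's raw UTF-8 bytes; any mangling happens later, when those bytes are treated as ASCII. A Python-flavoured illustration (not libcurl itself) of the one decode step you own:

```python
body = b'caf\xc3\xa9'          # raw bytes, exactly as libcurl would deliver them
text = body.decode('utf-8')    # decode once, at the edge
print(text)                    # café
```

Decode exactly once, with the encoding the server declared, and the characters come through intact.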
Check the curl options for conversion. They might have been defined at compile time.