I just tried to get the TUI up and it said:
Cannot enable the TUI when output is not a terminal
Which I thought was odd, because I thought I had it up before. It turns out it worked when I was using cmd but doesn't work using mintty.exe. The bash shell says that TERM=xterm. I also tried some other VT terminals without success. So I'm thinking that gdb isn't respecting the TERM variable.
Anyone know anything about this?
The GDB source code (line 380 of the linked source) uses an isatty() check on stderr to determine whether the output file (in this case MinTTY) is a terminal or not. However, this check fails under MSYS/MinGW because, according to the developer of MinTTY,
Quoting from mingw.org: “MinGW … is a minimalist development environment for native Microsoft Windows applications.”
Native Windows means no tty.
Looking at this patch suggests that a workaround may be to unset the $TERM variable to enable the native Windows console driver (rather than using a Unix tty). So try unset TERM to see if that will resolve the issue.
Hello, I'm using CodeLite on Linux. When I use system("cls");, system("pause>0"); or something like that, I get an "sh: 1: cls: not found" error.
I did a search myself and realised these commands are for Windows.
Does someone know where I can find the equivalent commands for Linux?
What the system() function does is pass the command you give it to the shell and run it. So system("cls") will run the cls command in a shell. As you said, cls is a Windows command, so you need to change it to the equivalent command on Linux: clear. So, to make system("cls") work on Linux, you should use system("clear").
For the other commands you're asking about, it's the same: you just need to look up the equivalent commands on Linux.
Finally, it's important to know that these kinds of calls aren't really advisable, because they make your code work only on a specific operating system. You should look for libraries with functions that do the same thing and can help make your code portable.
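If it helps, here is a minimal sketch of a portable approach (the _WIN32 check is just the usual compiler convention, so adapt it to your toolchain):

#include <cstdlib>

// Minimal sketch: pick the platform's clear-screen command at compile time.
// _WIN32 is defined by Windows compilers; otherwise we assume a POSIX shell with `clear`.
void clear_screen() {
#ifdef _WIN32
    std::system("cls");
#else
    std::system("clear");
#endif
}

int main() {
    clear_screen();
    return 0;
}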
I'm building a lightweight version of the ncurses library. So far, it works pretty well with VT100-compatible terminals, but the Win32 console fails to recognise the \033 code as the beginning of an escape sequence:
#include <stdio.h>
#include "term.h"

int main(void) {
    puts(BOLD COLOR(FG, RED) "Bold text" NOT_BOLD " is cool!" CLEAR);
    return 0;
}
What needs to be done at the C code level so that the ANSI.SYS driver is loaded and ANSI/VT100 escape sequences are recognized?
[UPDATE] For the latest Windows 10, please read the useful contribution by @BrainSlugs83, just below in the comments to this answer.
While for versions before Windows 10 Anniversary Update:
ANSI.SYS has a restriction that it can run only in the context of the MS-DOS sub-system under Windows 95-Vista.
Microsoft KB101875 explains how to enable ANSI.SYS in a command window, but it does not apply to Windows NT. Much as we all love colors, modern versions of Windows do not have this nice ANSI support.
Instead, Microsoft created a set of console API functions, but these are far from what you need to work with ANSI/VT100 escape sequences.
For a more detailed explanation, see the Wikipedia article:
ANSI.SYS also works in NT-derived systems for 16-bit legacy programs executing under the NTVDM.
The Win32 console does not natively support ANSI escape sequences at all. Software such as Ansicon can however act as a wrapper around the standard Win32 console and add support for ANSI escape sequences.
So I think ANSICON by Jason Hood is your solution. It is written in C, supports 32-bit and 64-bit versions of Windows, and the source is available.
I also found some other similar questions and posts that were ultimately answered with ANSICON:
How to load ANSI escape codes or get coloured file listing in WinXP cmd shell?
how to use ansi.sys in windows 7
How can I get cmd.exe to display ANSI color escape sequences?
ansi color in windows shells
enable ansi colors in windows command prompt
Starting from Windows 10 TH2 (v1511), conhost.exe and cmd.exe support ANSI and VT100 Escape Sequences out of the box (although they have to be enabled).
See my answer over at superuser for more details.
Based on @BrainSlugs83's comment, you can activate it on the current Windows 10 version via the registry, with this command line:
REG ADD HKCU\CONSOLE /f /v VirtualTerminalLevel /t REG_DWORD /d 1
For Python 2.7 the following script works fine for me on Windows 10 (v1607):
import os
print '\033[35m'+'color-test'+'\033[39m'+" test end"
os.system('') #enable VT100 Escape Sequence for WINDOWS 10 Ver. 1607
print '\033[35m'+'color-test'+'\033[39m'+" test end"
Result should be:
[35mcolor-test[39m test end
color-test test end
Starting from Windows 10, you can use ENABLE_VIRTUAL_TERMINAL_PROCESSING to enable ANSI escape sequences:
https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx
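As a rough sketch (assuming Windows 10 v1511 or later; older SDK headers may not define the constant, hence the fallback), enabling the flag looks something like this:

#include <windows.h>
#include <stdio.h>

#ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING
#define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004  // fallback for older SDK headers
#endif

int main(void) {
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (hOut == INVALID_HANDLE_VALUE || !GetConsoleMode(hOut, &mode))
        return 1;                                  // not a console (output redirected, etc.)
    if (!SetConsoleMode(hOut, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING))
        return 1;                                  // fails on versions before Windows 10 TH2
    printf("\x1b[31mred text\x1b[0m\n");           // ANSI sequences now work
    return 0;
}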
If ANSICON is not acceptable because it requires you to install something on the system, a more lightweight solution is a library that parses and translates the ANSI codes into the relevant Win32 console API calls, such as SetConsoleTextAttribute:
https://github.com/mattn/ansicolor-w32.c
For coloring the cmd output you need Windows.h and SetConsoleTextAttribute(); more details can be found at http://msdn.microsoft.com/en-us/library/windows/desktop/ms686047%28v=vs.85%29.aspx
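For illustration, a minimal sketch using that API (not tied to any particular library) could look like this:

#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(hOut, &info);          // remember the current attributes
    SetConsoleTextAttribute(hOut, FOREGROUND_RED | FOREGROUND_INTENSITY);
    printf("red text\n");
    SetConsoleTextAttribute(hOut, info.wAttributes);  // restore the previous colors
    return 0;
}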
In the latest Windows 10, it can be done with SetConsoleMode(originMode | ENABLE_VIRTUAL_TERMINAL_PROCESSING). See https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#example
Maybe ANSICON can help you.
Just download and extract the files for your Windows OS: 32-bit or 64-bit.
Install it with: ansicon -i
I personally like clink. It not only processes ANSI codes, it also adds many other features so Windows Console behaves like bash (history, reverse history search, keyboard shortcuts, etc.):
The same line editing as Bash (from GNU's Readline library).
History persistence between sessions.
Context-sensitive completion:
  Executables (and aliases).
  Directory commands.
  Environment variables.
  Third-party tools: Git, Mercurial, SVN, Go, and P4.
New keyboard shortcuts:
  Paste from clipboard (Ctrl-V).
  Incremental history search (Ctrl-R/Ctrl-S).
  Powerful completion (TAB).
  Undo (Ctrl-Z).
  Automatic "cd .." (Ctrl-PgUp).
  Environment variable expansion (Ctrl-Alt-E).
  (press Alt-H for many more...)
Scriptable completion with Lua.
Coloured and scriptable prompt.
Auto-answering of the "Terminate batch job?" prompt.
Ansi.sys is an "MS-DOS driver" provided as part of Windows XP, 2000, and earlier versions of NT. In 2000 and XP, it is located in the system32 folder (I don't remember the structure of earlier versions of NT). Programs that run in the DOS subsystem and use standard output can use ANSI.SYS just as they could when running over MS-DOS.
To load ansi.sys, you must use the device= or devicehigh= command in config, just as you would in MSDOS. On Windows NT 5 (2K & XP), each copy of the DOS subsystem can be given a separate config file in the pif/shortcut (use the "advanced" button), and there is a default file called CONFIG.NT (also in the system32 folder), which is used if the pif/shortcut does not specify a special config file.
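For example, the commonly documented line in CONFIG.NT looks roughly like this (the exact path may differ on your system):
device=%SystemRoot%\system32\ansi.sys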
When ansi.sys is loaded correctly, mem /d will report that it is loaded. On earlier versions of NT, you can and must load a proper DOS environment to load ansi.sys, and ansi art will work at the prompt. On Win 2K and XP, loading ansi.sys will have no effect on your "CMD prompt" because CMD is not a DOS program: it is a 32 bit Windows console program. For some reason that I do not understand, on WinXP, even if you load a fixed copy of command.com using "command.com /p", the command prompt will not be ansi enabled: perhaps when you do it that way it only emulates loading command.com?
In any case, when you use an actual DOS version of command.com, ANSI is enabled after being loaded: you can demonstrate its use with a bit of ANSI art like this:
command /c type ansiart.ans
(here is an example: http://artscene.textfiles.com/ansi/artwork/beastie.ans)
CONFIG.NT (in the system32 folder) contains an example of the syntax for loading device drivers. You will need to be an Administrator to edit that default file, or you can make a copy of it.
On Win 2K and XP, the default "shortcut" for MSDOS is a .PIF file, not a .LNK file. If you create a .lnk file to CMD, you won't be able to set special config and autoexec files, it will use the default CONFIG.NT. If you want to use a special config file for just one DOS application, you can make a copy of the "MSDOS shortcut", or you can make a copy of "_default.pif", found in your Windows folder.
I had the same issue. I installed ConEmu and that solved my problem.
I found this tool to work for me:
Microsoft Color Tool from GitHub
Unzip the compressed file, then open CMD with administrator permissions.
In CMD, go to the folder where you unzipped the file.
Then execute this command: colortool -b scheme-name
The scheme-name needs to be replaced with any of these options below:
campbell.ini
campbell-legacy.ini
cmd-legacy.ini
deuteranopia.itermcolors
OneHalfDark.itermcolors
OneHalfLight.itermcolors
solarized_dark.itermcolors
solarized_light.itermcolors
In my case, the command would be "colortool -b solarized_dark.itermcolors".
Right-click on the console window and select Properties.
You don't need to change any value; just click "OK" to save the settings. (You will notice that the colors are already applied.)
(screenshot: Console Properties)
Then restart your cmd or PowerShell.
ANSI colors should now be enabled, using the color scheme you chose before.
Somehow, on Windows you just need to call any shell command first, or rather, call the system function. At the start of your main function, put system("");, and don't forget to include stdlib.h.
I noticed this when I looked at some of my old programs that also used ANSI codes, while trying to understand why they work but my new code does not.
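A minimal sketch of that trick (it relies on the undocumented side effect described above, so it may not work on every Windows build):

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    system("");  // empty command; as a side effect, recent Windows 10 builds enable VT processing
    printf("\x1b[35mcolor-test\x1b[39m test end\n");
    return 0;
}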
A while ago I changed my personal operating system to Linux and my development environment to KDevelop.
However, debugging C++ projects is still not working as it should.
My KDevelop version is 4.2.2 (I installed it through package management).
Every time I hit the "Debug" button, the application starts with the console message
warning: GDB: Failed to set controlling terminal: Operation not permitted and debugging functionality is not available.
Any ideas welcome.
(If you need additional information don't hesitate to ask)
I also had this problem, but I use gdb in KDevelop rarely enough that it hadn't bothered me yet. Here's my log of trying to fix it:
Grepping through the GDB 7.3.1 source code reveals that this message is printed when GDB tries to set its master TTY to a newly-created pseudo-tty (see gdb/inflow.c, lines 683-740). In particular, a call to ioctl with request TIOCSCTTY fails with a permissions error.
With this in mind, I took a look at the Linux kernel source code to see what could cause a failure. A bit of searching shows that it will eventually degenerate into a call to tiocsctty(). The important part of the comment in tiocsctty() is:
/*
* The process must be a session leader and
* not have a controlling tty already.
*/
Since the only other reason it can fail with EPERM is if the tty that GDB creates is actually a controlling tty for another process (which seems highly unlikely), I thought it reasonable to assume that GDB is not a session leader. Fair enough, it's launched by KDevelop after all!
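For reference, here is a minimal sketch (not GDB's actual code; the pty path below is hypothetical) of why that ioctl can fail with EPERM:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    // Acquiring a controlling terminal only works for a session leader
    // that does not already have one.
    if (setsid() == -1)
        perror("setsid");                    // fails if we already lead a process group
    int fd = open("/dev/pts/3", O_RDWR);     // hypothetical pty slave path
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, TIOCSCTTY, 0) == -1)
        perror("ioctl(TIOCSCTTY)");          // EPERM here is the "Operation not permitted" GDB reports
    return 0;
}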
So: I tried not launching the GDB session in an external terminal, and it works. Problem narrowed down.
Originally, the external terminal line was set to konsole --noclose --workdir %workdir -e %exe. Changing this to terminator -e %exe made a slight difference: KDevelop warned me that
GDB cannot use the tty* or pty* devices.
Check the settings on /dev/tty* and /dev/pty*
As root you may need to "chmod ug+rw" tty* and pty* devices and/or add the user to the tty group using "usermod -G tty username".
I checked my permissions; my user was part of the tty group and all relevant files were readable and writable.
Grepping through the KDevelop source code reveals how KDevelop actually sets up the terminal. It runs the shell script
tty > FIFO_PATH ; trap "" INT QUIT TSTP ; exec<&-; exec>&-; while :; do sleep 3600;done
and then sets up GDB to use the terminal device it reads from FIFO_PATH. (FIFO_PATH is my name for it, by the way, not the one that KDevelop uses.) The problem (as best I can tell) is that gdb is not launched as a child of the shell script, and thus cannot use it as its main tty.
I'm not feeling up to patching KDevelop to make this work properly as of yet (or finding what actually caused this to stop working in the first place . . .), so the best I can suggest at the moment is to simply not use an external terminal for debugging purposes.
Good luck! I'll update if I find anything useful.
As Arthur Zennig said, if you need more information, here is what to do:
First, create the Terminal profile.
Second, open Launch Configurations and fill in the information as in the image below.
Good luck!
In case you got the error:
"Can't receive konsole tty/pty. Check that konsole is actually a
terminal and that it accepts these arguments"
RUN > CONFIGURE LAUNCHERS > (see picture below; my project name was "loops")
What worked for me was to uncheck the "Use External Terminal" checkbox, found in the "Compiled Binaries" tab.
I have a batch file that tries to compile a static library using Borland C++ Builder 6.0
It is called from Borland make (makefile created with bpr2mak) which is called from a .bat file (used to compile the whole project with Visual Studio and some Borland C++ Builder legacy projects), which is called from a bash shell script running inside Cygwin.
When I run the .bat file directly from a Cygwin shell, it runs OK, but when it's run from a program calling Cygwin with Boost::Process::launcher, I get this error:
C:\ARQUIV~1\Borland\CBUILD~1\Bin\..\BIN\TLib /u bclibs.lib #MAKE0000.###
DOS-reported error: Bad file number
TLIB 4.5 Copyright (c) 1987, 1999 Inprise Corporation
opening 'MAKE0000.###'
** error 1 ** deleting bclibs.lib
It's a complicated scenario, but this program that calls Cygwin is run whenever we need to build our software package, which needs to be built for various Linux distros and for 32- and 64-bit Windows.
Note: it's the only Borland project failing; the others compile just fine. (It's also the only static library using Borland, so it could be a problem with the TLib tool.)
The problem was that TLib does not like to have its output redirected (seen here) without having an input pipe as well. Solved by creating an input pipe in the Boost::Process::launcher using set_stdin_behavior.
I'm just guessing here, but this may have to do with long filenames and/or spaces in paths.
1) Modify your makefile so it saves the current environment to a file immediately before executing the failing command (set > d:\env.txt & echo CD=%CD% >> d:\env.txt). Then run it both ways (directly and via the program) and compare the environments of the good run and the bad run.
2) Using Filemon from Sysinternals, capture logs of disk access in both cases (these logs are going to be huge, though you can uncheck everything except Open in the filter to reduce the size). Again, compare and check for clues...
3) Try installing everything involved to paths conforming to the 8.3 scheme.
This error is not related to C++ itself. It happens when your build script opens too many files (more than the limit defined in the DOS command processor environment). To resolve this issue, try setting the value of the files variable to 253. For Windows XP this variable is defined in the file %WINDIR%\system32\config.nt.
files=253
It seems to be a known bug in the Borland C++ tools. Here is a description and a possible workaround for the issue:
Problem: Some static Lib projects will not link correctly when compiled. You might see something like this:
J:\Borland\CBUILD~1\bin\..\BIN\TLib /u debug\jpegD.lib #MAKE0000.###
DOS-reported error: Bad file number
TLIB 4.5 Copyright (c) 1987, 1999 Inprise Corporation
opening 'MAKE0000.###'
** error 1 ** deleting debug\jpegD.lib
MAKE failed, returned : 1
Workaround: In some cases (where the "Bad file number" error is seen) it may be possible to work around this by specifying -tDEFLIB.BMK in the BPR2MAKE Options field and turning off the "Capture Make Output" option.
I have not tested it, but I hope that helps.
I have a C++ application that I am porting to MacOSX (specifically, 10.6). The app makes heavy use of the C++ standard library and boost. I recently observed some breakage in the app that I'm having difficulty understanding.
Basically, the boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:
#include <locale>

int main(int argc, char *argv[]) {
    std::locale::global(std::locale(""));
    return 0;
}
This program fails when I run this through g++ and execute the resulting program in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration.
My questions are:
Is this expected behavior for this code on MacOS 10.6?
What would a proper workaround be? I can't really re-write the function because the version of the boost libraries we are using executes this statement internally as part of the filesystem library.
For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode.
Edit: The error given by the above code on 10.6.1 is:
$ ./locale
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Abort trap
OK, I don't have an answer for you, but I have some clues:
This isn't limited to OS X 10.6. I get the same result on a 10.4 machine.
I looked at the GCC source for libstdc++ and hunted around for _S_create_c_locale. What I found is on line 143 of config/locale/generic/c_locale.cc. The comment there says "Currently, the generic model only supports the "C" locale." That's not promising. In fact if I do LANG=C the runtime error goes away, but any other value for LANG I try causes the same error, regardless of what arguments I give to the locale constructor. (I tried locale::classic(), "C", "", and the default). This is true as far back as GCC 4.0
That same page has a reference to libstdc++ mailing list discussion on this topic. I don't know how fruitful it is: I only followed it a little way down, and it gets very technical very fast.
None of this tells you why the default locale on 10.6 wouldn't work with std::locale but it does suggest a workaround, which is to set LANG=C before running the program.
I have encountered this problem very recently on Ubuntu 14.04 LTS and on a Raspberry Pi running the latest Raspbian Wheezy.
It has nothing to do with OS X, rather with a combination of G++ and Boost (at least up to V1.55) and the default locale settings on certain platforms. There are Boost bug tickets sort of related to this issue, see
ticket #4688 and ticket #5928.
My "solution" was first to do some extra locale setup, as suggested by this AskUbuntu posting:
sudo locale-gen en_US en_US.UTF-8
sudo dpkg-reconfigure locales
But then, I also had to make sure that the environment variable LC_ALL is set to the value of LANG (it is advisable to put this in your .profile):
export LC_ALL=$LANG
In my case I use the locale en_US.UTF-8.
Final remark: the OP said "This program fails when I run this through g++". I understand that this thread was started in 2009, but today there is absolutely no need to use GCC or G++ on the Mac, the much better LLVM/Clang compiler suite is available from Apple free of charge, see the XCode home page.
The situation is still the same. But some functionality may be gained by
setlocale( LC_ALL, "" );
This gets you UTF-8 coding on wide iostreams but not money formatting, for my two data points.
locale::global( locale( "" ) );
should be equivalent, but it crashes if subsequently run in the very same program.
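For what it's worth, a minimal sketch of that setlocale-based workaround:

#include <clocale>
#include <iostream>

int main() {
    // Use the C-level call instead of std::locale::global(std::locale("")).
    std::setlocale(LC_ALL, "");   // picks up LANG / LC_* from the environment
    std::wcout << L"wide output using the user's codeset" << std::endl;
    return 0;
}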
I had the same problem. I checked LANG and LC_MESSAGES and they are not set when you launch the application through Finder, so the following lines saved the day:
unsetenv("LANG");
unsetenv("LC_MESSAGES");
The _S_create_c_locale exception seems to indicate some sort of misconfiguration: check that whatever your LC_ALL or LANG environment variable is set to, exists in the output of locale -a.
$ env LC_ALL=xx_YY ./test
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Aborted
$ env LC_ALL=C ./test
$ echo $?
0
But since you're on OS X, I'm not really sure how locale information is supposed to be handled.
Quoting the accepted answer:
It has nothing to do with OS X
I encountered this issue on MacOS Big Sur using an outdated MacOS utility. The specific utility was VMWare's ovftool, but none of the above LANG/LC_ALL workarounds fixed it. Updating the tool was the only way to get the error to go away. No combination of locale workarounds would fix this.
In my specific case, the error occurred using ovftool 4.1.0, and the error went away using ovftool 4.4.3.