Can two windows apps communicate using the command-line? - c++

I worked on a Unix app where two applications ran and talked to each other using the command line, i.e. each had a loop something like this (treat it as pseudo code):
bool stop = false;
do
{
    string cmdBuffer;
    cin >> cmdBuffer;
    string ret = processCommand(cmdBuffer);
    if (ret.length() == 0)
        stop = true;
    else
        cout << ret;
}
while (!stop);
Is there any reason two Windows apps can't do the same? Would they have to be running on the same 'command prompt' or be console apps, or does the notion of command-lines go beyond being able to see a command prompt in front of me?
For reference, in my case one app would be running the other, they're not two separate apps run up independently.

I'd say redirect the input and output handles (SetStdHandle), but using a named pipe is safer and more secure, plus you can use the synchronization functions on it.
You can also use a global mutex/event or mapped memory instead, as both can be globally named and are easy to get/set and read/write.
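For the case in the question where one app launches the other, a rough Win32 sketch of the redirected-handles approach might look like this (untested; "child.exe" is a placeholder for the second program, which just runs a cin/cout loop like the pseudo code above):
#include <windows.h>
#include <cstring>
#include <iostream>

int main()
{
    // Create two pipes: one for the child's stdin, one for its stdout.
    SECURITY_ATTRIBUTES sa{ sizeof(sa), nullptr, TRUE };   // make handles inheritable
    HANDLE childStdinRd, childStdinWr, childStdoutRd, childStdoutWr;
    CreatePipe(&childStdinRd, &childStdinWr, &sa, 0);
    CreatePipe(&childStdoutRd, &childStdoutWr, &sa, 0);
    // The parent's ends must not be inherited by the child.
    SetHandleInformation(childStdinWr, HANDLE_FLAG_INHERIT, 0);
    SetHandleInformation(childStdoutRd, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si{};
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput  = childStdinRd;
    si.hStdOutput = childStdoutWr;
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    PROCESS_INFORMATION pi{};
    char cmd[] = "child.exe";                               // hypothetical child program
    if (!CreateProcessA(nullptr, cmd, nullptr, nullptr, TRUE, 0, nullptr, nullptr, &si, &pi))
        return 1;
    // Close the child's ends in the parent so EOF is seen when the child exits.
    CloseHandle(childStdinRd);
    CloseHandle(childStdoutWr);

    // Exchange one command/reply; the child just uses cin/cout as in the question.
    DWORD n;
    const char* msg = "hello\n";
    WriteFile(childStdinWr, msg, (DWORD)strlen(msg), &n, nullptr);
    char buf[256];
    if (ReadFile(childStdoutRd, buf, sizeof(buf) - 1, &n, nullptr))
    {
        buf[n] = '\0';
        std::cout << "child said: " << buf;
    }
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}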

Related

System() function, and calling internet explorer from it, DevC++

I tried making a program that takes a website address and then feeds it to system() to open the website. I'm aware that characters like (\, ", ') don't get fed to the output directly, so I used escape sequences.
I wrote this program, but the command prompt just refuses to go past the C:\ path. If I copy and paste the command displayed by my program, Internet Explorer gets launched, but that isn't the case for my program. Can anybody tell me where the error is?
Here is my code:
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

int main()
{
    cout << "Please enter the website you wish to visit: ";
    string website, web;
    cin >> web;
    web = " " + web;
    website = "\"%ProgramFiles%\\Internet Explorer\\iexplore\"" + web;
    cout << "\n" << website << endl << endl << endl;
    system(website.c_str());
    return 0;
}
You are using an environment variable, %ProgramFiles%, in your system() command line; the %...% expansion is a feature of the command prompt, and is not necessarily performed by every system() implementation.
I suggest replacing that with the full path, such as \"C:\Program Files\Internet Explorer\iexplore\", and see if that works.
If that works, then your implementation doesn't implicitly replace environment variables the way a full Command Prompt does, so you will need to query the environment variable separately and construct the path before you run system. See getenv for one possible way (I'm not sure what mingw32 supports, so you may have other options as well).
If that doesn't remedy the problem, I suggest checking if you can launch something simpler, like notepad.exe, to verify that there is nothing interfering with launching an application in general, such as your environment path or permissions.
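If the environment variable does turn out to be the issue, a minimal sketch of the getenv approach could look like the following (untested; it doubles the quotes, as the next answer suggests, because the expanded path contains spaces):
#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    const char* pf = getenv("ProgramFiles");   // e.g. C:\Program Files
    if (pf == NULL)
    {
        cout << "ProgramFiles is not set" << endl;
        return 1;
    }
    cout << "Please enter the website you wish to visit: ";
    string web;
    cin >> web;
    string cmd = string("\"\"") + pf + "\\Internet Explorer\\iexplore\"\" " + web;
    return system(cmd.c_str());
}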
Pass it in double double quotes:
website = "\"\"%ProgramFiles%\\Internet Explorer\\iexplore\"\""+web;
The system("something") call actually runs the command interpreter cmd in a way similar (but probably not identical) to cmd /c something. This has implications when there are spaces in the command name, see e.g. this. I cannot tell exactly why single double quotes work when there's no environment variable involved and do not work otherwise, but the fact is, double double quotes do work.
If you want to launch the user's preferred browser, consider calling
system(("start http://" + websitename).c_str());
instead.
Get that environment variable value first.
#include <windows.h>
#include <ShlObj.h>
#include <iostream>

int main()
{
    char pathToPf[MAX_PATH];
    if (S_OK == SHGetFolderPathA(NULL, CSIDL_PROGRAM_FILES, NULL, 0, pathToPf))
        std::cout << pathToPf << std::endl;
    return 0;
}
See SHGetFolderPath documentation...
Note that I was lazy and used the ANSI version of this function. Use it without the A suffix and deal with the conversion ;)

pidof from a background script for another background process

I wrote a C++ program to check if a process is running or not. The process is launched independently in the background. My program works fine when I run it in the foreground, but when I schedule it with cron it does not work.
string PID = ReadCommanOutput("pidof /root/test/testProg1"); /// also tested with pidof -m
I made a file /etc/cron.d/myscript to schedule it, as follows:
45 15 * * * root /root/ProgramMonitor/./testBkg > /root/ProgramMonitor/OutPut.txt
What could be the reason for this?
string ReadCommanOutput(string command)
{
    string output = "";
    int its = system((command + " > /root/ProgramMonitor/macinfo.txt").c_str());
    if (its == 0)
    {
        ifstream reader1("/root/ProgramMonitor/macinfo.txt", fstream::in);
        if (!reader1.fail())
        {
            while (!reader1.eof())
            {
                string line;
                getline(reader1, line);
                if (reader1.fail()) // for last read
                    break;
                if (!line.empty())
                {
                    stringstream ss(line.c_str());
                    ss >> output;
                    cout << command << " output = [" << output << "]" << endl;
                    break;
                }
            }
            reader1.close();
            remove("/root/ProgramMonitor/macinfo.txt");
        }
        else
            cout << "/root/ProgramMonitor/macinfo.txt not found !" << endl;
    }
    else
        cout << "ERROR: code = " << its << endl;
    return output;
}
Its output comes out as "ERROR: code = 256".
Thanks in advance.
If you really wanted to pipe(2), fork(2), execve(2) and then read the output of a pidof command, you should at least use popen(3), since ReadCommandOutput is not in the POSIX API; at the very least:
pid_t thepid = 0;
FILE* fpidof = popen("pidof /root/test/testProg1", "r");
if (fpidof) {
    int p = 0;
    if (fscanf(fpidof, "%d", &p) > 0 && p > 0)
        thepid = (pid_t)p;
    pclose(fpidof);
}
BTW, you did not specify what should happen if several processes (or none) are running testProg1; you also need to check the result of pclose.
But you don't really need pidof at all. If you did build the pidof command string, perhaps using snprintf, you should be wary of code injection into that command, so quote arguments appropriately. Instead, you could simply find your program by accessing the proc(5) file system: opendir(3) on "/proc/", loop with readdir(3), and for every entry whose name is numerical like 1234 (i.e. starts with a digit), readlink(2) its exe entry, e.g. /proc/1234/exe. Don't forget the closedir, and test every syscall. A rough sketch of that loop appears below.
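A hedged sketch of that approach (it compares /proc/<pid>/exe against the full executable path; error handling is kept minimal):
#include <sys/types.h>
#include <dirent.h>
#include <unistd.h>
#include <cctype>
#include <cstdlib>
#include <string>
#include <iostream>

pid_t find_pid_by_exe(const std::string& exePath)
{
    DIR* d = opendir("/proc");
    if (!d)
        return -1;
    pid_t found = -1;
    while (dirent* e = readdir(d))
    {
        if (!isdigit((unsigned char)e->d_name[0]))   // only numeric entries are PIDs
            continue;
        std::string link = std::string("/proc/") + e->d_name + "/exe";
        char target[4096];
        ssize_t n = readlink(link.c_str(), target, sizeof target - 1);
        if (n < 0)                                   // e.g. permission denied
            continue;
        target[n] = '\0';
        if (exePath == target)
        {
            found = (pid_t)atoi(e->d_name);
            break;
        }
    }
    closedir(d);
    return found;
}

int main()
{
    pid_t p = find_pid_by_exe("/root/test/testProg1");
    if (p > 0)
        std::cout << "running, pid " << p << std::endl;
    else
        std::cout << "not running" << std::endl;
    return 0;
}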
Please read Advanced Linux Programming
Notice that libraries like Poco or toolkits like Qt (which has a layer QCore without any GUI, and providing QProcess ....) could be useful to you.
As to why your pidof is failing, we can't guess (perhaps a permission issue, or perhaps no such process is running anymore). Try to run it as root in another terminal at least. Test its exit code, and display both its stdout and stderr, at least for debugging purposes.
Also, a better way (assuming that testProg1 is some kind of a server application, to be run in at most one single process) might be to define different conventions. Your testProg1 might start by writing its own pid into /var/run/testProg1.pid, and your current application might then read the pid from that file and check, with kill(2) and a 0 signal number, that the process still exists (see the sketch below).
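A minimal sketch of that convention (untested; the pid-file path is the one suggested above):
#include <sys/types.h>
#include <signal.h>
#include <cerrno>
#include <fstream>

bool testprog1_running()
{
    std::ifstream f("/var/run/testProg1.pid");
    pid_t pid = 0;
    if (!(f >> pid) || pid <= 0)
        return false;                              // no pid file, or garbage in it
    // Signal 0 performs the existence/permission checks without sending anything.
    return kill(pid, 0) == 0 || errno == EPERM;
}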
BTW, you could also improve your crontab(5) entry. You could make it run some shell script which uses logger(1) and (for debugging) runs pidof with its output redirected elsewhere. You might also read the mail perhaps sent to root by cron.
Finally I solved this problem by using the su command.
I used
ReadCommanOutput("su -c 'pidof /root/test/testProg1' - root");
instead of
ReadCommanOutput("pidof /root/test/testProg1");

Using "rundll32.exe" to access SpeechUX.dll

Good Day,
I have searched the Internet tirelessly trying to find an example of how to start Windows Speech Training from within my VB.Net speech recognition application.
I have found a couple of examples, which I cannot get working to save my life.
One such example is on the Visual Studio Forums:
HERE
This particular example uses the "Process.Start" call to try to start the Speech Training session. However, this does not work for me. Here is the example from that thread:
Process.Start("rundll32.exe", "C:\Windows\system32\speech\speechux\SpeechUX.dll, RunWizard UserTraining")
What happens is I get an error that says:
There was a problem starting
C:\Windows\system32\speech\speechux\SpeechUX.dll
The specified module could not be found
So I tried creating a shortcut (.lnk) file and thought I could access the DLL this way. My shortcut kind of does the same thing. In the shortcut I call "rundll32.exe" with the parameters:
C:\Windows\System32\rundll32.exe "C:\Windows\system32\speech\speechux\SpeechUX.dll" RunWizard UserTraining
Then in my VB.Net application I use "Process.Start" and try to run the shortcut.
This also gives me the same error. However, the shortcut itself will start the Speech Training session. Weird?!?
So I then took it one step further, to see if it has something to do with my VB.Net application and the "Process.Start" call.
I created a VBScript, and using "Wscript.Shell" I point to the shortcut.
Running the VBScript calls the shortcut and, lo and behold, the Speech Training starts!
Great! But...
when I try to run the VBScript from my VB.Net application, I get that error again.
What the heck is going on here?
Your problem likely is that your program is compiled as 32-bit and your OS is 64-bit, and thus, when you try to access "C:\Windows\System32\Speech\SpeechUX\SpeechUX.dll" from your program, you're really accessing "C:\Windows\SysWOW64\Speech\SpeechUX\SpeechUX.dll", which, as rundll32.exe is reporting, doesn't exist.
Compile your program as 64-bit instead or try the pseudo directory %SystemRoot%\sysnative.
Also, instead of rundll32.exe, you may want to just run SpeechUXWiz.exe with an argument.
Eg.
private Process StartSpeechMicrophoneTraining()
{
    Process process = new Process();
    process.StartInfo.FileName = System.IO.Path.Combine(Environment.SystemDirectory, "speech\\speechux\\SpeechUXWiz.exe");
    process.StartInfo.Arguments = "MicTraining";
    process.Start();
    return process;
}

private Process StartSpeechUserTraining()
{
    Process process = new Process();
    process.StartInfo.FileName = System.IO.Path.Combine(Environment.SystemDirectory, "speech\\speechux\\SpeechUXWiz.exe");
    process.StartInfo.Arguments = "UserTraining";
    process.Start();
    return process;
}
Hope that helps.
Read more about Windows 32-bit on Windows 64-bit at http://en.wikipedia.org/wiki/WoW64
or your problem specifically at http://en.wikipedia.org/wiki/WoW64#Registry_and_file_system
If you are using a 64-bit OS and want to access the system32 folder, you must use the directory alias name, which is "sysnative".
"C:\Windows\sysnative" will allow you access to the system32 folder and all its contents.
Honestly, who decided this at Microsoft is just silly!!

Using getenv function in Linux

I have the following simple program:
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    char* v = getenv("TEST_VAR");
    cout << "v = " << (v == NULL ? "NULL" : v) << endl;
    return 0;
}
These lines are added to .bashrc file:
TEST_VAR="2"
export TEST_VAR
Now, when I run this program from the terminal window (Ubuntu 10.04), it prints v = 2. If I run the program another way, from the launcher or from Eclipse, it prints NULL. I think this is because TEST_VAR is defined only inside the bash shell. How can I create a persistent Linux environment variable that is accessible in either case?
On my system (Fedora 13) you can make system wide environment variables by adding them under /etc/profile.d/.
So for example, if you add this to a file /etc/profile.d/my_system_wide.sh
SYSTEM_WIDE="system wide"
export SYSTEM_WIDE
and then open another terminal, it should be sourced regardless of which user opens the terminal:
echo $SYSTEM_WIDE
system wide
Add that to .bash_profile (found in your home directory). You will need to log out and log back in for it to take effect.
Also, since you are using bash, you can combine the export and the assignment into a single statement:
export TEST_VAR="2"
Sorry if I'm being naive, but isn't .bash_profile useful only if you are running bash as your default shell?
I 'sometimes' use Linux and mostly use ksh. I have .profile, so maybe you should check for .*profile and export the variable there.
Good luck :)
There is no such thing as a system-wide environment variable on Linux. Every process has its own environment. Now by default, every process inherits its environment from its parent, so you can get something like a system-wide environment by ensuring that a var is set in an ancestor of every process of interest. Then, as long as no other process changes that var, every process of interest will have it set.
The other answers here give various methods of setting variables early. For example, .bash_profile sets it in every login process a user runs, which is the ultimate parent of every process they run after login.
/etc/profile is read by every bash login by every user.
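To illustrate the inheritance point, the launching process can set the variable itself before exec'ing the program that needs it. A minimal sketch (./myprogram is a placeholder for the program from the question):
#include <cstdlib>
#include <unistd.h>

int main()
{
    // Affects this process's environment, which the exec'd child inherits.
    setenv("TEST_VAR", "2", 1 /* overwrite */);
    execl("./myprogram", "./myprogram", (char*)NULL);
    return 1;   // only reached if execl fails
}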

What is the point of clog?

I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr.
I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)".
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Is the only difference between clog and cerr the buffering?
How should I implement (or find) a more robust logging facility (links please)?
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Yes. You want the rdbuf function.
ofstream ofs("logfile");
cout.rdbuf(ofs.rdbuf());
cout << "Goes to file." << endl;
Is the only difference between clog and cerr the buffering?
As far as I know, yes.
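One observable difference you can check yourself: cerr has the unitbuf flag set (it flushes after every insertion), while clog does not. A small sketch:
#include <iostream>

int main()
{
    // unitbuf means "flush after every output operation".
    std::cout << "cerr unitbuf: " << ((std::cerr.flags() & std::ios_base::unitbuf) != 0) << "\n";  // prints 1
    std::cout << "clog unitbuf: " << ((std::clog.flags() & std::ios_base::unitbuf) != 0) << "\n";  // prints 0
    return 0;
}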
If you're in a posix shell environment (I'm really thinking of bash), you can redirect any
file descriptor to any other file descriptor, so to redirect, you can just:
$ myprogram 2>&5
to redirect stderr to the file represented by fd=5.
Edit: on second thought, I like @Konrad Rudolph's answer about redirection better. rdbuf() is a more coherent and portable way to do it.
As for logging, well...I start with the Boost library for all things C++ that isn't in the std library. Behold: Boost Logging v2
Edit: Boost Logging is not part of the Boost Libraries; it has been reviewed, but not accepted.
Edit: 2 years later, back in May 2010, Boost did accept a logging library, now called Boost.Log.
Of course, there are alternatives:
Log4Cpp (a log4j-style API for C++)
Log4Cxx (Apache-sponsored log4j-style API)
Pantheios (defunct? last time I tried I couldn't get it to build on a recent compiler)
Google's GLog (hat-tip @SuperElectric)
There's also the Windows Event logger.
And a couple of articles that may be of use:
Logging in C++ (Dr. Dobbs)
Logging and Tracing Simplified (Sun)
Since there are several answers here about redirection, I will add this nice gem I stumbled across recently about redirection:
#include <fstream>
#include <iostream>
class redirecter
{
public:
    redirecter(std::ostream & dst, std::ostream & src)
        : src(src), sbuf(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src.rdbuf(sbuf); }
private:
    std::ostream & src;
    std::streambuf * const sbuf;
};

void hello_world()
{
    std::cout << "Hello, world!\n";
}

int main()
{
    std::ofstream log("hello-world.log");
    redirecter redirect(log, std::cout);
    hello_world();
    return 0;
}
It's basically a redirection class that allows you to redirect any two streams, and restore it when you're finished.
Redirections
Konrad Rudolph's answer is good with regard to how to redirect std::clog (std::wclog).
Other answers tell you about various possibilities, such as using a command-line redirect like 2>output.log. Under Unix you can also create a file and add another output to your commands with something like 3>output.log. In your program you then have to write to fd number 3 to print those logs. You can continue to print to stdout and stderr normally. The Visual Studio IDE has a similar feature with their CDebug command, which sends its output to the IDE output window.
stderr is the same as stdout?
This is generally true, but under Unix you can set up stderr to go to /dev/console, which means that it goes to another tty (a.k.a. terminal). It's rarely used these days. I had it that way on IRIX: I would open a separate X window and see errors in it.
Also many people send error messages to /dev/null. On the command line you write:
command ...args... 2>/dev/null
syslog
One thing not mentioned, under Unix, you also have syslog().
The newest versions under Linux (and probably Mac OS X) do a lot more than they used to. In particular, they can use the identity and some other parameters to redirect the logs to a specific file (e.g. mail.log). The syslog mechanism can be used between computers, so logs from computer A can be sent to computer B. And of course you can filter logs in various ways, especially by severity.
The syslog() is also very simple to use:
syslog(LOG_ERR, "message #%d", count++);
It offers 8 levels (or severities), a printf()-like format, and a list of arguments for the format.
Programmatically, you may tweak a few things if you first call the openlog() function. You must call it before your first call to syslog().
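A typical call sequence looks like this ("myapp" is just an example identity):
#include <syslog.h>

int main()
{
    // Identity, options, facility; call once before the first syslog().
    openlog("myapp", LOG_PID | LOG_CONS, LOG_DAEMON);
    syslog(LOG_ERR, "message #%d", 1);
    closelog();
    return 0;
}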
As mentioned by unixman83, you may want to use a macro instead. That way you can include some parameters to your messages without having to repeat them over and over again. Maybe something like this (see Variadic Macro):
// (not tested... requires msg to be a string literal)
#define LOG(lvl, msg, ...) \
syslog(lvl, msg " (in " __FILE__ ":%d)", __VA_ARGS__, __LINE__)
You may also find __func__ useful.
The redirection, filtering, etc. is done by creating configuration files. Here is an example from my snapwebsites project:
mail.err /var/log/mail/mail.err
mail.* /var/log/mail/mail.log
& stop
I install the file under /etc/rsyslog.d/ and run:
invoke-rc.d rsyslog restart
so that the syslog server handles that change and saves any mail-related logs to those files.
Note: I also have to create the /var/log/mail folder and the files inside the folder to make sure it all works right (because otherwise the mail daemon may not have enough permissions.)
snaplogger (a little plug)
I've used log4cplus, which, since version 1.2.x, is quite good. I have three cons about it, though:
it requires me to completely clear everything if I want to call fork(); somehow it does not survive a fork() call properly... (at least in the version I had, it used a thread)
the configuration files (.properties) are not easy to manage in my environment, where I like the administrators to make changes without modifying the original
it uses C++03, and we are now in 2019... I'd like to have at least C++11
Because of that, and especially because of point (1), I wrote my own version called snaplogger. This is not exactly a standalone project, though. I use many other projects from the snapcpp environment (it's much easier to just get snapcpp and run the bin/build-snap script or just get the binaries from launchpad.)
The advantage of using a logger such as snaplogger or log4cplus is that you can generally define any number of destinations and many other parameters (such as the severity level, as offered by syslog()). log4cplus is capable of sending its output to many different places: files, syslog, the MS-Windows log system, the console, a server, etc. Check out the appenders in those two projects to get an idea of the possibilities. The interesting factor here is that any log can be sent to all the destinations. This is useful for having a file named all.log where all your services send their logs, which makes it easier to understand certain bugs that would not be as easy to track with separate log files when running many services in parallel.
Here is a simple example in a snaplogger configuration file:
[all]
type=file
lock=true
filename=/var/log/snapwebsites/all.log
[file]
lock=false
filename=/var/log/snapwebsites/firewall.log
Notice that for the all.log file I require a lock so multiple writers do not mangle the logs between each other. It's not necessary for the [file] section because I only have one process (no threads) for that one.
Both offer you a way to add your own appenders. So for example if you have a Qt application with an output window, you could write an appender to send the output of the SNAP_LOG_ERROR() calls to that window.
snaplogger also offers a way to extend the variable support in messages (also called the format). For example, I can insert the date using the ${date} variable, and then tweak it with a parameter: to output only the year, I use ${date:year}. This variable parameter support is also extensible.
snaplogger can filter the output by severity (like syslog), by a regex, and by component. We have a normal and a secure component; the default is normal. I want logs sent to the secure component to be written to secure files, meaning in a sub-directory which is far more protected than the normal logs that most admins can review. When I run my HTTP services, I sometimes send information such as the last 3 digits of a credit card, and I prefer to have those in a secure log. It could also be password-related errors; anything I deem to be a security risk in a log, really. Again, components are extensible so you can have your own.
One little point about the redirecter class. It needs to be destroyed properly, and only once. The destructor will ensure this will happen if the function it is declared in actually returns, and the object itself is never copied.
To ensure it can't be copied, provide private copy and assignment operators:
class redirecter
{
public:
    redirecter(std::ostream & src, std::ostream & dst)
        : src_(src), sbuf_(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src_.rdbuf(sbuf_); }
private:
    std::ostream & src_;
    std::streambuf * const sbuf_;
    // Prevent copying.
    redirecter( const redirecter& );
    redirecter& operator=( const redirecter& );
};
I'm using this technique by redirecting std::clog to a log file in my main(). To ensure that main() actually returns, I place the guts of main() in a try/catch block. Then elsewhere in my program, where I might call exit(), I throw an exception instead. This returns control to main() which can then execute a return statement.
Basic Logger
#define myerr(e) {CriticalSectionLocker crit; std::cerr << e << std::endl;}
Used as myerr("ERR: " << message); or myerr("WARN: " << message << code << etc);
It is very effective.
Then do:
./programname.exe 2> ./stderr.log
perl parsestderr.pl stderr.log
or just parse stderr.log by hand
I admit this is not for extremely performance critical code. But who writes that anyway.