I'm trying to optimize my C++ program. It uses Caffe.
When executing my program, Caffe outputs around 1 GB (!) of INFO logs every 15 minutes. I suspect this impacts efficiency significantly, but I haven't found a way to turn logging off. In this question someone suggested setting FLAGS_v manually.
With the following code I can disable VLOG logs by level, but LOG(x) logs are unaffected.
First lines in main():
FLAGS_v = 1; // disables VLOG(2), VLOG(3) and VLOG(4)
VLOG(0) << "Verbose 0";
VLOG(1) << "Verbose 1";
VLOG(2) << "Verbose 2";
VLOG(3) << "Verbose 3";
VLOG(4) << "Verbose 4";
LOG(INFO) << "LOG(INFO)";
LOG(WARNING) << "LOG(WARNING)";
LOG(ERROR) << "LOG(ERROR)";
Output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0523 19:06:51.484634 14115 main.cpp:381] Verbose 0
I0523 19:06:51.484699 14115 main.cpp:382] Verbose 1
I0523 19:06:51.484705 14115 main.cpp:386] LOG(INFO)
W0523 19:06:51.484710 14115 main.cpp:387] LOG(WARNING)
E0523 19:06:51.484715 14115 main.cpp:388] LOG(ERROR)
Is there another flag I'm unaware of? I'm thinking of commenting out every LOG(INFO) line, but I would like a more elegant solution. (I'd prefer a C++ solution over a command-line-flag solution.)
This works in C++ source code.
google::InitGoogleLogging("XXX");
google::SetCommandLineOption("GLOG_minloglevel", "2");
Alternatively, you can set the environment variable
GLOG_minloglevel=2
and then run your executable.
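For example, on Linux (the executable name here is just a placeholder):
GLOG_minloglevel=2 ./your_program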
You can find more information here (at the bottom of this page there is a section on stripping LOG()s from your code using a macro definition).
If you want to turn logging off at the code level, you can use this.
Just add the line below to src/caffe/net.cpp in the Init method and rebuild Caffe:
fLI::FLAGS_minloglevel=3;
Partial view of the function where this line should be added:
template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
fLI::FLAGS_minloglevel=3;
// Set phase from the state.
phase_ = in_param.state().phase();
// Filter layers based on their include/exclude rules and
// the current NetState.
NetParameter filtered_param;
FilterNet(in_param, &filtered_param);
LOG(INFO) << "Initializing net from parameters: " << std::endl
<< filtered_param.DebugString();
// Create a copy of filtered_param with splits added where necessary.
NetParameter param;
InsertSplits(filtered_param, &param);
// Basically, build all the layers and set up their connections.
name_ = param.name();
...
Set the log level according to your needs.
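If you would rather not patch Caffe itself, a similar effect can usually be had from your own main(), before any Caffe objects are constructed. This is only a minimal sketch, assuming glog's FLAGS_minloglevel is visible through <glog/logging.h>:
#include <glog/logging.h>

int main(int argc, char* argv[]) {
    FLAGS_minloglevel = 2;  // 0 = INFO, 1 = WARNING, 2 = ERROR, 3 = FATAL
    google::InitGoogleLogging(argv[0]);
    // ... build the caffe::Net / run the solver here ...
    return 0;
}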
The environment variable "GLOG_minloglevel" will filter out some logs, but the statements have still been compiled into your executable. If you want to remove them at compile time, define a macro:
#define GOOGLE_STRIP_LOG 1
This is the comment in logging.h:
// The global value of GOOGLE_STRIP_LOG. All the messages logged to
// LOG(XXX) with severity less than GOOGLE_STRIP_LOG will not be displayed.
// If it can be determined at compile time that the message will not be
// printed, the statement will be compiled out.
//
// Example: to strip out all INFO and WARNING messages, use the value
// of 2 below. To make an exception for WARNING messages from a single
// file, add "#define GOOGLE_STRIP_LOG 1" to that file _before_ including
// base/logging.h
#ifndef GOOGLE_STRIP_LOG
#define GOOGLE_STRIP_LOG 0
#endif
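As an illustration of that comment, the following sketch strips LOG(INFO) statements from a translation unit at compile time (a value of 2 would also strip WARNING); the define must come before the first include of glog/logging.h:
#define GOOGLE_STRIP_LOG 1  // strip severities below WARNING at compile time
#include <glog/logging.h>

int main(int argc, char* argv[]) {
    google::InitGoogleLogging(argv[0]);
    LOG(INFO) << "compiled out entirely";  // stripped by GOOGLE_STRIP_LOG 1
    LOG(WARNING) << "still compiled in";
    return 0;
}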
To add to Qi Cai's answer: if there is a dependency on the gflags library, remove the "GLOG_" prefix:
google::SetCommandLineOption("minloglevel", "2");
Each severity level maps to a number: INFO = 0, WARNING = 1, ERROR = 2, FATAL = 3.
With glog at commit d4e8eb (2021-03-02), the following works:
#include <glog/logging.h>
FLAGS_minloglevel = 100;
WriteConsole does not work with PowerShell ISE.
Neither WriteConsoleW nor WriteConsoleA does.
See, for example, this program:
#include <iostream>
#include <Windows.h>
void w() {
DWORD written;
BOOL const success = WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), L"Printed\n", 8, &written, nullptr);
std::wcout << (success ? L"Success" : L"Failure") << L". Wrote " << written << L" characters." << std::endl;
}
void a() {
DWORD written;
BOOL const success = WriteConsoleA(GetStdHandle(STD_OUTPUT_HANDLE), "Printed\n", 8, &written, nullptr);
std::cout << (success ? "Success" : "Failure") << ". Wrote " << written << " characters." << std::endl;
}
int main() {
w();
a();
return 0;
}
Run from PowerShell (or Command Prompt, or Git Bash), it prints:
Printed
Success. Wrote 8 characters.
Printed
Success. Wrote 8 characters.
But from PowerShell ISE:
Failure. Wrote 0 characters.
Failure. Wrote 0 characters.
To provide background information on Bertie Wheen's own helpful answer:
Perhaps surprisingly, the Windows PowerShell ISE does not allocate a console by default. (The console-like UI that the ISE presents is not a true Windows console).
A console is allocated on demand, the first time a console-subsystem program is run in a session (e.g., cmd /c ver).
Even once that has happened, however, interactive console-subsystem programs are fundamentally unsupported (try choice /m "Prompt me", for instance).
Interactively, you can test if a console has been allocated or not with the following command: [Console]::WindowTop; if there's no console, you'll get a The handle is invalid error.
It follows from the above that your program cannot assume that a console is present when run in the ISE.
One option is to simply not support running in the ISE, given that it is no longer actively developed and that there are various reasons not to use it (bottom section), notably that it cannot run PowerShell (Core) 6+ and has the limitations with respect to console-subsystem programs mentioned above.
As for a successor environment: The actively developed, cross-platform editor that offers the best PowerShell development experience is Visual Studio Code with its PowerShell extension.
As for the potential reason for the poor console support in the ISE: zett42 notes:
A possible reason why ISE developers choose not to allocate a console could stem from the historic difficulties of creating a custom, embedded console within an app's own window. Developers had to resort to hackish, unsupported ways of doing that. Only recently (2018) Windows got a dedicated pseudo-console (ConPTY) API.
The reason is shown by this program:
#include <iostream>
#include <Windows.h>
int main() {
DWORD const file_type = GetFileType(GetStdHandle(STD_OUTPUT_HANDLE));
if (file_type == FILE_TYPE_CHAR) {
std::cout << "char" << std::endl;
} else if (file_type == FILE_TYPE_PIPE) {
std::cout << "pipe" << std::endl;
} else {
std::cout << file_type << std::endl;
}
return 0;
}
When run from PowerShell (or Command Prompt, or Git Bash), it prints:
char
But from PowerShell ISE:
pipe
WriteConsole cannot write through a pipe, and thus fails. The same thing happens when run from PowerShell / Command Prompt / Git Bash if the output is piped.
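A common workaround, sketched here as an assumption rather than taken from the answers above, is to check the handle type and fall back to WriteFile when standard output is not a real console:
#include <Windows.h>

// Writes narrow text to standard output whether it is a console, a pipe or a file.
bool write_stdout(const char* text, DWORD length) {
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD written = 0;
    if (GetFileType(out) == FILE_TYPE_CHAR) {
        // Real console: WriteConsole works.
        return WriteConsoleA(out, text, length, &written, nullptr) != 0;
    }
    // Pipe or file (e.g. PowerShell ISE, redirected output): use WriteFile instead.
    return WriteFile(out, text, length, &written, nullptr) != 0;
}

int main() {
    return write_stdout("Printed\n", 8) ? 0 : 1;
}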
My purpose is simple: to somehow see the logs printed by VLOG(5), which is provided by the glog library.
I have the following code:
google::InitGoogleLogging(argv[0]);
google::ParseCommandLineFlags(&argc, &argv, true);
FLAGS_logtostderr = 1;
FLAGS_v = 10;
LOG(INFO) << "info"; // OK, I see it
LOG(WARNING) << "warning"; // OK
VLOG(5) << "vlog"; // Nothing :(
Whether I set the flags manually here (FLAGS_logtostderr and FLAGS_v) or pass them on the command line (--v=10), I never find the string "vlog" anywhere: not on stdout, not on stderr, and not in any log file under /tmp. I don't think I changed the output path, though.
Am I missing anything here? Any idea how to enable VLOG?
Personally, I've never tried it with
FLAGS_logtostderr = 1;
FLAGS_v = 10;
VLOG works fine for me if I set GLOG_v=x as an environment variable, on both Linux and Windows. Alternatively, if you want to test it from the command line, you can do the following:
Windows:
C:\>set GLOG_v=5
C:\>set GLOG_logtostderr=1
C:\>YourProgramName
Linux:
$ GLOG_v=7 GLOG_logtostderr=1 ./YourProgramName
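If you prefer a pure in-code route, glog also exposes a per-module setter, google::SetVLOGLevel. The sketch below is an assumption on my part (in particular the "*" wildcard pattern and calling it before InitGoogleLogging), not something from the answer above:
#include <glog/logging.h>

int main(int argc, char* argv[]) {
    FLAGS_logtostderr = 1;
    google::SetVLOGLevel("*", 5);  // assumed: raise verbosity for every module matching the pattern
    google::InitGoogleLogging(argv[0]);

    VLOG(5) << "should now appear on stderr";
    return 0;
}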
I recently tried to create a program that reads an ODBC database and then writes the entries to an Excel file using the CRecordset class. The program compiles fine, but the problems come at execution time...
First error:
Debug Assertion Failed!
Program: C:\Windows\system32\mfc140ud.dll
File: f:\dd\vctools\vc7libs\ship\atlmfc\include\afxwin1.inl
Line: 24
Second error:
Debug Assertion Failed!
Program: C:\Windows\system32\mfc140ud.dll
File: f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dbcore.cpp
Line: 3312
Both errors point to mfc140ud.dll; it is not a missing file, so that's not the problem.
Here is the function where the exception is raised:
void parseDB(CRecordset &rs, const CString &SqlString, CString strOut) {
std::cout << "test2";
rs.Open(CRecordset::snapshot, SqlString, CRecordset::readOnly);
std::string entry;
std::fstream file;
std::cout << "test3";
while(!rs.IsEOF()) {
std::cout << "test4";
rs.GetFieldValue((short)0, strOut);
CT2CA pszConvertedAnsiString = strOut;
entry = pszConvertedAnsiString;
writeXLSX(entry.c_str(), file);
rs.MoveNext();
}
rs.Close();
}
The "std::cout << "test"" are here for debugging, and my program generates these errors right after the "test2" display, so I deducted that the error comes from the "Open" line.
This is the way I initialize the CRecordset:
CString sDsn;
CDatabase db;
CRecordset rs(&db);
CString strOut;
CString SqlString;
Then, I use a CALL SQL function in a switch-case:
switch (sequence) {
case 1:
SqlString = "CALL GETCUSNAME(AGENTS)";
break;
case 2:
SqlString = "CALL GETCUSNAME(CLIENT)";
break;
default:
AfxMessageBox(_T("Wrong entry!"));
}
I searched on many sites and couldn't find an answer, which is why I'm asking here. Thanks in advance.
The first assertion comes from AfxGetResourceHandle complaining that it has not been set up correctly.
This will usually happen because you either didn't call AfxWinInit at the start of your application (if you have a console application and didn't set it up with the MFC wizard, this is very likely the case), or you're writing an MFC DLL called from non-MFC code, and you didn't add AFX_MANAGE_STATE(AfxGetStaticModuleState( )); at the start of every externally visible function.
I believe the second is because MFC requires you to wrap CALL queries in curly braces, like so: {CALL GETCUSNAME(AGENTS)}. Otherwise the call is not recognized, and code execution enters a path it is not supposed to take.
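Putting both points together, a minimal sketch of a console entry point might look like this; the DSN string is a placeholder and the stored-procedure name is simply taken from the question, so treat it as an illustration rather than a tested setup:
#include <afxwin.h>  // AfxWinInit
#include <afxdb.h>   // CDatabase, CRecordset
#include <iostream>

int main()
{
    // Initialize MFC for a console application that was not created by the MFC wizard.
    if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
    {
        std::cerr << "MFC initialization failed" << std::endl;
        return 1;
    }

    CDatabase db;
    db.OpenEx(_T("DSN=MyDataSource"), CDatabase::noOdbcDialog);  // hypothetical DSN

    CRecordset rs(&db);
    // Note the curly braces around the ODBC CALL escape sequence.
    rs.Open(CRecordset::snapshot, _T("{CALL GETCUSNAME(AGENTS)}"), CRecordset::readOnly);
    // ... iterate with rs.IsEOF() / rs.GetFieldValue() / rs.MoveNext() as in the question ...
    rs.Close();
    db.Close();
    return 0;
}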
(Context) I'm developing a cross-platform (Windows and Linux) application for distributing files among computers, based on BitTorrent Sync. I've made it in C# already, and am now porting to C++ as an exercise.
BTSync can be started in API mode, and for such, one must start the 'btsync' executable passing the name and location of a config file as arguments.
At this point, my greatest problem is getting my application to deal with the executable. I came across Boost.Process when searching for a cross-platform process-management library and decided to give it a try. It seems that v0.5 is its latest working release, as some evidence suggests, and it can be inferred that a number of people are using it.
I implemented the library as follows (relevant code only):
File: test.hpp
namespace testingBoostProcess
{
class Test
{
void StartSyncing();
};
}
File: Test.cpp
#include <string>
#include <vector>
#include <iostream>
#include <boost/process.hpp>
#include <boost/process/mitigate.hpp>
#include "test.hpp"
using namespace std;
using namespace testingBoostProcess;
namespace bpr = ::boost::process;
#ifdef _WIN32
const vector<wstring> EXE_NAME_ARGS = { L"btsync.exe", L"/config", L"conf.json" };
#else
const vector<string> EXE_NAME_ARGS = { "btsync", "--config", "conf.json" };
#endif
void Test::StartSyncing()
{
cout << "Starting Server...";
try
{
bpr::child exeServer = bpr::execute(bpr::initializers::set_args(EXE_NAME_ARGS),
bpr::initializers::throw_on_error(), bpr::initializers::inherit_env());
auto exitStatus = bpr::wait_for_exit(exeServer); // type will be either DWORD or int
int exitCode = BOOST_PROCESS_EXITSTATUS(exitStatus);
cout << " ok" << "\tstatus: " << exitCode << "\n";
}
catch (const exception& excStartExeServer)
{
cout << "\n" << "Error: " << excStartExeServer.what() << "\n";
}
}
(Problem) On Windows, the above code will start btsync and wait (block) until the process is terminated (either by using Task Manager or by the API's shutdown method), just like desired.
But on Linux, it finishes execution immediately after starting the process, as if wait_for_exit() isn't there at all, though the btsync process isn't terminated.
In an attempt to see whether that has something to do with the btsync executable itself, I replaced it with this simple program:
File: Fake-Btsync.cpp
#include <cstdio>
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#define SLEEP Sleep(20000)
#include <Windows.h>
#else
#include <unistd.h>
#define SLEEP sleep(20)
#endif
using namespace std;
int main(int argc, char* argv[])
{
for (int i = 0; i < argc; i++)
{
printf("%s\n", argv[i]);
}
SLEEP;
return 0;
}
When used with this program, instead of the original btsync downloaded from the official website, my application works as desired. It will block for 20 seconds and then exit.
Question: What is the reason for the described behavior? The only thing I can think of is that btsync restarts itself on Linux. But how to confirm that? Or what else could it be?
Update: All I needed was to learn what forking is and how it works, as pointed out in sehe's answer (thanks!).
Question 2: If I use the System Monitor to send an End command to the child process 'Fake-Btsync' while my main application is blocked, wait_for_exit() will throw an exception saying:
waitpid(2) failed: No child processes
This is different from the behavior on Windows, where it simply says "ok" and terminates with status 0.
Update 2: sehe's answer is great, but didn't quite address Question 2 in a way I could actually understand. I'll write a new question about that and post the link here.
The problem is your assumption about btsync. Let's start it:
./btsync
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
BitTorrent Sync forked to background. pid = 24325. default port = 8888
So, that's the whole story right there: BitTorrent Sync forked to background. Nothing more, nothing less. If you want to prevent that, btsync --help tells you to pass --nodaemon.
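For the code in the question, that just means adding the flag to the Linux argument list, e.g.:
const vector<string> EXE_NAME_ARGS = { "btsync", "--nodaemon", "--config", "conf.json" };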
Testing Process Termination
Let's run btsync with --nodaemon using the test program. In a separate subshell, let's kill the child btsync process after 5 seconds:
sehe@desktop:/tmp$ (./test; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24553
[2] 24554
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
[20141029 10:51:16.344] total physical memory 536870912 max disk cache 2097152
[20141029 10:51:16.344] Using IP address 192.168.2.136
[20141029 10:51:16.346] Loading config file version 1.4.93
[20141029 10:51:17.389] UPnP: Device error "http://192.168.2.1:49000/l2tpv3.xml": (-2)
[20141029 10:51:17.407] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564. Deleting mapping and trying again: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.415] UPnP: ERROR removing TCP port 43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.423] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:21.428] Received shutdown request via signal 15
[20141029 10:51:21.428] Shutdown. Saving config sync.dat
Starting Server... ok status: 0
exit code 0
[1]- Done ( ./test; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m6.093s
user 0m0.003s
sys 0m0.026s
No problem!
A Better Fake Btsync
This should still be portable and be (much) better behaved when killed/terminated/interrupted:
#include <boost/asio/signal_set.hpp>
#include <boost/asio.hpp>
#include <algorithm>
#include <iostream>
#include <iterator>
int main(int argc, char* argv[])
{
boost::asio::io_service is;
boost::asio::signal_set ss(is);
boost::asio::deadline_timer timer(is, boost::posix_time::seconds(20));
ss.add(SIGINT);
ss.add(SIGTERM);
auto stop = [&]{
ss.cancel(); // one of these will be redundant
timer.cancel();
};
ss.async_wait([=](boost::system::error_code ec, int sig){
std::cout << "Signal received: " << sig << " (ec: '" << ec.message() << "')\n";
stop();
});
timer.async_wait([&](boost::system::error_code ec){
std::cout << "Timer: '" << ec.message() << "'\n";
stop();
});
std::copy(argv, argv+argc, std::ostream_iterator<std::string>(std::cout, "\n"));
is.run();
return 0;
}
You can test whether it is well behaved:
(./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
The same test can be run with the "official" btsync and the "fake" btsync. Output on my Linux box:
sehe@desktop:/tmp$ (./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24654
[2] 24655
./btsync
--nodaemon
Signal received: 15 (ec: 'Success')
Timer: 'Operation canceled'
exit code 0
[1]- Done ( ./btsync --nodaemon; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m5.014s
user 0m0.001s
sys 0m0.014s
I am struggling with an issue running a SQL statement against an Oracle database from C++, using OCCI. My code is as follows:
#include <iostream>
#include "occi.h"
namespace oc = oracle::occi;
int main() {
std::cout << "Setting up environment...\n";
oc::Environment * env = oc::Environment::createEnvironment();
std::cout << "Setting up connection...\n";
oc::Connection * conn = env->createConnection("user","pass","server");
std::cout << "Creating statement...\n";
// Very simple query...
oc::Statement * stmt = conn->createStatement("SELECT '1' FROM dual");
std::cout << "Executing query...\n";
oc::ResultSet * rs = stmt->executeQuery();
while(rs->next()) {
std::cout << rs->getString(1) << std::endl; //Error is thrown at this line, but after printing since I can see '1' on the console.
}
stmt->closeResultSet(rs);
conn->terminateStatement(stmt);
env->terminateConnection(conn);
oc::Environment::terminateEnvironment(env);
return 0;
}
The error that is shown is:
Unhandled exception at 0x1048ad7a (msvcp100d.dll) in MyDatabaseApp.exe: 0xC0000005: Access violation reading location 0xccccccd0.
My program stops inside 'xstring' at the following line of code:
#if _ITERATOR_DEBUG_LEVEL == 0
....
#else /* _ITERATOR_DEBUG_LEVEL == 0 */
typedef typename _Alloc::template rebind<_Elem>::other _Alty;
_String_val(_Alty _Al = _Alty())
: _Alval(_Al)
{ // construct allocator from _Al
....
}
~_String_val()
{ // destroy the object
typename _Alloc::template rebind<_Container_proxy>::other
_Alproxy(_Alval);
this->_Orphan_all(); //<----------------------Code stops here
_Dest_val(_Alproxy, this->_Myproxy);
_Alproxy.deallocate(this->_Myproxy, 1);
this->_Myproxy = 0;
}
#endif /* _ITERATOR_DEBUG_LEVEL == 0 */
If I change my query to:
oc::Statement * stmt = conn->createStatement("SELECT 1 FROM dual");
and the loop statement to:
std::cout << rs->getInt(1) << std::endl;
It works fine with no errors. I think this is because getting an integer simply returns a primitive, but when an object is being returned it is blowing up (I think on a destructor, but I'm not sure why...)
I have been playing around with this for hours today, and I am pretty stuck.
Some information about my system:
OS - Windows XP
Oracle Version - 10g
IDE - Microsoft Visual Studio 2010 Express C++
My project properties are as follows:
C/C++ - General - Additional Include Directories = C:\oracle\product\10.2.0\client_1\oci\include;%(AdditionalIncludeDirectories)
C/C++ - Code Generation - Multi-threaded Debug DLL (/MDd)
Linker - General - Additional Library Directories = C:\oracle\product\10.2.0\client_1\oci\lib\msvc\vc8;%(AdditionalLibraryDirectories)
Linker - Input - Additional Dependencies = oraocci10.lib;oraocci10d.lib;%(AdditionalDependencies)
I hope I haven't been confusing with too much info. Any help or insight would be great; thanks in advance!
EDIT: If I rewrite my loop, storing the value in a local variable, the error is thrown at the end of the loop:
while(rs->next()) {
std::string s = rs->getString(1); //s is equal to "1" as expected
std::cout << s << std::endl; //This is executed successfully
} //Error is thrown here
Usually this kind of problem comes from differences between the build environments (IDE) of the end user and the provider.
Check this.
Related problems:
Unhandled exception at 0x523d14cf (msvcr100d.dll)?
Why does this program crash: passing of std::string between DLLs
First, make sure you use the correct lib and DLL. If you compile in debug mode, then all libs and DLLs must be debug builds. Use the VC++ Modules view to be sure the proper DLL is loaded.
I was lucky that all the libs for my application were compiled for MSVC2010, so I just checked the debug- and release-mode DLLs and got a working application.
I revisited this issue about a month ago and found that the MSVC2010 OCCI library was built for Oracle 11g. We are running Oracle 10g, so I had to use the MSVC2005 library. So I installed the outdated IDE and loaded the debug library, and it worked (for some reason the release version wouldn't work, though).
EDIT
For anyone who is having the same problem I was: if downgrading the IDE from MSVC2010 to MSVC2005 with the appropriate libraries doesn't work, you could try upgrading the Oracle client from 10g to 11g and using the MSVC2010 library, as suggested by harvyS. In retrospect, that would probably have been the better solution.