If I have a file that's meant to be required by other files, is it possible to get the absolute filepath of the file that's requiring it?
So if lib_file.cr has a macro that is meant to be called by the app_file.cr that imported it, can that macro discover the filepath to app_file.cr at compile time?
I tried stuff like this:
macro tester
puts __DIR__
puts __FILE__
end
But when invoked from the file doing the requiring, it gives nothing at compile time and this at runtime:
?
expanded macro: app_file
If I change the macro to this:
macro tester
{{puts __DIR__}}
{{puts __FILE__}}
end
It gives me this at compile time:
"/home/user/Documents/crystal_test/lib_file"
"/home/user/Documents/crystal_test/lib_file.cr"
So is there a trick to get the full path to app_file.cr inside lib_file.cr at compile time?
You can use default args with __FILE__ to get the file that is calling the macro.
macro tester(file = __FILE__)
{{puts file}} # Print at compile time
puts {{file}} # Print at runtime
end
I had some difficulties overriding the Perl library path when testing Perl code in which the .pl or .pm files use use lib or unshift @INC. My question is:
Is it a bad idea to use use lib or unshift @INC in production code, since they are hard to test? prove -lvr cannot override them either.
Code test.pl
push @INC, '/push/inc/lowest_priority';
use lib "/top/priority/use/lib/second_priority";
unshift @INC, "/unshift/inc/lib/first_priority";
foreach my $inc (@INC) {
    print "INC=>$inc\n";
}
Set the Perl environment variable:
export PERL5LIB=/export/PERL5LIB/env/lib:$PERL5LIB
Output of perl -I/cmd/Iinclude/lib/ test.pl
INC=>/unshift/inc/lib/first_priority
INC=>/top/priority/use/lib/second_priority
INC=>/cmd/Iinclude/lib/
INC=>/export/PERL5LIB/env/lib
INC=>/usr/local/lib64/perl5
INC=>/usr/local/share/perl5
INC=>/usr/lib64/perl5/vendor_perl
INC=>/usr/share/perl5/vendor_perl
INC=>/usr/lib64/perl5
INC=>/usr/share/perl5
INC=>/push/inc/lowest_priority
I don't hard-code paths unless I have no other options.
In general, you don't want to hardcode things that you can provide to your program in some other fashion so it can respond to whatever environment it's in rather than only the environment where you developed it. One of those environments could be your testing environment.
You can set the library search path from outside the program, and that makes it more flexible.
And, since you hard-code them and add them as the program compiles and runs, they take effect after anything you've set beforehand from the environment or the command line. Here's what happens in your setting:
You start with the default @INC.
You start to "run" your program, and the compilation phase begins. It compiles the entire program before it executes any run-time statements.
As it compiles, it encounters the use lib and executes that pragma immediately. Now /top/priority/use/lib/second_priority is at the beginning of @INC.
For the rest of the compilation phase, /top/priority/use/lib/second_priority is the first thing in @INC. That's where subsequent use calls will look for things.
The compilation phase finishes and the program transitions into the run phase.
It encounters the push and executes that. Now /push/inc/lowest_priority is the last element of @INC.
It skips over the use lib because the compilation phase handled the pragma.
It encounters the unshift and executes that. Now /unshift/inc/lib/first_priority is the first item in @INC.
Subsequent require calls (a runtime feature) will look in /unshift/inc/lib/first_priority first.
I don't know where you expect to find the library you want to load, but you have to supply the full path to it. There may be extra directories under lib/ that matter and that you haven't accounted for.
I might be misunderstanding your problem but local::lib allows you to "manually" tune your module path. You should be able to use it to control what paths are used for your test environment.
When debugging CMake code, I constantly find myself writing things like the following:
message( "[some_filename.cmake]: some message about what is going on here." )
In C++, I use macros to automatically log the file name and line number - is this possible in CMake? Ideally, I'd like to write a macro so that the following:
log_info( "some message about what is going on here." )
prints this to the console:
|info | some_filename.cmake[72] some message about what is going on here.
With CMAKE_CURRENT_LIST_FILE and CMAKE_CURRENT_LIST_LINE this should be possible.
However, using CMAKE_CURRENT_LIST_LINE directly in the macro will always give you the line in the macro, not where the macro was used (at least with CMake 3.6.1). Thus you need to pass it as an argument:
macro(log_info _line)
    # get the path of the CMakeLists file evaluating this macro
    # relative to the project's root source directory
    file(RELATIVE_PATH _curr_file
         "${PROJECT_SOURCE_DIR}"
         "${CMAKE_CURRENT_LIST_FILE}")
    message("|info | ${_curr_file}[${_line}] ${ARGN}")
endmacro(log_info)
And use it as
log_info(${CMAKE_CURRENT_LIST_LINE} "a info message")
A more advanced solution would be:
macro(log _level _line)
    # get the path of the CMakeLists file evaluating this macro
    # relative to the project's root source directory
    file(RELATIVE_PATH _curr_file
         "${PROJECT_SOURCE_DIR}"
         "${CMAKE_CURRENT_LIST_FILE}")
    set(level_padded " ")
    if("${_level}" MATCHES [Dd][Ee][Bb][Uu][Gg])
        set(level_padded "debug ")
    elseif("${_level}" MATCHES [Ii][Nn][Ff][Oo])
        set(level_padded "info ")
    elseif("${_level}" MATCHES [Ee][Rr][Rr][Oo][Rr])
        set(level_padded "error ")
    endif()
    message("|${level_padded} | ${_curr_file}[${_line}] ${ARGN}")
endmacro(log)
Usage:
log(debug ${CMAKE_CURRENT_LIST_LINE} "debug info")
log(Info ${CMAKE_CURRENT_LIST_LINE} "info message")
log(ERROR ${CMAKE_CURRENT_LIST_LINE} "error message")
I don't think this is possible in a general manner. CMake does not offer much help with regard to debugging or introspection.
I can imagine a macro that uses global variables which must be set by the user at the beginning of each macro or function. In the end, it wouldn't reduce the amount of manual work but would clutter the code.
Maybe you can use a function or macro of your editor. Some editors can insert the current file name, the current time stamp, or a pseudo-random UUID with a keystroke. Whenever you want to log a message, just add such a value and you can later search the code for that unique sequence. Not sure whether this is a better workflow for you.
The best place would be a feature request for CMake. You are not the only one who could profit from such a feature. Maybe Kitware or someone else is willing to develop and upstream a log macro.
In my C++ Linux application I have this macro:
#define PRINT(format,arg...) printf(format,##arg)
I want to add the date and time to the beginning of the string that is passed to PRINT. (It is a log, so I want it at runtime, with variables.)
How do I change this macro in order to do that?
Thanks
Do you want the compile-time or the runtime date added to the string? If the former:
#define PRINT(format,arg...) printf(__DATE__ ":" __TIME__ " " format,##arg)
will work most of the time.
Note that this will only work if invocations of PRINT only use a string literal for the format string. (ie, PRINT( "foo" ) will work, but PRINT( x ) where x is a variable will not).
If you want a runtime date and time, just append "%s" to the format and then add a call to a function that returns what you want before the arguments.
If you want the local runtime date and can use boost.date_time:
#define DATE_TODAY to_simple_string(day_clock::local_day())
#define PRINT(format,arg...) printf( (DATE_TODAY + ": " + format).c_str(), ##arg)
You can also use day_clock::universal_day() if you want UTC time.
Assuming that you want the compile-time date and that your compiler has a __DATE__ macro that returns the date:
#define PRINT(format,arg...) printf(__DATE__ ": " format,##arg)
If you want the runtime date, then you can do something like this:
std::string format_current_time()
{
// format the time as you like and return it as an std::string
}
#define PRINT(format,arg...) printf("%s: " format, format_current_time().c_str(), ##arg)
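A minimal sketch of such a helper, assuming a strftime-based format is acceptable (the exact format string below is just an example):

#include <ctime>
#include <string>

std::string format_current_time()
{
    // Format the current local time, e.g. "2016-05-04 13:37:00".
    // Note: std::localtime is not thread-safe; prefer localtime_r/localtime_s in threaded code.
    char buf[32];
    std::time_t now = std::time(NULL);
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", std::localtime(&now));
    return std::string(buf);
}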
If you need the current datetime, you have to implement a regular function to do what you ask, since it's impossible for a C macro to return the data you are looking for.
Remember that a C macro is replaced by the C preprocessor at compile time.
I'm running a program that does processing on a file.
I want to be able to supply the program with several files, and by attaching to it with gdb, I want to get a memory dump at a certain point in the code for each of the files. I want the dump for each file to go to a file with the same filename as the input file (maybe after formatting it a little, say adding a suffix)
So suppose I have a function called HereIsTheFileName(char* filename), and another function called DumpThisMemoryRegion(void* startAddr, void* endAddr). I want to do something like the following:
To get the file name into a gdb convenience variable:
break HereIsTheFileName
commands 1
set $filename = malloc(strlen(filename) + 1)
call memcpy($filename, filename, strlen(filename) + 1)
end
Then to dump the memory to the filename I saved earlier:
break DumpThisMemoryRegion
commands 2
append binary memory "%s.memory"%$filename startAddr endAddr
end
(I would even settle for the filename as it is, without formatting, if that turns out to be the difficult part)
However, I couldn't get gdb to accept anything except an explicit file name for the append/dump commands. When I ran "append binary memory $filename ..." I got the output in the file "/workdir/$filename".
Is there any way to make gdb choose the file name at runtime?
Thanks!
I don't know how to make append accept a runtime filename, but you can always cheat a bit by writing the whole thing to a file and then sourcing that file, using logging.
By putting this in your ~/.gdbinit
define reallyappend
    printf "using gdbtmp.log to dump memory to file %s\n", $arg0
    set logging file gdbtmp.log
    set logging overwrite on
    set logging redirect on
    set logging on
    printf "append binary memory %s 0x%x 0x%x", $arg0, $arg1, $arg2
    set logging off
    set logging redirect off
    set logging overwrite off
    source gdbtmp.log
end
you can use the reallyappend command instead, for example with:
(gdb) set $filename = "somethingruntimegenerated"
(gdb) reallyappend $filename startAddr endAddr
I don't know if logging works OK inside a "commands" block, but you can give it a shot at least.
Yeah, you can't use a variable here for the filename argument.
The best suggestion I can offer is to write a script that will set all the breakpoints and set up the "append" commands, and use text editing or awk and sed to set up the filenames in the script.
In MFC C++ (Visual Studio 6) I am used to using the TRACE macro for debugging. Is there an equivalent statement for plain Win32?
_RPTn works great, though not quite as convenient. Here is some code that recreates the MFC TRACE statement as a function allowing a variable number of arguments. It also adds a TRACEF macro which prepends the source file and line number so you can click back to the location of the statement.
Update: The original code on CodeGuru wouldn't compile for me in Release mode so I changed the way that TRACE statements are removed for Release mode. Here is my full source that I put into Trace.h. Thanks to Thomas Rizos for the original:
// TRACE macro for win32
#ifndef __TRACE_H__850CE873
#define __TRACE_H__850CE873
#include <crtdbg.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#ifdef _DEBUG
#define TRACEMAXSTRING 1024
static char szBuffer[TRACEMAXSTRING];
inline void TRACE(const char* format, ...)
{
    va_list args;
    va_start(args, format);
    int nBuf;
    nBuf = _vsnprintf(szBuffer,
                      TRACEMAXSTRING,
                      format,
                      args);
    szBuffer[TRACEMAXSTRING - 1] = '\0'; // _vsnprintf does not null-terminate on truncation
    va_end(args);
    _RPT0(_CRT_WARN, szBuffer);
}
#define TRACEF _snprintf(szBuffer,TRACEMAXSTRING,"%s(%d): ", \
&strrchr(__FILE__,'\\')[1],__LINE__); \
_RPT0(_CRT_WARN,szBuffer); \
TRACE
#else
// Remove for release mode (__noop is an MSVC intrinsic that discards its arguments)
#define TRACE __noop
#define TRACEF __noop
#endif
#endif // __TRACE_H__850CE873
From the msdn docs, Macros for Reporting:
You can use the _RPTn and _RPTFn macros, defined in CRTDBG.H, to replace the use of printf statements for debugging. These macros automatically disappear in your release build when _DEBUG is not defined, so there is no need to enclose them in #ifdefs.
There is also OutputDebugString. However, that will not be removed when compiling a release build.
Trace macros that provide messages with source code link, run-time callstack information, and function prototype information with parameter values:
Extended Trace: Trace macros for Win32
I just use something like this (from memory, not tested at all...)
#define TRACE(msg) { \
    std::ostringstream ss; \
    ss << msg << "\n"; \
    OutputDebugStringA(ss.str().c_str()); \
}
And then I can write things like:
TRACE("MyClass::MyFunction returned " << value << " with data=" << some.data);
You can wrap that in some #ifdefs to remove it in release builds easily enough.
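For example, a minimal sketch of that wrapping, assuming (as in the other answers here) that _DEBUG marks debug builds:

#include <sstream>
#include <windows.h>

#ifdef _DEBUG
// Debug builds: build the message with a string stream and hand it to the debugger output.
#define TRACE(msg) { \
    std::ostringstream ss; \
    ss << msg << "\n"; \
    OutputDebugStringA(ss.str().c_str()); \
}
#else
// Release builds: the macro discards its argument and the streamed expression is never evaluated.
#define TRACE(msg) ((void)0)
#endif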
I found that using the _RPT() macro will also work with a C source file in Visual Studio 2005. This article Debugging with Visual Studio 2005/2008: Logging and Tracing provides an overview of TRACE, _RPT, and other logging type macros.
I generate lines for a log file called ASSRTLOG, which contains logs, and when writing a log entry to the file I also execute the following line of source code:
_RPT1(_CRT_WARN, "ASSRTLOG: %s", szLog1);
This line sends the same log text that goes into the log file to the Output window of the Visual Studio 2005 IDE.
You might be interested in the mechanics behind the approach we are using for logging. We have a function PifLogAbort() which accepts a series of arguments that are then used to generate a log. These arguments include the name of the file where the log is being generated along with the line number. The macro looks like this:
#define NHPOS_ASSERT_TEXT(x, txt) if (!(x)) { PifLogAbort( (UCHAR *) #x , (UCHAR *) __FILE__ , (UCHAR *) txt , __LINE__ );}
and the function prototype for PifLogAbort() looks like this:
PifLogNoAbort(UCHAR *lpCondition, UCHAR *lpFilename, UCHAR *lpFunctionname, ULONG ulLineNo)
and to use the macro we will insert a line like this:
NHPOS_ASSERT_TEXT(sBRetCode >= 0, "CliEtkTimeIn(): EtkTimeIn() returned error");
What this macro does is generate a log entry with the provided text if the return code is less than 0 (the assertion fails). The log includes the condition that triggered it, along with the file name and line number.
The function PifLogAbort() generates logs with a specified length and treats the output file as a circular buffer. The logs have a time and date stamp as well.
In those cases where we want to generate the descriptive text dynamically at run time, perhaps to provide the actual error code value, we use the sprintf() function with a buffer as in the following code sequence:
if (sErrorSave != STUB_BM_DOWN) {
    char xBuff[128];
    sprintf(xBuff, "CstSendBMasterFH: CstComReadStatus() - 0x%x, sError = %d", usCstComReadStatus, CliMsg.sError);
    NHPOS_ASSERT_TEXT((sErrorSave == STUB_BM_DOWN), xBuff);
}
If we do not want the logs to be generated, all we need to do is go to the single header file where the macro is defined, define it to be nothing, and recompile. However, we have found that these logs can be very valuable when investigating field issues, and they are especially useful during integration testing.
Windows Events are a potential replacement for TRACE macros, depending on your particular scenario. The code gets compiled into both Debug and Release configurations. Event tracing can then be dynamically enabled and disabled, displayed in real time, or dumped on a client's machine for later diagnosis. The traces can be correlated with trace information gathered from other parts of the OS as well.
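One possible sketch (an assumption on my part, since no specific API is named above) uses the TraceLogging flavor of ETW from the Windows 10 SDK; the provider name and GUID below are made-up placeholders:

#include <windows.h>
#include <TraceLoggingProvider.h>

// Placeholder provider handle, name, and GUID - generate your own GUID for a real provider.
TRACELOGGING_DEFINE_PROVIDER(
    g_hMyProvider,
    "MyCompany.MyApp.Tracing",
    (0xa1b2c3d4, 0x1111, 0x2222, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa));

void Example()
{
    TraceLoggingRegister(g_hMyProvider);

    int value = 42;
    // The event is emitted only when a listener enables the provider,
    // so it is cheap to leave in release builds.
    TraceLoggingWrite(g_hMyProvider, "Checkpoint",
                      TraceLoggingValue(value, "value"));

    TraceLoggingUnregister(g_hMyProvider);
}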
If you merely need to dump information whenever code reaches certain checkpoints, together with variable content, stack traces, or caller names, Visual Studio's Tracepoints are a non-intrusive option to do so.