I need to have two files in my setup. The second file will perform some variable definitions, based on preprocessor directives read from the first file. The first file will then have to do things like set up an object, using the variables defined in the second file as parameters to the constructor.
Is it possible to have this back-and-forth flow of information between two source files?
To sum up (something like this):
*File1 uses a #define status true.
*File2 sees the preprocessor directive from File1. If it's true, it sets up some variables.
*File1 then initializes an object with the variables from File2.
In this example, steps 1 and 3 are redundant: if File1 sets up the preprocessor directive, it already knows which variables File2 is going to set up, so they could simply be hardcoded.
But I just want to experiment with what information I can pass back and forth... Can File2 read preprocessor directives from File1? Can File1 then read back information from File2?
EDIT (pseudocode):
//file1.cpp
#define status true
//this class is defined previously
//var1 was defined in file2.cpp
MyObject object1(var1);
//file2.cpp
//status is the preprocessor directive from file1
if (status == true)
{
    int var1 = 1;
}
There is absolutely no interaction between preprocessor directives in multiple files.
If you recall, #include is just automated copy/paste. So you can put your directive in a header file, then #include that file into your other files. When you update the directive in the header file it will be like you updated the directive in both files, so they can't get out of sync, but there is still no cross-file interaction.
Officially, a translation unit is a source file after processing all the #includes. Preprocessor directives in one translation unit can't affect any other translation unit.
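For illustration, a minimal sketch of that shared-header approach (the file name config.h is hypothetical):
//config.h
#ifndef CONFIG_H
#define CONFIG_H
#define STATUS true //the shared setting, visible to every file that includes this header
#endif
//file2.cpp
#include "config.h"
#if STATUS
int var1 = 1; //only defined when STATUS is true
#endif
//file1.cpp
#include "config.h"
extern int var1; //declared here, defined in file2.cpp
MyObject object1(var1); //MyObject defined previously, as in the question
Each .cpp file sees the same STATUS because the preprocessor pastes config.h into each translation unit separately; the two translation units still never talk to each other.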
In Stata, I'm trying to use the command include to run several regressions in one do file. The overall goal is to help in forecasting Natural Gas production.
I have a do file for each product and basin that I'm interested in. Within each do file I am running several versions of the regression based on data available at specific times (e.g. the first set of regressions is for product type 4, in basin 2, with information available in June 2020).
The regression command within the do file looks something like this:
include "gas\temp\NG var.doh"
foreach time in dec14 dec15 dec16 dec17 previous current {
    arima price_`perm4type' `perm4pvars' if tin(,`yq_`time'') , ar(1) ma()
}
I have the perm4pvars defined in the file NG var.doh like this:
local perm4pvars txfreeze PNGHH_`time' d.CPIE_`time' d.IFNRESPUOR_`time' POILWTI_`time'
When I run my do file, the `time' in the local from my .doh file doesn't get filled in, so I get an error message: "PNGHH_ is an ambiguous abbreviation"
How can I get the time to show up in my regress command?
I did try doing this in my .doh file
foreach time in dec14 dec15 dec16 dec17 previous current {
    local perm4pvars txfreeze PNGHH_`time' d.CPIE_`time' d.IFNRESPUOR_`time' POILWTI_`time'
}
I got it to run, but only for time=current.
The relationship between the included .do file and the main .do file is not symmetric.
The point of include is that the included do file is read into the main do file. But the included do file knows nothing about any files it is included in. So definitions in the included do file may not usefully refer to definitions in the main .do file, or to any others, unless they in turn are included, or so I presume.
That explains why your reference to the local macro time in the included file doesn't do what you want. It's not illegal in any do file or Stata program to refer to a local macro that doesn't exist (meaning: is not visible from the file), but, as here, the consequences may not be what you want.
Imo your issue is more basic than what the above answer suggests: you want to use locals from a loop outside of it. Or, in the case where you got it to run for time=current, you misunderstand what the loop does. (Each iteration overwrites perm4pvars, so after the loop it holds only the value from the last item, current.) It might be good to get a solid understanding of how loops work first, especially since you seem to be using multiple other loops that are not detailed in your question and which I can only assume are specified correctly.
Assuming the gas\temp\NG var.doh file only holds the line defining the local, i.e. local perm4pvars txfreeze PNGHH_`time' d.CPIE_`time' d.IFNRESPUOR_`time' POILWTI_`time', a way to get it working the way you want (or at least how I think you want it, since you don't detail what your loop is meant to achieve) is to move the include inside the loop, changing your code to:
foreach time in dec14 dec15 dec16 dec17 previous current {
    include "gas\temp\NG var.doh"
    arima price_`perm4type' `perm4pvars' if tin(,`yq_`time'') , ar(1) ma()
}
This way, the line in the included .do file can use the local time from the loop. If there is more in the .doh file, you would have to change your code a bit further to get it to work; however, I can't help you with that unless you give more information about the structure of the code and what you want to achieve with it.
In a C++ project, I describe methods and functions in my headers like so:
int foo(float, bool, std::string);
and in my implementation, name the parameters:
int
foo(float f,
    bool b,
    std::string str)
{
    ...
}
and if I generate my documentation with Doxygen using SOURCE_BROWSER=NO, VERBATIM_HEADERS=NO and EXTRACT_ALL=YES, then the resulting documentation contains the function signatures with the parameter names, which is what I want. But I also end up with all of my .cpp files in the 'File List' section alongside the headers.
I want to completely hide my source files, but I also want the documentation to contain parameter names, without having to go through the project and add thousands of them to the headers myself.
I have tried adding the src/ folder to EXCLUDE which does hide the sources but then they aren't parsed at all and the opposite problem arises where the parameters are nameless again.
Is there any way I can eat my cake and have it too?
It turns out that if I disable EXTRACT_ALL and add a \file command at the start of only the files I want to show (so all the headers), then I can retain the parameter names from the sources while hiding the source files.
Perhaps not the best solution, given that undocumented functions will no longer display, but since they are all documented in this project it does not pose a problem.
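For reference, a minimal sketch of that setup (the file and function names are hypothetical). In the Doxyfile:
EXTRACT_ALL      = NO
SOURCE_BROWSER   = NO
VERBATIM_HEADERS = NO
and at the top of each header that should appear:
//foo.hpp
/** \file
 *  Brief description of this header.
 */
#include <string>
/** Needs its own comment too, since EXTRACT_ALL is off. */
int foo(float f, bool b, std::string str);
The .cpp files stay in INPUT so Doxygen still parses them and picks up the parameter names, but without a \file command of their own they don't appear in the File List.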
I have a large C++ software application documented with doxygen. How can I set it up so that I can generate subdocuments for specific classes? The classes are documented with in-source commenting, their own .dox files, and images/ directory. I need to be able to generate a standalone pdf file specific to a single class.
I can use grouping to identify what will be included in that subdocument, but how do I generate output for a single group?
If you have a specific .dox file per requested output entity, then all you need to do is define in that file as input the files declaring and defining that class.
Say for example you want an output only for class MyClass which is declared in file myclass.hpp and whose implementation is in myclass.cpp, then in myclass.dox, just add this:
INPUT = ./myclass.cpp \
        ./myclass.hpp
Of course, you can have different paths for .cpp and .hpp. Or you can document more than one class.
Then, run doxygen on that myclass.dox file.
Also watch out for the output folder name. For the html output, the default name is html so you might want to rename it to avoid mixing up all the different outputs. For example, you might want to add in the dox file something like:
HTML_OUTPUT = html_myclass
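Since the goal is a standalone PDF per class, you can also enable LaTeX output in the same file and give it its own folder (the folder name below is just an example):
GENERATE_LATEX = YES
LATEX_OUTPUT   = latex_myclass
After running doxygen myclass.dox, building the generated LaTeX (with the default USE_PDFLATEX = YES, running make inside latex_myclass) should produce a refman.pdf containing only that class.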
I have a source and target in Informatica PowerCenter Developer. I need some other header names to be written to the target file automatically, without any manual entry. How can I import customized headers into the Informatica target?
What have you tried?
You can use a header command in the session configuration for the target. I haven't used it and couldn't find any documentation on it (i.e. what is possible and how, whether parameters can be used or not, etc.). I did test using an ECHO command (on Windows) to output its text to the header row, but it didn't seem to recognize parameters.
Or you can try to include the header as the first data output row. That means your output will have to be all string types, and length restrictions may compound the issue.
Or you can try using two mappings, one that truncates the files and writes the header and one which outputs the data specifying append in the session. You may need two target definitions pointing to the same files. I don't know if the second mapping would attempt to load the existing data (i.e. typecheck), in which case it might throw an error if it didn't match.
Other options may be possible, we don't do much with flat files.
The logic is: in the session command, there is an option called user defined headers. Type echo followed by the column names, comma-delimited:
echo A, B, C
I have copied a header and cpp file from one project to another. I need to change the file names now. The header file has the following code, which I don't understand. If I change the file name, how should I change this code? Thanks for helping.
#if !defined(AFX_MSELCFLCOMPDLG_H__8687FD1A_777D_4967_A331_42C8536DE2DE__INCLUDED_)
#define AFX_MSELCFLCOMPDLG_H__8687FD1A_777D_4967_A331_42C8536DE2DE__INCLUDED_
#if _MSC_VER > 1000
#pragma once
#endif
That is called an include guard and it prevents the file from being included more than once. You don't need to change it if you change the filename; the long string of digits following the name will be unique enough. If you want to keep the name and the constant in sync, change the "MSELCFLCOMPDLG_H" portion to whatever your new file name is.
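For example, if the new file were named MyNewDlg.h (a hypothetical name), the guard would become:
#if !defined(AFX_MYNEWDLG_H__8687FD1A_777D_4967_A331_42C8536DE2DE__INCLUDED_)
#define AFX_MYNEWDLG_H__8687FD1A_777D_4967_A331_42C8536DE2DE__INCLUDED_
keeping the original GUID suffix so the macro stays unique across the project.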
You don't need to, but you probably should.
This code is there as a header guard. It stops the contents from being included multiple times into a single source file.
Changing it is a matter of maintenance and good practice.