I have a configuration file system written in C++ which uses the yaml-cpp library to parse and write YAML files. It is part of my static library.
I would like the ability to return a default value for a field that is requested by a user of the library (calling from their code), but which has not been defined in the user's YAML file.
For example, say the user wants to read the field foo from their custom config.yaml file:
int bar = config_reader.read<int>( "config.yaml", "foo" );
If they have foo: 10 in their config.yaml, then bar will be set to 10. However, I would also like to provide a default value (for example, 4) for the case where foo is omitted from config.yaml.
There are two possibilities I have thought of:
Have a set of static maps between field names and default values in a .cpp file which gets compiled into the static library. However, I would need different maps for different types, and I feel this could get messy with type checking and possibly require template specializations.
Have a YAML file which contains all of the default values for expected fields, which the configuration system falls back on if it cannot find the field in the user's config file. I think this would be my preferred solution, but I cannot think of a neat way of packaging this YAML file; I would rather the user didn't have to copy or point to this file each time they set up a new project that links against the static library.
I would provide the defaults in a YAML file in a global (i.e. non-user-specific) place and allow the user to override the values with user-specific ones.
Consider simply throwing an error if the global defaults are missing an entry; that will not happen by accident.
You can put the global defaults in /etc/default/YOURLIBNAME.yaml. The user's configuration nowadays mostly follows the XDG Base Directory Specification: use $XDG_CONFIG_HOME/YOURLIBNAME/config.yaml if XDG_CONFIG_HOME is set in the environment; if it is not set, use $HOME/.config/YOURLIBNAME/config.yaml.
If your library has to work under Windows, I would put the user-specific data under %APPDATA% in a YOURLIBNAME subdirectory.
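A rough sketch of that layered lookup with yaml-cpp (the YOURLIBNAME paths, the function name read_with_fallback, and the error handling are placeholders; a missing user config file would still need handling, e.g. by catching YAML::BadFile):
#include <yaml-cpp/yaml.h>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Resolve the user-specific config path per the XDG base directory spec.
static std::string user_config_path()
{
    if (const char *xdg = std::getenv("XDG_CONFIG_HOME"))
        return std::string(xdg) + "/YOURLIBNAME/config.yaml";
    const char *home = std::getenv("HOME");
    return std::string(home ? home : "") + "/.config/YOURLIBNAME/config.yaml";
}

template <typename T>
T read_with_fallback(const std::string &field)
{
    // 1. The user's own config wins if it defines the field.
    YAML::Node user = YAML::LoadFile(user_config_path());
    if (user[field])
        return user[field].as<T>();

    // 2. Otherwise fall back to the library-wide defaults.
    YAML::Node defaults = YAML::LoadFile("/etc/default/YOURLIBNAME.yaml");
    if (defaults[field])
        return defaults[field].as<T>();

    // 3. A missing entry in the global defaults is a packaging bug, so fail loudly.
    throw std::runtime_error("no default provided for field: " + field);
}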
I use Boost.Locale with the ICU backend (internally using GNU gettext) to do the translations. The translation uses dictionaries stored on disk. Search paths are provided through Boost's generator class like so:
boost::locale::generator gen;
gen.add_messages_path("path");
By inspecting Boost's source code, I found that the paths are later passed internally to the localization backend through localization_backend::set_option. In the case of the ICU localization backend implementation that I use, the paths finally end up in gnu_gettext::messages_info (the paths field).
Now for my question: my requirement is to make sure that the user cannot change the texts, e.g. by modifying the .mo dictionary file on disk. The reason I use Boost.Locale is its codepage translation support, multiple-language support, etc., and I want to keep that, but I don't want users to be able to freely redefine the texts later used in the application.
My initial thought was to use the dictionaries "in memory" in some way, e.g. by storing the .mo file contents inside the executable and passing the already-read data into the localization_backend somehow. However, after checking how it works internally (described above), it seems that the only supported option is to have the dictionaries read from disk in "real time" as I do the translations, which would include any changes the user has made to those files. Is it really like that, or am I missing something?
What are my options?
You can use the callback field on gnu_gettext::messages_info to provide a function that will be called instead of loading message files from disk. From Custom Filesystem Support:
namespace blg = boost::locale::gnu_gettext;
blg::messages_info info;
info.language = "he";
info.country = "IL";
info.encoding = "UTF-8";
info.paths.push_back(""); // You need some path, even an empty one
info.domains.push_back(blg::messages_info::domain("my_app"));
info.callback = some_file_loader; // Provide a callback
The callback signature is std::vector<char>(std::string const &file_name, std::string const &encoding). There's an example in the tests; this actually loads from disk, but you can adapt it to return hard-coded data instead.
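A minimal sketch of such a callback, assuming the .mo contents were embedded in the binary at build time (the embedded_mo_data table and its keys are placeholders for however you embed the data):
#include <map>
#include <string>
#include <vector>

// Hypothetical table of .mo file contents embedded at build time,
// keyed by the file name the backend asks for.
static const std::map<std::string, std::vector<char>> embedded_mo_data = {
    // { "he_IL/LC_MESSAGES/my_app.mo", { /* bytes of the .mo file */ } },
};

// Matches the callback signature quoted above. Returning an empty
// vector is treated as "file not found".
std::vector<char> load_embedded_mo(std::string const &file_name,
                                   std::string const &/*encoding*/)
{
    auto it = embedded_mo_data.find(file_name);
    return it != embedded_mo_data.end() ? it->second : std::vector<char>();
}
Then set info.callback = load_embedded_mo; in place of some_file_loader in the snippet above, and the backend no longer reads the dictionaries from disk.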
I am currently using Bruno Paz's File Templates extension to create some file templates. The issue is that I am having trouble locating where the templates are stored once they are created. I want to find where and how they are stored so I can hard-code some templates in; that way, users of my version of the extension will have premade templates they can access immediately instead of having to create them first. The readme indicates that the templates should be stored as follows on Windows:
C:\Users\User\AppData\Roaming\Code\User\FileTemplates
However, that path does not exist for me. The readme also states the following (which I could use as a possible workaround):
However, you can change the default location by adding the following to your user or workspace settings:
"fileTemplates.templates_dir": "path/to/my/templates"
However, I'm unaware of what workspace settings are in VS Code.
I have a large C++ software application documented with Doxygen. How can I set it up so that I can generate subdocuments for specific classes? The classes are documented with in-source comments, their own .dox files, and an images/ directory. I need to be able to generate a standalone PDF file specific to a single class.
I can use grouping to identify what will be included in that subdocument, but how do I generate output for a single group?
If you have a specific .dox file per requested output entity, then all you need to do is set, as the input in that file, the files declaring and defining that class.
Say, for example, you want output only for class MyClass, which is declared in myclass.hpp and implemented in myclass.cpp; then in myclass.dox just add this:
INPUT = ./myclass.cpp \
./myclass.hpp
Of course, you can have different paths for .cpp and .hpp. Or you can document more than one class.
Then, run doxygen on that myclass.dox file.
Also watch out for the output folder name. For the HTML output the default name is html, so you might want to rename it to avoid mixing up all the different outputs. For example, you might add something like this to the .dox file:
HTML_OUTPUT = html_myclass
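Since you want a standalone PDF per class, you would similarly enable and isolate the LaTeX output in myclass.dox (GENERATE_LATEX and LATEX_OUTPUT are standard Doxygen options; the directory name here is just an example):
GENERATE_LATEX = YES
LATEX_OUTPUT   = latex_myclass
Running make in latex_myclass should then build refman.pdf for that class (a LaTeX installation with pdflatex is required).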
I have a multi-file template in ReSharper, and I can use the $NAME$ macro to get the name of the original file and use it to name the other files in the template. But I also want to use the $NAME$ of the original file in the content of the other file templates.
Is this possible? I can't see a macro that seems suitable for the internal variables, as only the Current File Name seems to be available.
Does anyone know if this is possible, or how I might work around it?
As a workaround, you may create a parameter $FILENAME$ (macro "Current file name without extension") in the first file, e.g. in a comment, like:
class Foo
{
//$FILENAME$
}
Then you may reference this parameter in the other files of the multi-file template; it will contain the name of the first file, since the first file is generated before the others.
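For example, the second file of the template could then use the parameter in its own content (the class names here are purely illustrative):
class $FILENAME$Builder
{
    // builds instances of $FILENAME$
}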
Unfortunately, there isn't a macro that will give you this. I've added a feature request that you can vote on and track (more specific detail about your requirements would be useful): http://youtrack.jetbrains.com/issue/RSRP-415055
It is possible to write your own macros as part of a plugin, but there isn't a sure-fire way of getting the name of the first document in the created file set. The IHotspotSessionContext instance that is passed to the macro via IHotspotSession.Context property includes an enumerable of IDocument, from which you can get IDocument.Moniker, which will be the full path for file based documents. However, there's no guarantee of the order of the enumerable - it's backed by a hashset. You might be able to rely on implementation details (small set, no removes) to be able to use the first document as the original, but there is really no guarantee of this.
I have a mapping like
SA --> SQ --> EXPR --> TGT
The source and the target have the same structure.
There are a bunch of files (with the same structure) which will go through this mapping.
So I want to use a parameter file through which I will supply the file names manually for every run.
How do I use the parameter file in the session for the Source filename attribute?
Please suggest.
You could use the indirect source type, wherein your source file is really a list of files, and the session in turn reads each of the listed files one by one.
The parameter file could then reference the source file name (the list) as
$InputFile_myName=/a/b/c.list
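For instance, a parameter file scoped to the session could look like this (the folder, workflow, and session names are placeholders):
[MyFolder.WF:wf_load_files.ST:s_m_load_files]
$InputFile_myName=/a/b/c.list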
In line with what Raghav says, specify the name of a file that will hold the list of input files in the 'Source filename' property box for the SQ in question on the Mapping tab, and set 'Source filetype' to 'Indirect' in the session properties. If you already know the names of the input files ahead of time, you can list them in that file and deploy it with the workflow to the location you give in the 'Source file directory' property box. However, if you won't know the names of the input files until run time but do know their naming standard (e.g. "Input_files_name_ABC_*", where "*" represents variable text, such as a numeric value incremented per input file generated by some other process), then one way to deal with that is to use a Pre-Session Command, specifiable on the 'Components' tab of the session. Create one that builds a new file, at the location and with the name specified for the indirect input file referenced above, by using the Unix shell (or the cmd shell if running on Windows) to list the files conforming to the naming standard and redirect the listing output to that file.
The tricky thing is that there must be one or more files listed in that indirect input file. If it is empty, the workflow will fail (abend). An indirect input file must list at least one file (even if that listed file is empty), and that file must exist; the workflow fails if the indirect file reader gets no files to read or if a listed file is not present on the server. One way around this is to make sure an empty file conforming to the naming standard is present at all times. You can ensure that by creating a "touchfile" before executing the listing command that builds the indirect list file. On Unix you'd use the 'touch {path}/{filename}' command ({filename} could be, for example, "Input_files_name_ABC_TOUCHFILE"); on Windows you'd redirect an empty string to a file named the same way via the cmd shell. Either way, that will help you avoid an abend. Cleaning up is easy: a Post-Session Command can delete the empty touchfile, and likewise the indirect list file if desired.
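A sketch of those pre- and post-session commands for a Unix server (the source directory and naming standard are placeholders; the list file matches the indirect 'Source filename' from the earlier answer):
# Pre-Session Command: guarantee at least one file matching the standard exists,
# then rebuild the indirect list file from everything that matches.
touch /path/to/srcdir/Input_files_name_ABC_TOUCHFILE
ls /path/to/srcdir/Input_files_name_ABC_* > /a/b/c.list

# Post-Session Command: clean up the empty touchfile.
rm -f /path/to/srcdir/Input_files_name_ABC_TOUCHFILE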