How to enable a crate feature dynamically? [duplicate]

This question already has an answer here:
How do I 'pass down' feature flags to subdependencies in Cargo?
(1 answer)
Closed 2 years ago.
I've got a crate that can be compiled with or without a feature, let's say feat_crate.
I use that crate from a program that can also be compiled with or without a feature feat_app.
I'd like to enable feat_crate for the dependency whenever feat_app is enabled; feat_app being enabled when building the app (EDIT: with cargo run --features feat_app, not cargo run -- --feat_app as originally written).
I cannot find a simple way to do so without modifying the Cargo.toml file each time I want to change the enabled feature. I tried looking at build scripts, but the app's build script runs after the dependencies are compiled, so that doesn't seem to help.
I could probably use an environment variable read from the crate's build script, but then I would have to set that variable accordingly... I was hoping for a better solution. 🙄

From the features documentation:
Features can be used to reexport features of other packages. The session feature of package awesome (nb: that's the "current" package) will ensure that the session feature of the package cookie is also enabled.
session = ["cookie/session"]
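Applied to the question, the app's Cargo.toml can forward its own feature to the dependency in the same way. A sketch, using the feature names from the question; the dependency name mycrate and its version/path are assumptions:

```toml
[dependencies]
mycrate = { version = "1.0", path = "../mycrate" }

[features]
# Enabling feat_app on the app also enables feat_crate on the dependency.
feat_app = ["mycrate/feat_crate"]
```

Building with cargo run --features feat_app then turns on both features at once, with no Cargo.toml editing per build.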

Related

julia: rerun unittests upon changes to files

Are there Julia libraries that can run unit tests automatically when I make changes to the code?
In Python there is the pytest-xdist library, which can rerun unit tests when you make changes to the code. Does Julia have a similar library?
A simple solution could be made using the standard library module FileWatching; specifically FileWatching.watch_file. Despite the name, it can be used with directories as well. When something happens to the directory (e.g., you save a new version of a file in it), it returns an object with a field, changed, which is true if the directory has changed. You could of course combine this with Glob to instead watch a set of source files.
You could have a separate Julia process running, with the project's environment active, and use something like:
julia> import Pkg; import FileWatching: watch_file
julia> while true
           event = watch_file("src")
           if event.changed
               try
                   Pkg.pkg"test"
               catch err
                   @warn("Error during testing:\n$err")
               end
           end
       end
More sophisticated implementations are possible; with the above you would need to interrupt the loop with Ctrl-C to break out. But this does work for me and happily reruns tests whenever I save a file.
If you use a GitHub repository, you can set up Travis CI or AppVeyor to do this. This is the testing method used by many registered Julia packages. You will need to write the unit test suite (with using Test) and place it in a test/ subdirectory of the repository. Search for Julia plus those services for details.
Use a standard GNU Makefile and call it from various places depending on your use-case
Your .juliarc if you want to check for tests on startup.
Cron if you want them checked regularly
Inside your module's init function to check every time a module is loaded.
Since GNU makefiles detect changes automatically, calls to make will be silently ignored in the absence of changes.

OpenLayers 3 Build from master

I've cloned the OpenLayers 3 repo and merged the latest from master. There exists a recently merged pull request that I'm interested in exploring, but I'm not sure how to create a regular old comprehensive, non-minified build.
Does anyone know how to create a non-minified, kitchen sink (everything included) build for OpenLayers?
(similar to ol-debug.js).
You can use the ol-debug.json config to concatenate all sources for the library without any minification.
node tasks/build.js config/ol-debug.json ol-debug.js
Where the ol-debug.json looks like this:
{
"exports": ["*"],
"umd": true
}
The build.js task generates builds of the library given a JSON config file. The custom build tutorial describes how this can be used to create minified profiles of the library. For a debug build, you can simply omit the compile member of the build config. This is described in the task readme:
If the compile object is not provided, the build task will generate a "debug" build of the library without any variable naming or other minification. This is suitable for development or debugging purposes, but should not be used in production.

set Vim as a basic C++ Editor [duplicate]

This question already has answers here:
Configuring Vim for C++
(3 answers)
Closed 9 years ago.
I want to set Vim to work with C++, I just want to perform these tasks:
write code (you don't say?)
check and highlight C++ syntax
autocompletion (if is possible)
compile, run, debugging and return to the editor
tree-view project files on the side
statusbar
I know that many of these tasks can be done with plugins, so I need your help to make a list of required plugins and how to set them up together.
Why basic? Well, I'm taking the level-1 programming course at my university, and we will make simple command-line programs: things such as mathematical evaluations (functions, arrays, even or odd numbers, drawing triangles with asterisks, and so on).
I don't think you need any plugins... the features you want are already there.
-write code (you don't say?)
this is a given
-check and highlight C++ syntax
:syntax enable
-autocompletion (if is possible)
in insert mode, try
ctrl-n
ctrl-p
-compile, run, debugging and return to the editor
vim is an editor, not a compiler. You can, however, drop into a shell to run these commands, or use :!commandname. Try one of the following:
ctrl-z
g++ -o myprogram myprogram.cpp
fg
or
:!g++ -o myprogram myprogram.cpp
or just keep another terminal open.
-tree-view project files on the side
:!tree -C | less -R
-statusbar
already at the bottom. Try gvim for more toolbars et cetera.
Have fun!
BTW - this message was brought to you via vim and pentadactyl
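The built-in settings above can be collected in a minimal ~/.vimrc. A sketch; the g++ makeprg line is an assumption about your compiler and flags:

```vim
" syntax highlighting and filetype-aware indentation (built in)
syntax enable
filetype plugin indent on

" always show the status line at the bottom
set laststatus=2

" let :make compile the current file; adjust the compiler to taste
set makeprg=g++\ -Wall\ -o\ %<\ %
```

With this, :make compiles the file in the current buffer and :!./%< runs the result, returning you to the editor when it exits.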
Some plugins that might help you, which I tried long ago when I was getting started with vim:
IDE: http://www.vim.org/scripts/script.php?script_id=213
Tree view: http://www.vim.org/scripts/script.php?script_id=1658
Debugging: http://www.vim.org/scripts/script.php?script_id=3039
Completion: http://ctags.sourceforge.net/ and http://www.vim.org/scripts/script.php?script_id=1520
Statusbar: http://www.vim.org/scripts/script.php?script_id=3881 and its successor http://usevim.com/2013/01/23/vim-powerline/
You can search for further plugins at http://www.vim.org/scripts/index.php
That being said, I use vim just fine without any plugin for daily C++ development. It is also handy because I can use the same workflow when ssh'ing into a server or someone else's machine, without having to worry about major differences.
Also, C++ syntax highlighting works by default, as such language support is usually included in the distributed vim already.

Establish gtest version

How do I know which version of Gtest is being used in the project I'm working with? I'm working on a linux platform.
The source code of the libgtest and libgtest_main libraries doesn't contain any special function that reveals the version (something like GetGTestVersion()).
The header files don't define any version identifier either (something like GTEST_VERSION).
So you can't check the version of Google C++ Testing Framework at runtime inside user code.
But the maintainers provide, as part of the framework, the special script scripts/gtest-config, which:
...
provides access to the necessary compile and linking
flags to connect with Google C++ Testing Framework, both in a build prior to
installation, and on the system proper after installation.
...
Among other things this script has several options which connected with version:
...
Installation Queries:
...
--version the version of the Google Test installation
Version Queries:
--min-version=VERSION return 0 if the version is at least VERSION
--exact-version=VERSION return 0 if the version is exactly VERSION
--max-version=VERSION return 0 if the version is at most VERSION
...
The script's help also contains a usage example:
Examples:
gtest-config --min-version=1.0 || echo "Insufficient Google Test version."
...
This means you can test the version of the framework at build time using the gtest-config script.
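In a Makefile, that build-time check might look like this. A sketch; it assumes gtest-config is on your PATH and that 1.7 is the version you require:

```make
check-gtest:
	@gtest-config --min-version=1.7 || \
	    { echo "Insufficient Google Test version." >&2; exit 1; }
	@echo "Using Google Test $$(gtest-config --version)"
```

Making other targets depend on check-gtest fails the build early instead of producing confusing compile errors against the wrong framework version.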
Note:
The gtest-config script gets the actual version of the framework during configuration from variables declared in configure.ac:
...
AC_INIT([Google C++ Testing Framework],
[1.7.0],
[googletestframework@googlegroups.com],
[gtest])
...
And after running autoconf, the following identifiers are populated in the generated configure file:
...
# Identity of this package.
PACKAGE_NAME='Google C++ Testing Framework'
PACKAGE_TARNAME='gtest'
PACKAGE_VERSION='1.7.0'
PACKAGE_STRING='Google C++ Testing Framework 1.7.0'
PACKAGE_BUGREPORT='googletestframework@googlegroups.com'
PACKAGE_URL=''
...
# Define the identity of the package.
PACKAGE='gtest'
VERSION='1.7.0'
...
Since the framework is configured with the AC_CONFIG_HEADERS option, these identifiers are stored in the file build-aux/config.h and are available to the user at compile time.
The file CHANGES, in the gtest home directory, contains a gtest version number.
If you have cloned the official repo you can check the latest Git commit inside Google Test's directory (using for example git log -n 1 or git rev-parse HEAD) and compare it with the list of released versions.
In my case, the commit hash is ec44c6c1675c25b9827aacd08c02433cccde7780, which turns out to correspond to release-1.8.0.

How to automate module reloading when unit testing with Erlang?

I'm using Emacs and trying to get my unit-testing workflow as automated as possible. I have it set up so it works, but I have to manually compile my module under test, or the module containing the tests, before the Erlang shell recognizes my changes.
I have two files mymodule.erl and mymodule_tests.erl. What I would like to be able to do is:
Add test case to mymodule_tests
Save mymodule_tests
Switch to the Erlang Shell
Run tests with one line, like eunit:test(mymodule) or mymodule_tests:test()
Have Erlang reload mymodule and mymodule_tests before actually performing the tests
I have tried writing my own test method but it doesn't work.
-module(mytests).
-export([test/0]).
-import(mymodule).
-import(mymodule_tests).
-import(code).

test() ->
    code:purge(mymodule),
    code:delete(mymodule),
    code:load_file(mymodule),
    code:purge(mymodule_tests),
    code:delete(mymodule_tests),
    code:load_file(mymodule_tests),
    mymodule_tests:test().
I have also tried by putting -compile(mymodule). into mymodule_tests to see if I could get mymodule to automatically reload when updating mymodule_tests but to no avail.
I have also googled quite a bit but can't find any relevant information. As I'm new to Erlang, I'm thinking that I'm either searching for the wrong terms, e.g. erlang reload module, or that you are not supposed to be able to reload other modules when compiling another module.
Maybe the Erlang make can help you.
make:all([load]).
Reading from the doc:
This function first looks in the current working directory for a file named Emakefile (see below) specifying the set of modules to compile and the compile options to use. If no such file is found, the set of modules to compile defaults to all modules in the current working directory.
And regarding the "load" option:
Load mode. Loads all recompiled modules.
There's also a make:files/1,2 which allows you to specify the list of modules to check.
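A minimal Emakefile sketch for the modules in the question; the src/ source directory and ebin/ output directory are assumptions about your project layout:

```erlang
%% Emakefile — read by make:all/1; compiles and, with the [load] option,
%% reloads mymodule and mymodule_tests whenever their sources change.
{'src/*', [debug_info, {outdir, "ebin"}]}.
```

With this in place, running make:all([load]). in the shell before mymodule_tests:test() recompiles and reloads both modules in one step, replacing the manual purge/delete/load dance.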
Have you tried using l(mymodule). to reload the module after it's been compiled?