I am attempting to build a file in Puppet 5 using an ERB template. This ERB file uses class variables in the normal fashion, but is also constructed by inserting another Puppet-managed local file. However, I find that whenever I update the inserted file, it takes two Puppet runs to update the ERB-generated file. I want the updating to happen in one Puppet run.
It is easiest to see this with an example:
# test/manifests/init.pp
class test {
  # This file will be inserted inside the next file:
  file { '/tmp/partial.txt':
    source => 'puppet:///modules/test/partial.txt',
    before => File['/tmp/layers.txt'],
  }

  $inserted_file = file('/tmp/partial.txt')

  # This file uses a template and has the above file inserted into it:
  file { '/tmp/layers.txt':
    content => template('test/layers.txt.erb'),
  }
}
Here is the template file:
# test/templates/layers.txt.erb
This is a file
<%= @inserted_file %>
If I make a change to the file test/files/partial.txt, it takes two Puppet runs for the change to propagate to /tmp/layers.txt. For operational reasons it is important that the update happen in only one Puppet run.
I have tried using various dependencies (before, require, etc.) and even Puppet stages, but everything I tried still requires two Puppet runs.
While it is possible to achieve the same result using an exec resource with sed (or something similar), I would rather use a "pure" Puppet approach. Is this possible?
A Puppet run proceeds in three main phases:
Fact collection
Catalog building
Catalog application
Puppet manifests are completely evaluated during the catalog building phase, including evaluating all templates and function calls. Moreover, with a master / agent setup, catalog building happens on the master, so that's "the local system" during that phase. All target system modifications happen in the catalog application phase.
Thus your
$inserted_file = file('/tmp/partial.txt')
runs during catalog building, before File[/tmp/partial.txt] is applied. Since you give an absolute path to the file() function, it attempts to use the version already present on the catalog-building system, which is not necessarily even the machine for which the manifest is being built.
It's unclear to me why you want to install and manage the partial result in addition to the full templated file, but if indeed you do, then the best way is to feed both from the same source instead of trying to feed one from the other. To do this, you can use the file() function's ability to load data from a file in any module's files/ directory, just as a File resource's source attribute can.
For example,
# test/manifests/init.pp
class test {
  # Read the contents of a module file (test/files/partial.txt):
  $inserted_file = file('test/partial.txt')

  file { '/tmp/partial.txt':
    content => $inserted_file,
    # no resource relationship necessary
  }

  file { '/tmp/layers.txt':
    # the template interpolates $inserted_file:
    content => template('test/layers.txt.erb'),
  }
}
Note also that the comments in your example manifest are misleading. Neither the file resource you present nor the contents of the file it manages are interpolated into your template, unless incidentally. What is interpolated is the value of the $inserted_file variable of the class that evaluates the template.
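For reference, the template itself needs no change under this approach; what is interpolated is the class variable, exposed to ERB as an instance variable:
# test/templates/layers.txt.erb
This is a file
<%= @inserted_file %>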
In my projects I adopt a semantic versioning scheme following the standard described by semver, and I obtain something like this: product_v1.2.3-alpha-dirty.elf.
I work with embedded systems, and with make I usually generate a version_autogen.h file at compile time that contains both the version number (e.g. 1.4.3.1) and the current git repository state (e.g. --dirty, --clean, and so on), using shell commands.
I'm starting to use meson, and it is very easy and flexible, but custom commands like
run_command('command', 'arg1', 'arg2', 'arg3')
are available only at configure time, while I need them at compile time to retrieve information like the git status and similar.
How can I do that?
After deeper research I found that custom_target() (as suggested by nielsdg) can do the job. I did something like this:
# versioning
version_autogen_h = custom_target(
  'version_autogen.h',
  output : 'version_autogen.h',
  input : 'version_creator.sh',
  command : ['@INPUT@', '0', '0', '1', 'alpha.1', '@OUTPUT@'],
)
where version_creator.sh is my bash script that retrieves git info and creates the file version_autogen.h given the version numbers passed as the command arguments. The custom target is created at compile time, so my script is executed at compile time as well, exactly when I want it to be.
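The question doesn't show version_creator.sh itself, but a minimal sketch of such a script, assuming the argument order used in the custom_target above, could look like:
#!/bin/sh
# Usage: version_creator.sh MAJOR MINOR PATCH PRERELEASE OUTPUT
MAJOR=$1; MINOR=$2; PATCH=$3; PRERELEASE=$4; OUT=$5
# Ask git for commit info; fall back to "unknown" outside a repository
GIT_DESC=$(git describe --always --dirty 2>/dev/null || echo unknown)
printf '#define VERSION_STRING "%s.%s.%s-%s+%s"\n' \
    "$MAJOR" "$MINOR" "$PATCH" "$PRERELEASE" "$GIT_DESC" > "$OUT"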
I also discovered that meson has generators, which can do something similar, but they transform an input file into one or more output files, so they didn't fit my case, where I didn't need a file as input, just version numbers.
Meson has a specialized command for this job: vcs_tag.
This command detects revision control commit information at build time
and places it in the specified output file. This file is guaranteed to
be up to date on every build. Keywords are similar to custom_target.
So it'd look a bit shorter, with the possibility of avoiding the generation script and having just
git_version_h = vcs_tag(input : 'version.h.in',
                        output : 'version.h')
where version.h.in is a file that you provide, containing a @VCS_TAG@ string that will be replaced, e.g.
#define MYPROJ_VERSION "@VCS_TAG@"
Of course, you can lay out and name the header according to your project style, and possibly add other definitions too. It is also possible to use another replace string and your own command line to generate the version, e.g.
vcs_tag(command : [
          'git', '--git-dir', meson.source_root() + '/.git',
          'describe', '--tags', '--long',
          '--match', '?.*.*', '--always'
        ],
        ...
)
which I found and adapted from here
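To consume the generated header, the object returned by vcs_tag can simply be listed among a target's sources; a minimal sketch, assuming a hypothetical executable named myproj:
executable('myproj', 'main.c', git_version_h)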
I have a suite which has 50 test cases. When I execute my suite, I get all the failed-test screenshots listed in the project's folder. I want to point and store those screenshots in a different directory, named after the test case. I want this to be a one-time setup rather than doing it explicitly for every test case.
There are quite a few ways to change the default screenshots directory.
One way is to set the screenshot_root_directory argument when importing Selenium2Library. See the importing section of Selenium2Library's documentation, and importing libraries in the user guide.
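For example, a minimal sketch of the import-time approach (the subdirectory name here is just an assumption):
*** Settings ***
Library    Selenium2Library    screenshot_root_directory=${OUTPUT DIR}${/}screenshots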
Another way is to use the Set Screenshot Directory keyword, which does pretty much the same thing as specifying a path when importing the library. Using this keyword, though, you can set the path to a new one whenever you like. For example, you could make it so that each test case has its own screenshot directory. Given your question, this may be the best solution.
And finally, you may also post-process screenshots using an external tool, or even a listener, that would move all screenshots to another directory. Previously mentioned solutions are in most cases much better, but you still may want to do this in some cases, where say, the directory where you want screenshots to be saved would be created only after the tests have finished executing.
I suggest you do the following:
For a new directory, put the following immediately after where you open the browser:
Open Browser    ${URL}    chrome
Set Screenshot Directory    ${OUTPUT FILE}${/}..${/}${TEST_NAME}${/}
To replace the default screenshot name with your own, create the following keyword:
sc
    Capture Page Screenshot    filename=${SUITE_NAME}-{index}.png
Then run it from the test case's Setup:
Register Keyword To Run On Failure    sc
In the above example, I created a new folder named after the test case, and on failure a screenshot is captured named after the suite (instead of the default 'selenium-screenshot-1.png').
I have a configuration file system written in C++ which uses the yaml-cpp library to parse and write to YAML files. I have this as part of my static library.
I would like the ability to return a default value for a field that is requested by a user of the library (calling from their code), but which has not been defined in the user's YAML file.
For example say the user wants to use the field foo from their custom config.yaml file:
int bar = config_reader.read<int>( "config.yaml", "foo" );
If they have foo: 10 in their config.yaml then bar will be set to 10. However I would also like to provide a default value (for example 4) in the case where foo is omitted from config.yaml.
There are two possibilities I have thought of:
Have a set of static maps between field names and default values in a cpp file which gets compiled into the static library, however I will need to have different maps for different types and I feel this could get messy with type checking and maybe requiring template specialization methods.
Have a YAML file which contains all of the default values for expected fields, which the configuration system falls back on if it cannot find the field in the user's config file. I think this would be the preferred solution for me, but I cannot think of a neat way of packaging this YAML file. I would rather the user didn't have to copy or point to this file each time they set up a new project linking the static library.
I would provide the defaults in a YAML file in a global (i.e. non-user-specific) place and allow overriding the values with user-specific ones.
Consider just throwing an error if the global defaults are missing an entry; that will not happen by accident.
You can put the global defaults in /etc/default/YOURLIBNAME.yaml. User configuration nowadays mostly follows the XDG Base Directory Specification: use $XDG_CONFIG_HOME/YOURLIBNAME/config.yaml if XDG_CONFIG_HOME is set in the environment; if not, use $HOME/.config/YOURLIBNAME/config.yaml.
If your library has to work under Windows, I would put the user-specific data under %APPDATA% in a subdirectory YOURLIBNAME.
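A minimal sketch of that lookup order, assuming yaml-cpp's LoadFile()/as<T>() API and hypothetical file paths:
#include <stdexcept>
#include <string>
#include <yaml-cpp/yaml.h>

// Look up `key` in the user's config first, then in the global
// defaults file; throw if neither defines it.
template <typename T>
T read_config(const std::string& user_path,
              const std::string& defaults_path,
              const std::string& key) {
    YAML::Node user = YAML::LoadFile(user_path);
    if (user[key]) {
        return user[key].as<T>();
    }
    YAML::Node defaults = YAML::LoadFile(defaults_path);
    if (defaults[key]) {
        return defaults[key].as<T>();
    }
    throw std::runtime_error("no default provided for key: " + key);
}
Note that yaml-cpp's as<T>() also accepts a fallback value, e.g. node["foo"].as<int>(4), which covers the single-value case without a second file.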
I'm using KNIME 3.1.2 on OS X and Linux for OpenMS analysis (mass spectrometry).
Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pulled in at a time (the 'Input Files' node, not the 'Input File' node) using a ZipLoopStart.
I want these files to be downloaded dynamically and then pulled into the workflow... but I'm not sure of the best way to do that.
Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??).
It can also download them to a directory... which maybe can then be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the Python script is run.
I also could have the Python script run as a separate entity (outside of KNIME) and then, once the directory is populated, call KNIME... HOWEVER, there will always be a different number of files (maybe one, maybe three)... and I don't know how to make the 'Input Files' KNIME node handle an unknown number of input files.
I hope this makes sense.
Thanks!
Thanks to Gábor for getting me on the right track, although I ended up taking a slightly different route after much experimentation.
===
Being new to Knime, I don't know if this is an efficient use of Knime, or a complete Kluge...but it does work.
So, part of the problem is some of the KNIME-specific objects, one of which is called URIDataValue.
A Python Pandas dataframe is, apparently, interchangeable with KNIME tables. However, I don't know if there's a way to import one of these URIDataValue objects into Python. So here's what I did...
1. I wrote a Python script that creates a Pandas Dataframe, and populates it with one Column. Everything is a string, including the column header:
from pandas import DataFrame

# Create a one-column table; everything is a string, including the header
T = DataFrame(
    [
        ['file:///Users/.../copy/lfq_spikein_dilution_1.mzML'],
        ['file:///Users/.../copy/lfq_spikein_dilution_2.mzML'],
    ],
)
T.columns = ['URIDataValue']

#print T
output_table = T
That creates a two-row, single-column dataframe of plain strings.
Note: the column name and values are just strings, but it is (apparently) important that the column header be 'URIDataValue', even though here it's just text. If the column name is not 'URIDataValue', the next node doesn't know what to do.
NEXT, the 'output_table' from the 'Python Source' node is patched into a 'String to URI' node, which (apparently and magically) knows to change the entire column's string values to URIDataValues (presumably based on the name of the column... I don't know that for sure).
Finally, the new table, with the correct data objects, goes to a 'URI to Port' node, since apparently 'Port' objects and 'URI' objects are different.
This, then, matches the input needed by the ZipLoop, which is normally the output from a static (hard-coded) 'Input Files' node.
Now, to actually solve the question above, I just have to add the code to my 'Python Source' to download and unzip the S3 files, then annotate the dataframe with their locations, and go.
I have no idea what I'm doing, but it worked.
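For the download step itself, a rough sketch of what could go at the top of the 'Python Source' node, assuming boto3 and hypothetical bucket/prefix/destination names:
import gzip
import boto3
from pandas import DataFrame

# Hypothetical names; adjust to the real S3 layout
bucket, prefix, dest = 'my-bucket', 'runs/', '/Users/me/copy/'

s3 = boto3.client('s3')
uris = []
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', []):
    key = obj['Key']
    if not key.endswith('.mzML.gz'):
        continue
    local_gz = dest + key.split('/')[-1]
    s3.download_file(bucket, key, local_gz)
    local = local_gz[:-3]  # strip the '.gz'
    with gzip.open(local_gz, 'rb') as f_in, open(local, 'wb') as f_out:
        f_out.write(f_in.read())
    uris.append('file://' + local)

# One string column named 'URIDataValue', as the downstream nodes expect
output_table = DataFrame(uris, columns=['URIDataValue'])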
There are multiple options to make this work:
Convert the files in memory to Binary Object cells using Python; later you can use those in KNIME. (I am not sure this one is supported, but as I remember it was demoed in one of the last KNIME gatherings.)
Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node, via a flow variable connection, to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode).
Maybe there is already S3 Remote File Handling support in KNIME, so you can do the downloading, unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you, probably option 1 is the best. (In case option 3 is supported, that is the best in my opinion.)
I have a mapping like
SA-->SQ--->EXPR--->TGT
The source and the target have the same structure.
There are a bunch of files (all with the same structure) which will go through this mapping.
So I want to use a parameter file through which I will supply the file names manually for every run.
How do I use the parameter file in the session for the 'Source filename' attribute?
Please suggest.
You could use the indirect source type, wherein your source file is basically a list of files; the session then reads each of the listed files one by one.
The parameter file could reference the source file name (the list) as
$InputFile_myName=/a/b/c.list
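The list file itself is just plain text with one source file per line, e.g. (hypothetical names):
/a/b/file1.dat
/a/b/file2.dat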
In line with what Raghav says, indicate the name of a file that will hold the list of input files in the 'Source filename' property box for the SQ in question on the Mapping tab, and set the 'Source filetype' to 'Indirect' in the session properties.
If you already know the names of the input files ahead of time, you can list them in that file and deploy it with the workflow to the location you indicate in the 'Source file directory' property box.
However, if you won't know the names of the input files until run time but you do know their naming standard (e.g. "Input_files_name_ABC_*", where "*" represents variable text such as a numeric value incremented per input file by some other process), then one way to deal with that is a Pre-Session Command, specifiable on the 'Components' tab of the session. Create one that builds a new file, at the location and with the name specified for the indirect input file referenced above, by using the Unix shell (or the cmd shell if running on Windows) to list the files conforming to the naming standard and redirect the listing output to that file.
The tricky thing is that there must be one or more files listed in that indirect input file. An indirect file must list at least one file (even if that listed file is empty), and every listed file must exist: the workflow fails (abends) if the indirect file reader gets no files to read, or if a listed file is not present on the server.
One way to get around this is to make sure an empty file conforming to the naming standard is present at all times. This can be assured by creating a "touchfile" before executing the listing command that builds the indirect list file. In Unix you'd use the 'touch {path}/{filename}' command ({filename} could be, for example, "Input_files_name_ABC_TOUCHFILE"); on Windows you'd redirect an empty string to a likewise-named file via the cmd shell. Either way, that will help you avoid an abend. Cleanup is easy: a Post-Session Command can delete the empty touchfile, and likewise the indirect list file itself if desired.
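Putting that together, a rough sketch of the Unix shell commands involved, with hypothetical paths and names:
# Pre-Session Command: guarantee at least one match, then rebuild the list
touch /data/in/Input_files_name_ABC_TOUCHFILE
ls /data/in/Input_files_name_ABC_* > /data/in/c.list

# Post-Session Command: clean up the touchfile
rm /data/in/Input_files_name_ABC_TOUCHFILE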