XSLT dependencies across OSGi bundles

I have been researching OSGi to determine its viability for updating an existing project. The project currently consists of modules (basically just directories) that contain XSL transforms. The transforms depend on transforms from other modules in the form of xsl:import and xsl:include statements. The reason I am considering OSGi is that as the number of modules increases, it is becoming more difficult to keep track of the dependencies and effectively test the modules.
Is it possible, using the OSGi framework, to declare XML/XSLT resources contained in one bundle and reference those resources in the import statements of XSL transforms in a separate bundle?

Yes, this works. As Lukasz indicated, you need to write a simple URIResolver based on the extender model. An interesting approach is to use the Provide-Capability and Require-Capability headers to model the dependencies. This lets you handle the dependencies with good diagnostics, lets you run multiple versions side by side, and works with OBR, a resolver that can find the missing parts. See http://www.osgi.org/blog/2012/03/requirements-and-capabilities.html
And this would be the first time I have seen someone take advantage of the fact that XSLT is itself XML: you could write a simple stylesheet that generates the Require-Capability headers! :-)
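To make the capability idea concrete, here is a rough sketch. Provide-Capability and Require-Capability are real OSGi manifest headers, but the xslt.module namespace and its attributes are made up for illustration. The first header would go in the manifest of the bundle that exports the transforms, the second in the manifest of a bundle whose transforms import from it:

```
Provide-Capability: xslt.module;name=common;version:Version=1.2

Require-Capability: xslt.module;filter:="(&(name=common)(version>=1.2))"
```

The resolver (or OBR) can then refuse to resolve the consumer bundle unless a bundle providing a matching xslt.module capability is present, which is exactly the kind of dependency diagnostics the question asks for.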

Your question seems very interesting. I am personally working on a system that has two bundles: one contains the XSLT processor implementation (we are using Saxon), while the other contains multiple XSLT files (which make use of the xsl:import instruction). It works well in an OSGi environment (Fuse ESB, actually); however, we needed to implement the javax.xml.transform.URIResolver interface and pass it to the transformer.
I suppose you would need to use a similar approach. Hope this helps.
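For reference, a minimal sketch of such a resolver. The module: URI scheme is made up for illustration, and the lookup here is plain classpath loading; in a real OSGi setup you would instead resolve the path against the bundle that provides the module (e.g. via Bundle.getEntry()):

```java
import java.io.InputStream;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerException;
import javax.xml.transform.URIResolver;
import javax.xml.transform.stream.StreamSource;

// Illustrative resolver for a hypothetical "module:" URI scheme, e.g.
//   <xsl:import href="module:/common/base.xsl"/>
// Stylesheets are looked up on the classpath here; an OSGi extender would
// ask the providing bundle for the resource instead.
public class ModuleUriResolver implements URIResolver {
    @Override
    public Source resolve(String href, String base) throws TransformerException {
        if (href != null && href.startsWith("module:")) {
            String path = href.substring("module:".length());
            InputStream in = ModuleUriResolver.class.getResourceAsStream(path);
            if (in == null) {
                throw new TransformerException("Cannot resolve " + href);
            }
            return new StreamSource(in, href);
        }
        return null; // let the processor fall back to its default resolution
    }
}
```

You would then set it on the factory before compiling the stylesheets, e.g. transformerFactory.setURIResolver(new ModuleUriResolver()).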

I would just use Maven for dependency management if I were you - it is simpler to set up your dependencies, and it handles transitive dependencies very well indeed. Use OSGi if you need to be able to change the XSL modules at run time. In either case you will need to implement the URIResolver mentioned in the other answers.

Related

Mocking ES Modules when running the Vite development server

I need to find out how I can instruct Vite to replace references to local/relative modules at runtime. The use case is the test runner mocha-vite-puppeteer, which uses Vite to run tests, but stubbing of modules then does not work with Node machinery such as proxyquire or rewire.
So I basically need either to be tipped off about some existing software that can help me do this, or some tips on how to create my own "vite-proxyquire" using import.meta and friends.
A normal use for temporarily stubbing out ./my-ugly-module might be that you want to avoid loading a sub-dependency whose ugly transitive dependencies would suck the entire application tree into your little test, or one that has ugly side effects on global state.
Existing solutions
Modern Web, a refreshing "bundler- and frameworkless" approach to web development using standard tools, talks a bit about how the immutable nature of ES Modules prevents the usual stubbing patterns. They present a solution in the form of Import Maps, which is essentially similar to the alias config in Vite (Rollup, really): mapping a path to a module onto some other file. The problem with a static solution like this is that it replaces all imports of a given module, not just those in a single test.
Modern Web addresses this by using a custom HTML page for each such test, and, to make that less of a hassle to run, a custom test runner that deals with all the extra test HTML files. Doing something like that could be one way of fixing it, but it would require developing quite a bit of middleware/plugin code, IMHO, to make it work transparently with Vite. Without advanced tooling it would also introduce a lot of extra files, which seems a downside compared to today's imperative mocking of dependencies with proxyquire, Jest, or Test Double from inside the test files.

Extract subset of methods from WSDL

I'm integrating with a web service that provides a huge WSDL file containing lots of methods. Of those methods, I only need a few (up to 10), plus (obviously) the corresponding types used in them. Is there a way (other than manually editing the WSDL file) to extract only a subset of methods and create a new WSDL file for the very same web service? Maybe there is a tool or script of some kind in existence? I failed to find one myself.
The reason I'm asking is that I'm using the gSOAP-provided wsdl2h and soapcpp2 utilities to convert the aforementioned WSDL file into a C++ wrapper, and then compile it into a static library. The library then comes out at around 300-500 MB, depending on the compiler and type of build (debug or release) - that is, if I even succeed in compiling it. This is obviously too much for the simple integration I'm implementing, and sometimes too much for the compiler to then link the library with an executable.
gSOAP-specific solutions are acceptable.
That is one huge WSDL. I would suggest you get a copy of either Liquid XML Designer or Altova XMLSpy; these tools make editing the WSDL much easier. I don't know of any scripts that will automate this for you.
With either of the two tools mentioned above, it is a simple matter of visually deleting the operations you don't need in the WSDL.
I have used both tools for editing WSDLs, and they make the job trivial.

manage and compile large emberjs app

Ember doesn't seem to play well with RequireJS, or with building via r.js, without some hackery.
It's a bit hard to manage a large application. I like to break my file structure down as HMVC, which makes modules a bit easier to manage:
app
- blah
- modules
  - module1
    - controller
    - model
    - view
Has anyone come up with a build stack to automate compilation into a single file, with dependency management?
Update: You should now be starting new projects using ember-cli, which provides all the build/dev tools plus many other useful features.
Original answer:
An example Ember project using grunt.js as a build system:
https://github.com/trek/ember-todos-with-build-tools-tests-and-other-modern-conveniences
I've used this as a starting point for some Ember apps and it works well.
"Ember doesn't seem to play well with RequireJS, or with building via r.js, without some hackery."
Why do you want to use require.js in the first place?
As for making this all work, you'll need a build tool. I recommend Brunch (Node) or Iridium (Ruby). Brunch is simpler and supports many different module formats. Iridium is much more powerful; it uses Minispade for modules. Require.js/AMD is not needed because your JS is always delivered as a single concatenated file.
For stand-alone ember dev I use ember-skeleton as a starting point, which is itself based primarily on rake-pipeline.
https://github.com/interline/ember-skeleton
This provides the whole kit and caboodle for beginning an app, but the meat of it is the rake-pipeline Assetfile, which describes how rake-pipeline will digest all the files, filter their contents, and ultimately combine them into the final handful of files. Additionally, the combination of loader.js and the Minispade filter allows the use of require() statements for managing dependencies amongst your files.

Common test data for multiple independent maven projects

I have a maven project that converts text files of a specific format to another format.
For testing I have put in src/test/resources a large amount of test files.
I also have another project that uses the first one to do the conversion and then does some extra work on the output format. I want to test this project against the same test dataset, but I don't want duplicate datasets, and I want to be able to test the first project on its own, since it is also a standalone converter project.
Is there any common solution for doing this? I don't mind the test dataset living outside the projects' source trees, as long as each project can access it independently of the other. I don't want to set up a database for this either; I am thinking of something like a repository of test data, simpler than an RDBMS. Is there an application for this kind of need that I can use with a specific Maven plugin? Ease of setup and simplicity are my priorities. I am also thinking of packaging the test data, putting it in an internal Maven repository, and then downloading and unzipping it in the JUnit code. Or better, is there a Maven plugin that can do this for me?
Any ideas?
It is possible to share resources with Maven, for example with the help of the Assembly and Dependency plugins. Basically:
Create a module with a packaging of type pom to hold the shared resources
Use the assembly plugin to package the module as a zip
Declare a dependency on this zip in "consumer" modules
And use dependency:unpack-dependencies (plus some exclusion rules) to unpack the zip
This approach is detailed in How to share resources across projects in Maven. It requires a bit of setup (which the article walks through) and is not too complicated.
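On the consumer side, the steps above might look roughly like the POM fragments below. The coordinates (com.example:shared-test-data) and the unpack location are placeholders; the plugin goal and configuration elements are the standard maven-dependency-plugin ones:

```xml
<!-- Depend on the shared zip produced by the assembly plugin... -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>shared-test-data</artifactId>
  <version>1.0</version>
  <type>zip</type>
  <scope>test</scope>
</dependency>

<!-- ...and unpack it before the tests run. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-test-data</id>
      <phase>generate-test-resources</phase>
      <goals>
        <goal>unpack-dependencies</goal>
      </goals>
      <configuration>
        <includeArtifactIds>shared-test-data</includeArtifactIds>
        <outputDirectory>${project.build.directory}/test-data</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The tests can then read the files from target/test-data like any local directory.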
Just place them in a tree in the src/main/resources directory of a separate module created specifically to share the test data. They will be added to the jar file and be nicely compressed and versioned in your Nexus repository, file share, ~/.m2/repository, or wherever you store/distribute Maven artifacts.
Then just add a test-scoped dependency on that module in the projects that need the data, and use resource loading to get it from the jar.
You do not need any special plugins or other infrastructure. This just works.
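As a sketch of the resource-loading step, assuming the shared module put its files under a testdata/ directory in src/main/resources (the path and class names here are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

// Loads a shared test file from any jar on the test classpath.
// Assumes the shared module packaged its files under /testdata/ in the jar.
public class SharedTestData {
    public static String load(String name) throws IOException {
        InputStream in = SharedTestData.class.getResourceAsStream("/testdata/" + name);
        if (in == null) {
            throw new IOException("Test resource not found: " + name);
        }
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}
```

A JUnit test in either project can then call SharedTestData.load("sample-input.txt") without caring which artifact the file physically lives in.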

Good, simple configuration library for large c++ project?

We are developing a rather large project in C++, where many components require configuration parameters. We would like to use a central place to configure everything (like a registry), preferably with a nice and simple GUI (e.g. like Firefox's about:config) and a simple API.
I am pretty sure that many applications have this kind of problem, but I could not find any libraries that can be readily used for this. Does anyone know of a good (preferably free) library to use?
This should work cross-platform on Windows and Linux.
boost::program_options provides unified (and cross-platform) support for configuration from the command line, environment variables, and configuration files. It seems like it ought to scale to multiple parts of a large software system registering an interest in various parameters (e.g. option groups). It is not much help with the GUI or persistence side of things, though (but then, what's wrong with editing a config file in a text editor?).
I've used libconfig before; it works well, is easy to use, and is LGPL-licensed.
http://www.hyperrealm.com/libconfig/
I've used a modified version of John Torjo's code from TechRepublic/DDJ (source).
The multi-platform ACE library has a configuration class that uses config files in the Windows .ini format.
I've often used a simple wrapper around pugxml. I find that creating a configuration class with parameter validation for enumerated types and so on makes the rest of the code much cleaner. If you are just dealing with key/value pairs, you have to validate the data throughout your code; by writing a custom class for each application, you can put all of that in one place.
Try Configurator. There is no GUI, but it is an easy-to-use and flexible C++ library for configuration-file parsing (from the simplest INI to complex files with arbitrary nesting and semantic checking). It is header-only and cross-platform, and it uses the Boost C++ libraries.
See: http://opensource.dshevchenko.biz/configurator