What is even the point of eager shared libraries with Module Federation if they can never be shared? - webpack-module-federation

I have two Module Federation builds (one loads the other), both using the same library: the host declares it eager, the loaded one does not. No matter what I try, the library is always loaded twice. If eager disables sharing, why do you even list it under shared? I'm using the same shareScope for both modules, so I have no idea why it's always loaded twice. I have tried nearly every possible combination. With eager: true, sharing just doesn't happen. If I set eager to false on both, the library is only loaded once, so I know my configuration is otherwise correct. I've set breakpoints in the code to verify all of this.

Related

Executable suddenly stopped working: silent exit, no errors, no nothing

I am facing a rather peculiar issue: I have a Qt C++ application that used to work fine. Now, suddenly I cannot start it anymore. No error is thrown, no nothing.
Some more information:
Last line of output when application is started in debug mode with Visual Studio 2012:
The program '[4456] App.exe' has exited with code -1 (0xffffffff).
Actual application code (= first line in main()) is never called or at least no breakpoints are triggered, so debugging is not possible.
The executable process appears in the process list for a few seconds and then disappears again.
Win 7 x64 with latest Windows updates.
The issues simultaneously appeared on two separate machines.
The application was originally built with Qt 5.2.1. Today, as a test, I switched to Qt 5.4.1, but as expected there was no change.
No changes to the source code were made. The issue also affects existing builds of the application.
Running DependencyWalker did not yield anything of interest from my point of view.
I am flat out of ideas. Any pointers on what to try or look at? How can an executable suddenly stop working at all with no error?
I eventually found the reason for this behavior... sort of. The code (e.g. my singletons) was never the problem (as I expected, since the code had always worked). Instead, an external library (the SAP RFC SDK) caused the trouble.
This library depends on the ICU Unicode libraries, and apparently on specific versions at that. Since I wasn't aware of that fact, the only ICU libraries in my application directory were the ones my currently used Qt version needs. The ICU libraries for the SAP RFC SDK must have been loaded from a standard Windows path until now.
In the end, some software change (Windows updates, manual application uninstalls, etc.) must have removed those libraries, which resulted in the silent failure described. Simply copying the required ICU DLL versions into my application folder solved the issue.
The only thing I am not quite sure about is why this was not visible when tracing the loaded DLLs with DependencyWalker.
"Actual application code (= first line in main()) is never called. So debugging is not possible."
You probably have some static-storage initialization failing, which runs before main() is called.
Do you use any interdependent singletons in your code? If so, consolidate them into a single singleton (remember, there shouldn't be more than one singleton).
Also note that debugging is still possible in such a situation. The trap is that, in the case described in my answer, the debugger by default sets its first breakpoint at the first line of main()'s body when you start the program.
Nothing prevents you from setting breakpoints that are hit before execution reaches main().
As for your clarification from comments:
"I do use a few singletons ..."
As mentioned above, if you are really sure you need a singleton, actually use a single one.
Otherwise you may end up struggling with the undefined initialization order of objects with static storage duration.
Even if static data must depend on other static data, provide a single access point to it throughout your code, to avoid cluttering the code with heavy coupling to a variety of instances.
Coupling to a single instance also makes it easier to refactor the code toward an interface, if it turns out the singleton shouldn't have been one.

Handling QMetaType registration in Qt5 with dynamic plug-ins

My company is considering the jump from Qt 4.8.4 to Qt 5.4, but I came across a change that could be a showstopper for us: QMetaType::unregisterType() is removed (http://doc.qt.io/qt-5/sourcebreaks.html).
Our GUI requires plug-ins to be loaded at runtime, with the same plug-in potentially loaded and unloaded more than once during a session. In Qt 4, we ran into an issue where, when a plug-in was loaded a second time, any signal/slot that used one of the custom types registered by the plug-in would cause access violations, because the meta type had been registered by the first instance of the plug-in (which was now unloaded, so its memory was invalid). We worked around this by defining our own macros to register and unregister meta types safely as the plug-in was loaded and unloaded.
With QMetaType::unregisterType() no longer present, I fear that this issue will come back with no real way to solve the problem. Upgrading to Qt 5.4 would be a significant investment to even get to the point that I could test this issue, so I'm hoping I can get some indication from the experts here.
Is there any way to unregister a meta type in Qt 5? If not, does Qt 5 now have some sort of system that can detect when the DLL is being unloaded and unregister the meta types itself (highly unlikely I'd assume)? Alternatively, if we switch to the new Qt 5 signal/slot syntax, does that absolve us of the need for meta types entirely? If so, does the new syntax still allow for queued connections? Please forgive my ignorance on the subject, but I don't see it explicitly listed as supported or not.
"Please forgive my ignorance on the subject, but I don't see it explicitly listed as supported or not."
This is currently unsupported, which means you should not unload plugins with Qt 5 as of this writing. Usually you do not load and unload plugins repeatedly anyway; in general it is done once during start-up. The corresponding change in the repository also states:
The function hasn't been working properly. It was not well tested, for example it is undefined how QVariant should behave if it contains an instance of an unregistered type.
Concept of unregistering types was inspired by plug-in system, but in most supported platforms we do not unload plug-ins.
Idea of type unregistering may block optimizations in meta object system, because it would be not possible to cache a type id. QMetaType::type() could return different ids for the same name.
So even though you thought it was working, it was unreliable, which means you could have been hitting difficult-to-find bugs and shipping unreliable software as a result. I am sure you do not want to release such software, especially when the Qt Project recommends against using this feature.

How to sync 2 or more watched folders

We need to implement a feature to our program that would sync 2 or more watched folders.
In reality, the folders will reside on different computers on the local network, but to narrow down the problem, let's assume the tool runs on a single computer, and has a list of watched folders that it needs to sync, so any changes to one folder should propagate to all others.
There are several problems I've thought about so far:
Deleting files is a valid change, so if folder A has a file but folder B doesn't, it could mean that the file was created in folder A and needs to propagate to folder B, but it could also mean that the file was deleted in folder B and needs to propagate to folder A.
Files might be changed/deleted simultaneously in several directories, and with conflicting changes, I need to somehow resolve the conflicts.
One or more of the folders might be offline at any time, so changes must be stored and later propagated to it when it comes online.
I am not sure what kind of help if any the community can offer here, but I'm thinking about these:
If you know of a tool that already does this, please point it out. Our product is closed-source and commercial, however, so its license must be compatible with that for us to be able to use it.
If you know of any existing literature or research on the problem (papers and such), please link to it. I assume that this problem would have been researched already.
Or, if you have general advice on the best way to approach this problem: which algorithms to use, how to resolve conflicts or race conditions if they exist, and other gotchas.
The OS is Windows, and I will be using Qt and C++ to implement it, if no tools or libraries exist.
It's not exceptionally hard: you just need to compare the relevant change-journal records. Of course, in a distributed network you have to assume the clocks are synchronized.
And yes, if a complex file (anything you can't parse) is edited on both sides while the network is split, you cannot avoid conflicts. This is the CAP theorem: your system cannot simultaneously be Consistent, always Available, and tolerant of Partitioning (nodes going offline).

dlopen and implicit library loading: two copies of the same library

I have three things: an open source application (let's call it APP), a closed source shared library (let's call it OPENGL), and an open source plugin for OPENGL (let's call it PLUGIN), which is also a shared library. OS: Linux.

APP needs to share data with PLUGIN, so APP links against PLUGIN, and when I run it the system loads PLUGIN automatically. APP then calls eglInitialize, which belongs to OPENGL, and that function loads PLUGIN again. After that, I have two copies of PLUGIN in APP's memory. I know this because PLUGIN has global data, and while debugging I saw two copies of that global data.

So the question is: how can I fix this behaviour? I want one instance of PLUGIN, used by both APP and OPENGL. I cannot change the OPENGL library.
It obviously depends a lot on exactly what the libraries are doing, but in general some solution should be possible.
First note that normally, if a shared library with the same name is loaded multiple times, the same library will continue to be used. This of course primarily applies to loading via the standard loading/linking mechanism. If the library calls dlopen on its own, it can still get the same library, but that depends on the flags passed to dlopen. Read the dlopen documentation to understand how it works and how you can influence it.
You can also try positioning PLUGIN earlier in the linker command so that it gets loaded first, which might avoid a double load later on. If you must load PLUGIN dynamically, this obviously won't help. You can also check whether LD_PRELOAD can fix the load order.
As a last resort, you may have to use LD_LIBRARY_PATH and put an interface library in front of the real one. It would simply forward calls to the real library, but would intercept duplicate loads and shunt them to the first one.
This is just a general direction to consider. The actual answer will depend heavily on your code and on what the other shared libraries do. Always investigate linker load ordering first, as it is the easiest to check, then the dlopen flags, before moving on to the other options.
I suspect that OPENGL is loading PLUGIN with the RTLD_LOCAL flag. This is normally what you want when loading a plugin, so that multiple plugins don't conflict.

We had similar problems loading code under Java: we would load a dozen or so different modules, and they couldn't communicate with one another. It's possible that our solution would work for you: we wrote a wrapper for the plugin and told Java that the wrapper was the plugin. That wrapper then loaded each of the other shared objects using dlopen with RTLD_GLOBAL. This made symbols visible between plugins. I'm not sure it will allow the plugins to reach back to symbols in the main executable (though I think it should). And IIRC, you'll need special options when linking main for its symbols to be available; I think Linux otherwise treats the symbols in main as if main had been loaded with RTLD_LOCAL. (Maybe --export-dynamic? It's been a while since I've had to do this, and I can't remember exactly.)

ColdFusion seems to get caught up loading UDFs

I am currently running too many sites on a server, and I don't think the template cache can handle it. What really seems to be the biggest drag is loading my UDF library for each site. I say this because whenever I run FusionReactor to see where the holdup is, the stack trace is always sitting on the template that loads the UDFs.
Is the only cure for this more RAM and a higher template cache, or is there a better way?
Maybe I am wrong as well, could there be another issue?
Before increasing the heap and template cache available, look at a few things.
First, do you actually have more templates in the system than the template cache can hold? If not, increasing the cache certainly won't help. Even if you do, if the extra templates aren't called often, it probably won't help either, but that's harder to measure.
Second, examine whether the server is having difficulty actually loading the UDFs, or if the page is having a problem executing a UDF. Are the functions included on the same template that calls them?
Third, find out why it takes so long to load this UDF library. Is it really that big? Can it be split into smaller libraries? Is there one (or more) particular UDF that seems to hang the compile process?
Finally, if there is a large UDF library that must be available on every request, I would look at using the Application scope to store it. Include the library in onApplicationStart(), then reference the functions as application.myFunction(). This prevents CF from needing to load (and possibly compile) the file on every request.