XML as virtual registry makes the application slow - C++

I am building an "Application Virtualization" product. I use an XML file as a virtual registry.
Virtual applications generated by my software access the virtual registry XML.
It runs, but very slowly.
I load and unload the XML on every registry API call, because multiple processes spawned from the parent access the same registry file. This may be what makes the application slow.
Can anyone suggest an alternative to XML...

You could use a database instead. It would be faster. SQLite is lightweight and powerful.
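A minimal sketch of what a SQLite-backed lookup could look like (the database file name, table name, and schema below are purely illustrative):

// Minimal sketch: a key/value "virtual registry" backed by SQLite.
// The file name, table name, and schema are assumptions for illustration only.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("virtual_registry.db", &db) != SQLITE_OK) {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    // One row per registry value; indexed lookups replace scanning an XML tree.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS reg ("
        "  regkey  TEXT NOT NULL,"
        "  valname TEXT NOT NULL,"
        "  valdata TEXT,"
        "  PRIMARY KEY (regkey, valname));",
        nullptr, nullptr, nullptr);

    // Write a value (the RegSetValue equivalent).
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO reg (regkey, valname, valdata) VALUES (?, ?, ?);",
        -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "HKCU\\Software\\MyApp", -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, "InstallDir", -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 3, "C:\\VirtualApp", -1, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);

    // Read it back (the RegQueryValue equivalent).
    sqlite3_prepare_v2(db,
        "SELECT valdata FROM reg WHERE regkey = ? AND valname = ?;",
        -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "HKCU\\Software\\MyApp", -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, "InstallDir", -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("InstallDir = %s\n", (const char*)sqlite3_column_text(stmt, 0));
    sqlite3_finalize(stmt);

    sqlite3_close(db);
    return 0;
}

SQLite also handles the file locking across processes for you, so the per-call load/unload of the whole file goes away.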

If you load it into memory and operate on it from there, then your problem isn't XML.
Profile your application to find out where it's spending most of its time.
I think you will probably find it's spending most of its time searching for the item you want to access.

It's the text-to-tree transformation time.
I managed this in my code by loading and parsing the XML in all processes only after a write has occurred in any one of the processes.

Well, you could of course always use the real registry, which is thread-safe and fast...
Otherwise, you'd have to create a separate process that manages your virtual XML registry, keeping the XML structure in memory so it doesn't have to read/write it all the time. Then the processes that need to access it can use IPC to communicate with the registry process.
Another idea, if the multiple processes are not likely to update the registry all the time: keep your virtual XML registry in memory, and write it to disk when changed, but asynchronously via a background thread. When accessing the registry, first check whether the file has been changed; if not, you don't need to reload it.
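A rough sketch of that "reload only if the file changed" check, assuming a fixed XML path and a hypothetical ReloadFromDisk() that parses the file into memory:

// Sketch: reload the virtual registry XML only when the file on disk
// actually changed since the last load. Path and loader are assumptions.
#include <filesystem>
#include <optional>
#include <utility>

namespace fs = std::filesystem;

class VirtualRegistry {
public:
    explicit VirtualRegistry(fs::path xmlPath) : path_(std::move(xmlPath)) {}

    // Call this before each registry API call instead of always reloading.
    void EnsureFresh() {
        std::error_code ec;
        fs::file_time_type mtime = fs::last_write_time(path_, ec);
        if (ec) return;                                 // file missing/inaccessible: keep the cache
        if (lastLoad_ && *lastLoad_ == mtime) return;   // unchanged: use the in-memory tree
        ReloadFromDisk();                               // hypothetical parse of the XML into memory
        lastLoad_ = mtime;
    }

private:
    void ReloadFromDisk() { /* parse the XML into the in-memory structure */ }

    fs::path path_;
    std::optional<fs::file_time_type> lastLoad_;
};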

Related

How an XML DOM object is loaded into memory from disk

Hello and happy new year.
I need a little guidance through the process of loading an XML DOM from disk into memory with C++, on Windows.
Microsoft provides this example, but it doesn't cover which NT kernel functions are used to do this, and it doesn't explain what goes on behind the actual load.
Does the main process make a call to a kernel function to load the XML from disk to memory?
VariantFromString(L"stocks.xml", varFileName);
pXMLDom->load(varFileName, &varStatus);
Or is there a global process that handles load requests, loads the XML via kernel functions, makes a link to the DOM object, and returns it to the process that was asking?
I want to know which kernel function does the job of loading the .xml file from disk.
Thanks!
There is no kernel function for 'loading XML' (at least not one used by the DOMDocument60 coclass).
Instead it simply uses generic file reading calls (in the kernel this is ZwReadFile); the DOMDocument60 code then parses the file content into whatever internal representation it uses.
The only context switch involved will be between user and kernel mode, not between one process, kernel mode and another process (unless perhaps some kind of user-mode file system is involved, but if it were you likely wouldn't need to be asking this question).
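To make that concrete, here is a rough sketch of what the load amounts to: an ordinary user-mode file read (which reaches the kernel as NtReadFile/ZwReadFile) followed by in-process parsing. The file name is taken from the question; the UTF-8 assumption and the error handling are simplified for illustration:

// Sketch: read the file with generic file I/O, then parse it in-process.
// Assumes the file content is UTF-8; error handling is kept minimal.
#include <windows.h>
#include <msxml6.h>
#include <vector>
#include <string>
#include <cstdio>

int main() {
    CoInitialize(nullptr);
    {
        // 1. Generic file read - the only part that touches the kernel (ZwReadFile).
        HANDLE h = CreateFileW(L"stocks.xml", GENERIC_READ, FILE_SHARE_READ,
                               nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) { CoUninitialize(); return 1; }
        DWORD size = GetFileSize(h, nullptr);
        std::vector<char> bytes(size);
        DWORD read = 0;
        ReadFile(h, bytes.data(), size, &read, nullptr);
        CloseHandle(h);

        // 2. Parsing happens entirely in user mode, inside this process.
        int wlen = MultiByteToWideChar(CP_UTF8, 0, bytes.data(), (int)read, nullptr, 0);
        std::wstring wide(wlen, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, bytes.data(), (int)read, &wide[0], wlen);

        IXMLDOMDocument* dom = nullptr;
        CoCreateInstance(__uuidof(DOMDocument60), nullptr, CLSCTX_INPROC_SERVER,
                         IID_PPV_ARGS(&dom));
        if (dom) {
            VARIANT_BOOL ok = VARIANT_FALSE;
            BSTR xml = SysAllocString(wide.c_str());
            dom->loadXML(xml, &ok);
            SysFreeString(xml);
            std::printf("parsed: %s\n", ok == VARIANT_TRUE ? "yes" : "no");
            dom->Release();
        }
    }
    CoUninitialize();
    return 0;
}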

C++ Slow down another running process

I want to make an application run slower; is that possible? I have created an application which reads a file created by another process, but that process creates the file and deletes it so fast. Is it possible to make that application slower so that I can read the file in time?
I tried
SetPriorityClass(GetProcessHandleByName("dd.exe"), IDLE_PRIORITY_CLASS);
and set my process to
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
but the process still runs just as fast. Is it possible to slow it down? Thanks.
Modifying the working directory's permissions so that processes can create and write new files, but not modify or delete existing ones, might be a different approach that would work.
See https://superuser.com/questions/745923/ntfs-permissions-create-files-and-folder-but-prevent-deletion-and-modification
See the answer SO: Suspend/Resume a process, which gives information on the three choices for suspending an application.
They are basically: stop each of its threads, use the undocumented NtSuspendProcess, or debug the process.
These are the methods for substantially delaying the process.
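A rough sketch of the first option (stop each thread) using the Toolhelp snapshot API; how you obtain the target process ID, e.g. from the process name, is left out:

// Sketch of the "suspend every thread" approach via the Toolhelp snapshot API.
// The target PID is assumed to be known (e.g. looked up from the process name).
#include <windows.h>
#include <tlhelp32.h>

// Suspends (or resumes) all threads that belong to the given process.
bool SetProcessSuspended(DWORD pid, bool suspend) {
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return false;

    THREADENTRY32 te = { sizeof(te) };
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te)) {
        if (te.th32OwnerProcessID != pid) continue;
        HANDLE th = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
        if (!th) continue;
        if (suspend) SuspendThread(th); else ResumeThread(th);
        CloseHandle(th);
    }
    CloseHandle(snap);
    return true;
}

You would call it with suspend=true before reading the file and with suspend=false afterwards; leaving the other process suspended for long can of course break it.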

Possible to make QML application "offline capable" using caches?

I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main problem I'm seeing is the fact that I'm pretty much pulling a QML file with the UI from one of my HTTP servers, allowing me to keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new, temporary data folder, copy/hardlink the existing files there, download what's needed, and only once everything checks out would you swap the old data directory with the new one.
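A rough Qt-level sketch of that final swap step, assuming the new content has already been staged and verified in a temporary folder (the paths and the rollback policy are illustrative):

// Sketch: atomically replace the data directory with a fully staged new one.
// Paths are examples; the staging (copy/download/verify) is assumed done already.
#include <QDir>
#include <QString>

bool swapDataDir(const QString &dataDir, const QString &stagedDir)
{
    QDir dir;
    const QString backupDir = dataDir + QStringLiteral(".old");

    // 1. Move the current data directory out of the way.
    if (dir.exists(dataDir) && !dir.rename(dataDir, backupDir))
        return false;

    // 2. Move the fully prepared staging directory into place.
    if (!dir.rename(stagedDir, dataDir)) {
        dir.rename(backupDir, dataDir);   // roll back to the old content
        return false;
    }

    // 3. Only now is it safe to discard the old content.
    QDir(backupDir).removeRecursively();
    return true;
}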
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and having the version numbers in the URL.
Suppose your application was started and has loaded some resources. There is no guarantee that the user has gone through all the QML screens, so only some resources will have been loaded. QML also makes no guarantees as to how often and when the resources will be reloaded: it maintains its own caches, after all. Then at some point you update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application they experience is a Frankenstein of older and newer pieces, with no guarantee that these pieces are still meant to work together. It's a bad idea.

Multiple instances of a DLL in C++

I have 10,000 devices and I want to control them from one C++ application. The devices are servers and I can control them only through a DLL. The DLL is written with MFC and it wasn't written by me, so I can't change anything in it.
The DLL establishes the TCP/IP communication between the devices and my application. Every device has different variables. I need to open a new thread for each incoming connection and load an instance of the DLL. I couldn't load different instances of the DLL for each thread; every time it uses the same DLL and the same data.
How can I load multiple instances of a DLL?
Is there any way to do it with C++?
Thanks in advance
If the data is static it is not possible to have more than one instance in the same process. You have to modify the DLL to have some sort of per-context data (usually a class instance would do). As a general suggestion anyway, never start up 10,000 threads in one process; this will kill performance. Write a thread pool and let the clients be served by that pool.
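A minimal sketch of the kind of pool meant here, using standard C++ threads (the pool size and the work you submit are up to you):

// Minimal fixed-size thread pool sketch: a few worker threads serve all
// 10,000 devices instead of one thread per connection.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t workers) {
        for (std::size_t i = 0; i < workers; ++i)
            threads_.emplace_back([this] { Run(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto &t : threads_) t.join();
    }

    // Queue one unit of work, e.g. "handle this device's pending message".
    void Submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                if (stopping_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }

    std::vector<std::thread> threads_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stopping_ = false;
};

You would construct it once, e.g. with std::thread::hardware_concurrency() workers, and Submit() one task per incoming device message.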
Your situation does not sound hopeful.
Windows will not load more than one instance of a DLL within a given process, ever. If the DLL itself doesn't have the functionality to connect to multiple servers, you would have to create a separate process for each server you need to connect to. In practice, this would be a Bad Idea.
You COULD use LoadLibrary() and FreeLibrary() to "restart" the DLL multiple times and switch frantically between the different servers that way. Sort of a LoadLibrary()... mess with server... FreeLibrary()... do it again and again and again situation. It would be painful and slow, but might work for you.
The only (ugly) way to load a DLL multiple times is, for every new load, to make a copy of the original DLL with a unique name in a location that you control.
Load the copy with LoadLibrary and set up the appropriate function pointers (GetProcAddress(...)) to the functions in the newly loaded DLL for use in your program.
After you're done with it, unload the copy with FreeLibrary and remove the copy from disk.
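A rough sketch of that copy-then-load trick; the DLL name ("device.dll"), the exported function ("Connect"), and its signature are placeholders, since the real DLL's exports aren't known here:

// Sketch of the copy-then-load workaround. "device.dll" and "Connect" are
// placeholder names; each uniquely named copy gets its own static data when loaded.
#include <windows.h>
#include <string>

typedef int (__cdecl *ConnectFn)(const char *address);   // assumed export signature

HMODULE LoadPrivateCopy(int index) {
    // Give each instance its own uniquely named copy of the DLL.
    std::wstring copyName = L"device_copy_" + std::to_wstring(index) + L".dll";
    if (!CopyFileW(L"device.dll", copyName.c_str(), FALSE))
        return nullptr;
    return LoadLibraryW(copyName.c_str());
}

int main() {
    HMODULE h1 = LoadPrivateCopy(1);
    HMODULE h2 = LoadPrivateCopy(2);
    if (!h1 || !h2) return 1;

    // Each module has its own copy of the DLL's globals.
    ConnectFn connect1 = (ConnectFn)GetProcAddress(h1, "Connect");
    ConnectFn connect2 = (ConnectFn)GetProcAddress(h2, "Connect");
    if (connect1) connect1("192.168.0.10");
    if (connect2) connect2("192.168.0.11");

    FreeLibrary(h1);
    FreeLibrary(h2);
    // Delete the temporary copies afterwards.
    DeleteFileW(L"device_copy_1.dll");
    DeleteFileW(L"device_copy_2.dll");
    return 0;
}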
I don't see an easy solution to this; as previously covered, you can't load multiple instances of a DLL within an app.
There may be a horrible solution, which is to write a lightweight proxy that listens for inbound requests, spawns a new instance of the real app on each request, and forwards traffic to it - there should be a way to load a new copy of the DLL in each instance (technically you'll be re-opening the same DLL, but each process gets its own data space).
However, with 10k devices, performance will be horrible. It sounds like the best solution is to re-implement the protocol (either using a published spec, or by reverse-engineering it).

Reading Data From Another Application

How do I read data from another application's window?
The other application contains a TG70.ApexGridOleDB32 grid, according to Spy++. It has 3 columns and a few rows. I need to read this data from another application I am writing. Can someone help me?
I am writing the code in MFC/C++
Operating systems do not allow directly reading data from other applications/processes. If your "application" is a sub-process of the main application, you can use shared objects to pass data back and forth.
However, in your case, it seems the most appropriate approach would be to dump the data to disk. Suppose you have applications A and B. B can generate the data and push it to a regular file or a database; then A can access the file/database to proceed. Note that this will be a very costly implementation because of the sheer number of I/Os performed.
So if your application is generating a lot of data, making both applications threads of one process would be the way to go.
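If you do control both programs, the "shared objects" route mentioned above could look roughly like this on Windows, using a named file mapping; the mapping name, size, and plain-text payload are illustrative:

// Sketch of the "shared object" route, usable only when you control both programs.
// The mapping name and payload layout are illustrative.
#include <windows.h>
#include <cstdio>

static const wchar_t kMappingName[] = L"Local\\GridDataShare";
static const DWORD   kMappingSize   = 4096;

// Producer side (the application that owns the grid data).
void PublishData(const char *text) {
    HANDLE map = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                    PAGE_READWRITE, 0, kMappingSize, kMappingName);
    if (!map) return;
    char *view = (char *)MapViewOfFile(map, FILE_MAP_WRITE, 0, 0, kMappingSize);
    if (view) {
        lstrcpynA(view, text, kMappingSize);
        UnmapViewOfFile(view);
    }
    // Intentionally keep 'map' open for as long as the data should stay visible.
}

// Consumer side (the application that wants to read the data).
void ReadData() {
    HANDLE map = OpenFileMappingW(FILE_MAP_READ, FALSE, kMappingName);
    if (!map) return;
    const char *view = (const char *)MapViewOfFile(map, FILE_MAP_READ, 0, 0, kMappingSize);
    if (view) {
        std::printf("shared data: %s\n", view);
        UnmapViewOfFile((LPCVOID)view);
    }
    CloseHandle(map);
}

This only works when you can modify the producing application; for a third-party program like the one above, the file/database approach is the fallback.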