Using dialog_fselect to select a file - C++

I want to use dialog_fselect to select a file in a C++ console application. How do I get the resulting path from dialog_fselect?
For example when I run:
dialog_fselect("Path", "", getmaxy(main_window)-10, getmaxx(main_window)-10);
How can I get the selected path?

dialog_fselect copies the result to dialog_vars.input_result:
Certain widgets copy a result to this buffer. If the pointer is NULL, or if the length is insufficient for the result, then the dialog library allocates a buffer which is large enough, and sets DIALOG_VARS.input_length. Callers should check for this case if they have supplied their own buffer.
(The capitalized DIALOG_VARS in the manual page refers to the type name rather than to the actual variable of that type; see DATA STRUCTURES.)
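A minimal sketch of reading the result, assuming the dialog library has already been initialized with init_dialog() and main_window is the curses window from the question; error handling is reduced to the exit-code check.

#include <dialog.h>
#include <string>

std::string pick_file(WINDOW *main_window)
{
    // dialog_fselect returns DLG_EXIT_OK (0) when the user confirms a selection.
    int code = dialog_fselect("Path", "",
                              getmaxy(main_window) - 10,
                              getmaxx(main_window) - 10);

    std::string path;
    if (code == DLG_EXIT_OK && dialog_vars.input_result != NULL)
        path = dialog_vars.input_result;   // the widget copied the selected path here

    dlg_clr_result();   // clears/frees input_result so the next widget starts clean
    return path;
}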

Getting around the 8192 byte var limit in NSIS?

I'm supplying a C++ .dll to a user who is writing an installer via an NSIS script. Using System.dll, the user can call my .dll like this:
System::Call 'my.dll::GetJson(v) t .r0'
DetailPrint $0
The return value of GetJson() gets stored in $0. This is all working correctly, though GetJson() may return a json blob whose length is > 8192, in which case the value stored in $0 gets truncated.
I looked at trying to increase NSIS_MAX_STRLEN by building NSIS myself using scons, as mentioned here: https://nsis.sourceforge.io/Special_Builds
scons NSIS_MAX_STRLEN=16384 PREFIX=C:\somewhere install-compiler install-stubs
However, after doing this, the NSIS-compiled .exes crashed upon running. It seems like 8192 may be some kind of memory limitation.
Is there any way around this for me? For example, would it be possible to call
System::Call 'mydll::GetJson(v) t .r0'
but instead of the return value being stored in $0, have it split into chunks? Perhaps GetJson() could write its contents to a file first, and NSIS could then read that file and split it?
Any help is appreciated. Thank you.
If the user needs to edit a very long string you basically have two options:
Use the System plug-in to fill a text field on an nsDialogs custom page. You can't use the registers to store the string; you need to use ...func()p.r0 to get the raw address of the string from your plug-in and use SendMessage to fill the text field. To save, you need to allocate memory, get the text with SendMessage, write it to a file, and finally free the memory. (A sketch of the DLL side of this follows below.)
The other option is to create the custom page with your own custom plug-in.
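For the first option, the DLL side could look roughly like the sketch below. The export name GetJsonPtr, the static buffer, and the sample JSON are illustrative assumptions, not part of the original question.

#include <string>

static std::string g_json;   // keeps the string alive after the call returns

extern "C" __declspec(dllexport) const char* GetJsonPtr()
{
    g_json = "{\"example\":\"...\"}";   // in reality, build the full JSON blob here
    // NSIS can capture this raw address with System::Call '...::GetJsonPtr()p.r0'
    // and pass it to SendMessage (WM_SETTEXT) to fill the nsDialogs text field,
    // so the full string never has to pass through an NSIS register.
    return g_json.c_str();
}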

Why does IStream::Commit fail to write data into a file?

I have a binary file. When I opened it, I used ::StgOpenStorage with STGM_READWRITE | STGM_SHARE_DENY_WRITE | STGM_TRANSACTED mode to get a root storage named rootStorage. Then I used rootStorage.OpenStream with STGM_READWRITE | STGM_SHARE_EXCLUSIVE mode to get a substream named subStream.
Next, I wrote some data with subStream.Write(...) and called subStream.Commit(STGC_DEFAULT), but the data was not written to the file.
When I also called rootStorage.Commit(STGC_DEFAULT), the data was written.
But when I used UltraCompare Professional - Binary Compare to compare the original file with the file I opened, a lot of extra data had been written at the end of the file. The extra data seems to be copied from the beginning of the file.
I just want to write a little data into the file while opening it. What should I do?
Binary file comparison will probably not work for structured storage files. The issue is that structured storage files often have extra space allocated in them--to handle transacted mode and to grow the file. If you want to do a file comparison, it will take more work. You will have to open the root storage in each file, then open the stream, and do a binary comparison on the streams.
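A rough sketch of that stream-level comparison, assuming both files contain a stream named L"Contents" (the stream name is an illustrative assumption; link with ole32.lib):

#include <windows.h>
#include <objbase.h>
#include <vector>

static bool readStream(const wchar_t* file, const wchar_t* streamName,
                       std::vector<char>& bytes)
{
    IStorage* stg = NULL;
    if (FAILED(StgOpenStorage(file, NULL, STGM_READ | STGM_SHARE_DENY_WRITE,
                              NULL, 0, &stg)))
        return false;

    bool ok = false;
    IStream* stm = NULL;
    if (SUCCEEDED(stg->OpenStream(streamName, NULL,
                                  STGM_READ | STGM_SHARE_EXCLUSIVE, 0, &stm)))
    {
        STATSTG stat = {};
        if (SUCCEEDED(stm->Stat(&stat, STATFLAG_NONAME)))
        {
            bytes.resize((size_t)stat.cbSize.QuadPart);
            ULONG read = 0;
            ok = bytes.empty()
              || (SUCCEEDED(stm->Read(bytes.data(), (ULONG)bytes.size(), &read))
                  && read == bytes.size());
        }
        stm->Release();
    }
    stg->Release();
    return ok;
}

// Compare the payload streams rather than the raw compound files.
bool streamsEqual(const wchar_t* fileA, const wchar_t* fileB)
{
    std::vector<char> a, b;
    return readStream(fileA, L"Contents", a)
        && readStream(fileB, L"Contents", b)
        && a == b;
}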
I found out why there is extra data in my file.
1. Why should I use IStorage.Commit()
I opened the storage with the STGM_TRANSACTED flag, i.e. in transacted mode. In transacted mode, changes are accumulated and are not reflected in the storage object until an explicit commit operation is done, so I need to call rootStorage.Commit().
2. Why there is extra data after calling IStorage.Commit(STGC_DEFAULT)
According to this website:
The OLE-provided compound files use a two phase commit process unless STGC_OVERWRITE is specified in the grfCommitFlags parameter. This two-phase process ensures the robustness of data in case the commit operation fails. First, all new data is written to unused space in the underlying file. If necessary, new space is allocated to the file. Once this step has been successfully completed, a table in the file is updated using a single sector write to indicate that the new data is to be used in place of the old. The old data becomes free space to be used at the next commit. Thus, the old data is available and can be restored in case an error occurs when committing changes. If STGC_OVERWRITE is specified, a single phase commit operation is used.
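Putting the two points together, the write-then-commit sequence looks roughly like this; the file name "sample.bin" and the stream name L"Data" are illustrative assumptions (link with ole32.lib).

#include <windows.h>
#include <objbase.h>

bool writeToStream(const void* data, ULONG size)
{
    IStorage* root = NULL;
    if (FAILED(StgOpenStorage(L"sample.bin", NULL,
                              STGM_READWRITE | STGM_SHARE_DENY_WRITE | STGM_TRANSACTED,
                              NULL, 0, &root)))
        return false;

    bool ok = false;
    IStream* sub = NULL;
    if (SUCCEEDED(root->OpenStream(L"Data", NULL,
                                   STGM_READWRITE | STGM_SHARE_EXCLUSIVE, 0, &sub)))
    {
        ULONG written = 0;
        if (SUCCEEDED(sub->Write(data, size, &written)))
        {
            sub->Commit(STGC_DEFAULT);   // flushes the stream's own buffers
            // The root storage was opened in transacted mode, so nothing reaches
            // the underlying file until the root itself is committed.
            ok = SUCCEEDED(root->Commit(STGC_DEFAULT));
        }
        sub->Release();
    }
    root->Release();
    return ok;
}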

Different _fileName values in Visual Studio debug watch

In this code:
std::string filename;
osg::Image* image = osgDB::readImageFile(filename + ".dicom");
the osg::Image* variable image gets a wrong value back from the file read. Debugging to the line above, the watch window shows the following:
The _fileName (std::string) values on the first and second lines are both "digest", but on the fourth line the value of _fileName turns out to be "iiiiii\x*6" with a capacity of 0.
According to my understanding, the _fileName on the fourth line of the watch window should refer to the same member variable of osg::Image as the _fileName on the first and second lines, so all the _fileName entries in the watch window should have the same value. I am not sure why they differ.
The MSVC++ implementation of std::string uses different storage strategies for short strings and for long ones. Short strings (16 bytes or less) are stored in a buffer embedded directly inside the std::string object (you will see it as _Bx._Buf in Raw View). Long strings are stored in an independently allocated block of memory located elsewhere (pointed to by _Bx._Ptr).
If you violate the integrity of the std::string object, you can easily end up in a situation like the one you describe: the object thinks the data should be stored in the external buffer while in reality it is stored in the internal one, or vice versa. That can easily confuse the debugger as well.
I suggest you open the Raw View of your std::string object and check what it shows in _Bx._Buf and _Bx._Ptr. If the current _Myres value is less than or equal to the internal buffer size, then the data is [supposed to be] stored in the internal buffer. Otherwise, it is stored in the external memory block. See if this invariant really holds. If it doesn't, then there's your problem, and you'll just have to find at which point it got broken.
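A small sketch that makes the short/long storage difference visible without touching the Raw View; it simply checks whether data() points inside the string object itself (strictly speaking the pointer comparison is not portable, but it is good enough for a quick experiment on MSVC):

#include <iostream>
#include <string>

// For a short string, the character data lives inside the std::string object
// itself (_Bx._Buf); for a long one, it lives in a separately allocated block
// (_Bx._Ptr).
static bool stored_inside(const std::string& s)
{
    const char* obj  = reinterpret_cast<const char*>(&s);
    const char* data = s.data();
    return data >= obj && data < obj + sizeof(s);
}

int main()
{
    std::string shortStr = "digest";   // fits in the internal buffer
    std::string longStr(64, 'x');      // forces an external allocation

    std::cout << "short stored inside object: " << stored_inside(shortStr) << '\n';
    std::cout << "long  stored inside object: " << stored_inside(longStr)  << '\n';
}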
For some reason your filename argument isn't getting .dicom attached to it when it becomes _fileName ("digest" should become "digest.dicom"). OSG decides which plugin to use for file loading by extension, so it will have no idea how to load the current one, and so the second reference to _fileName doesn't get initialized by any plugin.
By the way, I don't think the dicom plugin is part of the standard OSG prebuilt package - you might have to gather the dependencies yourself and build the plugin.
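A minimal check along these lines, assuming the OSG headers are available; it verifies that the extension really reaches readImageFile and that some plugin handled it:

#include <osg/Image>
#include <osg/ref_ptr>
#include <osgDB/ReadFile>
#include <iostream>
#include <string>

osg::ref_ptr<osg::Image> loadDicom(const std::string& filename)
{
    const std::string fullName = filename + ".dicom";
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile(fullName);
    if (!image.valid())
        std::cerr << "No plugin could load " << fullName
                  << " (is the dicom plugin built and on the library path?)\n";
    return image;
}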

can I create shared memory (using CreateFileMapping) in local namespace with the same name?

Can you help me understand whether there can be two shared memory objects with the same name in the local/global namespace? I know that once a shared memory object has been created, we call OpenFileMapping/MapViewOfFile to open it. What would happen if I call CreateFileMapping on an already created object?
e.g.
CreateFileMapping(INVALID_HANDLE_VALUE, // use paging file
NULL, // default security
PAGE_READWRITE, // read/write access
0, // max. object size
sizeof(BackupData), // buffer size
"SharedMemory"); // name of mapping object
The MSDN documentation for CreateFileMapping tells you what you need to know.
You cannot have two distinct objects with the same name in the same namespace (Global or Local). You can't even use the same name for different types of kernel objects.
If you try to create a new file mapping with the same name as an existing file mapping, it will attempt to open the existing file mapping (subject to your page protection attributes). If this is successful, you will get a handle to the existing object, and GetLastError will return ERROR_ALREADY_EXISTS.
If you think about it logically, the only way you can share a file mapping between processes is to use the same name, so it's pointless to have two different mappings with the same name!
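A small sketch of that behavior; the BackupData layout is an assumption carried over from the question's snippet.

#include <windows.h>
#include <cstdio>

struct BackupData { char payload[256]; };   // assumed layout, for illustration only

int main()
{
    HANDLE hMap = CreateFileMappingA(
        INVALID_HANDLE_VALUE,    // use the paging file
        NULL,                    // default security
        PAGE_READWRITE,          // read/write access
        0,                       // maximum object size (high-order DWORD)
        sizeof(BackupData),      // maximum object size (low-order DWORD)
        "SharedMemory");         // name of the mapping object

    if (hMap == NULL) {
        printf("CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    if (GetLastError() == ERROR_ALREADY_EXISTS) {
        // A mapping with this name already existed: we got a handle to the
        // existing object rather than a brand-new one.
        printf("Opened existing mapping\n");
    } else {
        printf("Created new mapping\n");
    }

    CloseHandle(hMap);
    return 0;
}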

Memory Mapped files - How to insert a strip of data in other process

My file (sample.txt) has 10 chars. I opened the file in write mode and did a CreateFileMapping("mymapping"). Then I hold the process with getchar.
Then I opened another process which does the following:
OpenFileMapping("mymapping")
char* k = (char*)MapViewOfFile(...)
Now I can access the contents of sample.txt and change them via k. But how do I insert/append another 10 characters into the file?
Shared-memory mappings are fixed in size upon creation. You will need to call CreateFileMapping again with a larger mapping size. This will automatically extend the file, at which point you can MapViewOfFile again, and write in whatever data you want. Note that you will need to change the name of the file mapping, unless you first close all handles and mappings to it so the file mapping is destroyed.
As an aside, it's unusual (but not illegal, of course) to use named file mappings backed by disk files. Generally, if you're mapping a real file, you'd pass NULL to the lpName parameter of CreateFileMapping. You'd pass in a name when you want to create a shared memory mapping with no backing file - that is, hFile would be INVALID_HANDLE_VALUE and lpName would be the name of the shared memory segment.
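A minimal sketch of growing the file through a new, larger mapping, assuming all handles and views from the first mapping have already been closed; the file name and sizes come from the question, while the appended bytes are an illustrative assumption.

#include <windows.h>
#include <cstring>

int main()
{
    // Reopen the backing file.
    HANDLE hFile = CreateFileA("sample.txt", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    const DWORD newSize = 20;   // the original 10 chars plus 10 more to append

    // Creating the mapping with a size larger than the file extends the file.
    // The mapping is unnamed here, since it is backed by a real file.
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, newSize, NULL);
    if (hMap == NULL) { CloseHandle(hFile); return 1; }

    char* view = (char*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, newSize);
    if (view == NULL) { CloseHandle(hMap); CloseHandle(hFile); return 1; }

    memcpy(view + 10, "ABCDEFGHIJ", 10);   // append ten more characters

    FlushViewOfFile(view, newSize);
    UnmapViewOfFile(view);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}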