QSettings different results - C++

I am using QSettings to try to figure out whether an INI file is valid (using status() to check). I made a purposely invalid INI file and loaded it. The first time the code is called it returns invalid, but every time after that it returns valid. Is this a bug in my code?

It's a Qt bug caused by some global state. Note that the difference in results happens whether or not you call delete on your QSettings object, which you should. Here's a brief summary of what happens on the first run:
The result code is set to NoError.
A global cache is checked to see if your file is present.
Your file isn't present the first time, so it is parsed on qsettings.cpp line 1530 (Qt 4.6.2).
Parsing results in an error, and the result code is set (see qsettings.cpp line 1552).
The error result code is returned.
And the second run is different:
The result code is set to NoError.
A global cache is checked; your file is present.
The file size and timestamp are checked to see if the file has changed (see qsettings.cpp line 1424).
The result code is returned, which happens to be NoError: the file was assumed to have been parsed correctly.
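For illustration, a minimal sketch that reproduces the symptom (broken.ini and the helper name are placeholders, assuming Qt 4.x):

#include <QSettings>
#include <QtDebug>

// Returns true if the INI file parses without error.
bool iniIsValid(const QString &path)
{
    QSettings settings(path, QSettings::IniFormat);
    settings.value("any/key"); // touch a key; parsing happens at construction or first access
    return settings.status() == QSettings::NoError;
}

int main()
{
    qDebug() << iniIsValid("broken.ini"); // false: file parsed, error recorded
    qDebug() << iniIsValid("broken.ini"); // true: cache hit, parse skipped
    return 0;
}

The QSettings object here is destroyed at the end of each call, so the differing results really do come from the global QConfFile cache, not from a leaked object.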

I checked your code: you need to delete the QSettings object before returning.
Apart from that, your code uses the QSettings::QSettings(fileName, format) c'tor to open an INI file. That call ends up in the function QConfFile::fromName (implemented in qsettings.cpp). As I read it (there are a few macros and such that I decided not to follow), the file is not re-opened if it is already open (i.e. if you have not deleted the object since the last time). Thus the status will be OK the second time around.

Related

Is there a simple way to change the "text changed" status in QTextEdit?

I need to verify my source file and even omit some "service" lines, so I do it using appendPlainText() of QPlainTextEdit. Appending a line of course counts as a change, so after loading the file the asterisk indicating that the file has changed appears. I would like the more consistent behavior that this modified flag is not set right after loading. How can I reset it after I have loaded the file?
You can surround the part of the code that emits the unwanted signal by two QObject::blockSignals calls:
textEdit->blockSignals(true);
// load from file
textEdit->blockSignals(false);
or directly on QTextEdit::document (which will block fewer other signals, I suppose):
textEdit->document()->blockSignals(true);
// load from file
textEdit->document()->blockSignals(false);
Maybe even call setModified(false) on the document (QTextDocument::setModified) immediately after loading (the corresponding signals will still be emitted).
Try each of these out and let me know if any of them doesn't work.
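Putting it together, a minimal sketch (editor and loadFile are placeholder names, not from the question):

// Suppress change notifications while the file is loaded...
editor->document()->blockSignals(true);
loadFile(editor); // appends the filtered lines with appendPlainText()
editor->document()->blockSignals(false);
// ...and clear the document's own modified flag, which blockSignals
// alone does not reset.
editor->document()->setModified(false);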

Python win32com shell.SHFileOperation - any way to get the files that were actually deleted?

In the code I maintain I run across:
from win32com.shell import shell, shellcon
# ...
result, nAborted, mapping = shell.SHFileOperation(
    (parent, operation, source, target, flags, None, None))
In Python27\Lib\site-packages\win32comext\shell\ (note win32comext) I just have a shell.pyd binary.
What is the return value of shell.SHFileOperation for a deletion (operation=FO_DELETE in the call above)? Where is the code for shell.pyd?
Can I get the list of files actually deleted from this return value, or do I have to check manually afterwards?
EDIT: The accepted answer answers Q1. Having a look at the source of pywin32-219\com\win32comext\shell\src\shell.cpp, I see that static PyObject *PySHFileOperation() delegates to SHFileOperation, which does not seem to return any info on which files failed to be deleted, so I guess the answer to Q2 is "no".
ActiveState Python help contains SHFileOperation description:
shell.SHFileOperation
int, int = SHFileOperation(operation)
Copies, moves, renames, or deletes a file system object.
Parameters
operation : SHFILEOPSTRUCT
Defines the operation to perform.
Return Value
The result is a tuple containing int result of the function itself, and the result of the fAnyOperationsAborted member after the operation. If Flags contains FOF_WANTMAPPINGHANDLE, returned tuple will have a 3rd member containing a sequence of 2-tuples with the old and new file names of renamed files. This will only have any content if FOF_RENAMEONCOLLISION was specified, and some filename conflicts actually occurred.
Source code can be downloaded here: http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/ (pywin32-219.zip)
Just unpack and go to .\pywin32-219\com\win32comext\shell\src\
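For reference, a minimal Win32 sketch of the underlying call (the path is a placeholder and error handling is omitted); it shows why no per-file deletion report is available:

#include <windows.h>
#include <shellapi.h>
#include <cstdio>

int main()
{
    SHFILEOPSTRUCTW op = {};
    op.wFunc  = FO_DELETE;
    op.pFrom  = L"C:\\temp\\todelete\\*.tmp\0"; // double-NUL-terminated list
    op.fFlags = FOF_NOCONFIRMATION | FOF_SILENT | FOF_ALLOWUNDO;

    int result = SHFileOperationW(&op);
    // For FO_DELETE only the int result and fAnyOperationsAborted come back;
    // hNameMappings is filled only for renames/collisions, so the API gives
    // no per-file report of what was actually deleted.
    std::printf("result=%d aborted=%d\n", result, op.fAnyOperationsAborted);
    return 0;
}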

Scilab compilation "cannot allocate this quantity of memory"

I am facing issues with memory allocation in Scilab after compiling.
I am compiling on Red Hat on ppc64 (POWER8). Stack limits are already set to unlimited (ulimit -s unlimited). The ./configure script (with several options I am not showing here) runs successfully, but make all fails and stops. When it stops, it is stuck at the Scilab command prompt with this message:
./bin/scilab-cli -ns -noatomsautoload -f modules/functions/scripts/buildmacros/buildmacros.sce
stacksize(5000000);
!--error 10001
stacksize: Cannot allocate memory.
%s: Cannot allocate this quantity of memory.
at line 27 of exec file called by :
exec('modules/functions/scripts/buildmacros/buildmacros.sce',-1)
-->
I have investigated a bit, and that error message seems to be triggered, sure enough, at line 00027 of buildmacros.sce, where the function stacksize(5000000) is called.
This function is defined in:
scilab-5.5.1/modules/core/sci_gateway/c/sci_stacksize.c
I found a version of the file at this page: http://doxygen.scilab.org/master_wg/d5/dfb/sci__stacksize_8c_source.html.
The condition that evaluates to FALSE and triggers the message seems to be at line 00295.
Inside that file, you can see that the error is displayed whenever the stacksize given as input is LARGER than what is returned by the method get_max_memory_for_scilab_stack() from the file:
scilab-5.5.1/modules/core/src/c/stackinfo.c
Again I found a version online at the following page:
http://doxygen.scilab.org/master_wg/dd/dfb/stackinfo_8h.html#afbd65a57df45bed9445a7393a4558395
The method is declared starting at line 109.
It seems to use a variable called MAXLONG, which is however NEVER explicitly declared! As you can see, it is defined several times (lines 00019, 00035, 00043, 00050), but all those lines are commented! [correction: the lines are NOT commented; it was my false understanding of # being a comment sign, which it is not]
So my guess is: MAXLONG is not declared, so the function does not return a value (or it returns 0), and therefore the error message is triggered because the stacksize given as input is higher than 0 or NULL or N/A.
My questions are then:
Why are all lines commented where MAXLONG is defined?
Where does MAXLONG originate from? Is it something passed from the kernel?
How can I solve the problem?
Thanks!
PS - I tried commenting out that line in buildmacros, and it compiled and installed without issues. However, when I started scilab-cli, it displayed the same message again.
Edit after further investigation:
After further investigation, I found out that what I thought were comments are in fact preprocessor directives... but I have kept those errors of mine above, so that the answer to my question remains understandable.
Here are my new points.
In Scilab I noticed that giving an out-of-bounds stacksize as input invokes the same method get_max_memory_for_scilab_stack() to get the upper bound. The lower bound is, as far as I can see, defined by default.
-->stacksize(1)
!--error 1504
stacksize: Out of bounds value. Not in [180000,268435454].
Also the stacksize used seems fine:
-->stacksize()
ans =
7999994. 332.
However, when I try to give an input value in between those bounds, it fails.
-->stacksize(1)
!--error 1504
stacksize: Out of bounds value. Not in [180000,268435454].
It seems to invoke a variable called MAXLONG
It's not a variable, but a pre-processor macro.
Why are all lines commented where MAXLONG is defined?
You should ask the person who commented those lines. They're not commented in the scilab-5.5.1 sources that are online.
Where does MAXLONG originate from? Is it something passed from the kernel?
It's defined in the file scilab-5.5.1/modules/core/src/c/stackinfo.c. It's defined to the same value as LONG_MAX, which is defined by the standard C library (the <limits.h> header). If the macro is not supplied by the standard library, it's defined to some other, platform-specific value.
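A simplified sketch of that kind of fallback (the exact branches in stackinfo.c are platform-specific; this is only the general pattern):

#include <limits.h>

#ifndef MAXLONG
  #ifdef LONG_MAX
    #define MAXLONG LONG_MAX     /* take the limit from the standard library */
  #else
    #define MAXLONG 0x7FFFFFFFL  /* otherwise fall back to a platform-specific value */
  #endif
#endif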
How can I solve the problem?
If your problem originates from a missing definition of MAXLONG, then you must define it. One way to go about it is to uncomment the lines that define it. Or re-download the original sources, since yours don't appear to match the official ones.

C++: Rename instead of Delete & Copy when using Sync

Currently I have the following part code in my Sync:
...
int index = file.find(remoteDir);
if (index >= 0) {
    file.erase(index, remoteDir.size());
    file.insert(index, localDir);
}
...
// Uses PUT command on the file
Now I want to do the following instead:
If a file is the same as before except for a rename, don't use the PUT command, but use the Rename command instead.
TL;DR: Is there a way to check whether a file is the same as before, apart from a rename that occurred? In other words, a way to compare two files (with different names) to see if their contents are the same?
Check the md5sum; if it is different, then the file has been modified.
The md5 checksum of a renamed file remains the same. Any change in the content of the file will give a different value.
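If Qt happens to be available (an assumption; the question doesn't say which toolkit is used), a sketch of this with QCryptographicHash (Qt 5):

#include <QCryptographicHash>
#include <QFile>

// Returns the MD5 of a file's contents, or an empty array on error.
QByteArray fileMd5(const QString &path)
{
    QFile f(path);
    if (!f.open(QIODevice::ReadOnly))
        return QByteArray();
    QCryptographicHash hash(QCryptographicHash::Md5);
    hash.addData(&f); // hashes the whole file
    return hash.result();
}

// Treat the files as identical when fileMd5(oldPath) == fileMd5(newPath)
// and neither result is empty.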
I first tried to use Renjith's method with md5, but I couldn't get it working (maybe because my C++ targets Windows instead of Linux, I don't know).
So instead I wrote my own function, sketched below, that does the following:
First check whether the files have the exact same size (if not, we can just return false instead of continuing).
If the sizes do match, continue comparing the files buffer by buffer in chunks of BUFFER_SIZE (in my case 1024). If the entire content matches, return true.
PS: Make sure to close any open streams before returning. My mistake here was that I had the code closing one stream after the return statement (so it was never called), and therefore I got errno 13 when trying to rename the file.
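A minimal sketch of that comparison (the function name and BUFFER_SIZE are illustrative, not the poster's actual code); using RAII streams also sidesteps the errno 13 pitfall from the PS:

#include <cstring>
#include <fstream>
#include <string>

bool filesHaveSameContent(const std::string &a, const std::string &b)
{
    const std::size_t BUFFER_SIZE = 1024;
    std::ifstream fa(a, std::ios::binary | std::ios::ate);
    std::ifstream fb(b, std::ios::binary | std::ios::ate);
    if (!fa || !fb)
        return false;

    // Step 1: cheap size check; different sizes can never match.
    if (fa.tellg() != fb.tellg())
        return false;
    fa.seekg(0);
    fb.seekg(0);

    // Step 2: compare the contents chunk by chunk.
    char bufA[BUFFER_SIZE], bufB[BUFFER_SIZE];
    while (fa && fb) {
        fa.read(bufA, BUFFER_SIZE);
        fb.read(bufB, BUFFER_SIZE);
        if (fa.gcount() != fb.gcount() ||
            std::memcmp(bufA, bufB, static_cast<std::size_t>(fa.gcount())) != 0)
            return false;
    }
    // Streams close automatically when they go out of scope (RAII), so
    // nothing is left open when we return.
    return true;
}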

Strange semantic error

I have reinstalled emacs 24.2.50 on a new Linux host and started a new dotEmacs config based on magnars' emacs configuration. Since I have used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a non-C-primitive type (i.e. not int, double, float, etc.), instead of the type's definition being printed in the modeline, I get error messages like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included by the file I'm editing (or by some file included by it).
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I have solved the first problem. It was due to my horrible CEDET initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when global-semantic-idle-local-symbol-highlight-mode is enabled, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs, and it produces a ~700kb [sic!] output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20Mb of source files), this table is rather large. Can semantic handle a database that large, or is that too large and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question related to the site-wide init files can be found here).
You need to change your init file so that it performs the loading of CEDET only once, not in a hook that is called for each .h/.hpp/.c/.cpp file. You can use this config as the base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was, I guess, corrupted).
I'd like to thank Alex Ott for his help and for sticking with me throughout my journey to the solution :)