Strange Semantic error - C++

I have reinstalled Emacs 24.2.50 on a new Linux host and started a new .emacs config based on magnars' Emacs configuration. Since I have used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, Semantic parses all included files (and during the initial setup parses all files specified via semantic-add-system-include), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector header, but a corresponding message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated every time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, the parsing doesn't really seem to work: when I place point over a type that is not a C primitive (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, Semantic prints an error message like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file and buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included, directly or indirectly, by the file I'm editing.
I have not tested any further features of CEDET/Semantic as the problem is pretty annoying. My CEDET config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible CEDET initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when enabling global-semantic-idle-local-symbol-highlight-mode, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs and it produces ~700kb [sic!] of output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20Mb of source files), this table is rather large. Can Semantic handle a database that large, or is that impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question about the site-wide init files can be found here).

You need to change your init file so that it loads CEDET only once, not in a hook that is called for every .h/.hpp/.c/.cpp file. You can use this config as a base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it opens them, its initialization routines are called again, and again...
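As a minimal sketch of that idea (using the CEDET bundled with Emacs 24; Alex Ott's linked config is more complete), the whole setup goes at the top level of the init file, not into any mode hook:
;; Load and enable CEDET once, when Emacs starts -- NOT from c-mode-common-hook
;; or any other hook, otherwise the setup re-runs for every C/C++ buffer and
;; for every header Semantic visits while parsing.
(require 'cedet)
(require 'semantic)
(semantic-mode 1)     ; enable the built-in Semantic globally, once
(global-ede-mode 1)   ; enable EDE project management, once
;; Any semantic-add-system-include calls belong here as well, not in a hook.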

The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and which, I guess, was corrupted).
I'd like to thank Alex Ott for his help and sticking with me throughout my journey to the solution :)

Related

Error with std::filesystem::copy copying a file to another pre-existing directory

The code is below, followed by the error it produces.
std::string source = "C:\\Users\\cambarchian\\Documents\\tested";
std::string destination = "C:\\Users\\cambarchian\\Documents\\tester";
std::filesystem::path sourcepath = source;
std::filesystem::path destpath = destination;
std::filesystem::copy_options::update_existing;
std::filesystem::copy(sourcepath, destpath);
terminate called after throwing an instance of 'std::filesystem::__cxx11::filesystem_error'
what(): filesystem error: cannot copy: File exists [C:\Users\cambarchian\Documents\tested] [C:\Users\cambarchian\Documents\tester]
I tried to use filesystem::copy, along with trying different paths. No luck with anything. There's not too much I can write here as the problem is listed above; it could be a simple formatting issue. That being said, it worked on my home computer using Visual Studio 2022; however, using VS Code with GCC 11.2 gives me this issue.
Using:
filesystem::copy_file(oldPath, newPath, filesystem::copy_options::overwrite_existing);
The overloads of std::filesystem::copy are documented. You're using the first overload, but want the second:
void copy(from, to) which is equivalent to [overload 2, below] using copy_options::none
void copy(from, to, options)
Writing the statement std::filesystem::copy_options::update_existing; before calling copy doesn't achieve anything at all, whereas passing the option to the correct overload like
std::filesystem::copy(sourcepath, destpath,
std::filesystem::copy_options::update_existing);
should do what you want.
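For completeness, here is a minimal, self-contained sketch of that call (the paths are the ones from the question; adjust as needed, and compile with C++17 or later), with the exception caught so a failure prints a readable message instead of terminating:
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    const fs::path sourcepath = "C:\\Users\\cambarchian\\Documents\\tested";
    const fs::path destpath   = "C:\\Users\\cambarchian\\Documents\\tester";
    try {
        // The option must be passed to copy itself; a bare copy_options
        // expression on its own line has no effect.
        fs::copy(sourcepath, destpath, fs::copy_options::update_existing);
    } catch (const fs::filesystem_error& e) {
        std::cerr << "copy failed: " << e.what() << '\n';
        return 1;
    }
    return 0;
}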
... it worked on my home computer using visual studio 2022 ...
you don't say whether the destination file existed in that case, which is the first thing you should check.
I put the copy_options within the copy function but it didn't work so I started moving it around, I probably should have mentioned that.
Randomly permuting your code isn't a good way of generating clean examples for others to help with.
In the rare event that hacking away at something does fix it, I strongly recommend pausing to figure out why. When you've hacked away at something and it still doesn't work, by all means leave comments to remind yourself what you tried, but the code itself should still be in a good state.
Still doesn't work when I write std::filesystem::copy(sourcepath, destpath, std::filesystem::copy_options::recursive)
Well, that's a different option, isn't it? Were you randomly permuting which copy_options you selected as well?
Trying recursive and update_existing yields the same issue.
The documentation says
The behavior is undefined if there is more than one option in any of the copy_options option group present in options (even in the copy_file group).
so you shouldn't be setting both anyway. There's no benefit to recursively copying a single file, but there may be a benefit to updating or overwriting one if the destination already exists, which, according to your error, it does.
Since you do have an error explicitly saying "File exists", you should certainly look at the "options controlling copy_file() when the file already exists" section of the table here.
Visual Studio 2022 fixed the problem

Chronic ripgrep / vim Plugin Error on Load: "-complete Used Without -nargs"?

I recently added ripgrep to my list of vim plugins and, immediately after installation, I began receiving this error message whenever I loaded up vim:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
line 149: E1208: -complete used without -nargs
Press ENTER or type command to continue
Opening the offending file and reviewing lines 148-149 reveals:
148 command! -nargs=* -complete=file Rg :call s:Rg(<q-args>)
149 command! -complete=file RgRoot :call s:RgShowRoot()
I am well & truly out of my depth here, especially considering that this error was generated by simply installing the plugin; I've made 0 changes to the underlying file (vim-ripgrep.vim).
Has anyone encountered a similarly chronic error after installing ripgrep and, if so, how did you resolve it?
Congratulations, you have found a bug in a FOSS program. Next step is to either notify the maintainer via their issue tracker or, if you know how to fix it, submit a patch.
Case in point: the author assigns a completion method, -complete=file, but custom commands like :RgRoot don't accept arguments by default, so the command makes no sense as-is: you can't complete arguments if you can't pass arguments.
It only needs a -nargs=*, like its upstairs neighbour :Rg, to work properly, and the error message is pretty clear about it:
line 149: E1208: -complete used without -nargs
See :help -complete, :help -nargs, and more generally, :help user-commands.
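Applied to the plugin source, the fix this answer describes would make line 149 look something like this (just -nargs=* added, everything else unchanged):
command! -nargs=* -complete=file RgRoot :call s:RgShowRoot()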
As the other answer stated, it is a bug in this plugin. There is a currently open pull request to fix it: https://github.com/jremmen/vim-ripgrep/pull/58. Unfortunately, the repository is currently unmaintained, so it is unlikely to be merged any time soon. This active forks page may help you identify a new maintainer.
Until there is a new maintainer for vim-ripgrep, I suggest checking out that branch in your ~/.vim/plugged/vim-ripgrep directory and reopening vim.
I ran into essentially the same error with my Vim plugins when launching vim with my ~/.vimrc. The error looked like yours:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
I fixed it by updating the plugin in place:
cd /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/
git pull --rebase
If you are using vim-plug, try to change
Plug "jremmen/vim-ripgrep"
to
Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
Here are the detailed steps:
comment out Plug "jremmen/vim-ripgrep"
:PlugClean
add Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
:PlugInstall

Exception from Boost.Log when the date changes to the next day

I use Boost.Log with this config:
[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs"
FileName="logs/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=All
When the date changes to the next day, my program crashes with this exception:
terminate called after throwing an instance of
'boost::filesystem::filesystem_error'
what(): boost::filesystem::last_write_time: No such file or directory: "/root/work/hy-trade/bin/debug/logs/quo.20181027T173106.Sat.00000.log.detail"
I checked my disk space and found that the free space was less than MinFreeSpace in the config, and that the file quo.20181027T173106.Sat.00000.log.detail does not exist.
How can I avoid this exception?
The Boost version is 1.67.
Thank you.
It looks like someone had already deleted the log file before it was rotated. It may have been an external process, or Boost.Log.
With Boost.Log, this can happen if you have multiple file sinks that write log files into the same directory, which is also used as the target directory for the rotated files (i.e. the FileName parameter includes the path specified in the Target parameter, and there are multiple sinks that use that path). The problem is that, with ScanForFiles=All, the library scans the target directory for any files but does not update the file counter used for creating new files. This means that if the file "quo.20181027T173106.Sat.00000.log.detail" was present in that directory when your process started, it would be considered an old file, even though your process would still be writing new logs to it. Then, when a file rotation happens and the storage limits are exceeded (e.g. MinFreeSpace is not satisfied), that file may be deleted. The rotation has to happen on another sink that still stores files into the same "logs" directory.
To solve the problem you can do one of the following:
Use ScanForFiles=Matching in your settings so that the file counter is updated after scanning. This will make sure that new log files have unique names and don't get deleted prematurely.
Write log files to a different directory from your target storage, i.e. specify FileName so that it doesn't point into the same directory as Target (see the sketch below).
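As a sketch against the settings above (only the relevant lines shown; the active/ directory name is just an illustrative choice), the first option is a one-line change:
ScanForFiles=Matching
and the second keeps Target="logs" but points FileName outside of it:
Target="logs"
FileName="active/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"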
Also, you may want to add exception handling to avoid crashing in case of errors (which may still happen for whatever reason on filesystem operations). See here and here for more info (also, follow the links in those sections).
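One way to do that (a minimal sketch, not necessarily what the linked sections describe) is to install an exception handler on the logging core, so that errors thrown while writing or rotating log files are suppressed instead of propagating out of the logging statement and terminating the process:
#include <boost/log/core.hpp>
#include <boost/log/utility/exception_handler.hpp>

namespace logging = boost::log;

void install_logging_exception_guard()
{
    // Swallow any exception thrown from inside the logging core, e.g.
    // filesystem errors raised while rotating or collecting log files.
    logging::core::get()->set_exception_handler(
        logging::make_exception_suppressor());
}
Call it once during startup, after the sinks are configured; whether you prefer suppressing such errors or handling them differently is up to you.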

Immuconf with Clojure not handling three config files

Whenever I add a third config file to my .immuconf.edn I get:
No configuration files were specified, and neither an .immuconf.edn file nor
an IMMUCONF_CFG environment variable was found
This is driving me crazy since I can't really find anything wrong.
Using this loads things OK:
["configs/betfair.edn" "configs/web-server.edn"]
However, this generated an error:
["configs/betfair.edn" "configs/web-server.edn" "~/betfair.edn"]
This is the content of betfair.edn:
{:betfair {:usr "..."
           :pwd "..."
           :app-key "..." ;; key used
           :app-key-live "..."
           :app-key-test "..."}}
(where ... is replaced with actual strings)
Why am I getting this error when adding the third file and how can I fix this?
Make sure that the last file specified in your <project dir>/.immuconf.edn (~/betfair.edn) exists in your home directory.
Immuconf does some magic to replace ~ in filenames specified in .immuconf.edn with a value of (System/getProperty "user.home") so you might check if that system property points to the same directory where your ~/betfair.edn file is located.
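To see what that property resolves to on your machine, you can evaluate it in a REPL (plain Clojure, nothing Immuconf-specific; the output shown is hypothetical):
user=> (System/getProperty "user.home")
"/home/you"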
I have recreated your setup and it works on my machine, so it is probably a problem with the locations of, or access rights to, your files. Unfortunately, error handling for the no-arg invocation of (immuconf.config/load) doesn't help in troubleshooting, as it swallows any exceptions and returns nil. That exception would probably tell you what kind of error occurred (some file not found, or some IO error). You might want to file a pull request with a patch to log such errors as warnings instead of ignoring them.

review board, post-review and a deleted file

Googling and reading Review Board's documentation (and bugging coworkers) hasn't solved this problem so far.
I'm using Review Board (1.5) for code review. When doing a command line post-review, Review Board doesn't like it when I've deleted a file (svn del, that is).
In other words, r1, for example, had foo.js, but by r3 foo.js has been svn-deleted during a reorganization and cleanup of files no longer used.
When doing the post-review, the error message is:
server$ post-review --revision-range=r1:r3 --submit-as="jody"
Failed to execute command: ['svn', 'info', 'js/app.conf.js']
['js/foo.js: (Not a versioned resource)\n', '\n', 'svn: A problem occurred; see other errors for details\n']
How can one svn del an unneeded file but move forward with the post-review without the error?
I think this is still a bug with Review Board. I ran into a similar situation, but thankfully my latest commit consisted of just deleting one file.
So I could just do post-review --revision-range 192:205, i.e. a range ending one commit earlier than the delete, since the commit where I deleted the file was 206.
One hack that worked for me was to reset the working copy to the revision where the file still exists (svn update -r1) and post the review from there. I guess that might not work if you've also added new files along with the deletions.