Where should I put the text_loader.json? - QuestDB

I've put the text_loader.json file in the /root/.questdb/conf/ directory and set cairo.sql.copy.formats.file = /root/.questdb/conf/text_loader.json in server.conf. But when I start QuestDB, it reports that it can't find the file:
Exception in thread "main" io.questdb.cutlass.json.JsonException: [0] could not find [resource=/root/.questdb/conf/text_loader.json]
at io.questdb@6.2/io.questdb.std.ThreadLocal.initialValue(ThreadLocal.java:36)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at io.questdb@6.2/io.questdb.cutlass.json.JsonException.position(JsonException.java:43)
at io.questdb@6.2/io.questdb.cutlass.json.JsonException.$(JsonException.java:39)
at io.questdb@6.2/io.questdb.cutlass.text.types.InputFormatConfiguration.parseConfiguration(InputFormatConfiguration.java:148)
at io.questdb@6.2/io.questdb.PropServerConfiguration.<init>(PropServerConfiguration.java:698)
at io.questdb@6.2/io.questdb.ServerMain.readServerConfiguration(ServerMain.java:513)
at io.questdb@6.2/io.questdb.ServerMain.<init>(ServerMain.java:96)
at io.questdb@6.2/io.questdb.ServerMain.main(ServerMain.java:289)
[screenshot of my storage location]
The documentation is here: https://questdb.io/docs/guides/importing-data
Please tell me how to deal with it. Thanks.

This seems to be a bug; I have raised an issue.
I will come back to this thread when it is fixed.
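In the meantime, one observation: the stack trace says could not find [resource=...], which suggests QuestDB is resolving the configured value as a (classpath) resource rather than a plain filesystem path, and that would be consistent with this being a bug rather than a misconfiguration. For reference, the setting exactly as described in the question:

# server.conf
cairo.sql.copy.formats.file=/root/.questdb/conf/text_loader.json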

Related

Chronic ripgrep / vim Plugin Error on Load: "-complete Used Without -nargs"?

I recently added ripgrep to my list of vim plugins and, immediately after installation, I began receiving this error message whenever I loaded up vim:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
line 149: E1208: -complete used without -nargs
Press ENTER or type command to continue
Opening the offending file and reviewing lines 148-149 reveals:
148 command! -nargs=* -complete=file Rg :call s:Rg(<q-args>)
149 command! -complete=file RgRoot :call s:RgShowRoot()
I am well & truly out of my depth here, especially considering that this error was generated by simply installing the plugin; I've made 0 changes to the underlying file (vim-ripgrep.vim).
Has anyone encountered a similarly chronic error after installing ripgrep and, if so, how did you resolve it?
Congratulations, you have found a bug in a FOSS program. Next step is to either notify the maintainer via their issue tracker or, if you know how to fix it, submit a patch.
Case in point, the author assigns a completion method, -complete=file, but custom commands like :RgRoot don't accept arguments by default so the command makes no sense as-is: you can't complete arguments if you can't pass arguments.
It only needs a -nargs=*, like its upstairs neighbour, :Rg, to work properly and the error message is pretty clear about it:
line 149: E1208: -complete used without -nargs
See :help -complete, :help -nargs, and more generally, :help user-commands.
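Concretely, either of these one-line changes to line 149 resolves E1208 (a sketch of the two options discussed here, not an official patch). Give :RgRoot a -nargs so the completion has something to complete:

command! -nargs=* -complete=file RgRoot :call s:RgShowRoot()

or, since s:RgShowRoot() takes no arguments anyway, drop the completion method entirely:

command! RgRoot :call s:RgShowRoot()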
As the other answer stated, it is a bug in this plugin. There is a currently open pull request to fix this: https://github.com/jremmen/vim-ripgrep/pull/58 Unfortunately, the repository is currently unmaintained, so it is unlikely to be merged any time soon. This active forks page may help you identify a new maintainer.
Until there is a new maintainer for vim-ripgrep, I suggest checking out that branch in your ~/.vim/plugged/vim-ripgrep directory and reopening vim.
I ran into functionally the same error with my Vim plugins when loading vim with my ~/.vimrc.
The error I got looked like yours:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
I fixed it by updating the plugin in place:
cd /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/
git pull --rebase
If you are using vim-plug, try changing
Plug "jremmen/vim-ripgrep"
to
Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
Here are the detailed steps:
1. comment out Plug "jremmen/vim-ripgrep"
2. run :PlugClean
3. add Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
4. run :PlugInstall

GATE (using it for my thesis): run-time error

When I try to run a corpus pipeline on my language resources, it throws the error below, even though I follow the order: document reset, English tokeniser, sentence splitter.
Can someone help me with the process to debug this run-time error?
Error:
gate.creole.ExecutionException: No sentences or tokens to process in document Password_Safe-window1.txt_0003E
Please run a sentence splitter and tokeniser first!
at gate.creole.POSTagger.execute(POSTagger.java:257)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.creole.SerialController.runComponent(SerialController.java:225)
at gate.creole.SerialController.executeImpl(SerialController.java:157)
at gate.creole.SerialAnalyserController.executeImpl(SerialAnalyserController.java:223)
at gate.creole.SerialAnalyserController.execute(SerialAnalyserController.java:126)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.gui.SerialControllerEditor$RunAction$1.run(SerialControllerEditor.java:1759)
at java.lang.Thread.run(Thread.java:745)
Edit:
The files are not empty. I tried @dedek's suggestion and it threw no errors, but it raised one more problem:
Exception in thread "ApplicationViewer1" java.lang.OutOfMemoryError: Java heap space
I think it is because your document is empty.
Can you confirm that?
There is a run-time parameter failOnMissingInputAnnotations of the POSTagger; set it to false and it should be OK.
See also the docs:
failOnMissingInputAnnotations - if set to false, the PR will not fail with an ExecutionException if no input Annotations are found and instead only log a single warning message per session and a debug message per document that has no input annotations (run-time, default = true).
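In GATE Developer you change this in the run-time parameters pane of the POS tagger PR. If you run the pipeline from embedded Java code instead, a minimal sketch would be (posTagger is assumed to be the POS tagger ProcessingResource you created earlier; error handling omitted):

// hypothetical embedded-GATE sketch; posTagger was created
// earlier with gate.Factory.createResource(...)
posTagger.setParameterValue("failOnMissingInputAnnotations", Boolean.FALSE);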
Concerning the OutOfMemoryError: Java heap space, see the following questions:
Getting OOM while using GATE on large data set
GATE PersistenceManager.loadObjectFromFile outofmemory error while loading .gapp files
Java PermGen memory
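As a first step for that error, give the JVM more heap when you launch GATE, using the standard -Xmx flag (the 4g value below is an arbitrary example, and where you put the flag depends on how you start GATE):

java -Xmx4g ...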

Timed indexed color sets in CPN Tools resulting in an unhandled exception error

I am using CPN Tools to model a distributed system. CPN Tools uses CPN ML, an extension of SML. The project homepage is cpntools.org.
I started with a simple model, and when I try to make a particular indexed color set timed, I get an "Internal error". Another indexed color set within my Petri-net model is timed and works correctly. I am not sure how to troubleshoot this, since I don't understand the error message. Could you help me interpret the error message or give me some hints on what I could be doing wrong?
The model is:
http://imgur.com/JUjPRHK
The declarations of the model are:
http://imgur.com/DvvpyvH
The error message is:
Internal error: Compile error when generating code. Caught error.../compiler/TopLevel/interact/evalloop.sml:296.17-296.20../compiler/TopLevel/interact/evalloop.sml:44.55../compiler/TopLevel/interact/evalloop.sml:66.19-66.27
structure CPN`TransitionID1413873858 = struct ... end (* see simulator debug info for full code *)
simglue.sml:884.12-884.43
"
Thank you~
I know this is an old question, but I ran into the same problem and wasted too much time on it, so maybe this will help someone else in the future.
I don't understand exactly the reason for it, but it seems the problem appears when you work with time values on an arc that ends at a transition (I was updating an integer value to the current time, using IntInf.toInt(time())). If I move that code to the outgoing arc of the transition (that is, the one that ends in a place), there is no error.
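A hypothetical sketch of the pattern (not the exact model from the question): with declarations like

colset INT = int;
var n : INT;

the internal error appeared when a time-dependent inscription such as IntInf.toInt(time()) sat on the arc going into the transition, whereas binding a plain variable n on the input arc and putting IntInf.toInt(time()) on the arc going out of the transition (into a place) compiled fine.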

Why are my files smaller after I FTP them using this Python program?

I'm trying to send some files (a zip and a Word doc) to a directory on a server using ftplib. I have the broad strokes sorted out:
session = ftplib.FTP('ftp.server', 'user', 'pass')
filewpt = open(file, mode)
readfile = open(file, mode)
session.cwd('new/work/directory')
session.storbinary('STOR filename.zip', filewpt)
session.storbinary('STOR readme.doc', readfile)
print "filename.zip and readme.doc were sent to the folder on ftp"
readfile.close()
filewpt.close()
session.quit()
This may provide someone else with what they are after, but not me. I have been using FileZilla to check that the files were transferred. When I see they have made it to the server, I see that they are both way smaller, or even zero K for the readme.doc file. Now I'm guessing this has something to do with the fact that I stored the files in 'binary transfer mode' <--- whatever that means.
This is where my problems lie. I have no idea at all (yet) what is meant by binary transfer mode. Is it simply that I have to use retrbinary to return the files to their original state?
Could someone please explain to me like I'm a two year old what has happened to my files? If there's any more info required, please let me know.
This is a fantastic resource and solved most of my problems. I'm still trying to work out the intricacies of FTP, but I guess I will save that for another day. The link below builds a function to effortlessly upload files to an FTP server without the partial-upload problem that I've seen experienced by more than one Stack Exchanger.
http://effbot.org/librarybook/ftplib.htm
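For anyone landing here later: the usual cause of truncated or zero-byte uploads with storbinary is opening the local files in text mode. A minimal sketch of a binary-safe upload (host, credentials, and file names are placeholders):

import ftplib

# placeholders: replace with your server, credentials, and paths
session = ftplib.FTP('ftp.example.com', 'user', 'pass')
session.cwd('new/work/directory')

# open local files in binary mode ('rb'); text mode can translate
# newlines or stop at a stray EOF byte and truncate the upload
with open('filename.zip', 'rb') as filewpt:
    session.storbinary('STOR filename.zip', filewpt)
with open('readme.doc', 'rb') as readfile:
    session.storbinary('STOR readme.doc', readfile)

session.quit()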

Strange semantic error

I have reinstalled Emacs 24.2.50 on a new Linux host and started a new .emacs config based on magnars' Emacs configuration. Since I have used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a non-C primitive type (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, an error message appears, like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included by the file I'm editing (or by some file it includes).
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when global-semantic-idle-local-symbol-highlight-mode is enabled, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs, and it produces ~700 kB [sic!] of output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20 MB of source files), this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question related to the site-wide init files can be found here).
You need to change your init file so that it loads CEDET only once, not in a hook that is called for each .h/.hpp/.c/.cpp file. You can use this config as a base, and read more in the following article.
The problem you are seeing is caused by Semantic trying to analyze header files: when it opens them, your initialization routines are called again, and again...
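The shape of the fix, as a minimal sketch (the load path is a placeholder; the linked config and article are the authoritative versions):

;; GOOD: load CEDET exactly once, at startup
(load-file "~/.emacs.d/cedet/cedet-devel-load.el")
(semantic-mode 1)

;; BAD: loading CEDET from a mode hook re-runs its setup for every
;; buffer Semantic itself opens, including system header files
;; (add-hook 'c-mode-common-hook (lambda () (load-file "...cedet...")))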
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used that very config to set up CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the Bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was, I guess, corrupted).
I'd like to thank Alex Ott for his help and for sticking with me throughout my journey to the solution :)