Exception when using Play Framework 1.2.3 on Ubuntu - playframework-1.x

My instance of Play 1.2.3 on Ubuntu was working fine until yesterday. I am not entirely certain whether I installed any new packages on Ubuntu in the meantime. When I now try running play (run/start), I get the exception copied below. I have tried cleaning the tmp directory, but it did not help. Any other thoughts (besides setting up Play again) would be greatly appreciated. Thanks
Exception in thread "main" play.exceptions.UnexpectedException: Unexpected Error
at play.vfs.VirtualFile.contentAsString(VirtualFile.java:180)
at play.classloading.hash.ClassStateHashCreator.getClassDefsForFile(ClassStateHashCreator.java:83)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:58)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.computePathHash(ClassStateHashCreator.java:48)
at play.classloading.ApplicationClassloader.computePathHash(ApplicationClassloader.java:371)
at play.classloading.ApplicationClassloader.<init>(ApplicationClassloader.java:62)
at play.Play.init(Play.java:272)
at play.server.Server.main(Server.java:158)
Caused by: java.lang.RuntimeException: java.io.IOException: Input/output error
at play.libs.IO.readContentAsString(IO.java:62)
at play.libs.IO.readContentAsString(IO.java:49)
at play.vfs.VirtualFile.contentAsString(VirtualFile.java:178)
... 10 more
Caused by: java.io.IOException: Input/output error
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:220)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.Reader.read(Reader.java:123)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1364)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1340)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1315)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:525)
at play.libs.IO.readContentAsString(IO.java:60)

One of the Java classes in my project somehow got corrupted. I noticed it when I tried copying the whole directory to another location: the copy operation generated an error message specifying the corrupted file (which I'm sure one could have detected through other means as well).
Removing the corrupted file (and replacing it with the same code) restored normal behavior. Hope this helps others.
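If you'd rather locate such a file directly instead of copying the whole tree, here is a minimal sketch (assuming a Java 8+ JDK is available to run it; the path is a placeholder) that force-reads every file under the project and reports any that fail, the same way the trace above fails:

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class FindUnreadable {
    public static void main(String[] args) throws IOException {
        // Walk the project tree and force a full read of each regular file;
        // a corrupted file surfaces as an IOException, as in the stack trace above.
        try (Stream<Path> paths = Files.walk(Paths.get("/path/to/play/app"))) {
            paths.filter(Files::isRegularFile).forEach(p -> {
                try {
                    Files.readAllBytes(p);
                } catch (IOException e) {
                    System.out.println("Unreadable: " + p + " (" + e.getMessage() + ")");
                }
            });
        }
    }
}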

Related

Reading (not writing) last_write_time gives a file creation error

I'm stumped. It took me a while to track this down, because I don't have the Visual C++ IDE installed on the problematic system (it is Windows Server 2019); my code works fine with VS 2022 on my laptop (Windows 11 22H2). Anyhow, I get this exception:
Cannot create a file when that file already exists
I tracked it down to this code:
const auto fileTime = fs::last_write_time(p);
Apparently this function can also write to the file to modify the time, but I'm just trying to read it (I didn't pass the arguments needed to write).
Does anyone have any idea why this error might be happening?
https://en.cppreference.com/w/cpp/filesystem/last_write_time
Please note it is highly likely that the file is actually being written to at the moment I call this code (the call is in a loop, and the file is simultaneously being written by another program).
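One way to make the read robust against that race is the non-throwing overload of std::filesystem::last_write_time, which reports failure through a std::error_code instead of throwing. A minimal sketch (the path is a placeholder):

#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;

int main() {
    const fs::path p{"some_file.txt"}; // placeholder
    std::error_code ec;
    // Non-throwing overload: on failure it fills ec instead of raising fs::filesystem_error.
    const auto fileTime = fs::last_write_time(p, ec);
    if (ec) {
        std::cerr << "last_write_time failed: " << ec.message() << '\n';
        return 1; // or retry on the next loop iteration; the writer may still hold the file
    }
    (void)fileTime; // timestamp is valid here
    return 0;
}

This doesn't explain the misleading "Cannot create a file when that file already exists" message, but it keeps the loop alive while the other program has the file open.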

Crash on importing audio after packaging game EXCEPTION_ACCESS_VIOLATION OVRlipsync Plugin UE4

I've been working on a lipsync project on UE4.27 with the Oculus OVRLipSync plugin, and the project has been working very well in the UE editor. When packaging the game to ship it to the client, I started facing an issue related to cooking frame sequences from WAV files, which results in a crash in the packaged app.
The resulting crash log is:
Unhandled Exception: EXCEPTION_ACCESS_VIOLATION reading address 0x0000024bc963002c
OVRLipSync
OVRLipSync
OVRLipSync
OVRLipSync
MyProject_Win64_Shipping!ovrLipSync_ProcessFrameEx() [\software\coretech\src\engines\tracking\facetracking\facewave\ovrlipsyncshim.cpp:389]
MyProject_Win64_Shipping!<lambda_04cfcd2176d25e5a0c33289e1c33f647>::operator()() [D:\Unreal Projects\Lipsync\fix2\MyProject\Plugins\OVRLipSync\Source\OVRLipSync\Private\CreateFrameSequenceAsset.cpp:79]
MyProject_Win64_Shipping!TAsyncRunnable<void>::Run()
MyProject_Win64_Shipping!FRunnableThreadWin::Run()
Tracing the error to CreateFrameSequenceAsset.cpp:79, which is part of the plugin source code, I found the following call:
context.ProcessFrame(PCMData + offs, ChunkSizeSamples, Visemes, LaughterScore, FrameDelayInMs,NumChannels > 1);
Digging into the declaration and definition of the function turned up nothing useful. I also tried looking up the file ovrlipsyncshim.cpp and found nothing, so I searched my project for ProcessFrameEx() and found its declaration in /ThirdParty/Include/OVRLipSync.h:
ovrLipSyncResult ovrLipSync_ProcessFrameEx(
ovrLipSyncContext context,
const void* audioBuffer,
int sampleCount,
ovrLipSyncAudioDataType dataType,
ovrLipSyncFrame* pFrame);
Tracing all of this up, I couldn't find anything useful for handling the exception or explaining its cause.
Has anyone ever faced such a problem, or does anyone have experience solving this kind of issue?
I had the same issue. I've changed a line in OvrLipSyncEditorModule.cpp:
From:
UOVRLipSyncContextWrapper context(ovrLipSyncContextProvider_Enhanced, SampleRate, 4096, ModelPath);
To:
UOVRLipSyncContextWrapper context(ovrLipSyncContextProvider_Enhanced, SampleRate, 8192, ModelPath);
(basically I've increased the buffer size)
And I've also added this line:
SoundWave->LoadingBehavior = ESoundWaveLoadingBehavior::ForceInline;
before the call:
DecompressSoundWave(SoundWave);
Now it doesn't crash anymore and it generates a sequence, but when I attach the sequence to the model, it does nothing. It's weird; it doesn't seem to be a working sequence, even though I've checked and the sequence is not empty.
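Since the access violation is an out-of-bounds read inside ovrLipSync_ProcessFrameEx, another defensive experiment is to clamp the final chunk before the call at CreateFrameSequenceAsset.cpp:79. This is only a sketch; TotalSamples is a hypothetical name for however the plugin tracks the length of PCMData:

// Hypothetical guard: never let ProcessFrame read past the end of PCMData.
// TotalSamples is an assumed variable, not taken from the plugin source.
if (offs + ChunkSizeSamples <= TotalSamples)
{
    context.ProcessFrame(PCMData + offs, ChunkSizeSamples, Visemes, LaughterScore,
                         FrameDelayInMs, NumChannels > 1);
}
// else: skip the trailing partial chunk, or zero-pad it to ChunkSizeSamples first.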

GATE_Using for Thesis_Run-time Error

When I try to run a corpus pipeline on language resources, it throws the error below (even though I follow the order: document reset, English tokeniser, sentence splitter).
Can someone help me with the process to debug this run-time error?
Error:
gate.creole.ExecutionException: No sentences or tokens to process in document Password_Safe-window1.txt_0003E
Please run a sentence splitter and tokeniser first!
at gate.creole.POSTagger.execute(POSTagger.java:257)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.creole.SerialController.runComponent(SerialController.java:225)
at gate.creole.SerialController.executeImpl(SerialController.java:157)
at gate.creole.SerialAnalyserController.executeImpl(SerialAnalyserController.java:223)
at gate.creole.SerialAnalyserController.execute(SerialAnalyserController.java:126)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.gui.SerialControllerEditor$RunAction$1.run(SerialControllerEditor.java:1759)
at java.lang.Thread.run(Thread.java:745)
Edit:
The files are not empty. When I tried to implement @dedek's suggestion, it threw no errors, but it raised one more problem:
Exception in thread "ApplicationViewer1" java.lang.OutOfMemoryError: Java heap space
I think it is because your document is empty.
Can you confirm that?
There is a run-time parameter failOnMissingInputAnnotations on the POSTagger; set it to false and it should be OK.
See also the docs:
failOnMissingInputAnnotations - if set to false, the PR will not fail with an ExecutionException if no input Annotations are found and instead only log a single warning message per session and a debug message per document that has no input annotations (run-time, default = true).
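If you drive the pipeline from code rather than the GUI, a minimal sketch of setting that parameter with the GATE Embedded API could look like this (assuming the plugin providing the POSTagger is already loaded; error handling omitted):

import gate.Factory;
import gate.Gate;
import gate.ProcessingResource;

public class DisableFailOnMissing {
    public static void main(String[] args) throws Exception {
        Gate.init(); // standard GATE Embedded initialisation
        ProcessingResource tagger = (ProcessingResource)
                Factory.createResource("gate.creole.POSTagger");
        // The run-time parameter quoted above: log a warning instead of failing.
        tagger.setParameterValue("failOnMissingInputAnnotations", Boolean.FALSE);
    }
}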
Concerning the OutOfMemoryError: Java heap space
See the following questions:
Getting OOM while using GATE on large data set
GATE PersistenceManager.loadObjectFromFile outofmemory error while loading .gapp files
JAVA PermGem memory
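In practice the immediate fix is to give the JVM a larger heap when launching GATE or your own pipeline runner, for example (4g is an arbitrary value, MyPipelineRunner is a hypothetical driver class, and the exact classpath depends on your installation):

java -Xmx4g -cp myapp.jar:gate.jar MyPipelineRunner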

Pyter not working when written as a Python Program

I am using the pyter API to find the translation error rate (TER) between two words. Pyter works normally in the terminal, but when I use it in Python code, it doesn't work. Normally it works by writing pyter.ter(w1, w2), but now it says the pyter module has no attribute "ter".
Actually, it got solved. By mistake I had saved my program as pyter.py, so whenever I imported pyter, my own script was being imported instead of the library and its error was being shown. I renamed the script to pyter_xyz.py, and it's running fine.
Thanks for the concern @Kamiccolo
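For anyone hitting the same thing, a quick check of which file Python actually imported makes this kind of name shadowing obvious:

import pyter

# If this prints the path of your own script instead of the installed
# package, your file name is shadowing the library - rename the script.
print(pyter.__file__)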

Strange semantic error

I have reinstalled Emacs 24.2.50 on a new Linux host and started a new dotEmacs config based on magnars' Emacs configuration. Since I had used CEDET to some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting and any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a type that is not a C primitive (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, an error message like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
is shown, where DEPFETResolutionAnalysis.cc is the file and buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included, directly or transitively, by the file I'm editing.
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when enabling global-semantic-idle-local-symbol-highlight-mode, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs, and it produces ~700 kB [sic!] of output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20 MB of source files) this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question related to the site-wide init files can be found here).
You need to change your init file so that it performs the loading of CEDET only once, not in a hook that is called for each .h/.hpp/.c/.cpp file. You can use this config as the base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
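For reference, the shape of a once-only setup is roughly this (a sketch based on the bzr CEDET layout; the checkout path is an assumption):

;; Load CEDET exactly once at startup - never from a file-type hook.
(load-file "~/projects/cedet/cedet-devel-load.el") ; path to your bzr checkout
(add-to-list 'semantic-default-submodes 'global-semanticdb-minor-mode)
(semantic-mode 1)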
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was corrupted, I guess).
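For completeness, clearing that cache from a shell is just:

rm -rf ~/.semanticdb/*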
I'd like to thank Alex Ott for his help and sticking with me throughout my journey to the solution :)