Problems with asynchronous calls using motion-resource and BubbleWrap for RubyMotion

I'm trying to follow the example from motion-resource (https://github.com/tkadauke/motion-resource), but I'm having difficulty getting the data in the correct format with this asynchronous code:
def all_friends(&block)
  Friends.find_all do |friends, response|
    if response.ok?
      puts friends.inspect
      block.call friends
    else
      App.alert response.error_message
    end
  end
end
In this implementation I have a user resource that has many friends, and I'm trying to find all of them with User.current.all_friends.
I'm getting an error when the data comes back and I try to iterate through it, because it's coming back as a BubbleWrap HTTP query:
#<BubbleWrap::HTTP::Query:0xc54c7a0 ...>
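For reference, a call site would have to consume the result inside the block, since find_all returns before the HTTP response arrives; a minimal sketch (the each body is just illustrative):

# Sketch of the call site: friends only exists inside the block,
# because all_friends returns before the HTTP response comes back.
User.current.all_friends do |friends|
  friends.each { |friend| puts friend.inspect }
end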

I wonder what line number your error is coming from:
" undefined method `each' for # "
Have you tried removing this line?
block.call friends
Your example appears to come from the docs, where this code works:
User.find_all do |users, response|
  if response.ok?
    puts users.inspect
  else
    App.alert response.error_message
  end
end
Hope you figure it out. It would be easier to troubleshoot if the code were executable.

This Stack Overflow question addresses the same problem: What is the iOS (or RubyMotion) idiom for waiting on a block that executes asynchronously?

Related

Using GATE for a thesis: run-time error

When I try to run a corpus pipeline on language resources, it throws the error below (even though I follow the order: document reset, English tokeniser, sentence splitter).
Can someone help me with the process to debug this run-time error?
Error:
gate.creole.ExecutionException: No sentences or tokens to process in document Password_Safe-window1.txt_0003E
Please run a sentence splitter and tokeniser first!
at gate.creole.POSTagger.execute(POSTagger.java:257)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.creole.SerialController.runComponent(SerialController.java:225)
at gate.creole.SerialController.executeImpl(SerialController.java:157)
at gate.creole.SerialAnalyserController.executeImpl(SerialAnalyserController.java:223)
at gate.creole.SerialAnalyserController.execute(SerialAnalyserController.java:126)
at gate.util.Benchmark.executeWithBenchmarking(Benchmark.java:291)
at gate.gui.SerialControllerEditor$RunAction$1.run(SerialControllerEditor.java:1759)
at java.lang.Thread.run(Thread.java:745)
Edit:
The files are not empty. I tried #dedek's suggestion and it no longer throws that error, but it raised one more problem:
Exception in thread "ApplicationViewer1" java.lang.OutOfMemoryError: Java heap space
I think it is because your document is empty.
Can you confirm that?
There is a run-time parameter failOnMissingInputAnnotations on the POSTagger; set it to false and it should be OK.
See also the docs:
failOnMissingInputAnnotations - if set to false, the PR will not fail with an ExecutionException if no input Annotations are found and instead only log a single warning message per session and a debug message per document that has no input annotations (run-time, default = true).
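For embedded use, here is a minimal sketch of flipping that parameter through the GATE API (in GATE Developer the same switch sits in the PR's run-time parameters pane; the class name is taken from the stack trace above, the rest is a standard embedded-GATE skeleton):

import java.io.File;
import gate.Factory;
import gate.Gate;
import gate.ProcessingResource;

public class DisableFailOnMissing {
    public static void main(String[] args) throws Exception {
        Gate.init();  // initialise GATE once before creating any resources
        // Load the ANNIE plugin so the POS tagger class is registered.
        Gate.getCreoleRegister().registerDirectories(
                new File(Gate.getPluginsHome(), "ANNIE").toURI().toURL());

        ProcessingResource tagger = (ProcessingResource)
                Factory.createResource("gate.creole.POSTagger");

        // failOnMissingInputAnnotations is a run-time parameter, so it
        // can be changed after the resource has been created.
        tagger.setParameterValue("failOnMissingInputAnnotations", Boolean.FALSE);
    }
}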
Concerning the OutOfMemoryError: Java heap space, see the following questions:
Getting OOM while using GATE on large data set
GATE PersistenceManager.loadObjectFromFile outofmemory error while loading .gapp files
JAVA PermGem memory
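As a general JVM note (not specific to GATE): the heap-space error usually goes away once you raise the maximum heap with the -Xmx flag; for an embedded application that is just a JVM argument (the path and class name below are placeholders):

java -Xmx2g -cp /path/to/gate.jar:. MyGateApp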

gcc gprof/gcov/other - how to get sequence of function calls/exits + control flow statements

BACKGROUND
We have testers for our embedded GUI product, and when a tester declares "test failed", it's sometimes hard for us developers to reproduce the exact problem because we don't have an exact trace of what happened.
We do currently have a logging framework, but we devs have to manually insert those logging statements in the code, which is fine... except when a hard-to-reproduce bug occurs and we didn't have a log statement at the 'right' location, and then when we re-build and re-run the test with the same steps, we get a different result.
ISSUE
We would like a solution wherein the compiler produces extra instrumentation code that allows us to see the exact sequence of events including, at the minimum:
function enter/exit (already provided by -finstrument-functions; a sketch of the hooks appears after the example log below)
control-flow statement entry, i.e. entering if/else branches, and which case label we jumped to
The log would look like this:
int main() entered
if-line 5 entered
else-line 10 entered
void EventLoop() entered
. . .
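For the function enter/exit half, the usual pattern with -finstrument-functions is to supply the two profiling hooks yourself; a minimal sketch (it logs raw addresses, which addr2line can map back to names; branch-level tracing is not covered by this):

/* trace.c - link this into the instrumented build.
   Compile the code under test with -finstrument-functions. */
#include <stdio.h>

/* The hooks themselves must not be instrumented, or they would recurse. */
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site)
{
    fprintf(stderr, "enter %p (from %p)\n", this_fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site)
{
    fprintf(stderr, "exit  %p\n", this_fn);
}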
Some additional nice-to-haves are
Parameter values on function entry AND exit (for pass-by-reference types)
Function return value
QUESTION
Are there any gcc tools or even paid tools that can do this instrumentation automatically?
You can either use gdb for that, which you can automate (I've got a tool for that in the works; you can find it here), or you can try to use gcov.
The gcov implementation works like this: it loads the most recent gcov data when you start the program. But you can manually dump and load the data: calling __gcov_flush dumps the current counters and resets the current state. However, if you do that several times, the data will always be merged, so you'd also need to rename the gcov data file each time.
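A minimal sketch of the manual dump, assuming the program is built with --coverage (note that on gcc 11 and later __gcov_flush was split into __gcov_dump plus __gcov_reset):

/* Build with: gcc --coverage -o app app.c trace.c */
extern void __gcov_flush(void);  /* provided by libgcov */

void coverage_checkpoint(void)
{
    /* Writes the .gcda counters now and resets them. Rename or move the
       .gcda file afterwards if you want per-checkpoint snapshots, since
       successive dumps into the same file are merged. */
    __gcov_flush();
}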

TensorFlow sess.run() returns nothing

I've been running a program located here:
https://github.com/dennybritz/cnn-text-classification-tf
The base code works fine with the posted example, but when I tried to split new data into a train/test split it gives me errors. The program trains on the train data I give it, but at evaluation time I get this error:
Traceback (most recent call last):
  File "./eval.py", line 81, in <module>
    correct_predictions = float(sum(all_predictions == y_test))
TypeError: 'bool' object is not iterable
From the eval.py code I located the problem in this loop:
for x_test_batch in batches:
    batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
    all_predictions = np.concatenate([all_predictions, batch_predictions])
Here sess.run returns nothing, so batch_predictions becomes an empty array, leading to the error above. Also of note:
batch_predictions is always empty.
x_test_batch is non-empty for every batch.
all_predictions is also always empty.
I brought up the issue with the github owner, but he recommended I trace through the execution. This is my first time using TensorFlow, and I am unable to access their website. If anyone can
Tell me what my issue is
or
Tell me how to trace through the graph execution to find it
I would be endlessly appreciative. Thank you to anyone who reads this!
You say sess.run does not return anything; perhaps you could print x_test_batch before running sess.run and look at what's inside. Perhaps it's empty, which I don't think is your intention.
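A sketch of that suggestion applied to the loop above (variable names are the ones from eval.py). One detail worth checking: in the NumPy versions of that era, comparing arrays of incompatible shapes with == evaluates to the single value False rather than an elementwise array, and sum(False) raises exactly this "'bool' object is not iterable" error, so the final shape comparison below is the key line:

import numpy as np  # already imported in eval.py

for x_test_batch in batches:
    print("batch shape:", np.asarray(x_test_batch).shape)  # an empty batch shows up here
    batch_predictions = sess.run(
        predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
    print("prediction shape:", np.asarray(batch_predictions).shape)
    all_predictions = np.concatenate([all_predictions, batch_predictions])

# If these two shapes differ, the == comparison collapses to a plain bool.
print("all_predictions:", all_predictions.shape, "y_test:", np.asarray(y_test).shape)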

Timed indexed color sets in CPN Tools resulting in an unhandled exception error

I am using CPN Tools to model a distributed system. CPN Tools uses CPN ML, an extension of SML. The project homepage is cpntools.org.
I started with a simple model, and when I try to make a particular indexed color set timed, I get an "Internal error". There is another indexed color set within my Petri-net model that is timed and works correctly. I am not sure how to troubleshoot, since I don't understand the error message. Could you help me interpret the error message or give me some hints on what I could be doing wrong?
The model is:
http://imgur.com/JUjPRHK
The declarations of the model are:
http://imgur.com/DvvpyvH
The error message is:
Internal error: Compile error when generating code. Caught error.../compiler/TopLevel/interact/evalloop.sml:296.17-296.20../compiler/TopLevel/interact/evalloop.sml:44.55../compiler/TopLevel/interact/evalloop.sml:66.19-66.27
structure CPN`TransitionID1413873858 = struct ... end (* see simulator debug info for full code *)
simglue.sml:884.12-884.43
Thank you~
I know this is an old question, but I ran into the same problem and wasted too much time on it, so maybe this will help someone else in the future.
I don't understand the exact reason for this, but it seems the problem appears when you play with time values on an arc that ends at a transition (I was updating an integer value to the current time, using IntInf.toInt(time())). If I move the code to the outgoing arc of that transition (that is, the one that ends in a place), there is no error.

Strange semantic error

I have reinstalled emacs 24.2.50 on a new linux host and started a new dotEmacs config based on magnars' emacs configuration. Since I had used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a non-C primitive type (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, I get error messages like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included by the file I'm editing (or some file it includes).
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when global-semantic-idle-local-symbol-highlight-mode is enabled, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs and it produces a ~700kb [sic!] output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the files parsed. As I have parsed a rather large package (~20Mb of source files), this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question about the site-wide init files can be found here).
You need to change your init file so that it loads CEDET only once, not in a hook that is called for each .h/.hpp/.c/.cpp file. You can use this config as the base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
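A minimal sketch of the load-once idea (the path is a placeholder for a bzr CEDET checkout): load CEDET at top level in the init file, not in a mode hook, so its initialization runs exactly once.

;; Load CEDET exactly once, at startup, before any mode hooks run.
(load-file "~/.emacs.d/cedet-bzr/cedet-devel-load.el")

;; Enable the submodes you want, then turn Semantic on globally, once.
(add-to-list 'semantic-default-submodes 'global-semantic-idle-summary-mode)
(semantic-mode 1)

;; Per-buffer tweaks can still live in hooks; just don't load CEDET there.
(add-hook 'c-mode-common-hook
          (lambda () (local-set-key (kbd "C-c j") 'semantic-ia-fast-jump)))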
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was corrupted, I guess).
I'd like to thank Alex Ott for his help and for sticking with me throughout my journey to the solution :)