I am writing a simple game in OCaml, using its Graphics module for drawing and interaction. I ran into the problem that Graphics.read_key () queues all presses for later use, so when I hold a key down for a while, many "presses" accumulate in memory and the action keeps being performed after I release the key.
Is there any way to delete entries from this queue, or just (even better) not queue them at all?
This is probably not the most beautiful solution, but you can use key_pressed.
This function returns true if a keypress is available.
So once you have read a keypress with read_key, you can flush the queue by calling read_key until key_pressed returns false, ignoring the results.
(* flush_kp : unit -> unit
   Discard every queued keypress. *)
let flush_kp () =
  while key_pressed () do
    ignore (read_key ())
  done ;;
Hope this helps.
Related
I have a futures::sync::mpsc::unbounded channel. I can send messages to the UnboundedSender<T>, but I have problems receiving them from the UnboundedReceiver<T>.
I use the channel to send messages to the UI thread, and I have a function that gets called every frame, and I'd like to read all the available messages from the channel on each frame, without blocking the thread when there are no available messages.
From what I've read, the Future::poll method is roughly what I need: I poll, and if I get Async::Ready, I do something with the message; if not, I just return from the function.
The problem is that poll panics when there is no task context (I'm not sure what that means or what to do about it).
What I tried:
let (sender, receiver) = unbounded(); // somewhere in the code, doesn't matter
// ...
let fut = match receiver.by_ref().collect().poll() {
    Async::Ready(items_vec) => // do something on UI with items,
    _ => return None,
};
this panics because I don't have a task context.
Also tried:
let (sender, receiver) = unbounded(); // somewhere in the code, doesn't matter
// ...
let fut = receiver.by_ref().collect(); // how do I run the future?
tokio::runtime::current_thread::Runtime::new().unwrap().block_on(fut); // this blocks the thread when there are no items in the receiver
I would like help with reading the UnboundedReceiver<T> without blocking the thread when there are no items in the stream (just do nothing then).
Thanks!
You are using futures incorrectly -- you need a Runtime and a bit more boilerplate to get this to work:
extern crate tokio;
extern crate futures;
use tokio::prelude::*;
use futures::future::{lazy, ok};
use futures::sync::mpsc::unbounded;
use tokio::runtime::Runtime;
fn main() {
    let (sender, receiver) = unbounded::<i64>();

    let receiver = receiver.for_each(|result| {
        println!("Got: {}", result);
        Ok(())
    });

    let rt = Runtime::new().unwrap();
    rt.executor().spawn(receiver);

    let lazy_future = lazy(move || {
        sender.unbounded_send(1).unwrap();
        sender.unbounded_send(2).unwrap();
        sender.unbounded_send(3).unwrap();
        ok::<(), ()>(())
    });

    rt.block_on_all(lazy_future).unwrap();
}
Further reading, from Tokio's runtime model:
[...]in order to use Tokio and successfully execute tasks, an application must start an executor and the necessary drivers for the resources that the application’s tasks depend on. This requires significant boilerplate. To manage the boilerplate, Tokio offers a couple of runtime options. A runtime is an executor bundled with all necessary drivers to power Tokio’s resources. Instead of managing all the various Tokio components individually, a runtime is created and started in a single call.
Tokio offers a concurrent runtime and a single-threaded runtime. The concurrent runtime is backed by a multi-threaded, work-stealing executor. The single-threaded runtime executes all tasks and drivers on the current thread. The user may pick the runtime with characteristics best suited for the application.
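For the original goal (read everything available each frame, never block), the shape of the loop is independent of futures. Here is a rough Python sketch of that per-frame drain using a standard-library queue; the name poll_messages is made up for illustration, and this is the pattern, not the tokio API:

```python
import queue

def poll_messages(chan: "queue.Queue") -> list:
    """Collect every message currently in the channel without blocking.

    Intended to be called once per frame: if nothing is available,
    it returns an empty list immediately instead of waiting.
    """
    messages = []
    while True:
        try:
            messages.append(chan.get_nowait())
        except queue.Empty:
            return messages

chan = queue.Queue()
chan.put("resize")
chan.put("click")
print(poll_messages(chan))  # ['resize', 'click']
print(poll_messages(chan))  # []  (no messages -> no blocking)
```

In futures 0.1 terms this corresponds to polling the stream until it reports "not ready" and returning control to the frame loop at that point.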
I want to have a UI button press trigger a block of code, so I created a queue and dispatched a block to it async, but I'm not seeing the block start in a reasonable amount of time.
minimized example:
class InterfaceController: WKInterfaceController {
    ...
    let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)

    @IBAction func on_press() {
        print("Touch")
        queue.async {
            // Stuff
        }
    }
}
So I see the "Touch" line in the console, but nothing from the async block happens.
Odd thing is, if I use let queue = DispatchQueue.global() instead, it seems to work as desired. So what is the operational difference between making my own queue, and using the global one here? I would have expected my QoS to give it some CPU time.
So what is the operational difference between making my own queue, and using the global one here?
let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)
creates a serial queue with high priority, whereas
let queue = DispatchQueue.global()
does not actually create anything; it returns the global (system) concurrent queue with the default QoS.
When you create your own queue, the system decides on which global queue it will dispatch your execution request; a queue is not an execution engine.
I find it hard to believe that your code is never executed; that is very unlikely. If that is really what happens, the trouble must be somewhere in code that is not part of your question.
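The serial-versus-concurrent distinction can be mimicked with plain thread pools. A minimal Python sketch (illustrative only, not the GCD implementation): a serial queue behaves like a pool with exactly one worker, so blocks never overlap and always run in submission order.

```python
from concurrent.futures import ThreadPoolExecutor

# A serial queue ~ a pool with exactly one worker:
# tasks never overlap and run strictly in submission order.
serial = ThreadPoolExecutor(max_workers=1)
order = []
futures = [serial.submit(order.append, i) for i in range(5)]
for f in futures:
    f.result()          # wait for each task
print(order)            # [0, 1, 2, 3, 4]

# DispatchQueue.global() is closer to a shared multi-worker pool:
# tasks may overlap and complete in any order, so we sort results.
concurrent = ThreadPoolExecutor(max_workers=4)
results = sorted(f.result() for f in [concurrent.submit(lambda i=i: i * i) for i in range(5)])
print(results)          # [0, 1, 4, 9, 16]
```

Either pool eventually runs everything submitted to it, which matches the answer's point: a correctly dispatched block should not silently never execute.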
I have a Play 2.4 (Java-based) application with some background Akka tasks implemented as functions returning Promise.
Task1 downloads bank statements via bank Rest API.
Task2 processes the statements and pairs them with customers.
Task3 does some other processing.
Task2 cannot run before Task1 finishes its work, and Task3 cannot run before Task2. I tried to run them through a sequence of Promise.map() calls like this:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements().map(
        result -> bankProc.processBankStatements().map(
            _result -> accounting.checkCustomersBalance()));
}
I was under the impression that the first map would wait until Task1 was done and then call Task2, and so on. But when I look at the application (the tasks write some debug info to the log), I can see the tasks running in parallel.
I also tried Promise.flatMap() and Promise.sequence(), with no luck; the tasks always run in parallel.
I know that Play is non-blocking by nature, but in this situation I really need things done in the right order.
Is there any general practice on how to run multiple Promises in selected order?
You're nesting the second call to map, which means what's happening here is
processBankStatements
checkCustomerBalance
downloadBankStatements
Instead, you need to chain them:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements()
        .map(statements -> bankProc.processBankStatements())
        .map(processedStatements -> accounting.checkCustomersBalance());
}
I notice you're not using result or _result (which I've renamed for clarity) - is that intentional?
Alright, I found a solution. The correct answer is:
If you are chaining multiple Promises the way I do, that is, each mapping function returns another Promise and so on, you should follow these rules:
If you return non-futures from the mapping function, just use map()
If you return more futures from the mapping function, you should use flatMap()
The correct code snippet for my case is then:
return bankAPI.downloadBankStatements().flatMap(result -> {
    return bankProc.processBankStatements().flatMap(_result -> {
        return accounting.checkCustomersBalance().map(__result -> {
            return null;
        });
    });
});
This solution was suggested to me a long time ago, but it did not work at first. The problem was that I had a hidden Promise.map() inside downloadBankStatements(), so the chain of flatMaps was broken in that case.
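The ordering guarantee that the flatMap chain provides maps directly onto sequential awaits in other async models. A Python asyncio sketch of the same three-step pipeline (the task names are hypothetical stand-ins for the bank tasks above):

```python
import asyncio

log = []

async def download_bank_statements():
    log.append("download")
    return ["stmt1", "stmt2"]

async def process_bank_statements(statements):
    log.append("process")
    return [s.upper() for s in statements]

async def check_customers_balance(processed):
    log.append("check")
    return len(processed)

async def run():
    # Each await completes before the next task starts -- the
    # equivalent of chaining with flatMap rather than nesting map.
    statements = await download_bank_statements()
    processed = await process_bank_statements(statements)
    return await check_customers_balance(processed)

result = asyncio.run(run())
print(log)     # ['download', 'process', 'check']
print(result)  # 2
```

If the three coroutines were instead started eagerly and combined afterwards (the analogue of the nested-map version), they could interleave freely; sequencing comes from awaiting each result before starting the next step.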
I'm doing some proof-of-concept work for a fairly complex video game I'd like to write in Haskell using the HOpenGL library. I started by writing a module that implements client-server event based communication. My problem appears when I try to hook it up to a simple program to draw clicks on the screen.
The event library uses a list of TChans made into a priority queue for communication. It returns an "out" queue and an "in" queue corresponding to server-bound and client-bound messages. Sending and receiving events are done in separate threads using forkIO. Testing the event library without the OpenGL part shows it communicating successfully. Here's the code I used to test it:
-- Client connects to server at localhost with 3 priorities in the priority queue
do { (outQueue, inQueue) <- client Nothing 3
   -- send 'Click' events until terminated, the server responds with the coords negated
   ; mapM_ (\x -> atomically $ writeThing outQueue (lookupPriority x) x)
           (repeat (Click (fromIntegral 2) (fromIntegral 4)))
   }
This produces the expected output, namely a whole lot of send and receive events. I don't think the problem lies with the Event handling library.
The OpenGL part of the code checks the incoming queue for new events in the displayCallback and then calls the event's associated handler. I can get one event (the Init event, which simply clears the screen) to be caught by the displayCallback, but after that nothing is caught. Here's the relevant code:
atomically $ PQ.writeThing inqueue (Events.lookupPriority Events.Init) Events.Init
GLUT.mainLoop
render pqueue =
  do event <- atomically $
       do e <- PQ.getThing pqueue
          case e of
            Nothing -> retry
            Just event -> return event
     putStrLn $ "Got event"
     (Events.lookupHandler event Events.Client) event
     GL.flush
     GLUT.swapBuffers
So my theories as to why this is happening are:
The display callback is blocking all of the sending and receiving threads on the retry.
The queues are not being returned properly, so that the queues that the client reads are different than the ones that the OpenGL part reads.
Are there any other reasons why this could be happening?
The complete code is too long to post here, although not huge (5 files, each under 100 lines); it is all on GitHub here.
Edit 1:
The client is run from within the main function in the HOpenGL code like so:
main =
  do args <- getArgs
     let ip = args !! 0
     let priorities = args !! 1
     (progname, _) <- GLUT.getArgsAndInitialize
     -- Run the client here and bind the queues to use for communication
     (outqueue, inqueue) <- Client.client (Just ip) priorities
     GLUT.createWindow "Hello World"
     GLUT.initialDisplayMode $= [GLUT.DoubleBuffered, GLUT.RGBAMode]
     GLUT.keyboardMouseCallback $= Just (keyboardMouse outqueue)
     GLUT.displayCallback $= render inqueue
     PQ.writeThing inqueue (Events.lookupPriority Events.Init) Events.Init
     GLUT.mainLoop
The only flag I pass to GHC when I compile the code is -package GLUT.
Edit 2:
I cleaned up the code on GitHub a bit. I removed acceptInput since it wasn't really doing anything, and the Client code isn't supposed to listen for events of its own anyway; that's why it returns the queues.
Edit 3:
I'm clarifying my question a little bit. I took what I learned from #Shang and #Laar and kind of ran with it. I changed the threads in Client.hs to use forkOS instead of forkIO (and used -threaded at ghc), and it looks like the events are being communicated successfully, however they are not being received in the display callback. I also tried calling postRedisplay at the end of the display callback but I don't think it ever gets called (because I think the retry is blocking the entire OpenGL thread).
Would the retry in the display callback block the entire OpenGL thread? If so, would it be safe to fork the display callback into a new thread? I don't imagine it would be, since multiple things could then try to draw to the screen at the same time, although I might be able to handle that with a lock. Another solution would be to have lookupHandler return the handler wrapped in a Maybe and simply do nothing when there are no events. That feels less than ideal, though, as I'd then essentially have a busy loop, which is something I was trying to avoid.
Edit 4:
Forgot to mention I used -threaded at ghc when I did the forkOS.
Edit 5:
I tested my theory that the retry in the render function (the display callback) was blocking all of OpenGL. I rewrote the render function so it no longer blocks, and it worked as I wanted: one click on the screen gives two points, one from the server and one from the original click. Here's the new render function (note: it's not on GitHub):
render pqueue =
  do event <- atomically $ PQ.getThing pqueue
     case (Events.lookupHandler event Events.Client) of
       Nothing -> return ()
       Just handler ->
         do let e = case event of {Just e' -> e'}
            handler e
            return ()
     GL.flush
     GLUT.swapBuffers
     GLUT.postRedisplay Nothing
I tried it with and without the postRedisplay, and it only works with it. The problem now becomes that this pegs the CPU at 100% because it's a busy loop. In Edit 4 I proposed threading off the display callback. I'm still thinking of a way to do that.
A note since I haven't mentioned it yet. Anybody looking to build/run the code should do it like this:
$ ghc -threaded -package GLUT helloworldOGL.hs -o helloworldOGL
$ ghc server.hs -o server
-- one or the other, I usually do 0.0.0.0
$ ./server "localhost" 3
$ ./server "0.0.0.0" 3
$ ./helloworldOGL "localhost" 3
Edit 6: Solution
A solution! Going along with the threading idea, I made a thread in the OpenGL code that checks for events, blocking when there aren't any, and then calls the handler followed by postRedisplay. Here it is:
checkEvents pqueue = forever $
  do event <- atomically $
       do e <- PQ.getThing pqueue
          case e of
            Nothing -> retry
            Just event -> return event
     putStrLn $ "Got event"
     (Events.lookupHandler event Events.Client) event
     GLUT.postRedisplay Nothing
The display callback is simply:
render = GLUT.swapBuffers
And it works, it doesn't peg the CPU for 100% and events are handled promptly. I'm posting this here because I couldn't have done it without the other answers and I feel bad taking the rep when the answers were both very helpful, so I'm accepting #Laar's answer since he has the lower Rep.
One possible cause could be the use of threading.
OpenGL uses thread-local storage for its context, so all OpenGL calls should be made from the same OS thread. HOpenGL (and OpenGLRaw too) is a relatively thin binding around the OpenGL library and provides no protection or workaround for this 'problem'.
On the other hand, you are using forkIO to create a lightweight Haskell thread. This thread is not guaranteed to stay on the same OS thread, so the RTS might switch it to another OS thread where the thread-local OpenGL context is not available. To solve this there is the forkOS function, which creates a bound Haskell thread. A bound Haskell thread always runs on the same OS thread and thus has its thread-local state available. The documentation on this can be found in the 'Bound Threads' section of Control.Concurrent, where forkOS is also documented.
edits:
With the current testing code this problem is not present, as you're not using -threaded. (removed incorrect reasoning)
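A common way to live with a thread-local context is to funnel all context-touching calls through one dedicated thread that blocks on a work queue, which is also how the asker's final checkEvents thread avoids a busy loop. A small Python sketch of that pattern (the "context" here is just a stand-in; names are made up):

```python
import queue
import threading

class ContextThread:
    """Runs every submitted action on one dedicated thread.

    Mirrors the constraint that all OpenGL calls must come from the
    OS thread owning the context: other threads enqueue work instead
    of calling into the context directly.
    """
    def __init__(self):
        self._work = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            action = self._work.get()   # blocks; no busy loop
            if action is None:          # sentinel: shut down
                return
            action()

    def submit(self, action):
        self._work.put(action)

    def stop(self):
        self._work.put(None)
        self._thread.join()

ctx = ContextThread()
seen = []
ctx.submit(lambda: seen.append(threading.current_thread().name))
ctx.submit(lambda: seen.append(threading.current_thread().name))
ctx.stop()
print(len(set(seen)))  # 1 -- both actions ran on the same thread
```

The blocking get plays the role of STM's retry: the worker sleeps until work arrives, rather than spinning.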
Your render function ends up being called only once, because the display callback is only invoked when there is something new to draw. To request a redraw, you need to call
GLUT.postRedisplay Nothing
It takes an optional window parameter, signaling a redraw for the "current" window when you pass Nothing. You usually call postRedisplay from an idleCallback or a timerCallback, but you can also call it at the end of render to request an immediate redraw.
I have been slowly examining all of the features that F# brings to the table. One that has particularly piqued my interest is the MailboxProcessor.
The equivalent of this in C# would most likely use locks. Can we consider the MailboxProcessor as a replacement for locks?
In the following example, am I doing anything particularly naive, or can you see anything that might be improved?
module Tcp =
    open System
    open System.Collections.Generic
    open System.Net
    open System.Net.Sockets
    open System.Threading

    type SocketAsyncMessage =
        | Get of AsyncReplyChannel<SocketAsyncEventArgs>
        | Put of SocketAsyncEventArgs
        | Dispose of AsyncReplyChannel<MailboxProcessor<SocketAsyncMessage>>

    type SocketAsyncEventArgsPool(size:int) =
        let agent =
            lazy(MailboxProcessor.Start(
                    (fun inbox ->
                        let references = lazy(new List<SocketAsyncEventArgs>(size))
                        let idleReferences = lazy(new Queue<SocketAsyncEventArgs>(size))
                        let rec loop () =
                            async {
                                let! message = inbox.Receive()
                                match message with
                                | Get channel ->
                                    if idleReferences.Value.Count > 0 then
                                        channel.Reply(idleReferences.Value.Dequeue())
                                    else
                                        let args = new SocketAsyncEventArgs()
                                        references.Value.Add args
                                        channel.Reply args
                                    return! loop()
                                | Put args ->
                                    if args = null then
                                        nullArg "args"
                                    elif references.Value.Count < size then
                                        idleReferences.Value.Enqueue args
                                    else
                                        if not(references.Value.Remove args) then
                                            invalidOp "Reference not found."
                                        args.Dispose()
                                    return! loop()
                                | Dispose channel ->
                                    if references.IsValueCreated then
                                        references.Value
                                        |> Seq.iter(fun args -> args.Dispose())
                                    channel.Reply inbox
                            }
                        loop())))

        /// Returns a SocketAsyncEventArgs instance from the pool.
        member this.Get () =
            agent.Value.PostAndReply(fun channel -> Get channel)

        /// Returns the SocketAsyncEventArgs instance to the pool.
        member this.Put args =
            agent.Value.Post(Put args)

        /// Releases all resources used by the SocketAsyncEventArgsPool.
        member this.Dispose () =
            (this:>IDisposable).Dispose()

        interface IDisposable with
            member this.Dispose() =
                if agent.IsValueCreated then
                    (agent.Value.PostAndReply(fun channel -> Dispose channel):>IDisposable).Dispose()
Mailboxes (and similar constructs) are used in programming models that don't use locks, as they're inherently built around asynchronous processing. (Lack of shared mutable state is another requirement of this model).
The Actor model can be thought of as a series of single-threaded mini-applications that communicate by sending and receiving data from each other. Each mini-application will only be run by a single thread at a time. This, combined with the lack of shared state, renders locks unnecessary.
Procedural models (and most OO code is, at its heart, procedural), use thread-level concurrency, and synchronous calls to other objects. The Actor model flips this around - calls (messages) between objects are asynchronous, but each object is completely synchronous.
I don't know enough F# to really analyze your code, frankly. It does look like you're trying to stick a synchronous-looking shell around your mailbox, and I wonder if that's really the best thing to do (vs. embracing the mailbox model fully). In your implementation, it does appear that you're using it as a replacement for a lock.
To the first part of your question:
The MailboxProcessor class is a message queue running on its own thread. You may send a message to the MailboxProcessor from any thread, either asynchronously or synchronously.
Such a model allows threads to communicate through message passing instead of locks/mutexes/IPC mechanisms.
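The idea can be sketched in a few lines of Python: one queue, one worker thread, and messages instead of locks. This mirrors the shape of the model, not MailboxProcessor's actual API; the Agent class and message tags below are made up for illustration.

```python
import queue
import threading

class Agent:
    """Single-threaded message processor: a bare-bones mailbox.

    All mutable state is touched only by the worker thread, so no
    locks are needed; callers interact solely by posting messages.
    """
    def __init__(self, handler):
        self._inbox = queue.Queue()
        self._handler = handler
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            msg = self._inbox.get()
            if msg is None:
                return
            self._handler(msg)

    def post(self, msg):                 # fire-and-forget, like Post
        self._inbox.put(msg)

    def post_and_reply(self, make_msg):  # request/response, like PostAndReply
        reply = queue.Queue(maxsize=1)
        self._inbox.put(make_msg(reply))
        return reply.get()

# Example: an agent owning a private counter.
state = {"count": 0}

def handle(msg):
    kind, payload = msg
    if kind == "add":
        state["count"] += payload
    elif kind == "get":
        payload.put(state["count"])  # payload is the reply channel

counter = Agent(handle)
counter.post(("add", 2))
counter.post(("add", 3))
print(counter.post_and_reply(lambda ch: ("get", ch)))  # 5
```

Because the inbox is FIFO and processed by one thread, the two "add" messages are guaranteed to be handled before the "get", which is the same ordering guarantee the F# pool above relies on.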