In Xcode I set up a stack of curl_easy_setopt calls for uploading a file to a server/API, and (after a lot of trial and error) it all works like a charm. After going through several other Q&As I also managed to set up a simple CURLOPT_PROGRESSFUNCTION that looks like this:
static int my_progress_func(void *clientp,
                            double dltotal,
                            double dlnow,
                            double ultotal,
                            double ulnow)
{
    printf("(%g %%)\n", ulnow*100.0/ultotal);
    // globalProgressNumber = ulnow*100.0/ultotal;
    return 0;
}
As the upload progresses, "0%.. 16%.. 58%.. 100%" is output to the console; splendid.
What I'm not able to do is actually USE this data (globalProgressNumber), e.g. for my NSProgressIndicator; curl kind of hijacks my app and doesn't allow any other input/output until the transfer is complete.
I tried updating IBOutlets from my_progress_func (e.g. [_myLabel setStringValue:globalProgressNumber];), but that isn't possible from the static C function.
Neither is [self] available, so posting to NSNotificationCenter isn't possible either:
[[NSNotificationCenter defaultCenter]
    postNotificationName:@"Progressing"
    object:self];
My curl function runs from my main class/window (NSPanel).
Any good advice on how to achieve a realtime, updating element in my .xib?
CURL [...] doesn't allow any other input/output until the progress is complete.
Are you calling curl_easy_perform from the main thread? If so, you should not do that, since this function is synchronous, i.e. it blocks until the transfer finishes.
In other words long running tasks (e.g with network I/O) must run on a separate thread, while UI code (e.g. updating the text of a label) must run on the main thread.
how to achieve a realtime/updating element
You should definitely take care to perform the curl transfer in a separate thread.
This can easily be achieved by wrapping the transfer in an NSOperation with a custom protocol to notify a delegate of the progress (e.g. your view controller):
push your operation into an NSOperationQueue,
the operation queue will take care of detaching the transfer and running it on another thread,
on the operation side, keep using the progress function and set the operation object as the opaque object via the CURLOPT_PROGRESSDATA curl option. By doing so, each time the progress function is called you can retrieve the operation object by casting the void *clientp opaque pointer. Then notify the delegate of the current progress on the main thread (e.g. with performSelectorOnMainThread:) to make sure you can perform UI updates such as refreshing your NSProgressIndicator.
As an alternative to an NSOperation you can also use Grand Central Dispatch (GCD) and blocks. If possible, I highly recommend working with BBHTTP, which is a libcurl client for iOS 5+ and OS X 10.7+ (BBHTTP uses GCD and blocks).
FYI: here's an example from BBHTTP that illustrates how to easily perform a file upload with an upload progress block - this is for iOS but should be directly reusable for OS X.
Related
I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure that only one instance of my "workflow" is running at a time.
I am working in C++.
This is my workflow:
I read rows off of a Postgres database.
If the table has any records, I want to do these instructions:
Read the records and transform them to JSON
Send the JSON document to a remote Web service
Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
I delete the successfully saved records
I log the unsuccessful records (there's another process that consumes the logs and so my work is done).
I want to perform all of this work on a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step 1 gets called multiple times, the additional calls are simply "dropped" if step 1 is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method; the next line then checks the flag.
// MyWorkflow.cpp
std::mutex myMutex;
int inFlight = 0;

void process() {
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight) {
        return;
    }
    inFlight = 1;
    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);
    ... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use curl to perform the HTTP request, but I'll be making the call asynchronously. So if I make the asynchronous HTTP call via curl, my process() function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp
void markCompletion() {
    std::lock_guard<std::mutex> guard(myMutex);
    inFlight = 0; // Reset the in-flight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need to ensure I have proper exception handling and always call markCompletion().
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.
I'm currently working on an async REST client using boost::asio::io_service.
I'm trying to make the client a kind of service for a bigger program.
The idea is that the client will execute async HTTP requests to a REST API, independently of the thread running the main program. So inside the client there will be another thread waiting for a request to send.
To pass the requests to the client I am using an io_service and an io_service::work initialized with the io_service. I almost reused the example given in this tutorial - logger_service.hpp.
My problem is that in the example, the handler posted to the service is a simple function. In my case I am making async calls like this
(I have done what's necessary to construct all the instances of the following objects, and some more, in order to establish the network connection):
boost::asio::io_service io_service_;
boost::asio::io_service::work work_(io_service_); //to prevent the io_service::run() to return when there is no more work to do
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_(io_service_);
In the main program I am doing the following calls:
client.Connect();
...
client.Send();
client.Send();
...
Some client's pseudo code:
void MyClass::Send()
{
    ...
    io_service_.post(boost::bind(&MyClass::AsyncSend, this));
    ...
}

void MyClass::AsyncSend()
{
    ...
    boost::asio::async_write(socket_, streamOutBuffer, boost::bind(&MyClass::handle_send, this));
    ...
}

void MyClass::handle_send()
{
    boost::asio::async_read(socket_, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}

void MyClass::handle_read()
{
    // ... treatment for the received data ...
    if (allDataIsReceived)
        FireAnEvent(ReceivedData);
    else
        boost::asio::async_read(socket_, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}
As described in the documentation, the post method requests the io_service to invoke the given handler and returns immediately. My question is: when post() is used, will the nested handlers (for example handle_send inside AsyncSend) be called in sequence, i.e. only once the HTTP response is ready? Or can the handlers be called in an order different from the order of the post() calls?
I'm asking because when I call client->Send() only once, the client seems to work fine. But when I make two consecutive calls, as in the example above, the client cannot finish the first exchange before it goes on to execute the second one, and after some chaotic interleaving both operations fail.
Is there any way to do what I'm describing: execute the whole async chain before the execution of another one?
I hope I am clear enough with my description :)
Hello Blacktempel,
Thank you for the comment and the idea; however, I am working on a project which demands using asynchronous calls.
In fact, as I am a newbie with Boost, my question and the example I gave weren't right in the part about the handle_read function. I have now added a few lines to the example to make clearer what situation I am (was) in.
In fact many examples, maybe all of them, that treat how to create an async client are very basic. They all just show how to chain the different handlers, and the treatment of the received data when handle_read is called is always something like "print some data on the screen" inside that same read handler. Which, I think, is completely wrong when compared to real-world problems!
No one will just print data and finish the execution of their program...! Usually, once the data is received, another treatment has to start, for example FireAnEvent(). Influenced by the bad examples, I had done this FireAnEvent inside the read handler, which is obviously wrong! It is bad because, done that way, handle_read might never exit or exit too late. If this handler does not finish, the io_service loop will not finish either. And if your further treatment asks the async client to do something again, this will start/restart (I am not sure about the details) the io_service loop. In my case I was making several calls to the async client in this way. In the end I saw how the io_service was always started but never ended; even after the whole treatment was finished, I never saw the io_service stop.
So finally I let my async client fill a variable with the received data inside handle_read, instead of directly calling another function like FireAnEvent, and I moved the call to this function (FireAnEvent) to just after io_service.run(). And it worked, because after the run() method ends I know that the loop is completely finished!
I hope my answer will help people :)
I'm doing some proof-of-concept work for a fairly complex video game I'd like to write in Haskell using the HOpenGL library. I started by writing a module that implements client-server event based communication. My problem appears when I try to hook it up to a simple program to draw clicks on the screen.
The event library uses a list of TChans made into a priority queue for communication. It returns an "out" queue and an "in" queue corresponding to server-bound and client-bound messages. Sending and receiving events are done in separate threads using forkIO. Testing the event library without the OpenGL part shows it communicating successfully. Here's the code I used to test it:
-- Client connects to server at localhost with 3 priorities in the priority queue
do { (outQueue, inQueue) <- client Nothing 3
   -- send 'Click' events until terminated; the server responds with the coords negated
   ; mapM_ (\x -> atomically $ writeThing outQueue (lookupPriority x) x)
           (repeat (Click (fromIntegral 2) (fromIntegral 4)))
   }
This produces the expected output, namely a whole lot of send and receive events. I don't think the problem lies with the Event handling library.
The OpenGL part of the code checks the incoming queue for new events in the displayCallback and then calls the event's associated handler. I can get one event (the Init event, which simply clears the screen) to be caught by the displayCallback, but after that nothing is caught. Here's the relevant code:
atomically $ PQ.writeThing inqueue (Events.lookupPriority Events.Init) Events.Init
GLUT.mainLoop

render pqueue =
    do event <- atomically $
           do e <- PQ.getThing pqueue
              case e of
                  Nothing    -> retry
                  Just event -> return event
       putStrLn $ "Got event"
       (Events.lookupHandler event Events.Client) event
       GL.flush
       GLUT.swapBuffers
So my theories as to why this is happening are:
The display callback is blocking all of the sending and receiving threads on the retry.
The queues are not being returned properly, so that the queues that the client reads are different than the ones that the OpenGL part reads.
Are there any other reasons why this could be happening?
The complete code for this is too long to post here, although not that long (5 files, each under 100 lines); it is all on GitHub here.
Edit 1:
The client is run from within the main function in the HOpenGL code like so:
main =
    do args <- getArgs
       let ip = args !! 0
       let priorities = args !! 1
       (progname, _) <- GLUT.getArgsAndInitialize
       -- Run the client here and bind the queues to use for communication
       (outqueue, inqueue) <- Client.client (Just ip) priorities
       GLUT.createWindow "Hello World"
       GLUT.initialDisplayMode $= [GLUT.DoubleBuffered, GLUT.RGBAMode]
       GLUT.keyboardMouseCallback $= Just (keyboardMouse outqueue)
       GLUT.displayCallback $= render inqueue
       PQ.writeThing inqueue (Events.lookupPriority Events.Init) Events.Init
       GLUT.mainLoop
The only flag I pass to GHC when I compile the code is -package GLUT.
Edit 2:
I cleaned up the code on GitHub a bit. I removed acceptInput since it wasn't really doing anything, and the Client code isn't supposed to be listening for events of its own anyway; that's why it returns the queues.
Edit 3:
I'm clarifying my question a little bit. I took what I learned from @Shang and @Laar and kind of ran with it. I changed the threads in Client.hs to use forkOS instead of forkIO (with -threaded at ghc), and it looks like the events are being communicated successfully; however, they are not being received in the display callback. I also tried calling postRedisplay at the end of the display callback, but I don't think it ever gets called (because I think the retry is blocking the entire OpenGL thread).
Would the retry in the display callback block the entire OpenGL thread? If it does, would it be safe to fork the display callback into a new thread? I don't imagine it would, since the possibility exists that multiple things could be trying to draw to the screen at the same time, but I might be able to handle that with a lock. Another solution would be to convert the lookupHandler function to return a function wrapped in a Maybe, and just do nothing if there aren't any events. I feel like that would be less than ideal as I'd then essentially have a busy loop which was something I was trying to avoid.
Edit 4:
Forgot to mention I used -threaded at ghc when I did the forkOS.
Edit 5:
I went and did a test of my theory that the retry in the render function (display callback) was blocking all of OpenGL. I rewrote the render function so it didn't block anymore, and it worked like I wanted it to work. One click in the screen gives two points, one from the server and from the original click. Here's the code for the new render function (note: it's not in Github):
render pqueue =
    do event <- atomically $ PQ.getThing pqueue
       case (Events.lookupHandler event Events.Client) of
           Nothing      -> return ()
           Just handler ->
               do let e = case event of {Just e' -> e'}
                  handler e
                  return ()
       GL.flush
       GLUT.swapBuffers
       GLUT.postRedisplay Nothing
I tried it with and without the postRedisplay, and it only works with it. The problem now becomes that this pegs the CPU at 100% because it's a busy loop. In Edit 4 I proposed threading off the display callback. I'm still thinking of a way to do that.
A note since I haven't mentioned it yet. Anybody looking to build/run the code should do it like this:
$ ghc -threaded -package GLUT helloworldOGL.hs -o helloworldOGL
$ ghc server.hs -o server
-- one or the other, I usually do 0.0.0.0
$ ./server "localhost" 3
$ ./server "0.0.0.0" 3
$ ./helloworldOGL "localhost" 3
Edit 6: Solution
A solution! Going along with the threads, I decided to make a thread in the OpenGL code that checked for events, blocking if there aren't any, and then calling the handler followed by postRedisplay. Here it is:
checkEvents pqueue = forever $
    do event <- atomically $
           do e <- PQ.getThing pqueue
              case e of
                  Nothing    -> retry
                  Just event -> return event
       putStrLn $ "Got event"
       (Events.lookupHandler event Events.Client) event
       GLUT.postRedisplay Nothing
The display callback is simply:
render = GLUT.swapBuffers
And it works: it doesn't peg the CPU at 100% and events are handled promptly. I'm posting this here because I couldn't have done it without the other answers, and I feel bad taking the rep when both answers were very helpful, so I'm accepting @Laar's answer since he has the lower rep.
One possible cause could be the use of threading.
OpenGL uses thread-local storage for its context. Therefore all calls using OpenGL should be made from the same OS thread. HOpenGL (and OpenGLRaw too) is a relatively thin binding around the OpenGL library and does not provide any protection or workaround for this 'problem'.
On the other hand, you are using forkIO to create a lightweight Haskell thread. Such a thread is not guaranteed to stay on the same OS thread, so the RTS might switch it to another OS thread where the thread-local OpenGL context is not available. To resolve this problem there is the forkOS function, which creates a bound Haskell thread. A bound Haskell thread always runs on the same OS thread and thus has its thread-local state available. Documentation about this can be found in the 'Bound Threads' section of Control.Concurrent, where forkOS also lives.
edits:
With the current testing code this problem is not present, as you're not using -threaded. (removed incorrect reasoning)
Your render function ends up being called only once, because the display callback is only invoked when there is something new to draw. To request a redraw, you need to call
GLUT.postRedisplay Nothing
It takes an optional window parameter, or signals a redraw for the "current" window when you pass Nothing. You usually call postRedisplay from an idleCallback or a timerCallback but you can also call it at the end of render to request an immediate redraw.
I've been having some issues getting my method hooks to work. I can get the hook to work if I call the method that's being hooked, but when the call occurs naturally during the process's operation, it doesn't get hooked. My problem probably stems from the fact that I'm setting these hooks in my own spawned thread, and apparently the LhSetInclusiveACL() method needs to know which threads you want to hook. Well, here are my issues...
I don't really care which threads apply the hook; I want them all to be hooked. For example, let's say I want the CreateICW() method from the "gdi32.dll" library hooked for the entire process "iexplorer.exe", not just for thread ID number 48291 or whatever. Knowing which threads are going to be calling the routines you are interested in hooking requires intimate knowledge of the internal workings of the process you are hooking. I speculate that is generally not feasible, and it is certainly not feasible for me. Thus it's kind of impossible for me to know a priori which thread IDs need to be hooked.
The following code was taken from the "UnmanageHook" example:
extern "C" int main(int argc, wchar_t* argv[])
{
//...
//...
//...
/*
The following shows how to install and remove local hooks...
*/
FORCE(LhInstallHook(
GetProcAddress(hUser32, "MessageBeep"),
MessageBeepHook,
(PVOID)0x12345678,
hHook));
// won't invoke the hook handler because hooks are inactive after installation
MessageBeep(123);
// activate the hook for the current thread
// This is where I believe my problem is. ACLEntries is
// supposed to have a list of thread IDs that should pay
// attention to the MessageBeep() hook. Entries that are
// "0" get translated to be the "current" threadID. I want
// ALL threads and I don't want to have to try to figure out
// which threads will be spawned in the future for the given
// process. The second parameter is InThreadCount. I'm
// kind of shocked that you can't just pass in 0 or -1 or
// something for this parameter and just have it hook all
// threads in that given process.
FORCE(LhSetInclusiveACL(ACLEntries, 1, hHook));
// will be redirected into the handler...
MessageBeep(123);
//...
//...
//...
}
I've added some comments to the LhSetInclusiveACL() method call explaining the situation. Also LhSetExclusiveACL() and the "global" versions for these methods don't seem to help either.
For reference here is the documentation for LhSetExclusiveACL:
/***********************************************************************
Sets an exclusive hook local ACL based on the given thread ID list.
Global and local ACLs are always intersected. For example if the
global ACL allows a set “G” of threads to be intercepted, and the
local ACL allows a set “L” of threads to be intercepted, then the
set “G ∩ L” will be intercepted. The “exclusive” and “inclusive”
ACL types don’t have any impact on the computation of the final
set. Those are just helpers for you to construct a set of threads.
EASYHOOK_NT_EXPORT LhSetExclusiveACL(
ULONG* InThreadIdList,
ULONG InThreadCount,
TRACED_HOOK_HANDLE InHandle);
Parameters:
InThreadIdList
An array of thread IDs. If you specific zero for an
entry in this array, it will be automatically replaced
with the calling thread ID.
InThreadCount
The count of entries listed in the thread ID list. This
value must not exceed MAX_ACE_COUNT!
InHandle
The hook handle whose local ACL is going to be set.
Return values:
STATUS_INVALID_PARAMETER_2
The limit of MAX_ACE_COUNT ACL is violated by the given buffer.
***********************************************************************/
Am I using this wrong? I imagine that this is how the majority of implementations would use this library, so why is this not working for me?
You want to use LhSetExclusiveACL instead. This means that any calls across any threads get hooked, except for ones you specify in the ACL.
I made a class that has an asynchronous OpenWebPage() function. Once you call OpenWebPage(someUrl), a handler gets called - OnPageLoad(reply). I have been using a global variable called lastAction to take care of stuff once a page is loaded - handler checks what is the lastAction and calls an appropriate function. For example:
this->lastAction = "homepage";
this->OpenWebPage("http://www.hardwarebase.net");
void OnPageLoad(reply)
{
if(this->lastAction == "homepage")
{
this->lastAction = "login";
this->Login(); // POSTs a form and OnPageLoad gets called again
}
else if(this->lastAction == "login")
{
this->PostLogin(); // Checks did we log in properly, sets lastAction as new topic and goes to new topic URL
}
else if(this->lastAction == "new topic")
{
this->WriteTopic(); // Does some more stuff ... you get the point
}
}
Now, this is rather hard to write and keep track of when there is a large number of "actions". When I was doing this in Python (synchronously) it was much easier, like:
OpenWebPage("http://hardwarebase.net") // Stores the loaded page HTML in self.page
OpenWebpage("http://hardwarebase.net/login", {"user": username, "pw": password}) // POSTs a form
if(self.page == ...): // now do some more checks etc.
// do something more
Imagine now that I have a queue class which holds the actions: homepage, login, new topic. How am I supposed to execute all those actions (in the proper order, one after another!) via the asynchronous callback? The first example is obviously completely hard-coded.
I hope you understand my question, because frankly I fear this is the worst question ever written :x
P.S. All this is done in Qt.
You are inviting all manner of bugs if you try and use a single member variable to maintain state for an arbitrary number of asynchronous operations, which is what you describe above. There is no way for you to determine the order that the OpenWebPage calls complete, so there's also no way to associate the value of lastAction at any given time with any specific operation.
There are a number of ways to solve this, e.g.:
Encapsulate web page loading in an immutable class that processes one page per instance
Return an object from OpenWebPage which tracks progress and stores the operation's state
Fire a signal when an operation completes and attach the operation's context to the signal
You need to add a "return" statement at the end of every "if" branch: in your code, all the "if" branches are executed in the first OnPageLoad call.
Generally, asynchronous state management is always more complicated than synchronous. Consider replacing the lastAction type with an enumeration. Also, if the OnPageLoad thread context is arbitrary, you need to synchronize access to global variables.