Incremental uploading of a file using signals and slots - C++

Before implementing this I would like to check if this will lead to undefined behaviour or race conditions.
When uploading files to Azure, this must be done in blocks. I want to upload 5 blocks in parallel, all reading their data from the same file. This would happen like this:
char currentDataChunk[MAX_BLOCK_SIZE]; // read() fills a caller-allocated buffer
qint64 currentDataChunkSize;
connect(_blobStorageProvider, SIGNAL(putBlockSucceded(int)), this, SLOT(finalizeAndUploadNextBlock(int)));
// Start with at most MAX_PARALLEL_BLOCKUPLOADS blocks in flight
int parallelUploads = ((_item->size() / MAX_BLOCK_SIZE) >= MAX_PARALLEL_BLOCKUPLOADS) ? MAX_PARALLEL_BLOCKUPLOADS : (_item->size() / MAX_BLOCK_SIZE);
_latestProcessedBlockId = (parallelUploads - 1);
for (int i = 0; i < parallelUploads; i++) {
    currentDataChunkSize = _item->read(currentDataChunk, MAX_BLOCK_SIZE);
    ...
    // putBlock must copy the chunk, since this buffer is reused on the next iteration
    uploader->putBlock(_container, _blobName, currentDataChunk, i);
}
In the putBlock function in the uploader, it calls the QNetworkAccessManager with the request. When it's done, it sends back a signal saying whether it failed, succeeded or got canceled, along with the blockId so that I know which one of the blocks was uploaded.
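For context, a hypothetical sketch of what such a putBlock might look like with Qt 4's QNetworkAccessManager. The members _endpoint and _networkManager and the helpers toBase64BlockId and handleBlockReply are assumptions, not the asker's actual code; Azure's Put Block REST call takes comp=block and a base64 blockid query parameter:
void BlobStorageProvider::putBlock(const QString &container, const QString &blob, const QByteArray &block, int blockId) {
    // Build the Put Block URL: PUT <endpoint>/<container>/<blob>?comp=block&blockid=<base64 id>
    QUrl url(_endpoint + "/" + container + "/" + blob);
    url.addQueryItem("comp", "block");
    url.addQueryItem("blockid", toBase64BlockId(blockId)); // assumed helper
    QNetworkReply *reply = _networkManager->put(QNetworkRequest(url), block);
    reply->setProperty("blockId", blockId); // carried back so the handler knows which block finished
    connect(reply, SIGNAL(finished()), this, SLOT(handleBlockReply()));
}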
void BigBlobUploader::finalizeAndUploadNextBlock(int blockId) {
    // Finalize by adding the successful block to the future block list
    QByteArray temp;
    for (size_t i = 0; i != sizeof(blockId); i++) {
        temp.append((char)(blockId >> (i * 8))); // little-endian bytes of the id
    }
    _uploadedBlockIds.insert(blockId, QString(temp.toBase64()));
    this->uploadNextBlock();
}
void BigBlobUploader::uploadNextBlock() {
    char newDataChunk[MAX_BLOCK_SIZE]; // again, read() needs a caller-allocated buffer
    qint64 newDataChunkSize = _item->read(newDataChunk, MAX_BLOCK_SIZE);
    ...
    _latestProcessedBlockId++;
    uploader->putBlock(_container, _blobName, newDataChunk, _latestProcessedBlockId);
}
My plan now is to fetch these signals in a slot which takes note that this block was uploaded (puts it in a list so I can send a block list to finalize the blob), increases the index by one (it starts at 5), fetches a new chunk of data and redoes the whole process.
My issue now is: what if two of them finish at the EXACT same time? I'm not dealing with threads here, but since the HTTP requests are handled asynchronously, what is the case here? Are the signals queued (or should I use Qt::QueuedConnection)? Can a slot be called in parallel? Is there a better way of doing this?
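For what it's worth, a sketch of why these slots cannot overlap, assuming the QNetworkAccessManager lives in the same thread as BigBlobUploader: within one thread the default connection resolves to a direct call on the emitting thread, and QNetworkAccessManager emits its results through the event loop of the thread that owns it, so each slot invocation runs to completion before the next begins (the names below are hypothetical):
// Both replies report back through the same event loop, so the two
// finished() slots run one after the other even if the responses
// arrive on the network "at the same time".
QNetworkReply *first  = _networkManager->put(requestForBlock0, block0);
QNetworkReply *second = _networkManager->put(requestForBlock1, block1);
connect(first,  SIGNAL(finished()), this, SLOT(onBlockFinished()));
connect(second, SIGNAL(finished()), this, SLOT(onBlockFinished()));
// onBlockFinished() is never re-entered in parallel; Qt::QueuedConnection
// is only needed when the sender lives in a different thread.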

Sorry for the inconvenience, I assumed you were using .NET since you added the Windows Azure tag to this thread. I'm familiar with Windows Azure, but my understanding of Qt is limited. However, it would not be different from using signals/slots in other concurrent scenarios. This document may help: http://qt-project.org/doc/qt-4.8/signalsandslots.html.
Best Regards,
Ming Xu.

I am not familiar with QNetworkAccessManager. But in general, to deal with race conditions, please use locks. Usually, the way to use locks in C# is leveraging the lock keyword. Something like:
private object lockingObject = new object();
In a method:
lock (lockingObject)
{
    // If a thread acquires the lock, other threads are blocked here until the lock is released.
}
In addition, you can refer to http://msdn.microsoft.com/en-us/library/c5kehkcz(v=vs.100).aspx for more information.
Best Regards,
Ming Xu.
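Since the question is C++, the closest standard-library equivalent of C#'s lock keyword is a std::mutex guarded by an RAII wrapper; a minimal sketch:
#include <mutex>

std::mutex lockingObject;

void method() {
    // lock_guard acquires the mutex here and releases it when the scope
    // ends, even if an exception is thrown; other threads block on construction.
    std::lock_guard<std::mutex> guard(lockingObject);
    // ... critical section ...
}
In Qt itself, the same pattern is QMutex with QMutexLocker.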

Related

Using a lock in C++ across multiple tasks

I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure I have only one instance of my "workflow" running at a time.
I am working in C++.
This is my workflow:
I read rows off of a Postgres database.
If the table has any records, I want to do the following:
Read the records and transform them to JSON
Send the JSON document to a remote Web service
Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
I delete the successfully saved records
I log the unsuccessful records (there's another process that consumes the logs and so my work is done).
I want to perform all of this using a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step 1 gets called multiple times, the additional calls basically get "dropped" if step 1 is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method. Then the next line checks a flag.
// MyWorkflow.cpp
std::mutex myMutex;
int inFlight = 0;

void process() {
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight) {
        return;
    }
    inFlight = 1;
    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);
    ... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use Curl to perform the HTTP request. But Curl works asynchronously. And so if I make the asynchronous HTTP call via Curl, my process() function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp
void markCompletion() {
    std::lock_guard<std::mutex> guard(myMutex);
    inFlight = 0; // Reset the in-flight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need to ensure I have proper exception handling and always call markCompletion().
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.
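One way to get that guarantee without hand-written try/catch everywhere is an RAII guard whose destructor clears the flag. This is a sketch under the assumption that the completion handler is a callable handed to the Curl wrapper; sendAsync and Response are hypothetical stand-ins:
#include <functional>
#include <memory>
#include <mutex>

std::mutex myMutex;
bool inFlight = false;

struct Response { /* fields omitted */ };
void sendAsync(std::function<void(const Response&)> onDone); // hypothetical async sender

// Resets the flag when the last copy of the owning shared_ptr is destroyed,
// whether the handler ran normally, threw, or was never invoked at all.
struct FlightGuard {
    ~FlightGuard() {
        std::lock_guard<std::mutex> lock(myMutex);
        inFlight = false;
    }
};

void process() {
    {
        std::lock_guard<std::mutex> lock(myMutex);
        if (inFlight) return;
        inFlight = true;
    }
    auto guard = std::make_shared<FlightGuard>();
    // The lambda keeps the guard alive until the response arrives;
    // when the lambda is destroyed, ~FlightGuard clears the flag.
    sendAsync([guard](const Response &response) {
        // parse the response, delete saved rows, log failures ...
    });
}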

Implementing a custom async task type and await

I am developing a C++ app in which I need to receive messages from an MQ and then parse them according to their type, and for a particular reason I want to make this process (receiving a single message followed by processing it) asynchronous. Since I want to keep things as simple as possible, so that the next developer has no problem continuing the code, I have written a very small class to implement asynchrony.
I first start a new thread and pass a function to it:
task = new thread([&] {
    result = fn();
    isCompleted = true;
});
task->detach();
and in order to await the task I do the following:
while (!isCompleted && !(*cancelationToken))
{
Sleep(5);
}
state = 1; // marking the task as completed
So far there is no problem and I have not faced any bug or error, but I am not sure whether this is "a good way to do it", and my question is focused on determining that.
Read about std::future and std::async.
If your task runs on another core or processor, the variable isCompleted may not stay synchronized between cores (each core may see its own cached copy of it), so you may be waiting longer than needed.
If you have to wait for something, it is better to use a semaphore.
As said in the comments, using the standard facilities is better anyway.
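A minimal sketch of the same receive-then-process flow with std::async, where fn stands in for the receive-and-parse work:
#include <chrono>
#include <future>

int fn() { return 42; } // stand-in for "receive one message and process it"

int main() {
    // std::async runs fn on another thread and hands back a future;
    // the future takes care of the synchronization that the raw
    // isCompleted flag was doing by hand.
    std::future<int> task = std::async(std::launch::async, fn);

    // wait_for replaces the Sleep(5) polling loop; a cancellation
    // check could go inside this loop.
    while (task.wait_for(std::chrono::milliseconds(5)) != std::future_status::ready) {
    }

    int result = task.get(); // safely retrieves the result
}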

Eclipse RAP Multi-client but single server thread

I understand how RAP creates scopes, with a specific thread for each client and so on. I also understand how the application scope is unique among several clients; however, I don't know how to access that specific scope in a single-threaded manner.
I would like to have a server side (with access to databases and such) that runs as a single execution, to ensure it has global knowledge of all transactions and that requests from clients are executed in sequence instead of in parallel.
Currently I am accessing the application context as follows from the UI:
synchronized (MyServer.class) {
    ApplicationContext appContext = RWT.getApplicationContext();
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
    myServer.doSomething(RWTUtils.getSessionID());
}
Even if I access myServer object there and trigger requests, the execution will still be running in the UI thread.
For now the only way to ensure the sequence is to use synchronized on my server, as follows:
public class MyServer {
    String text = "";

    public void doSomething(String string) {
        try {
            synchronized (this) {
                System.out.println("doSomething - start :" + string);
                text += "[" + string + "]";
                System.out.println("text: " + text);
                Thread.sleep(10000);
                System.out.println("text: " + text);
                System.out.println("doSomething - stop :" + string);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Is there a better way to not have to manage the thread synchronization myself?
Any help is welcome
EDIT:
To better explain myself, here is what I mean. Either I trust the database to handle multiple requests properly and I also handle shared knowledge between clients in a synchronized manner (example A), or I find a solution where another thread handles both the knowledge and the database (example B). Of course, the problem with B is that one client may block the others, but this can be managed with background threads for long actions; most of them will be no problem. My initial question was: is there maybe already some specific thread in the application scope that does example B, or is example A actually the way to go?
Conclusion (so far)
Basically, option A) is the way to go. Database access will require connection pooling, and shared information will require thoughtful synchronization of key objects. The main attention has to go to the database design and the synchronization of objects, to ensure that two clients cannot write incompatible data at the same time (e.g. write contradicting entries that make the result depend on the write order).
First of all, the way that you create MyServer in the first snippet is not thread safe. You are likely to create more than one instance of MyServer.
You need to synchronize the creation of MyServer, like this for example:
synchronized (MyServer.class) {
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
}
See also this post How to implement thread-safe lazy initialization? for other possible solutions.
Furthermore, your code calls doSomething() on the client thread (i.e. the UI thread), which will cause each client to wait until the pending requests of other clients are processed. The client UI will become unresponsive.
To solve this problem your code should call doSomething() (or any other long-running operation, for that matter) from a background thread (see also Threads in RAP).
When the background thread has finished, you should use Server Push to update the UI.

Architecture problem: access-queue block

There's a resource manager class. It helps us access devices. But, of course, it must make sure it does not give access to one device to two processes at the same time.
At first I thought I wouldn't have any access queue. I thought there would be a method like anyFree_devicename() that would return an access handle if there is a free one and NULL if there is not. But, because of high contention for some devices, I've written an accessQueue into every device.
Now, when you try to access a device, your pid (process id) is inserted into its accessQueue and you can ask whether it is your turn using a special method.
But I found one problem: access queues can block each other when you need several devices in one command:
Device1   Device2
   1         2
   2         1
And both of them would be blocked: process 1 is first in Device1's queue but second in Device2's, and vice versa.
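That interleaving is the classic two-resource deadlock; a minimal self-contained illustration with two mutexes (names hypothetical):
#include <mutex>
#include <thread>

std::mutex device1, device2;

void processA() {
    std::lock_guard<std::mutex> first(device1);  // A now holds Device1
    std::lock_guard<std::mutex> second(device2); // ...and waits for Device2
}

void processB() {
    std::lock_guard<std::mutex> first(device2);  // B now holds Device2
    std::lock_guard<std::mutex> second(device1); // ...and waits for Device1: deadlock
}

int main() {
    std::thread a(processA), b(processB); // may hang forever if both grab their first lock
    a.join();
    b.join();
}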
inline bool API::Device::Device::ShallIUse(int pid)
{
    if (amIFirst(pid)) return 1; // if I'm first I can use it anyway
    std::stack<int> tempStorage; // we move every element accessQueue -> temp
    while (acessQueue.front() != pid) // for every process ahead of us
    {
        // take the process pointer so we can look into its queue
        API::ProcessManager::Process* proc = API::ProcessManager::TaskManager::me->giveProcess(acessQueue.front());
        // list of devices this process needs now
        std::vector<API::Device::Device*>* dINeed = proc->topCommand()->devINeedPtr();
        // and see if any of them is not ready for it
        for (int i = 0; i < (dINeed->size() - 1); i++)
        {
            if (!(*dINeed)[i]->mIFirst()) // dINeed is a pointer, so it must be dereferenced before indexing
            {
                // restore the queue before giving up
                while (!tempStorage.empty())
                {
                    acessQueue.push(tempStorage.top());
                    tempStorage.pop();
                }
                return 0;
            }
        }
        tempStorage.push(acessQueue.front());
        acessQueue.pop();
    }
    return 1;
}
I wrote this algorithm some time later, but:
It ruins the whole layer-based architecture.
It now seems to work incorrectly.
It's crazy! We simply look through all commands in nearly all processes and try to push some of them up the access queue. It works really slowly.
Your access queue is creating what is known as a deadlock. Multiple clients become perpetually blocked because they are trying to take ownership of the same set of resources, but in a different order.
You can avoid it by assigning a unique value to each of your resources. Have the clients submit a list of desired resources to the resource manager. The resource manager's acquire method will sort the list by resource number and then attempt to allocate that set of resources in order.
This will enforce a specific order for all acquisitions and you will never be able to deadlock.
Any given client will, of course, block until all the set of resources it needs are available.
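A minimal sketch of that ordering rule with one std::mutex per resource (the Resource type and its ids are assumptions for illustration); with C++17, std::scoped_lock can also lock several mutexes at once with built-in deadlock avoidance:
#include <algorithm>
#include <mutex>
#include <vector>

struct Resource {
    int id;          // unique, fixed ordering value assigned up front
    std::mutex gate; // one client at a time
};

// Acquire every requested resource in ascending id order. Because all
// clients sort the same way, no cycle of waiters can form, so the
// Device1/Device2 deadlock above becomes impossible.
void acquireAll(std::vector<Resource*>& wanted) {
    std::sort(wanted.begin(), wanted.end(),
              [](const Resource* a, const Resource* b) { return a->id < b->id; });
    for (Resource* r : wanted)
        r->gate.lock();
}

void releaseAll(const std::vector<Resource*>& held) {
    for (Resource* r : held)
        r->gate.unlock();
}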

Handling Interrupt in C++

I am writing a framework for an embedded device which has the ability to run multiple applications. When switching between apps, how can I ensure that the state of my current application is cleaned up correctly? For example, say I am running through an intensive loop in one application and a request is made to run a second app before that loop has finished. I cannot delete the object containing the loop until the loop has finished, yet I am unsure how to ensure the looping object is in a state ready to be deleted. Do I need some kind of polling mechanism or event callback which notifies me when it has completed?
Thanks.
Usually if you need to do this type of thing you'll have an OS/RTOS that can handle the multiple tasks (even if the OS is a simple homebrew type thing).
If you don't already have an RTOS, you may want to look into one (there are hundreds available) or look into incorporating something simple like protothreads: http://www.sics.se/~adam/pt/
So you have two threads: one running the kernel and one running the app? You will need to add a function to your kernel, say ReadyToYield(), that the application can call when it's happy for you to close it down. ReadyToYield() would flag the kernel thread to give it the good news and then sit and wait until the kernel thread decides what to do. It might look something like this:
volatile bool appWaitingOnKernel = false;
volatile bool continueWaitingForKernel;
On the app thread call:
void ReadyToYield(void)
{
    continueWaitingForKernel = true;
    appWaitingOnKernel = true;
    while (continueWaitingForKernel == true); // spin until the kernel responds
}
On the kernel thread call:
void CheckForWaitingApp(void)
{
    if (appWaitingOnKernel == true)
    {
        appWaitingOnKernel = false;
        if (needToDeleteApp)
            DeleteApp();                      // the app never resumes
        else
            continueWaitingForKernel = false; // release the app's spin loop
    }
}
Obviously, the actual implementation here depends on the underlying O/S but this is the gist.
John.
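One caveat worth adding to this answer: on anything with more than one core (or with an aggressively optimizing compiler), volatile alone does not guarantee the two flags are seen consistently by both threads. In C++11 the same handshake would use std::atomic<bool>; a sketch of just the app side:
#include <atomic>

std::atomic<bool> appWaitingOnKernel{false};
std::atomic<bool> continueWaitingForKernel{false};

void ReadyToYield(void)
{
    continueWaitingForKernel.store(true);
    appWaitingOnKernel.store(true);
    while (continueWaitingForKernel.load())
        ; // atomics provide the visibility guarantees volatile only implies
}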
(1) You need to write thread-safe code. This is not specific to embedded systems.
(2) You need to save state away when you do a context switch.