How does one represent multiple threads in a structure/flowchart - chart.js

I have been tasked with creating a structure/flowchart for some client-server and start-up processes in our organization's software. A lot of our processes run concurrently as they have an impact on one another. How is this traditionally represented in the flow chart?
Description of the program:
A server with more than one thread is known as a multithreaded server. When a client sends a request, a thread is created through which that client communicates with the server. You need to implement a server-client program that spawns multiple threads to accept requests from multiple clients at the same time (in parallel).
Let many clients work on the same input data as follows:
An integer matrix of size 10×10 is randomly generated (numbers between 1 and 10) and stored on the server side.
Each client requests a specific service from the server:
Client 1: Matrix summation
Client 2: Matrix sort (ascendingly)
Client 3: Find the maximum number
Client 4: Transpose the matrix
Client 5: Count the repeated number (ex: number 1 is repeated 5 times)
I just need the structure/flowchart.

Sorry if I misunderstood the question, but I think the diagram you are looking for is either a sequence diagram or an activity diagram.
Draw.io is a tool that can draw this, and has examples at https://drawio-app.com/create-uml-sequence-diagrams-in-draw-io/
Hope this points you in the right direction.
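As a starting point for the diagram, the shape you need to express (a main accept loop that forks one worker per client, each worker running one of the five services, then joining back at the reply step) can be sketched in code first. This is a hypothetical in-process sketch, not the asker's program: the dispatch is a plain function call instead of a socket read, the function names are mine, and only three of the five services are shown.

```cpp
#include <algorithm>
#include <map>
#include <numeric>
#include <string>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// One function per client service (names are illustrative).
int matrix_sum(const Matrix& m) {
    int s = 0;
    for (const auto& row : m) s = std::accumulate(row.begin(), row.end(), s);
    return s;
}

int matrix_max(const Matrix& m) {
    int best = m.at(0).at(0);
    for (const auto& row : m)
        best = std::max(best, *std::max_element(row.begin(), row.end()));
    return best;
}

int count_value(const Matrix& m, int v) {
    int c = 0;
    for (const auto& row : m) c += std::count(row.begin(), row.end(), v);
    return c;
}

// The "server": one thread per client request, joined at the end.
// This fork/join shape is exactly what the flowchart has to show.
std::map<std::string, int> handle_clients(const Matrix& m, int value_to_count) {
    int sum = 0, mx = 0, cnt = 0;
    std::thread t1([&] { sum = matrix_sum(m); });
    std::thread t2([&] { mx = matrix_max(m); });
    std::thread t3([&] { cnt = count_value(m, value_to_count); });
    t1.join(); t2.join(); t3.join();
    return {{"sum", sum}, {"max", mx}, {"count", cnt}};
}
```

In an activity diagram this maps to one start node, a fork bar with five parallel branches (one per service), and a join bar before the server replies.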

Related

How to build an asymmetric multi-party communication protocol?

I am implementing a C++ application that involves multiple users (for example, 128 users) with asymmetric roles (they all have different jobs). In this scenario every user has to communicate with every other user, so each pair of users needs a bidirectional (virtual) communication channel between them.
There are three popular messaging patterns in this application.
Exchange: each user i has a message m_ij to send to every user j != i. The length of m_ij is a public constant value. The messages m_ij are independent and have no relation to each other. This is something like "everyone has something for everyone".
Distribute: a (predetermined) user i_0 has a message m_j for every other user j != i_0. The length of the messages is a public constant value. It is a little like broadcast, but the receivers are not all receiving the same message.
Gather: a (predetermined) user i_0 receives a message m_j from every other user j != i_0. The length of the messages is a public constant value. This is very similar to a voting mechanism.
Besides, there are also a small amount of two-party communication between some of the users.
The application is very sensitive to round-trip cost, so a one-round-trip implementation of these communication patterns is very desirable.
The bandwidth cost of the application is also very high, so a non-blocking implementation is almost a must-have.
I first tried the classic server/client socket approach (https://www.geeksforgeeks.org/socket-programming-cc/) by opening multiple ports and deploying a server/client pair between every two users. However, it turned out to be a failure.
I also investigated the ZMQ library, but to my (poor) understanding I would have to handle "routing" on my own, which I am not capable of.
Nanomsg is another candidate to go with but none of the patterns it provides seem to match the requirements.
So, could anyone provide any idea about this challenge? Thanks in advance!
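For concreteness, the three patterns can be modeled with plain data structures before worrying about transport at all. The sketch below is mine, not from the question: the "network" is just a mailbox per user, and the payload strings are placeholders. (The three patterns also correspond closely to the Alltoall, Scatter, and Gather collectives in MPI, which may be worth comparing against.)

```cpp
#include <map>
#include <string>
#include <vector>

// Model the network as a mailbox per user: receiver id -> received payloads.
using Mailbox = std::map<int, std::vector<std::string>>;

// Exchange: every user i sends a distinct m_ij to every other user j.
void exchange(int n, Mailbox& net) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j)
                net[j].push_back("m_" + std::to_string(i) + "_" + std::to_string(j));
}

// Distribute: a fixed user i0 sends a distinct m_j to every other user j.
void distribute(int n, int i0, Mailbox& net) {
    for (int j = 0; j < n; ++j)
        if (j != i0) net[j].push_back("m_" + std::to_string(j));
}

// Gather: a fixed user i0 receives one m_j from every other user j.
void gather(int n, int i0, Mailbox& net) {
    for (int j = 0; j < n; ++j)
        if (j != i0) net[i0].push_back("m_" + std::to_string(j));
}
```

Counting the entries makes the cost of each pattern explicit: Exchange moves n·(n-1) messages, while Distribute and Gather each move n-1.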

How to handle distributing lots of large messages to an Akka cluster on startup?

We have an actor structure in which a cluster-sharded actor calculates a parametric matrix of about 7 MB and has to distribute it to five other nodes for consumption. There are the following constraints:
The matrix has to be generated in one place.
The matrix is needed on all nodes to handle user load.
The matrix will be periodically generated again to handle changing variables and then sent off to all nodes to handle user load.
Saving the matrix in a database is probably not viable as it merely shifts the networking load and the database would get very large very fast. The database only saves the input parameters.
We raised the Akka maximum message size to 10 MB to accomplish this, but that feels a bit odd and we didn't see another choice. Normally this works fine, even though passing 10 MB messages around a distributed pub-sub seems odd to me. However, on startup the system has to start 2000 of these all at once. As a result the sharding coordinators scream at us about buffered messages. It eventually calms down and life resumes, but I would love to be able to do this without the bloodbath in the logs.
Can someone recommend an alternative strategy for handling the distribution of the parametric matrix that gets it to every node but doesn't cause a shard coordinator complaint bloodbath?

C++ Yahtzee through TCP

I've been looking around StackOverflow for a simple TCP network connection in C++, and all I seem to find are Java, Python and C#. I understand that Python or Java can set up connections easily, but I am programming a simple Yahtzee game in C++ and want to learn about networking in C++. I have about a year's knowledge of C++ but no strong grasp of those other languages, and alas I can't find a good simple setup to suit my needs.
I saw C++ Winsock P2P, which has a setup, but it has a few points that are tough for beginners.
In my game I expect the client to do most of the work (as I've heard that is the better implementation), but I am not sure where to have the server take inputs and send back outputs. I have the client rolling and displaying the rolls. I assume it would do all three rolls, and once you've chosen where to use your rolls, the client sends the server the placement in the score sheet and the dice rolls.
I understand the protocol of: you send something, the server computes, and it sends the result back. But my question is: do I have my client send a number (int choice = 12 for a Yahtzee, obviously) as a packet message and then send an array of dice rolls (int roll[5]) for the server to compute what goes in? Or should the client do that work? This is what I don't know about the sending and overall setup.
This question is difficult to answer, as it is really just your decision how to do it, and both approaches work. Some general aspects to consider in a client-server system:
Stuff that the client does is a lot more vulnerable to hacking. (E.g. if you generate the random die result on the client, someone could change the code and send your server wrong data, like always 12 or whatever.)
Stuff that you do on the server impacts performance when your user count grows very large. Letting the client do it leaves your server's processing power for other stuff.
Obviously, limit the amount of information you send when possible.
So a general rule of thumb would be: let the server do anything that's 'dangerous', let the client do everything else. (Though unfortunately a lot of things are dangerous in this context.)
In your case I would let the server do the dice rolling and send the results to the user.
Do I have my client send a number (int choice = 12 for a yahtzee obviously) as a packet message and then send an array of dice rolls (int roll[5]) for the server to compute what goes in? or should the client do that work?
You can't send an empty array and let the server fill it in. Remember, these can be separate computers; they can't access each other's memory. You could send a request to the server (maybe an int constant) so it knows what to do (e.g. roll the dice), and then listen on the network for data and interpret, e.g., the next 5 bytes the server sends you as the dice it rolled.
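To make the byte-level part concrete, here is one hypothetical wire layout: a 4-byte big-endian request code from the client, and a 5-byte reply (one byte per die) from the server. The layout and function names are mine, not from the answer, and the socket calls themselves are omitted; this only shows the packing and unpacking.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Pack a 32-bit request code big-endian (network byte order).
std::vector<uint8_t> encode_request(uint32_t choice) {
    return { static_cast<uint8_t>(choice >> 24),
             static_cast<uint8_t>(choice >> 16),
             static_cast<uint8_t>(choice >> 8),
             static_cast<uint8_t>(choice) };
}

// The server reverses the packing to recover the request code.
uint32_t decode_request(const std::vector<uint8_t>& buf) {
    return (uint32_t(buf[0]) << 24) | (uint32_t(buf[1]) << 16) |
           (uint32_t(buf[2]) << 8) | uint32_t(buf[3]);
}

// The client interprets the server's 5-byte reply as the five dice.
std::array<uint8_t, 5> decode_dice(const std::vector<uint8_t>& buf) {
    std::array<uint8_t, 5> dice{};
    for (int i = 0; i < 5; ++i) dice[i] = buf[i];
    return dice;
}
```

Packing bytes explicitly like this avoids endianness surprises that come from sending a raw `int` between machines.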

Dynamically Evaluate load and create Threads depending on machine performance

Hi, I have started to work on a project where I use parallel computing to split job loads among multiple machines, for tasks such as hashing and other forms of mathematical calculation. I'm using C++.
It runs on a master/slave (or server/client, if you prefer) model, where every client connects to the server and waits for a job. The server can then take a job and split it depending on the number of clients:
1000 jobs -- > 3 clients
IE: client 1 --> calculate(0 to 333)
Client 2 --> calculate(334 to 666)
Client 3 --> calculate(667 to 999)
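The split above can be computed generically. A sketch (the function name is mine) that produces contiguous ranges and spreads any remainder across the first workers:

```cpp
#include <utility>
#include <vector>

// Split `jobs` items into `workers` contiguous [first, last] ranges,
// handing leftover items to the first workers.
std::vector<std::pair<int, int>> split_jobs(int jobs, int workers) {
    std::vector<std::pair<int, int>> ranges;
    int base = jobs / workers, extra = jobs % workers, start = 0;
    for (int w = 0; w < workers; ++w) {
        int len = base + (w < extra ? 1 : 0);
        ranges.emplace_back(start, start + len - 1);
        start += len;
    }
    return ranges;
}
```

For 1000 jobs and 3 clients this reproduces the ranges listed above: (0, 333), (334, 666), (667, 999).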
I want to further improve speed by creating multiple threads on every running client. But since the machines are almost certainly not going to have the same hardware, I cannot arbitrarily decide on a number of threads to run on every client.
I would like to know if anyone knows a way to evaluate the load a thread puts on the CPU and extrapolate the number of threads that can run concurrently on the machine.
There are two ways I can see of doing this:
Start threads one by one, evaluating the CPU load each time, and stop on reaching a preset ceiling (50%, 75%, etc.). The flaw here is that I would have to stop and re-split the job every time I start a new thread.
(And this is the more complex option:)
Run some kind of test thread, measure its impact on the CPU's base load, extrapolate the number of threads that can run on the machine, and then start the threads and split the jobs accordingly.
Any ideas or pointers are welcome. Thanks in advance!
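As a baseline before any empirical measurement, the standard library can report how many hardware threads the machine supports. This does not measure per-thread load, but it is a reasonable starting count that either scheme above could then refine; the wrapper function name here is mine.

```cpp
#include <thread>

// std::thread::hardware_concurrency() may return 0 when the value
// cannot be determined, so fall back to a small default in that case.
unsigned pick_thread_count() {
    unsigned hw = std::thread::hardware_concurrency();
    return hw != 0 ? hw : 2;
}
```

For CPU-bound work like hashing, starting with one thread per hardware thread is usually close to optimal; oversubscribing mostly adds context-switch overhead.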

Best approach for writing a Linux server in C (pthreads, select or fork?)

I have a very specific question about server programming on UNIX (Debian, kernel 2.6.32). My goal is to learn how to write a server that can handle a huge number of clients. My target is more than 30,000 concurrent clients (my colleague even mentions that 500,000 are possible, which seems like QUIIITEEE a huge amount :-)), but I really don't know what is possible, which is why I am asking here. So my first question: how many simultaneous clients are possible? Clients can connect whenever they want, get in contact with other clients, and form a group (one group contains a maximum of 12 clients). They can chat with each other, so the TCP/IP packet size varies depending on the message sent.
Clients can also send mathematical formulas to the server. The server will solve them and broadcast the answer back to the group. This is a quite heavy operation.
My current approach is to start the server, then fork to create a daemon process. The daemon process binds the socket fd_listen and starts listening in a while(1) loop, using accept() to take incoming connections.
Once a client connects I create a pthread for that client, which runs the communication. Clients get added to a group and share some memory (needed to keep the group running), but every client still runs on a different thread. Getting the memory access right was quite a hassle, but it works fine now.
At the beginning of the program I read the /proc/sys/kernel/threads-max file and create my threads accordingly. The number of possible threads according to that file is around 5000, far from the number of clients I want to be able to serve.
Another approach I am considering is to use select() and create fd sets. But the time to find a socket within a set is O(N), which can be quite long with more than a couple of thousand clients connected. Please correct me if I am wrong.
Well, I guess I need some ideas :-)
Groetjes
Markus
P.S. I tagged it C++ and C because it applies to both languages.
The best approach as of today is an event loop like libev or libevent.
In most cases you will find that one thread is more than enough, but even if it isn't, you can always have multiple threads with separate loops (at least with libev).
Libev[ent] uses the most efficient polling solution for each OS (and anything is more efficient than select or a thread per socket).
You'll run into a couple of limits:
fd_set size: This is changeable at compile time, but has quite a low limit by default; this affects select-based solutions.
Thread-per-socket will run out of steam far earlier. I suggest putting the long calculations in separate threads (with pooling if required), but otherwise a single-threaded approach will probably scale.
To reach 500,000 you'll need a set of machines, and round-robin DNS I suspect.
TCP ports shouldn't be a problem, as long as the server doesn't connect back to the clients. I always seem to forget this and have to be reminded.
File descriptors themselves shouldn't be too much of a problem, I think, but getting them into your polling solution may be more difficult - certainly you don't want to be passing them in each time.
I think you can use an event model (epoll + a worker thread pool) to solve this problem.
First listen and accept in the main thread. When a client connects, the main thread hands the client_fd to one worker thread and adds it to that worker's epoll list; that worker thread then handles requests from the client.
The number of worker threads can be configured for the problem, and it must be no more than 5000.
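A minimal sketch of the epoll side of that design (Linux-specific). To keep the example self-contained, a pipe stands in for a client socket, and the function name is mine; a real server would register the listening socket plus every client fd and loop over the ready events instead of waiting for just one.

```cpp
#include <sys/epoll.h>
#include <unistd.h>

// Register one fd for readability, wait up to timeout_ms, then read.
// Returns the number of bytes read, or -1 on timeout/error.
int wait_and_read(int fd, char* buf, int len, int timeout_ms) {
    int ep = epoll_create1(0);
    if (ep < 0) return -1;
    epoll_event ev{};
    ev.events = EPOLLIN;          // interested in readability
    ev.data.fd = fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
    epoll_event ready{};
    int n = epoll_wait(ep, &ready, 1, timeout_ms);
    int got = (n == 1) ? static_cast<int>(read(ready.data.fd, buf, len)) : -1;
    close(ep);
    return got;
}
```

Unlike select(), epoll's cost per wakeup scales with the number of ready fds rather than the number of registered ones, which is why it handles tens of thousands of mostly-idle connections well.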