I'm trying to monitor the network sessions on a server with event-driven programming (not by polling /proc/net/tcp or /proc/net/udp).
I was able to find this article, but it only provides a one-time look at the current state, not an event on each state change (LISTEN, ESTABLISHED, ...).
Is it possible to use the approach from this article, which monitors process changes, but for network connections?
If not, is there any other API that I can use to achieve this without polling /proc/net/* at an interval?
I need to write a C++ application that reads the Windows firewall status and then continuously keeps an eye on whether an admin/some user
changes the firewall status (let's say the firewall was disabled when my program started, and some time later an admin enabled it).
To implement this, I have created a thread that periodically (every 10 seconds) polls the code that checks the Windows firewall status, but this doesn't look like an efficient solution to me, as it requires continuous polling.
Is there a way to get an event in my program automatically if the firewall status changes (similar to FindFirstChangeNotification, with which I can get a notification on any change in a directory)? This would avoid continuous polling and make the program more efficient, I think.
Any help is appreciated.
I know there is Windows ETW (Event Tracing for Windows), which anti-viruses use and which has all the info you need. It is a big system log where you subscribe to log/event providers; pretty much everything that happens in the system gets reported there via an event that you can listen/wait for. I don't know of links to more useful pages with a list of loggers connected to ETW, so here is the more general page: https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-. You need to find out how to use the C++ ETW API and the name/ID of the firewall event provider with its list of event types, then use the API to subscribe to this provider and set up a callback for when an event that interests you (here, a change of firewall status) occurs, and that is it.
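As a hedged illustration of that flow (a sketch, not a drop-in solution): the code below starts a real-time ETW session, enables a provider on it, and pumps events into a callback. The provider GUID is a placeholder; you would look up the real firewall provider (for example with `logman query providers`) and substitute its GUID and the event IDs you care about. Starting a trace session requires administrator rights, and error handling is omitted.

```cpp
#include <windows.h>
#include <evntrace.h>   // StartTraceW, EnableTraceEx2, ControlTraceW
#include <evntcons.h>   // EVENT_RECORD, PROCESS_TRACE_MODE_EVENT_RECORD
#include <cstdio>
#include <cstdlib>
#pragma comment(lib, "advapi32.lib")

// PLACEHOLDER: substitute the firewall provider's real GUID here.
static const GUID kProvider =
    { 0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };

static void WINAPI OnEvent(PEVENT_RECORD rec) {
    // Filter on the provider's documented event IDs (e.g. the
    // "firewall enabled/disabled" events) and react here.
    printf("event id %u\n", rec->EventHeader.EventDescriptor.Id);
}

int main() {
    // 1) Start a real-time trace session; the properties buffer must
    //    leave room for the session name after the struct.
    wchar_t name[] = L"FirewallWatch";
    ULONG size = sizeof(EVENT_TRACE_PROPERTIES) + sizeof(name);
    auto* props = (EVENT_TRACE_PROPERTIES*)calloc(1, size);
    props->Wnode.BufferSize = size;
    props->Wnode.Flags = WNODE_FLAG_TRACED_GUID;
    props->Wnode.ClientContext = 1;                    // QPC timestamps
    props->LogFileMode = EVENT_TRACE_REAL_TIME_MODE;
    props->LoggerNameOffset = sizeof(EVENT_TRACE_PROPERTIES);

    TRACEHANDLE session = 0;
    StartTraceW(&session, name, props);

    // 2) Subscribe the session to the provider.
    EnableTraceEx2(session, &kProvider, EVENT_CONTROL_CODE_ENABLE_PROVIDER,
                   TRACE_LEVEL_INFORMATION, 0, 0, 0, nullptr);

    // 3) Open a consumer on the session and block, pumping events
    //    into OnEvent until the session is stopped.
    EVENT_TRACE_LOGFILEW log = {};
    log.LoggerName = name;
    log.ProcessTraceMode = PROCESS_TRACE_MODE_REAL_TIME |
                           PROCESS_TRACE_MODE_EVENT_RECORD;
    log.EventRecordCallback = OnEvent;
    TRACEHANDLE consumer = OpenTraceW(&log);
    ProcessTrace(&consumer, 1, nullptr, nullptr);

    ControlTraceW(session, name, props, EVENT_TRACE_CONTROL_STOP);
    free(props);
    return 0;
}
```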
I'm a new user of ActiveMQ, and I'm having some trouble approaching this technology.
I have the following situation:
I have software, running on an embedded (offline) ARM device, that archives a set of videos on a removable hard disk at run time.
Sometimes (4-5 events a day), I have to associate an alarm event with those videos and queue the alarm on a persistent queue.
Once a month we have to extract the hard disk and connect it to another embedded, online ARM device, which should notify an ActiveMQ server about the alarms generated by the offline ARM device.
And now my question: how can I store the persistent queue on the hard disk, so that the events generated by the offline ARM device will be available to the online ARM system (the only "connection" between the online and offline embedded devices is the hard disk)?
Please note that I cannot change the way I transmit messages to the online server, since it is a system not developed by my company.
Best regards
Giovanni
It sounds like you want a "store-and-forward" messaging pattern. You could configure the "offline" ActiveMQ broker to attempt to connect to the "online" ActiveMQ broker. The network connector will retry the connection at configurable intervals, and once the remote broker is reachable it will begin to forward messages automatically.
The slight downside is that the broker will keep attempting to connect to the remote broker (even while offline), so you'll need to manage log rotation or logging levels to accommodate the repeated connection failures.
Look for the static:// network connector URI.
Network of brokers
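For illustration, the networkConnector element in the offline broker's activemq.xml might look something like this (the hostname, port, and paths are placeholders for your setup):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="offline-broker">
  <networkConnectors>
    <!-- retries the connection until the online broker is reachable,
         then forwards the queued messages automatically -->
    <networkConnector uri="static:(tcp://online-broker-host:61616)"/>
  </networkConnectors>
  <persistenceAdapter>
    <!-- put the message store on the removable hard disk -->
    <kahaDB directory="/mnt/removable-disk/activemq-data"/>
  </persistenceAdapter>
</broker>
```

Pointing the persistence adapter's directory at the removable disk is what lets the queued alarms travel with the disk from the offline device to the online one.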
What is the simplest possible approach we can use to get notified of power status changes (for instance, when the computer goes to sleep, hibernates, etc.) on Linux-based systems?
I will need this mainly for persisting some state before sleeping, and of course, restoring that state once the computer wakes up.
You can get all of these events by just configuring your acpid to send them via its socket, for example.
There's an official specification document that describes all possible events and circumstances, though it is an extensive read.
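For instance, a minimal C++ sketch that reads events from acpid's UNIX socket (assuming the default path /var/run/acpid.socket; some distributions place it elsewhere):

```cpp
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/run/acpid.socket", sizeof(addr.sun_path) - 1);
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    // acpid writes one line per event,
    // e.g. "button/power PBTN 00000080 00000001".
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        printf("ACPI event: %s", buf);  // persist state here before sleep
    }
    close(fd);
    return 0;
}
```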
I'm writing a TCP server for an online turn-based game. I've already written a prototype using PHP sockets, but I would like to move to C++. I've been looking at the popular network libraries (ASIO, ACE, POCO, libevent), but I'm currently unclear on which one would best suit my needs:
1) Connections are persistent (on the order of minutes), and the server must be able to handle 100+ simultaneous connections.
2) Connections must be able to maintain state information (user login info). [my php prototype currently requires each client request to contain the login info]
3) Optionally and preferably multi-threaded, but a single process. Prefer not to have 1 thread per connection, but a fixed number of threads working on all open connections.
I'm leaning towards POCO's TCPServer or Reactor frameworks, but I'm not exactly sure whether they meet my requirements. I think the Reactor is single-threaded, and the TCPServer enforces 1:1 threading per connection. Am I correct?
In either case, I'm not exactly sure how to do the most important task: associating login info with a specific connection, with connections coming and going at random.
Boost.Asio should meet your requirements. The reactor queue can be serviced by multiple threads. Using asynchronous methods will enable your design of a fixed number of threads servicing all connections.
The tutorials and examples are probably the best place to start if you are unfamiliar with the library.
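A hedged sketch of the usual pattern, using modern Boost.Asio naming (io_context was spelled io_service in older releases): one reactor queue serviced by a fixed pool of threads, with per-connection state such as the login kept in a session object whose lifetime is tied to the connection. `Session`, `login_`, and the port are illustrative, not library names:

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <string>
#include <thread>
#include <vector>

using boost::asio::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket s) : socket_(std::move(s)) {}
    void start() { read(); }
private:
    void read() {
        auto self = shared_from_this();  // keeps state alive while connected
        socket_.async_read_some(boost::asio::buffer(buf_),
            [self](boost::system::error_code ec, std::size_t n) {
                if (ec) return;          // connection gone, Session freed
                // ... parse request; self->login_ persists across requests ...
                self->read();
            });
    }
    tcp::socket socket_;
    std::string login_;                  // state tied to this connection
    char buf_[1024];
};

void accept_loop(tcp::acceptor& acc) {
    acc.async_accept([&acc](boost::system::error_code ec, tcp::socket s) {
        if (!ec) std::make_shared<Session>(std::move(s))->start();
        accept_loop(acc);
    });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acc(io, tcp::endpoint(tcp::v4(), 9000));
    accept_loop(acc);

    std::vector<std::thread> pool;       // fixed worker pool, not 1/connection
    for (unsigned i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool) t.join();
}
```

Because each session runs a single chain of asynchronous reads, its handlers never execute concurrently; if a session later issues concurrent reads and writes, wrap its handlers in a strand (boost::asio::make_strand) to serialize them across the pool.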
You might also take a look at MUSCLE, a multi-user networking library and server I wrote with this sort of application in mind. It's BSD-licensed, handles hundreds of users, and includes a server-side database mechanism for storing and sharing any information you want the clients to know about each other. The server is single-threaded by default, but I haven't found that to be a problem in practice (and it's possible to extend the server to be multithreaded if that turns out to be necessary).
I would like to know the (TCP-based) server architecture to support a large number of clients (at least 10K) in order to implement a FIX server. My points are:
How do we design it?
How do we listen on the open port? Using select, poll, or some other function?
How do we process the clients' responses? At this scale we cannot create one thread per client.
Should the processing of responses be in a different executable, sharing the requests and responses with the server executable through IPC?
There is much more to it. I would appreciate it if anyone could explain it or provide a link.
Thanks
An excellent resource for information on this topic is The C10K problem. Although the dimensions there seem a little old, the techniques are still applicable today.
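To make the core technique from that page concrete, here is a minimal sketch of a Linux epoll readiness loop, where one thread multiplexes every socket instead of dedicating a thread per client (error handling trimmed; the port is arbitrary):

```cpp
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9000);
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);    // blocks until activity
        for (int i = 0; i < n; ++i) {
            if (ready[i].data.fd == listener) {   // new client
                int client = accept(listener, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {                              // client data ready
                char buf[4096];
                ssize_t r = read(ready[i].data.fd, buf, sizeof(buf));
                if (r <= 0) close(ready[i].data.fd);  // disconnect
                // ... otherwise hand buf off to a worker queue ...
            }
        }
    }
}
```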
The architecture depends on what you want to do with the clients' incoming data. My guess is that for every incoming message you would perform some computations and probably also return a response.
In that case I would create one main listener thread that receives all the incoming messages (actually, if your hardware has more than one physical network device, I would use a listener thread per device and make sure each one is listening on a specific device).
Get the number of CPUs that you have on your machine, create a worker thread for each CPU, and bind each thread to one CPU (maybe the number of worker threads should be num_of_cpu - 1, to leave an available CPU for the listener and dispatcher).
Each worker thread has a queue and a semaphore; the main listener thread just pushes the incoming data onto those queues. There are many ways to perform load balancing (more on that later).
Each worker thread just works on the requests given to it and puts the responses on another queue that is read by the dispatcher.
The dispatcher: there are two options here, either use a dedicated thread for the dispatcher (or a thread per network device, as for the listeners), or have the dispatcher actually be the same thread as the listener.
There is some advantage to putting them both on the same thread, since that makes it easier to detect lost socket connections and to use the same fds for both reading and writing without thread synchronization. However, it could be that using two different threads would give better performance; it needs to be tested.
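A minimal sketch of one per-worker queue in that design, with a std::mutex plus std::condition_variable standing in for the queue-and-semaphore pairing (the names and the std::string payload are illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct WorkQueue {
    std::queue<std::string> items;   // incoming requests for this worker
    std::mutex m;
    std::condition_variable cv;      // plays the role of the semaphore

    void push(std::string req) {     // called by the listener thread
        { std::lock_guard<std::mutex> lk(m); items.push(std::move(req)); }
        cv.notify_one();             // wake the worker
    }
    std::string pop() {              // called by the worker thread
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !items.empty(); });
        std::string req = std::move(items.front());
        items.pop();
        return req;
    }
};

void worker(WorkQueue& q) {
    for (;;) {
        std::string req = q.pop();   // blocks until the listener pushes
        // ... process request, put response on the dispatcher's queue ...
    }
}
```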
Note about load balancing:
This is a topic of its own.
The simplest thing is to use one queue for all worker threads, but the problem is that they have to take a lock in order to pop items, and the locking can hurt performance (though you get the most balanced load).
Another quite simple approach would be to have a private queue for every worker and perform round-robin insertion. After every X cycles, check the sizes of all the queues; if some queues are much larger than the others, leave them out for the next X cycles and then recheck them. This is not the best approach, but it is a simple one to implement and gives some load balancing while requiring no locking.
By the way, there is a way to implement a queue between two threads without blocking, but that is also another topic.
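For completeness, a hedged sketch of that idea: a fixed-capacity single-producer/single-consumer ring buffer, where only the producing thread writes `head` and only the consuming thread writes `tail`, so neither side ever blocks:

```cpp
#include <atomic>
#include <cstddef>

template <typename T, size_t N>
class SpscQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(const T& v) {   // producer thread only
        size_t h = head_.load(std::memory_order_relaxed);
        if (h - tail_.load(std::memory_order_acquire) == N) return false; // full
        buf_[h & (N - 1)] = v;
        head_.store(h + 1, std::memory_order_release);  // publish the slot
        return true;
    }
    bool pop(T& out) {        // consumer thread only
        size_t t = tail_.load(std::memory_order_relaxed);
        if (t == head_.load(std::memory_order_acquire)) return false;     // empty
        out = buf_[t & (N - 1)];
        tail_.store(t + 1, std::memory_order_release);  // release the slot
        return true;
    }
private:
    T buf_[N];
    std::atomic<size_t> head_{0};   // written by the producer only
    std::atomic<size_t> tail_{0};   // written by the consumer only
};
```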
I hope it helps,
Guy
If the client and server are on a secure network, then the security aspect can be minimal, to the extent that the transfers are encrypted. If the clients and the server are not on a secure network, you first want the server and client to authenticate each other and then initiate encrypted data transfer; for the data transfer itself, server-side authentication should suffice. At the end of this authentication, use the session key to generate an encrypted (symmetric) data stream. Consider using TFTP; it is simple to implement and scales reasonably well.