C++ logger with multiple-process support

Multiple processes write to the same file simultaneously. If the file size exceeds a limit (for example 10 MB), the current file is renamed (sample.txt to sample1.txt, as in a rolling appender) and a new file is created under the original name.
My issue is that with multiple processes writing at the same time, when the size limit is reached and the file should be closed and rolled, one of the processes may still be writing to it, so the file rolling fails. Can anyone help?

One strategy that I've used also works on a distributed system across multiple machines.
If you create a library which will package log messages and then send them via TCP to a destination, then you can have as many processes as you like writing to the same logger. You'd need a server at that destination to receive the log messages and write them to one file.
Generally, inter-process communication occurs via either shared memory or networking. With networking we can go not only inter-process but also inter-machine. If we use localhost (127.0.0.1) as the destination, the packet never actually reaches the network card; most drivers are smart enough to pass the packet straight to any listening processes, which gives good performance too.
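For illustration, here is a minimal sketch of the client side, assuming a collector process is already listening on localhost (the port number and the length-prefix framing are arbitrary choices for this example, not part of any standard):

    // Hypothetical log client: each process connects to a local collector
    // and sends one length-prefixed message per log call. Port 5140 and
    // the framing are illustrative, not a standard.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <string>

    class TcpLogClient {
    public:
        bool connect_to_collector(uint16_t port = 5140) {
            fd_ = socket(AF_INET, SOCK_STREAM, 0);
            if (fd_ < 0) return false;
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // never leaves the machine
            return connect(fd_, reinterpret_cast<sockaddr*>(&addr), sizeof addr) == 0;
        }
        void log(const std::string& msg) {
            uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
            write(fd_, &len, sizeof len);        // 4-byte length prefix
            write(fd_, msg.data(), msg.size());  // message payload
        }
        ~TcpLogClient() { if (fd_ >= 0) close(fd_); }
    private:
        int fd_ = -1;
    };

Since the collector is then the only process that ever touches the log file, it can roll the file at the size limit without racing against other writers, which addresses the original problem.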

Related

IPC methods for local processes with multiple separate groups

I’m new to IPC and I’m trying to implement a secure IPC method (not related to encryption).
I’m developing a system in C++ using Visual Studio 2010 (but it will be ported to other platforms: Linux/MacOS/FreeBSD). This system has a process “A” that needs to send and receive XML to/from another process “B” on the same computer, but there will be around 14 processes like “B” (B1, B2, ..., B14) that need to send/receive XML to/from process “A”.
Process “A” will act as a proxy/bridge between the “B” processes: all data/XML that a process “B” sends goes to process “A”, and only process “A” sends data/XML to the “B” processes.
I’m looking for an IPC method to exchange this data between process “A” and “B1…B14”. Shared memory sounds good for this, but any process can read/write the shared address space, so it isn’t secure (I know it is possible to set access permissions).
I’m trying to find an IPC method that:
1. It must be a local-only method; I need to avoid remote connections.
2. For security reasons, when a process opens a “channel for communication” to send/receive data, no other process may use the same “channel” (unlike shared memory or a Boost message queue, where any process can write to the channel, or a named pipe, where another instance can be opened with the same name); I want to keep out fake/malicious processes. TCP sounds good for this, because two processes cannot listen on the same port (but it isn't local-only).
3. Process “A” will be a service, and some “B” processes will run as services too while others will run as an unprivileged user, so this must not be an administrator-only feature.
4. This project will be closed-source, so I can’t use code/libraries under the GPL license.
5. If possible, cross-platform (Windows/Linux/MacOS/FreeBSD).
Can someone suggest a suitable IPC technique, either built into the OS or requiring a third-party library?
Short answer:
Windows Pipes for Win32.
Anonymous local sockets for Linux (and family).
Long answer:
On the Windows platform, the following are the commonly used alternatives:
Memory mapped files
Named Pipes
Network sockets (mostly IP)
The unfortunate fact is that none of the above is local-only by nature. Files are shared through storage access, pipes are reachable through common RPC/LPC routing, and IP is subject to routing/forwarding configuration (even when using loopback).
I personally recommend using pipes on Win32. They act more or less like local sockets on Linux (with some differences, though).
On Linux platform:
Shared memory
Pipes
Local sockets (including anonymous ones).
Pipes and local sockets are secure, and in different scenarios each has its own benefits. Since you have a multiple-client/single-server scenario, I would favor local (AF_LOCAL) socket programming. You can use either named sockets (with file-based access control) or anonymous ones. Both options are pretty secure (unless an attacker gains local access).
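As a rough sketch of the named variant for process “A” (the socket path, permission bits, and echo behaviour are illustrative, and error handling is omitted):

    // Hypothetical AF_LOCAL (AF_UNIX) server for process "A". The path
    // /tmp/procA.sock and mode 0600 are illustrative; permissions on the
    // socket node control which users may connect. Error checks omitted.
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstring>

    int main() {
        int srv = socket(AF_LOCAL, SOCK_STREAM, 0);
        sockaddr_un addr{};
        addr.sun_family = AF_LOCAL;
        std::strncpy(addr.sun_path, "/tmp/procA.sock", sizeof(addr.sun_path) - 1);
        unlink(addr.sun_path);                       // remove any stale socket node
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        chmod(addr.sun_path, 0600);                  // file-based access control
        listen(srv, 14);                             // up to 14 "B" clients queued
        for (;;) {
            int client = accept(srv, nullptr, nullptr);
            if (client < 0) break;
            char buf[4096];
            ssize_t n = read(client, buf, sizeof buf);  // read one XML message
            if (n > 0) write(client, buf, n);           // echo back as a demo
            close(client);
        }
        close(srv);
    }
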
Links
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365780(v=vs.85).aspx
http://manpages.ubuntu.com/manpages/lucid/man7/unix.7.html

Is network speed/performance affected by libpcap based application?

I am using the libpcap library to monitor HTTP requests and responses. I am also storing the 10 most recent GET requests in memory, based on string searches, along with a few responses. Suppose the monitor is on and I am downloading a file: will it affect my download speed, or is a copy of each packet passed to libpcap without affecting the traffic?
Previously, I was doing the same thing using iptables + libnetfilter_queue. My libnetfilter_queue-based module was a bit slow at analysing the packets, as many string searches and related operations were done on every outgoing packet and a few incoming ones. It affected my download speed: for example, when downloading a file with a download accelerator, my download speeds were lower while the module was running than when it wasn't. This is possibly because all the packets were passed to my netfilter_queue module before reaching the other user applications.
Will I face the same problem with libpcap? I heard it uses a zero-copy mechanism.
A copy of the packet is passed to the PF_PACKET socket (I'm inferring from "libnetfilter" that you're using Linux), so it's not processed in the same code path that processes it as regular network input.
Newer versions of libpcap (1.0 and later) pass those packets to userland through shared memory, which is the "zero-copy" mechanism.
However, there's still processing being done for each packet, so there will be some slowdown unless your machine has idle processor cores and spare memory bandwidth (and disk bandwidth if your program is writing significant amounts of data to the file system). It won't directly increase packet processing latency, as it's not in the code path the way your netfilter-based mechanism was, so it probably won't impact networking performance as much.
So yes, a copy of each packet is passed to libpcap without affecting the traffic.
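For reference, a minimal capture loop looks roughly like this (the device name and filter expression are illustrative). Any string searching you do inside the callback costs CPU, but it does not sit on the packet's delivery path:

    // Minimal libpcap capture sketch. "eth0" and the "tcp port 80" filter
    // are illustrative; error handling is abbreviated.
    #include <pcap/pcap.h>
    #include <cstdio>

    static void on_packet(u_char*, const pcap_pkthdr* h, const u_char*) {
        std::printf("captured %u bytes\n", h->caplen);  // analysis would go here
    }

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t* p = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
        if (!p) { std::fprintf(stderr, "%s\n", errbuf); return 1; }
        bpf_program prog;
        pcap_compile(p, &prog, "tcp port 80", 1, PCAP_NETMASK_UNKNOWN);
        pcap_setfilter(p, &prog);   // kernel-side filter: fewer packets copied up
        pcap_loop(p, -1, on_packet, nullptr);
        pcap_close(p);
    }
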

Can I write Ethernet based network programs in C++?

I would like to write a program and run it on two machines, and send some data from one machine to another in an Ethernet frame.
Typically, application data sits at layer 7 of the OSI model. Is there anything, such as a kernel or API restriction, that would stop me from writing a program in which I specify a destination MAC address and have some data sent to that MAC as the Ethernet payload? And then writing a program that listens for incoming frames, grabs the frames from a specified source MAC address, and extracts the payload of data from each frame?
(So I don't want any other overhead like IP or TCP/UDP headers, I don't want to go higher than layer 2).
Can this be done in C++, or must all communication happen at the IP layer? And can this be done on Ubuntu? Extra love for pointing to or providing examples! :D
My problem is, obviously, that I'm new to network programming in C++. As far as I know, if I want to communicate across a network I have to use a socket() call or similar, which works at the IP layer. So can I write a C++ program that works at OSI layer 2? Are there APIs for this? Does the Linux kernel even allow it?
Since you already mentioned sockets, you probably just want to use a raw socket. Maybe this page with C example code is of some help.
In case you are looking for an idea for a program that uses only Ethernet while still being useful:
Wake-on-LAN in its original form is quite simple. Note, however, that most current implementations actually send UDP packets (exploiting the fact that the receiver does not parse packet headers etc., but just looks for a specific string in the packet's payload).
Also, the use of raw sockets is usually restricted to privileged users. You might need to either
call your program as root
or have it owned by root and setuid bit set
or set the capability for creating raw socket using setcap CAP_NET_RAW+ep /path/to/your/program-file
The last option gives more fine-grained privileges (just raw sockets, not write access to your whole file system, etc.) than the other two. It is still less widely known, however, since it is "only" supported from kernel 2.6.24 on (which came with Ubuntu 8.04).
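Here is a rough sketch of the sending side using a Linux packet socket (the interface name, destination MAC, and EtherType are illustrative; 0x88B5 is one of the EtherTypes reserved for local experimentation). With SOCK_DGRAM the kernel builds the Ethernet header for you from the sockaddr_ll:

    // Hypothetical layer-2 sender using a Linux packet socket. SOCK_DGRAM
    // means the kernel constructs the Ethernet header, so we only supply
    // the payload. Needs CAP_NET_RAW (see the privilege options above).
    #include <arpa/inet.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstring>

    int main() {
        int fd = socket(AF_PACKET, SOCK_DGRAM, htons(0x88B5));
        if (fd < 0) return 1;

        sockaddr_ll dst{};
        dst.sll_family   = AF_PACKET;
        dst.sll_protocol = htons(0x88B5);             // experimental EtherType
        dst.sll_ifindex  = if_nametoindex("eth0");    // illustrative interface
        dst.sll_halen    = 6;
        const uint8_t mac[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}; // illustrative
        std::memcpy(dst.sll_addr, mac, 6);

        const char payload[] = "hello over raw ethernet";
        sendto(fd, payload, sizeof payload, 0,
               reinterpret_cast<sockaddr*>(&dst), sizeof dst);
        close(fd);
    }

The receiving side creates the same kind of socket and calls recvfrom(); the payload arrives with no IP/TCP/UDP headers at all.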
Yes. Actually, Linux has a very nice feature that makes it easy to deal with layer-2 packets: you can use a TAP device, which allows your userspace program to read/write Ethernet traffic through the kernel.
http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/tuntap.txt
http://en.wikipedia.org/wiki/TUN/TAP
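A minimal sketch of reading frames from a TAP device (the device name is illustrative, and creating the device requires root or CAP_NET_ADMIN):

    // Hypothetical TAP device reader (Linux). Each read() returns one
    // complete Ethernet frame. Error handling is abbreviated.
    #include <fcntl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0) return 1;
        ifreq ifr{};
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;               // layer 2, no extra header
        std::strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);  // illustrative name
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) return 1;
        char frame[2048];
        for (;;) {
            ssize_t n = read(fd, frame, sizeof frame);     // one Ethernet frame
            if (n <= 0) break;
            std::printf("frame of %zd bytes\n", n);
        }
        close(fd);
    }
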

Starting programs from C++, exchanging info with pipes and then shared memory

I am currently developing a modular framework using shared memory in C & C++.
The goal is to have independent programs in both C and C++, talk to each other through shared memory.
E.g. one program is responsible for reading a GPS and another responsible for processing the data from several sensors.
A master program will start all the slave programs (currently I am using fp = popen("./slave1/slave1", "r"); to do this) and then create shared memory segments that each slave can connect to.
The thought behind this is that if a slave dies, it can be revived by the master and reconnect to the same shared memory segment.
Slaves can also be exchanged during runtime (e.g. switch one GPS with another).
The problem is that I spawn the slave via popen and pass the shared memory ID to it; via the pipe, the slave transmits back the size it needs.
After this is done, I want to reroute the slave's pipe to the terminal so it displays debug messages there rather than passing them through the master.
Suggestions are greatly appreciated, as well as other solutions to the issue.
The key is to have some form of communication prior to setting up the shared memory.
I suggest using another means of communication; named pipes are the way to go, I think. Rerouting standard out/err will be tricky at best.
I suggest using Boost.Interprocess to handle the IPC. And pay attention to synchronization :)
my2c
You may want to look into the SCM_RIGHTS transfer mode of unix domain sockets - this lets you pass a file descriptor across a local socket. This can be used to pass stderr and the like to your slave processes.
You can also pass a shared memory segment as a file descriptor (at least on Linux). Create a file with a random name in /dev/shm and unlink it immediately. Now ftruncate() it to the desired size and mmap() it with MAP_SHARED. You now have a shared memory segment tied to a file descriptor. One of the nice things about this approach is that the shared memory segment is automatically destroyed once all processes holding it terminate.
Putting this together, you can pass your slaves one end of a unix domain socketpair(). Just leave the fd open over exec, and pass the fd number on the command line. Pass whatever configuration information you want as normal data over the socket pair, then hand over a file descriptor pointing to your shared memory segment.
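Here is a sketch of the descriptor-passing step (error handling omitted); the receiver mirrors it with recvmsg() and pulls the new descriptor out of CMSG_DATA():

    // Send a file descriptor (e.g. the shmem fd) across a unix domain
    // socket as SCM_RIGHTS ancillary data. Error handling omitted.
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <cstring>

    void send_fd(int sock, int fd_to_send) {
        char dummy = 'x';                       // must send at least one byte
        iovec iov{&dummy, 1};
        char ctrl[CMSG_SPACE(sizeof(int))] = {};
        msghdr msg{};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof ctrl;
        cmsghdr* cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;             // transfer the descriptor itself
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cm), &fd_to_send, sizeof(int));
        sendmsg(sock, &msg, 0);
    }
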
Pipes are not reroutable -- once created, they go where they go. What you need to do is have the slave close the pipe when it's done with it, and then reopen its stdout elsewhere. If you always want output to go to the terminal, you can use freopen("/dev/tty", "w", stdout), but then it will always go to the terminal -- you can't redirect it anywhere else.
To address the specific issue, send debug messages to stderr, rather than stdout.

Options for inter-service one-way communication

I'm searching for different options for implementing communication between a service and other services/applications.
What I would like to do:
I have a service that is constantly running, polling a device connected to a serial port. At certain points, this service should send a message to interested clients containing data retrieved from the device. Data is uncomplicated, most likely just a single string.
Ideally, the clients would not have to subscribe to receive these messages, which leads me to some sort of event 'broadcast' setup (similar to Windows events). The message sending process should not block, and does not need a response from any clients (or that there even are any clients for that matter).
I've been reading about IPC (COM in particular) and Windows events, but I have yet to come across something that really fits what I want to do.
So is this possible? If so, what technologies should I be using? If not, what are some viable communication alternatives?
Here's the particulars of the setup:
Windows 2000/XP environments
'Server' service is a windows service, using VC++2005
Clients would vary, but always be in the windows environment (usual clients would be VC++6 windows services, VB6 applications)
Any help would be appreciated!
Windows supports broadcasting messages, check here. You can SendMessage to HWND_BROADCAST from the service, and receive it in each client.
There are a number of ways to do a broadcast system, but you'll have to either give up reliability (ie, some messages must be lost) or use a proper subscription system.
If you're willing to give up reliability, you can create a shared memory segment and a named manual-reset event object. When a new message arrives, write it to the shared memory segment, signal the event object, then close the event object and create a new one with a different name (the current name should be stored somewhere in the shmem segment). Clients open the shmem segment, find the current event object, wait for it to be signaled, then read off the message and the name of the next event object.
In this option, you must be careful to properly handle the case of a client reading at the same time as the shmem segment is updated. One way to do this is to have two sequence-number fields in the shmem segment: one updated before the new message is written, one after. Clients read the second sequence number prior to reading the message, then re-read both sequence numbers afterwards and check that all the values are equal (discarding the message and retrying after a delay if they are not). Be sure to place memory barriers around accesses to these sequence numbers to ensure the compiler does not reorder them!
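A sketch of that check, with illustrative field names (std::atomic supplies the barriers; note that a production version would also have to treat the read of the message bytes themselves more carefully, since it formally races with the writer):

    // Double-sequence-number snapshot check. Layout and names are
    // illustrative; in real code this struct would live in the shared
    // memory segment.
    #include <atomic>
    #include <cstring>

    struct Segment {
        std::atomic<unsigned> seq_before;  // bumped before the message is written
        std::atomic<unsigned> seq_after;   // bumped after the message is written
        char message[512];
    };

    void publish(Segment& s, const char* msg) {
        s.seq_before.fetch_add(1);                       // writer: announce update
        std::strncpy(s.message, msg, sizeof s.message - 1);
        s.seq_after.fetch_add(1);                        // writer: update complete
    }

    bool try_read(const Segment& s, char* out, std::size_t len) {
        unsigned before = s.seq_after.load();            // 2nd counter, read first
        std::memcpy(out, s.message, len);                // may race with a writer
        unsigned a = s.seq_before.load();                // re-read both counters
        unsigned b = s.seq_after.load();
        return before == a && a == b;                    // equal => clean snapshot
    }
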
Of course, this is all a bit hairy. Named pipes are a lot simpler, but a subscription (of a sort) is required. The server calls CreateNamedPipe, then accepts connections with ConnectNamedPipe. Clients use CreateFile to connect to the server's pipe. The server then just loops, sending data (using WriteFile) to all of its clients. Note that you will need to create an additional instance of the pipe using CreateNamedPipe each time you accept a connection. An example of a named pipe server can be found here: http://msdn.microsoft.com/en-us/library/aa365588(v=vs.85).aspx
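As a skeleton (the pipe name and buffer sizes are illustrative; a real server would use overlapped I/O or a dedicated accept thread, so that waiting for new clients doesn't block the broadcast):

    // Skeleton of a Win32 named-pipe broadcast server. The pipe name is
    // illustrative. Real code would use overlapped I/O; this blocking
    // loop only broadcasts after each new connection, for brevity.
    #include <windows.h>
    #include <string>
    #include <vector>

    int main() {
        std::vector<HANDLE> clients;
        for (;;) {
            HANDLE pipe = CreateNamedPipeA(
                "\\\\.\\pipe\\SerialDeviceData",      // illustrative name
                PIPE_ACCESS_OUTBOUND,                 // one-way: server -> clients
                PIPE_TYPE_MESSAGE | PIPE_WAIT,
                PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
            if (pipe == INVALID_HANDLE_VALUE) break;
            if (ConnectNamedPipe(pipe, nullptr) ||
                GetLastError() == ERROR_PIPE_CONNECTED) {
                clients.push_back(pipe);              // new subscriber
            } else {
                CloseHandle(pipe);
            }
            // On new data from the serial poller, broadcast to everyone:
            std::string msg = "reading: 42";          // placeholder payload
            for (HANDLE h : clients) {
                DWORD written = 0;
                WriteFile(h, msg.data(), (DWORD)msg.size(), &written, nullptr);
            }
        }
    }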