I want to know how I can configure POSIX message queues on Linux.
I know the ways: editing sysctl.conf, and in code via
mq_open(**,**,**);
Is there any other way to configure the number of messages per queue and the number of queues?
You are mixing different layers of the onion.
On the individual-queue layer, the queue attributes (mq_maxmsg and mq_msgsize) are fixed at the time of queue creation and can't be changed. mq_curmsgs makes no sense to change (unless you are looking to mangle your queue) and can only be queried, through mq_getattr. mq_flags can be changed through mq_setattr, but the only flag that can be changed toggles the blocking/non-blocking state of the queue.
As practical matter it is easy to write simple command line utilities to do most of the above and many organizations will already have them. They are usually among the first programs using queues that developers write for themselves anyway. Some systems will incorporate these little utilities into startup and shutdown scripts for their applications.
On the process layer, there are limits on message priorities (MQ_PRIO_MAX) and on the number of queues a process can have open (MQ_OPEN_MAX). On Linux, neither of these is a real concern. The max priority is like 32k - sysconf(_SC_MQ_PRIO_MAX) - and if you are using that many priorities you have some real design issues. And because mqd_t types on Linux are file descriptors, the real limiting factor on the number of open queues is the total number of file descriptors a process is limited to.
At the system level, there are limit files in /proc/sys/fs/mqueue that can be changed with appropriate permissions: (a) queues_max is the upper limit on the number of queues allowed on the system in toto, but a privileged user can still create queues once this limit has been hit; (b) msgsize_max is the maximum message size (the ceiling on mq_msgsize) for a queue created by an unprivileged process; (c) msg_max is the maximum number of messages per queue (the ceiling on mq_maxmsg) for an unprivileged process; (d) Linux also has two files, msg_default and msgsize_default, in /proc/sys/fs/mqueue whose meanings should be self-evident.
Related
Looking for a C++ library, or an easy and robust combination of libraries, that provides a durable, disk-backed queue for variable-size binary blocks.
My app produces messages that are sent out to subscribers (messages are variable-sized binary blocks). In case of a subscriber failure, restart, or networking issue, I need something like a circular buffer to queue them up until the subscriber returns. Available RAM is not enough to handle the worst-case failure scenario, so I'm looking for an easy way to offload data to disk.
In the best case: set a maximum disk space (e.g. 100G) and a file name; recover data after application restart; a push_back() / front() / pop_front()-like API; no performance drawback when the queue is small (the 99.99% case); no need for strict persistence (fsync() on every message).
In the average case: data is not preserved between restarts.
Some combo of Boost libs would be highly preferable.
I am using a class that contains a function involving TempoClock.default.sched [I'm preparing an MWE]. If I make a new instance of the class and apply the function, I obtain the following error message:
scheduler queue is full.
This message is repeated all the time. What does it mean?
Every clock has a queue to store scheduled events. The size of the queue is very large - but still limited (I think ~4096 items?). The "scheduler queue is full" error happens when this queue is full. This can happen when you legitimately have more than 4096 events scheduled on a given clock. But a common bug case is accidentally scheduling events far in the future, such that they hang around in the queue forever, eventually filling it up. It's easy to do this if you, e.g., call .sched(...), which takes a relative time value, but pass it an absolute time (which would schedule the event far, far in the future).
If you actually need to schedule more than 4096 events at a given time - I believe the Scheduler class has a queue that can be arbitrarily large. AppClock uses this scheduler, so it shouldn't have a problem with large numbers of events. However, the timing of AppClock is less accurate than SystemClock, and it isn't good for fine-grained musical events. If you need highly accurate timing, you can use multiple TempoClocks, e.g. a different one for each instrument, or for each kind of event, etc.
In my application, different modules communicate through POSIX queues. The problem is that I'm getting the above-mentioned error when the limit is hit. I have set the limit in both
sysctl fs.file-max = new_value
and
ulimit -n
but this is some hardcoded value. Is there any best practice to overcome this? I tried closing descriptors with mq_close, but all the modules in the application can use any queue at any time, so I cannot close all the descriptors.
There are two types of resource limits in Linux/UNIX: the soft limit and the hard limit. The maximum number of descriptors you can set is bounded by the hard limit. There are methods to increase the hard limit, but frankly speaking I have never tried them, and I wouldn't recommend it, for two reasons:
Opening too many descriptors concurrently will slow down the performance of your program.
It is not even required to increase the hard limit: if you close unused descriptors properly in your program, you will see for yourself that it isn't needed. Even a web server, which opens a new descriptor for every request, does not require the hard limit to be raised.
Finally, if you do need to increase the soft limit, please use the setrlimit function from your program: increasing the limit in the shell is temporary, and if you set it in a profile it will increase the limit for all programs.
In my current project, I have two levels of tasking, in a VxWorks system, a higher priority (100) task for number crunching and other work and then a lower priority (200) task for background data logging to on-board flash memory. Logging is done using the fwrite() call, to a file stored on a TFFS file system. The high priority task runs at a periodic rate and then sleeps to allow background logging to be done.
My expectation was that the background logging task would run when the high priority task sleeps and be preempted as soon as the high priority task wakes.
What appears to be happening is a significant delay in suspending the background logging task once the high priority task is ready to run again, when there is sufficient data to keep the logging task continuously occupied.
What could delay the pre-emption of a lower priority task under VxWorks 6.8 on a Power PC architecture?
You didn't quantify significant, so the following is just speculation...
You mention writing to flash. One of the issues is that writing to flash typically requires the driver to poll the status of the hardware to make sure the operation completes successfully.
It is possible that during certain operations the file system temporarily disables preemption to ensure that no corruption occurs - coupled with having to wait for the hardware to complete, this might account for the delay.
If you have access to the System Viewer tool, that would go a long way towards identifying the cause of the delay.
I second the suggestion of using the System Viewer; it'll show all the tasks involved in the TFFS stack, and you may be surprised how many layers there are. If you're making an fwrite with a large block of data, the flash access may be large (and slow, as Benoit said). You might try a bunch of smaller fwrites. I suggest doing a test to see how long fwrite() takes for various sizes; you may also see differences from test to test with the same size as you cross flash block boundaries.
I am writing to a USB disk from a lowest-priority thread, using chunked buffer writing, and still, from time to time, the system as a whole lags on this operation. If I disable only the writing to disk, everything works fine. I can't use Windows file-operation API calls, only C write. So I thought maybe there is a WinAPI function to turn USB disk write caching on/off, which I could use in conjunction with FlushBuffers or similar alternatives? The number of drives involved is undefined.
Ideally, I would like the write call never to lag; caching is OK too, if it is performed transparently.
EDIT: would _O_SEQUENTIAL flag on write only operations be of any use here?
Try to reduce I/O priority for the thread.
See this article: http://msdn.microsoft.com/en-us/library/windows/desktop/ms686277(v=vs.85).aspx
In particular use THREAD_MODE_BACKGROUND_BEGIN for your IO thread.
Warning: this doesn't work in Windows XP
The thread priority won't affect the delay that happens while writing to the media, because the writing is done in kernel mode by the file-system/disk drivers, which don't pay attention to the priority of the calling thread.
You might try the "T" flag (_O_SHORTLIVED) and flush the buffers at the end of the operation; also try decreasing the buffer size.
There are different types of USB data transfer; for data there are three:
1. Bulk Transfer,
2. Isochronous Transfer, and
3. Interrupt Transfer.
Bulk Transfers Provides:
Used to transfer large bursty data.
Error detection via CRC, with guarantee of delivery.
No guarantee of bandwidth or minimum latency.
Stream Pipe - Unidirectional
Full & high speed modes only.
Bulk transfer is good for data that does not require delivery in a guaranteed amount of time. The USB host controller gives lower priority to bulk transfers than to the other types of transfer.
Isochronous Transfers Provides:
Guaranteed access to USB bandwidth.
Bounded latency.
Stream Pipe - Unidirectional
Error detection via CRC, but no retry or guarantee of delivery.
Full & high speed modes only.
No data toggling.
Isochronous transfers occur continuously and periodically. They typically contain time sensitive information, such as an audio or video stream. If there were a delay or retry of data in an audio stream, then you would expect some erratic audio containing glitches. The beat may no longer be in sync. However if a packet or frame was dropped every now and again, it is less likely to be noticed by the listener.
Interrupt Transfers Provides:
Guaranteed Latency
Stream Pipe - Unidirectional
Error detection and next period retry.
Interrupt transfers are typically non-periodic, small device "initiated" communication requiring bounded latency. An Interrupt request is queued by the device until the host polls the USB device asking for data.
From the above, it seems that you want guaranteed latency, so you should use isochronous mode. There are libraries you can use, like libusb, or you can read more on MSDN.
To find out what is making your system hang, you first need to drill down into the Windows hang. What was Windows doing while you experienced the hang?
To find this out you can take a kernel dump. How to get and analyze a kernel dump is described here.
Depending on the findings you get there, you then need to decide whether there is anything under your control you can do about it. Since you are using a third-party library to do the writing, there is little you can do except set the I/O priority and thread priority at the thread or process level. If the library you were given links against a specific CRT, you could try to build your own customized version of it, e.g. to flush after every write, to prevent the OS from combining writes and writing the data back to disc only in big chunks.
Edit1
Your best bet would be to flush the device after every write. This would force the OS to write the currently pending data to disc without caching writes up to a certain amount.
The second-best thing would be simply to wait after each write, giving the OS the chance to write the pending changes, though small, back to disc after its usual interval.
If you are deeper into performance, you should try XPerf, which has a nice GUI and even shows you the call stack where your process hung. The Windows team and many other teams at MS use this tool to troubleshoot hang experiences. The latest edition, with many more features, comes with the Windows 8 SDK. But beware that XPerf only works on Vista and later.