Limitation in QModbusTcpClient's data size - c++

The story behind: Using a QModbusTcpClient I am trying to read data from a device connected to a Modbus/TCP network. For that purpose I have written a Windows program (tested on 7 and 10) in Qt C++ (Qt version 5.7.0) which essentially calls QModbusClient::sendReadRequest with QModbusDataUnit::QModbusDataUnit(RegisterType type, int address, quint16 size) as a parameter, where type is HoldingRegisters, address equals 1000 (it could be another address; it is not important for this particular issue) and size is the number of registers to read from the device.
The issue: Everything works well when size is less than or equal to 63 registers. Every attempt to go beyond this value results in an error, which depends on the device I am testing the program with, but generally says invalid request.
Tests:
I have tested this with several real devices and with a Modbus/TCP simulator, obtaining the same results, i.e. size <= 63 -> okay; size > 63 -> error.
Modpoll, on the other hand, allows me to read a data chunk of more than 63 registers from the same devices and simulator.
Some research: Here it is stated that there is indeed a limitation, but it is 256 bytes, which equals 128 16-bit registers; in other words, way above the limit of my read attempts.
My suspicion: It appears that QModbusTcpClient does not allow reading more than 63 registers.
Question: Has anyone experienced such an issue using QModbusTcpClient, and is there a way to overcome this limitation, apart from reading the data in two passes?

Well, the solution which worked in my case is to take the matter into my own hands and write my own class to communicate with the Modbus devices. The class inherits from QObject, so the signal-slot system is still at my disposal, but the actual functionality is built around winsock2.h. Here is a sample program which does the job for what I need. Other useful sources I have stumbled upon are this book, the example program from the Winsock 2 reference and of course the Modbus specification. It turned out that it is not so difficult, and with a little help from the sources I've mentioned I was able to solve the problem I had.
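For illustration, here is a minimal sketch of that approach: a raw Modbus/TCP "Read Holding Registers" (function 0x03) request built by hand and sent over Winsock. The device IP, unit id, start address and register count are placeholders, and error handling plus the QObject/signal-slot wrapping of my actual class are omitted.

#include <winsock2.h>
#include <cstdint>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

// Build the MBAP header (7 bytes) plus the Read Holding Registers PDU.
std::vector<uint8_t> buildReadRequest(uint16_t txId, uint8_t unitId,
                                      uint16_t startAddr, uint16_t count)
{
    return {
        uint8_t(txId >> 8), uint8_t(txId),        // transaction id
        0x00, 0x00,                               // protocol id (always 0 for Modbus)
        0x00, 0x06,                               // bytes that follow: unit id + 5-byte PDU
        unitId,
        0x03,                                     // function: Read Holding Registers
        uint8_t(startAddr >> 8), uint8_t(startAddr),
        uint8_t(count >> 8), uint8_t(count)       // nothing here caps the count at 63
    };
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(502);                        // standard Modbus/TCP port
    addr.sin_addr.s_addr = inet_addr("192.168.0.10");  // placeholder device address
    connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    auto req = buildReadRequest(1, 1, 1000, 100);      // 100 registers starting at 1000
    send(s, reinterpret_cast<const char*>(req.data()), int(req.size()), 0);

    char resp[512];
    int n = recv(s, resp, sizeof(resp), 0);            // MBAP + 0x03 + byte count + data
    // ... parse the n response bytes here ...

    closesocket(s);
    WSACleanup();
}

The register count in the request is a plain 16-bit field, so the frame itself imposes no 63-register ceiling; the Modbus specification caps function 0x03 at 125 registers per request.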

Related

Sending 12 Bit Number in 8 Bits

This question is related to what is expanded on here: User Space to Kernel Communication Without Call Back Function and here: Correct Approach To Turn Closed Source Library Into Kernel HWMon Driver Module. Basically I have a kernel module function that invokes a user space program, which interfaces with a third-party library to read sensor data. That sensor data needs to be sent back to the same kernel function that invoked the user space program. All methods of communicating with the kernel that I've read about involve callback functions, which means the data doesn't end up in the calling function, so the only solution seems to be to use the user space program's return value to transfer the data. The only sensor data that doesn't seem to fit inside 8 bits is fan RPM values. 12 bits would give a max RPM value of 4096, although 13 bits would be better to give a larger range. This somehow needs to be transmitted via the 8-bit return value.
Obviously I'm gonna lose some information. My first thought is to round the number to the nearest multiple of 16, basically losing the least significant 4 bits. However I'm wondering if there might be a better approach.
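For what it's worth, here is a tiny sketch of that rounding idea (the function names are mine): drop the low 4 bits, rounding to the nearest multiple of 16, and clamp so the result still fits in the 8-bit return value.

#include <cstdint>

// Encode a 12-bit RPM reading (0..4095) into 8 bits; resolution becomes 16 RPM.
uint8_t encode_rpm(uint16_t rpm)
{
    uint16_t rounded = (rpm + 8) >> 4;               // +8 rounds to nearest, not down
    return rounded > 255 ? 255 : uint8_t(rounded);   // clamp the very top of the range
}

// Back in the kernel module: recover an approximate RPM from the exit status.
uint16_t decode_rpm(uint8_t code)
{
    return uint16_t(code) << 4;
}

Rounding rather than truncating halves the worst-case error (8 RPM instead of 15), at the cost of clamping the last few values near 4095.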

Making a program portable between machines that have different number of bits in a "machine byte"

We are all fans of portable C/C++ programs.
We know that sizeof(char) or sizeof(unsigned char) is always 1 "byte". But that 1 "byte" doesn't mean a byte with 8 bits. It just means a "machine byte", and the number of bits in it can differ from machine to machine. See this question.
Suppose you write out the ASCII letter 'A' into a file foo.txt. On any normal machine these days, which has an 8-bit machine byte, these bits would get written out:
01000001
But if you were to run the same code on a machine with a 9-bit machine byte, I suppose these bits would get written out:
001000001
More to the point, the latter machine could write out these 9 bits as one machine byte:
100000000
But if we were to read this data on the former machine, we wouldn't be able to do it properly, since there isn't enough room. Somehow, we would have to first read one machine byte (8 bits), and then somehow transform the final 1 bit into 8 bits (a machine byte).
How can programmers properly reconcile these things?
The reason I ask is that I have a program that writes and reads files, and I want to make sure that it doesn't break 5, 10, 50 years from now.
How can programmers properly reconcile these things?
By doing nothing. You've presented a filesystem problem.
Imagine that dreadful day when the first of many 9-bit machines is booted up, ready to recompile your code and process that ASCII letter A that you wrote to a file last year.
To ensure that a C/C++ compiler can reasonably exist for this machine, this new computer's OS follows the same standards that C and C++ assume, where files have a size measured in bytes.
...There's already a little problem with your 8-bit source code. There's only about a 1-in-9 chance each source file is a size that can even exist on this system.
Or maybe not. As is often the case for me, Johannes Schaub - litb has pre-emptively cited the standard regarding valid formats for C++ source code.
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. Trigraph sequences (2.3) are replaced by corresponding single-character internal representations. Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e. using the \uXXXX notation), are handled equivalently.)
"In an implementation-defined manner." That's good news...as long as some method exists to convert your source code to any 1:1 format that can be represented on this machine, you can compile it and run your program.
So here's where your real problem lies. If the creators of this computer were kind enough to provide a utility to bit-extend 8-bit ASCII files so they may be actually stored on this new machine, there's already no problem with the ASCII letter A you wrote long ago. And if there is no such utility, then your program already needs maintenance and there's nothing you could have done to prevent it.
Edit: The shorter answer (addressing comments that have since been deleted)
The question asks how to deal with a specific 9-bit computer...
With hardware that has no backwards-compatible 8-bit instructions
With an operating system that doesn't use "8-bit files".
With a C/C++ compiler that breaks how C/C++ programs have historically written text files.
Damian Conway has an often-repeated quote comparing C++ to C:
"C++ tries to guard against Murphy, not Machiavelli."
He was describing other software engineers, not hardware engineers, but the intention is still sound because the reasoning is the same.
Both C and C++ are standardized in a way that requires you to presume that other engineers want to play nice. Your Machiavellian computer is not a threat to your program because it's a threat to C/C++ entirely.
Returning to your question:
How can programmers properly reconcile these things?
You really have two options.
Accept that the computer you describe would not be appropriate in the world of C/C++
Accept that C/C++ would not be appropriate for a program that might run on the computer you describe
The only way to be sure is to store data in text files, numbers as strings of digit characters, not as some number of bits. XML using UTF-8 and base 10 should be a pretty good overall choice for portability and readability, as it is well defined. If you want to be paranoid, keep the XML simple enough that, in a pinch, it can be parsed with a simple custom parser, in case a real XML parser is not readily available for your hypothetical computer.
When parsing a number that is bigger than what fits in your numeric data type, well, that's an error situation you need to handle as you see fit in the context. Or use a "big int" library, which can handle arbitrarily large numbers (with an order-of-magnitude performance hit compared to "native" numeric data types, of course).
If you need to store bit fields, then store bit fields: that is, the number of bits and then the bit values, in whatever format.
If you have a specific numeric range, then store the range, so you can explicitly check whether values fit in the available numeric data types.
A byte is a pretty fundamental data unit, so you cannot really transfer binary data between storages with different numbers of bits per byte; you have to convert, and to convert you need to know how the data is formatted, otherwise you simply cannot convert multi-byte values correctly.
Adding an actual answer:
In your C code, do not handle byte buffers, except in isolated functions which you will then modify as appropriate for the CPU architecture. For example, JPEG-handling functions would take either a struct wrapping the image data in an unspecified way, or a file name to read the image from, but never a raw char* to a byte buffer.
Wrap strings in a container which does not assume an encoding (presumably it will use UTF-8 or UTF-16 on an 8-bit-byte machine, possibly the currently non-standard UTF-9 or UTF-18 on a 9-bit-byte machine, etc.).
Wrap all reads from external sources (network, disk files, etc) into functions which return native data.
Create code where no integer overflows happen, and do not rely on overflow behavior in any algorithm.
Define all-ones bitmasks using ~0 (instead of 0xFFFFFFFF or something); see the sketch after this list.
Prefer IEEE floating-point numbers for most numeric storage, where an integer is not required, as those are independent of CPU architecture.
Do not store persistent data in binary files, which you may have to convert. Instead use XML in UTF-8 (which can be converted to UTF-X without breaking anything, for native handling), and store numbers as text in the XML.
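A couple of these points in a minimal sketch (the helper names are mine): the ~0-style all-ones mask, and sizing buffers for an external 8-bit octet stream without hard-coding 8 bits per byte.

#include <climits>
#include <cstddef>

// All-ones mask for any unsigned type, however many bits it has on this machine.
template <typename T>
constexpr T all_ones() { return static_cast<T>(~T{0}); }

// Native bytes needed to hold n external 8-bit octets, whatever CHAR_BIT is here.
constexpr std::size_t bytes_for_octets(std::size_t n_octets)
{
    return (n_octets * 8 + CHAR_BIT - 1) / CHAR_BIT;
}

static_assert(all_ones<unsigned char>() == UCHAR_MAX, "sanity check");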
As with different byte orders, except much more so, the only way to be sure is to port your program to an actual machine with a different number of bits per byte and run comprehensive tests. If this is really important, then you may have to first implement such a virtual machine, and port a C compiler and the needed libraries to it, if you can't find one otherwise. Even a careful (= expensive) code review will only take you part of the way.
If you're planning to write programs for quantum computers (which will be available in the near future for us to buy), then start learning quantum physics and take a class on programming them.
Unless you're planning for boolean computer logic in the near future, then... my question is: how will you make sure that the filesystem available today will be the same tomorrow? Or that a file stored as 8-bit binary will remain portable in the filesystems of tomorrow?
If you want to keep your programs running through generations, my suggestion is to create your own computing machine, with your own filesystem and your own operating system, and change the interface as the needs of tomorrow change.
My problem is that the computer system I programmed for a few years ago (Motorola 68000) no longer exists for the normal public, and the program relied heavily on the machine's byte order and assembly language. Not portable anymore :-(
If you're talking about writing and reading binary data, don't bother. There is no portability guarantee today, other than that data you write from your program can be read by the same program compiled with the same compiler (including command-line settings). If you're talking about writing and reading textual data, don't worry. It works.
First: The original practical goal of portability is to reduce work; therefore if portability requires more effort than non-portability to achieve the same end result, then writing portable code in such case is no longer advantageous. Do not target 'portability' simply out of principle. In your case, a non-portable version with well-documented notes regarding the disk format is a more efficient means of future-proofing. Trying to write code that somehow caters to any possible generic underlying storage format will probably render your code nearly incomprehensible, or so annoying to maintain that it will fall out of favor for that reason (no need to worry about future-proofing if no one wants to use it anyway 20 yrs from now).
Second: I don't think you have to worry about this, because the only realistic solution to running 8-bit programs on a 9-bit machine (or similar) is via Virtual Machines.
It is extremely likely that anyone in the near or distant future using some 9+ bit machine will be able to start up a legacy x86/ARM virtual machine and run your program that way. Hardware 25-50 years from now should have no problem whatsoever running entire virtual machines just for the sake of executing a single program; and that program will probably still load, execute, and shut down faster than it does today on current native 8-bit hardware. (Some cloud services today, in fact, already trend toward starting entire VMs just to service individual tasks.)
I strongly suspect this is the only means by which any 8-bit program would be run on 9/other-bit machines, due to the points made in other answers regarding the fundamental challenges inherent to simply loading and parsing 8-bit source code or 8-bit binary executables.
It may not be remotely resembling "efficient" but it would work. This also assumes, of course, that the VM will have some mechanism by which 8-bit text files can be imported and exported from the virtual disk onto the host disk.
As you can see, though, this is a huge problem that extends well beyond your source code. The bottom line is that, most likely, it will be much cheaper and easier to update/modify or even re-implement-from-scratch your program on the new hardware, rather than to bother trying to account for such obscure portability issues up-front. The act of accounting for it almost certainly requires more effort than just converting the disk formats.
8-bit bytes will remain until the end of time, so don't sweat it. There will be new types, but this basic type will never ever change.
I think the likelihood of non-8-bit bytes in future computers is low. It would require rewriting so much, and for so little benefit. But if it happens...
You'll save yourself a lot of trouble by doing all calculations in native data types and just rewriting inputs. I'm picturing something like:
template<int OUTPUTBITS, typename CALLABLE>
class converter {
public:
    // 'datasource' yields raw input units of 'inputbits' bits each.
    converter(int inputbits, CALLABLE datasource);
    // Returns the next value, repacked into the smallest native type
    // holding at least OUTPUTBITS bits (hypothetical helper).
    smallestTypeWithAtLeast<OUTPUTBITS> get();
};
Note that this can be written in the future when such a machine exists, so you need do nothing now. Or if you're really paranoid, make sure get() just calls datasource when OUTPUTBITS == inputbits.
Kind of late but I can't resist this one. Predicting the future is tough. Predicting the future of computers can be more hazardous to your code than premature optimization.
Short Answer
While I end this post with how 9-bit systems handled portability with 8-bit bytes, this experience also makes me believe 9-bit-byte systems will never arise again in general-purpose computers.
My expectation is that future portability issues will be with hardware having a minimum of 16- or 32-bit access, making CHAR_BIT at least 16.
Careful design here may help with any unexpected 9-bit bytes.
QUESTION to /. readers: is anyone out there aware of general purpose CPUs in production today using 9-bit bytes or one's complement arithmetic? I can see where embedded controllers may exist, but not much else.
Long Answer
Back in the 1990s, the globalization of computers and Unicode made me expect UTF-16, or larger, to drive an expansion of bits-per-character: CHAR_BIT in C. But as legacy outlives everything, I also expect 8-bit bytes to remain the industry standard and to survive at least as long as computers use binary.
BYTE_BIT: bits-per-byte (popular, but not a standard I know of)
BYTE_CHAR: bytes-per-character
The C standard does not address a char consuming multiple bytes. It allows for it, but does not address it.
3.6 byte: (final draft C11 standard ISO/IEC 9899:201x)
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment.
NOTE 1: It is possible to express the address of each individual byte of an object uniquely.
NOTE 2: A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
Until the C standard defines how to handle BYTE_CHAR values greater than one, and I'm not talking about “wide characters”, this is the primary factor portable code must address, not larger bytes. Existing environments where CHAR_BIT is 16 or 32 are what to study; ARM processors are one example. I see two basic modes for reading external byte streams that developers need to choose from:
Unpacked: one BYTE_BIT character into a local character. Beware of sign extensions.
Packed: read BYTE_CHAR bytes into a local character.
Portable programs may need an API layer that addresses the byte issue. To create one on the fly (an idea I reserve the right to attack in the future):
#define BYTE_BIT 8                      // bits-per-byte
#define BYTE_CHAR (CHAR_BIT/BYTE_BIT)   // bytes-per-char

size_t byread(void *ptr,
              size_t size,    // number of BYTE_BIT bytes
              int packing,    // bytes to read per char
                              // (negative for sign extension)
              FILE *stream);

size_t bywrite(void *ptr,
               size_t size,
               int packing,
               FILE *stream);
size: the number of BYTE_BIT bytes to transfer.
packing: bytes to transfer per char character. While typically 1 or BYTE_CHAR, it could indicate the BYTE_CHAR of the external system, which can be smaller or larger than that of the current system.
Never forget endianness clashes.
Good Riddance To 9-Bit Systems:
My prior experience with writing programs for 9-bit environments leads me to believe we will not see such systems again, unless you happen to need a program to run on a real old legacy system somewhere, likely in a 9-bit VM on a 32/64-bit system. Since the year 2000 I sometimes make a quick search for, but have not seen, references to current descendants of the old 9-bit systems.
Any future general-purpose 9-bit computers (highly unexpected, in my view) would likely either have an 8-bit mode, or an 8-bit VM (#jstine), to run programs under. The only exception would be special-purpose embedded processors, which general-purpose code would not be likely to run on anyway.
In days of yore, one 9-bit machine was the PDP/15. A decade of wrestling with a clone of this beast makes me never expect to see 9-bit systems arise again. My top picks on why follow:
The extra data bit came from robbing the parity bit in core memory. Old 8-bit core carried a hidden parity bit with it; every manufacturer did it. Once core got reliable enough, some system designers switched the already-existing parity bit to a data bit in a quick ploy to gain a little more numeric power and memory addressing during times of weak, non-MMU machines. Current memory technology does not have such parity bits, machines are not so weak, and 64-bit memory is so big. All of which should make such design changes less cost-effective than they were back then.
Transferring data between 8-bit and 9-bit architectures, including off-the-shelf local I/O devices, and not just other systems, was a continuous pain. Different controllers on the same system used incompatible techniques:
Use the low-order 16 bits of 18-bit words.
Use the low-order 8 bits of 9-bit bytes, where the extra high-order bit might be set to the parity from bytes read from parity-sensitive devices.
Combine the low-order 6 bits of three 8-bit bytes to make 18-bit binary words.
Some controllers allowed selecting between 18-bit and 16-bit data transfers at run time. What future hardware, and supporting system calls, your programs would encounter just can't be predicted in advance.
Connecting to the 8-bit Internet will be horrid enough by itself to kill any 9-bit dreams someone has. They got away with it back then as machines were less interconnected in those times.
Having a number of bits per byte that is not a power of two in byte-addressed storage brings up all sorts of trouble. Example: if you want an array of thousands of bits in 8-bit bytes you can write unsigned char bits[1024] = { 0 }; bits[n>>3] |= 1 << (n&7);. To fully pack 9-bit bytes you must do actual divides, which brings horrid performance penalties. This also applies to bytes-per-word.
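For comparison, a CHAR_BIT-agnostic version of that one-liner (a sketch; not tested on any 9-bit hardware) has to fall back to divide and modulo, which is exactly the penalty being described:

#include <climits>
#include <cstddef>

// Works for any CHAR_BIT, but n / CHAR_BIT and n % CHAR_BIT can no longer be
// reduced to shifts and masks when CHAR_BIT is not a power of two.
void set_bit(unsigned char bits[], std::size_t n)
{
    bits[n / CHAR_BIT] |= 1u << (n % CHAR_BIT);
}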
Any code not actually tested on 9-bit-byte hardware may well fail on its first actual venture into the land of unexpected 9-bit bytes, unless the code is so simple that refactoring it in the future for 9 bits is only a minor issue. The prior byread()/bywrite() may help here, but it would likely need an additional CHAR_BIT mode setting to set the transfer mode, returning how the current controller arranges the requested bytes.
To be complete, anyone who wants to worry about 9-bit bytes for the educational experience may also need to worry about one's-complement systems coming back; something else that seems to have died a well-deserved death (two zeros, +0 and -0, are a source of ongoing nightmares... trust me). Back then, 9-bit systems often seemed to be paired with one's-complement operations.
In a programming language, a byte is always 8 bits. So if a byte has 9 bits on some machine, for whatever reason, it's up to the C compiler to reconcile that. As long as you write text using char (say, you write/read 'A' to a file), you would be writing/reading only 8 bits to the file. So you should not have any problem.

Recommended CRC16 polynomials for data logging application

I'm writing a data logging application (running on a microcontroller) which will write data to ordinary, embedded NOR-type serial flash memory (in this case - an AT25DF161.)
Each packet of data (240 or 496 bytes) will be logged individually to the flash, one after another. I figure the most common failure in the flash memory would be a stuck bit - typically "0", the non-erased state. I need to be able to detect single-bit events, typically at most two per record (I assume this as a worst case after 100,000 write cycles).
I'm using a processor which has a built-in 16-bit CRC calculation module, so there's no performance implication in using fewer or more terms - so what decisions do I need to make to decide on an optimum polynomial?
Use a standard polynomial. You can find a list to choose from here.
Look at the paper by Philip Koopman, "Cyclic Redundancy Code (CRC) Polynomial Selection for Embedded Networks". He analyzes a number of 16-bit polynomials, measuring their error detection capability for different message lengths. As you'll see from his paper, they are not all created equally. For a small number of errors (HD=2 in your case) and a fairly large block, 0xBAAD might be a good choice.
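As a starting point, here is a plain bitwise CRC-16 sketch that any candidate polynomial can be fed into (the built-in CRC module presumably replaces the inner loop). One caveat: Koopman lists polynomials in his implicit +1 notation, so 0xBAAD in the paper corresponds to 0x755B in the conventional MSB-first form used below; verify that against the paper and against your peripheral's convention before relying on it.

#include <cstdint>
#include <cstddef>

// Bitwise CRC-16, MSB-first (non-reflected), no final XOR.
// 'poly' is the generator in conventional notation (the x^16 term is implicit).
uint16_t crc16(const uint8_t* data, std::size_t len,
               uint16_t poly, uint16_t init)
{
    uint16_t crc = init;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;   // feed the next byte in at the top
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ poly)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc;
}

// Example: crc16(record, 240, 0x755B, 0xFFFF) for one 240-byte log record.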

What is the defacto standard for sharing variables between programs in different languages?

I've never had formal training in this area, so I'm wondering what they teach in school (if they do).
Say you have two programs written in two different languages, C++ and Python or some other combination, and you want to share a constantly updated variable on the same machine. What would you use and why? The information need not be secured, but it must be isochronous and should be reliable.
Eg. Program A will get a value from a hardware device and update variable X every 0.1ms, I'd like to be able to access this X from Program B as often as possible and obtain the latest values. Program A and B are written and compiled in two different (robust) languages. How do I access X from program B? Assume I have the source code from A and B and I do not want to completely rewrite or port either of them.
The methods I've seen used thus far include:
File buffer - read and write to a single file (e.g. C:\temp.txt).
Create a wrapper - from A to B or B to A.
Memory buffer - designate a specific memory address (mutex?).
UDP packets via sockets - haven't tried it yet, but it looks good.
Firewall?
Sorry for just throwing this out there; I don't know what the name of this technique is, so I have trouble searching.
Well, you can write XML and use some basic message queuing (like RabbitMQ) to pass messages around.
Don't know if this will be helpful, but I'm also a student, and this is what I think you mean.
I've used marshalling to get a Java class and import it into a C# program.
With marshalling you use XML to transfer code in a way that it can be read by other coding environments.
When asking particular questions, you should aim at providing as much information as possible. You have added a use case, but the use case is incomplete.
Your particular use case seems like a very small amount of data that has to be available at a high frequency (10 kHz). I would first try to determine whether I can actually make both pieces of code part of a single process, rather than two different processes. Depending on the languages (missing from the question) it might even be simple, or it might turn the impossible into the possible. Depending on the OS (missing from the question), the scheduler might not be fast enough switching from one process to another, and that might impact the availability of the latest read. Switching between threads is usually much faster.
If you cannot turn them into a single process, then you will have to use some sort of IPC (Inter-Process Communication). Due to the frequency, I would rule out most heavyweight protocols (avoid XML, CORBA) as the overhead will probably be too high. If the receiving end only needs access to the latest value, and that access may be less frequent than every 0.1 ms, then you don't want any protocol that involves queueing: you do not want to read the next element in the queue, you only care about the last one, and if you did not read an element while it was fresh, avoid the cost of processing it once it is already stale (i.e. it does not make sense to loop extracting from the queue and discarding).
I would be inclined to use shared memory, or a memory-mapped shared file (they are probably quite similar; it depends on the platform, missing from the question). Depending on the size of the element and the exact hardware architecture (missing from the question) you may be able to avoid locking with a mutex. As an example, on current Intel processors, read/write access to a 32-bit integer in memory is guaranteed to be atomic if the variable is correctly aligned, so in that case you would not need locking.
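As a rough sketch of the shared-memory option, assuming a POSIX platform and a value small enough for lock-free atomics (the object name and update period are made up), the producer side could look like the following; the consumer maps the same object and does an acquire load whenever it wants the latest value, and a Python reader could map it with the mmap module just as easily.

// Producer (program A): publish the latest reading, no queue, no mutex.
#include <atomic>
#include <cstdint>
#include <new>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    // Named shared-memory object holding a single 32-bit value.
    int fd = shm_open("/sensor_x", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(std::atomic<uint32_t>));
    void* p = mmap(nullptr, sizeof(std::atomic<uint32_t>),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    auto* x = new (p) std::atomic<uint32_t>(0);   // placement-new into the shared page

    for (uint32_t v = 0;; ++v) {                  // stand-in for the hardware read
        x->store(v, std::memory_order_release);   // lock-free publish of the latest value
        usleep(100);                              // ~0.1 ms update period
    }
}

Error handling is omitted, and whether std::atomic<uint32_t> is actually lock-free (and therefore safe across processes) should be checked with is_lock_free() on the target platform.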
At my school they teach CORBA. They shouldn't; it's an ancient, hideous thing from the eon of mainframes, a classic case of design-by-committee: every possible feature that you don't want is included, and some that you probably do want (asynchronous calls?) aren't. If you think the C++ specification is big, think again.
Don't use it.
That said though, it does have a nice, easy-to-use interface for doing simple things.
But don't use it.
It almost always passes through a C binding.

Ti-83 Emulator question with the ROM

I have been building knowledge of computers and C++ for quite a while now, and I've decided I want to try making an emulator to get an even better understanding. I want to try making a TI-83 Emulator (runs on a Zilog Z80 CPU). I currently have two problems:
The first is that the "PC" register that points to the current instruction is only 16 bits, but the TI-83 ROM I downloaded is 256 KB. How is 16 bits of data supposed to point to an address beyond ~64 KB?
Secondly, where is the entry point on the ROM? Does the execution just begin at 0x0000?
Thanks, and hopefully you can help me understand a bit on how this works.
There is most likely a programmable paging register outboard of the processor core that can be set to map a portion of the 256K at a time into part of the 64K address space. You will need to emulate that. Hopefully you can find out about this in official or unofficial documentation. If you have a schematic or PCB it might even be visible as an external PAL or a collection of logic chips.
I forget off the top of my head where a Z80 starts executing on reset, but I'm sure you will find it in the processor manual, which is a necessary tool for writing an emulator for it.
You'll want to make sure the core used is truly a Z80 and not some kind of custom extended version thereof.
And of course I'm sure someone has already done this, so your project is likely to be more about learning - though in the end you might surpass any available solution if you work on it long enough.
The Developer Guide describes how memory is arranged, although it doesn't actually describe how the mapping works.
Short version: the address space is divided into four 16K pages, the first of which always maps page 0 of the 32-page flash ROM.
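To make that concrete, here is a rough emulator-side sketch of such a scheme; the bank-select mechanism and the RAM size are assumptions on my part, not the real TI-83 port map, so check the Developer Guide and hardware docs for the actual mapping.

#include <cstdint>
#include <vector>

// 64 KB CPU address space split into four 16 KB windows; window 0 always shows
// ROM page 0, window 1 shows whichever ROM page the bank register selects, and
// the rest is treated as RAM here (simplified).
struct Ti83Memory {
    std::vector<uint8_t> rom;   // full ROM image, e.g. 256 KB = 16 pages of 16 KB
    std::vector<uint8_t> ram;   // size is an assumption, not the real hardware figure
    uint8_t rom_bank = 1;       // page currently mapped at 0x4000-0x7FFF

    uint8_t read(uint16_t addr) const {
        if (addr < 0x4000)                                   // fixed ROM page 0
            return rom[addr];
        if (addr < 0x8000)                                   // banked ROM page
            return rom[rom_bank * 0x4000u + (addr - 0x4000u)];
        return ram[(addr - 0x8000u) % ram.size()];           // RAM window(s)
    }
};

The bank register itself would be updated by the emulated OUT instruction to whatever I/O port the real hardware uses for page selection.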