The documentation for bitsPerComponent says it's at most 8, but a couple of Stack Overflow questions imply that 16 is supported.
You can find the necessary information under “Supported Pixel Formats” in the “Graphics Contexts” chapter of the “Quartz 2D Programming Guide” linked below, even though it is archived documentation.
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html
The “Supported Pixel Formats” table at the above URL lists the valid combinations. Note that “bpp” means “bits per pixel” and “bpc” means “bits per component”. According to that table, only Mac OS X supports 16 bpc; iOS does not.
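If it helps, here is a minimal sketch of attempting a 16 bpc bitmap context with Core Graphics on macOS. The 64 bpp / kCGImageAlphaPremultipliedLast combination is the one the table lists for Mac OS X; the width, height, and reporting here are just illustrative, and CGBitmapContextCreate returns NULL when a combination is unsupported, so check the result rather than assume it works.
// Sketch: attempt a 16 bits-per-component RGBA context (macOS only, per the table).
// If the combination is unsupported, CGBitmapContextCreate returns NULL.
#include <CoreGraphics/CoreGraphics.h>
#include <cstdio>

int main() {
    const size_t width = 256, height = 256;
    const size_t bitsPerComponent = 16;          // 16 bpc
    const size_t bytesPerRow = width * 4 * 2;    // 4 components x 2 bytes each = 64 bpp
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             bitsPerComponent, bytesPerRow, rgb,
                                             kCGImageAlphaPremultipliedLast);
    std::printf(ctx ? "16 bpc context created\n" : "16 bpc not supported here\n");
    if (ctx) CGContextRelease(ctx);
    CGColorSpaceRelease(rgb);
    return 0;
}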
I'm using Embarcadero C++Builder 10.1 Berlin Update 2. I'm using System.Zip.TZipFile.ExtractAll() to extract a large .zip file.
Here are the details surrounding the problem scenario:
The size of the .zip file is 387,077 KB
Using System.Zip.TZipFile.ExtractAll() to extract the .zip file, we end up with:
a 4,194,304 KB size file.
The data is truncated.
Using the Windows right-click Extract All... command, we end up with:
a 6,035,259 KB size file.
We need all of the data out of this file.
Reading the System.Zip.TZipFile documentation, I do not see anything about limitations related to file size.
From what I know, this is Embarcadero's provided way to extract .zip files. How may I resolve this issue?
Until you tell us whether the data are simply truncated or instead somehow transformed, we can only really guess at what is going on. However, it's a well-educated guess.
Your output is precisely 2^32 bytes long (4,194,304 KB × 1,024 bytes/KB = 4,294,967,296 bytes), a familiar boundary for many older technologies.
The fact that (as you point out) the documentation doesn't state this limit further suggests that this is just an upper bound on what the developers bothered to support, probably a very long time ago. They never imagined you'd need any more than that, particularly as many file systems didn't even support files larger than that until [relatively] recently.
Prefer modern, standard C++ and a nice third-party library for unzipping.
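If you do go the third-party route, a minimal sketch with libzip (just one possible library choice; the archive name is a placeholder) could look roughly like this. libzip understands ZIP64, so entries beyond 2^32 bytes are not a problem; error handling is abbreviated here.
// Sketch: extract every entry of an archive with libzip (ZIP64-capable).
#include <zip.h>
#include <cstdio>
#include <vector>

int main() {
    int err = 0;
    zip_t *za = zip_open("archive.zip", ZIP_RDONLY, &err);   // archive name is a placeholder
    if (!za) return 1;

    zip_int64_t n = zip_get_num_entries(za, 0);
    std::vector<char> buf(1 << 20);                           // 1 MiB copy buffer
    for (zip_int64_t i = 0; i < n; ++i) {
        zip_stat_t st;
        if (zip_stat_index(za, i, 0, &st) != 0) continue;
        zip_file_t *zf = zip_fopen_index(za, i, 0);
        if (!zf) continue;
        FILE *out = std::fopen(st.name, "wb");                // assumes flat entry names
        if (!out) { zip_fclose(zf); continue; }
        zip_int64_t got;
        while ((got = zip_fread(zf, buf.data(), buf.size())) > 0)
            std::fwrite(buf.data(), 1, (size_t)got, out);
        std::fclose(out);
        zip_fclose(zf);
    }
    zip_close(za);
    return 0;
}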
I wasn't able to find any obvious difference between the BMP files available online that would let me easily tell whether they were 24 or 32 bit.
I need to read a 32-bit BMP file into an RGB array using C++, and most tutorials exist only for 24-bit files.
The format of bitmap files is described on MSDN: it starts with a 14-byte file header, followed by a bitmap info header, which contains the information you're looking for in the field biBitCount.
Edit:
As noted by iinspectable in the comments, the bitmap format can be complex. So on Windows, the best approach is to access the information in the structures described above through the Windows API.
If you're working cross-platform, you'll have to take care of many details yourself (a rough sketch follows this list):
the different file format versions: you need to read the DWORD (32 bits, unsigned) at offset 14 of the file to find out which version of the data structure is used. The information you're looking for is at offset 24 (core version) or 28 (other versions) of the file. It's a WORD, so it's 16 bits, unsigned.
the pixel data could be compressed. This is not the case for the core version; for the other versions, it's indicated in the following DWORD (at offset 30).
all integers are stored in little-endian byte order.
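As a rough, hypothetical sketch of reading just the bit count using those offsets (no compression handling, minimal validation), something like this should work:
// Sketch: read just enough of a .bmp header to find the bits-per-pixel value.
// Offsets follow the description above.
#include <cstdio>
#include <cstdint>

int bmpBitCount(const char *path) {
    std::FILE *f = std::fopen(path, "rb");
    if (!f) return -1;
    unsigned char hdr[34] = {0};
    std::size_t got = std::fread(hdr, 1, sizeof hdr, f);
    std::fclose(f);
    if (got < 30 || hdr[0] != 'B' || hdr[1] != 'M') return -1;

    // DWORD at offset 14: size of the info header, which identifies the version.
    std::uint32_t infoSize = hdr[14] | (hdr[15] << 8) | (hdr[16] << 16)
                           | ((std::uint32_t)hdr[17] << 24);
    // WORD with the bit count: offset 24 for the core header (size 12),
    // offset 28 for the later header versions.
    std::size_t off = (infoSize == 12) ? 24 : 28;
    return hdr[off] | (hdr[off + 1] << 8);
}

int main(int argc, char **argv) {
    if (argc > 1) std::printf("bit count: %d\n", bmpBitCount(argv[1]));
    return 0;
}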
But instead of doing all this yourself, you could also consider CImg or another image library.
I am writing a C++ class that permits the use of color in the terminal. I want it to work on every terminal:
printing with true color (24 bits) on terminals that support it,
with 256 colors (6×6×6) on terminals that support it,
otherwise with the basic 16 colors.
I once wrote C functions using termcap, and I thought of using it in this case. However, the man page says:
The termcap database is an obsolete facility for describing the capabilities of character-cell terminals and printers. It is retained only for compatibility with old programs; new ones should use the terminfo database and associated libraries.
So I tried to use terminfo, but I could not find out how to do this. There is no terminfo.h on my system (I run Debian).
My question is:
How can I get the color capabilities of the current terminal in C/C++, using the newest tools (i.e. not termcap, according to the man page)?
The short answer is that you could not get the information from terminfo until ncurses 6.1 was released in January 2018.
The longer answer:
to effectively use TrueColor, you need an interface handling 3 parameters (for red, green, blue). Termcap cannot do this. Terminfo can handle multiple parameters, but...
there is no standard terminal capability (a name for a feature which may be a boolean, number or a string) dealing with TrueColor as such.
you could adapt existing capabilities, but they have limitations.
Looking at the terminfo(5) manual, you might see these (strings):
initialize_color   initc   Ic   initialize color #1 to (#2,#3,#4)
initialize_pair    initp   Ip   initialize color pair #1 to fg=(#2,#3,#4), bg=(#5,#6,#7)
which are related to these (numbers):
max_colors   colors   Co   maximum number of colors on screen
max_pairs    pairs    pa   maximum number of color-pairs on the screen
ANSI colors and schemes compatible with those (such as 16-, 88- and 256-colors) assume you are coloring foreground and background in pairs. The reason for that was that long ago, hardware terminals just worked that way. The initialize_color capability is for a different scheme (Tektronix), which might seem useful.
However, terminfo is compiled, and the resulting binary files stored only signed 16-bit integers. You could not use the terminal description to store a suitable max_pairs or max_colors for 24-bit color. (termcap stores everything as strings, but as noted is unsuited for this application).
A couple of years after this question and answer were first written, terminfo was updated to use a new file format that uses signed 32-bit integers, which is enough for expressing the number of colours in 24-bit RGB colour.
More details can be found in the release announcement for ncurses 6.1 and in the updated term(5) manual page; the latter notes that there are still restrictions in the old API used by some applications that access the terminfo data directly.
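For completeness, a small sketch of querying the compiled terminfo entry through ncurses' low-level interface (setupterm/tigetnum; link with -lncurses or -ltinfo). With ncurses 6.1 and a suitable terminal description the reported number can be as large as 2^24; the COLORTERM check is a common convention among terminal emulators rather than part of terminfo.
// Sketch: ask terminfo how many colors the current terminal claims to support.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
#include <curses.h>
#include <term.h>       // setupterm(), tigetnum()

int main() {
    int err = 0;
    if (setupterm(nullptr, STDOUT_FILENO, &err) != OK) {
        std::fprintf(stderr, "no terminfo entry found\n");
        return 1;
    }
    int colors = tigetnum(const_cast<char *>("colors"));   // -1 or 0 if absent
    std::printf("terminfo reports %d colors\n", colors);

    // Common (non-terminfo) heuristic for direct-color support:
    const char *ct = std::getenv("COLORTERM");
    if (ct && (std::strcmp(ct, "truecolor") == 0 || std::strcmp(ct, "24bit") == 0))
        std::printf("COLORTERM suggests 24-bit color\n");
    return 0;
}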
Further reading:
Why only 16 (or 256) colors? (ncurses FAQ)
Can I set a color by its number? (xterm FAQ)
Is ncurses terminfo compatible with my system? (ncurses FAQ)
I am making a C++ utility that uses OpenNI 2. Ideally I now need to set the minimum and the maximum thresholds for the depth image. I did this in the past with OpenCV or my own image processing functions and before going this way again I am wondering whether there's a feature in OpenNI that supports this natively.
Having a look at the downloadable documentation (it comes with the OpenNI package), there are a couple of interesting functions defined in the class VideoStream in OpenNI.h. These are:
int VideoStream::getMinPixelValue()
int VideoStream::getMaxPixelValue()
which return the current limits I need; these seem to be hardware readings, though. Nonetheless, the VideoStream class also exposes the setProperty function, which allows setting one of the properties in the list of values defined in oniProperties.h.
Since neither the documentation nor the comments in that file specify whether a property is read-only or not, I tried to write the min and max values by doing:
myVideoStream.setProperty<int>(openni::STREAM_PROPERTY_MIN_VALUE, myIntMinValue);
myVideoStream.setProperty<int>(openni::STREAM_PROPERTY_MAX_VALUE, myIntMaxValue);
As a result the values do not change.
My questions are:
Do you confirm that min and max pixel values in the VideoStream are read only?
Does OpenNI, in some way, natively support setting these thresholds?
Thank you for your attention.
I'm facing a similar issue, i.e., setting the maxDepthValue of a particular device. The status always returns as a failure. However, when you run isPropertySupported(openni::STREAM_PROPERTY_MAX_VALUE), it returns true. So there is an internal means to set the max depth value. I don't quite know what that is, though.
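For what it's worth, a hedged sketch of what the check-then-write sequence might look like with the OpenNI 2 C++ API; the threshold value is a placeholder, and whether the driver actually honours the write is exactly what is in doubt, which is why the returned Status and the re-read value are printed.
// Sketch: check whether the property is supported, then try to set it and
// inspect the returned Status instead of assuming the write succeeded.
#include <OpenNI.h>
#include <cstdio>

int main() {
    if (openni::OpenNI::initialize() != openni::STATUS_OK) return 1;
    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) return 1;
    openni::VideoStream depth;
    if (depth.create(device, openni::SENSOR_DEPTH) != openni::STATUS_OK) return 1;

    std::printf("min=%d max=%d\n", depth.getMinPixelValue(), depth.getMaxPixelValue());

    if (depth.isPropertySupported(openni::STREAM_PROPERTY_MAX_VALUE)) {
        const int myIntMaxValue = 4000;   // placeholder threshold in depth units
        openni::Status rc =
            depth.setProperty<int>(openni::STREAM_PROPERTY_MAX_VALUE, myIntMaxValue);
        std::printf("setProperty returned %d, max now %d\n",
                    (int)rc, depth.getMaxPixelValue());
    }

    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}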
I've got a sequence of 28 bytes, which are supposedly encoded with a Reed-Solomon (28, 24, 5) code. The RS code uses 8-bit symbols and operates in GF(2^8). The field generator polynomial is x^8 + x^4 + x^3 + x^2 + 1.
I've tried the Python ReedSolomon module, but I'm not even sure how to configure the codec properly for my RS code (e.g. what's the first consecutive root of the field generator polynomial, what's the primitive element). I also had a look at Schifra, but I couldn't even compile it on my Mac.
I don't care too much about the platform (e.g. Python, C, Scilab) as long as it is free.
I successfully built an embedded data comms project that used Reed-Solomon error correction a few years ago. I just had a look at it to refresh my memory, and I found that I used a fairly lightweight, GPL-licenced, C-language subsystem published by a well-known guy named Phil Karn to do the encoding and decoding. It's only a few hundred lines of code, but it's pretty intense stuff. However, I found I didn't need to understand the math to use the code.
Googling Phil Karn Reed Solomon got me this document.
That looks like a decent place to start. Hope this helps.
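If it's useful, decoding one of your 28-byte blocks with that library (libfec's init_rs_char/decode_rs_char interface) might look roughly like this. Your field polynomial x^8 + x^4 + x^3 + x^2 + 1 is 0x11D and the code has 28 - 24 = 4 parity symbols; the fcr and prim values below are guesses that you will need to match to your encoder.
// Sketch: try to decode one shortened RS(28,24) block with Phil Karn's libfec.
// fcr=1 and prim=1 are assumptions; adjust them to match the encoder.
extern "C" {
#include <fec.h>
}
#include <cstdio>
#include <cstring>

int main() {
    // symsize=8, gfpoly=0x11D (x^8+x^4+x^3+x^2+1), nroots=28-24=4,
    // pad=255-28=227 for the shortened code.
    void *rs = init_rs_char(8, 0x11D, 1 /*fcr*/, 1 /*prim*/, 4, 227);
    if (!rs) return 1;

    unsigned char block[28];
    std::memset(block, 0, sizeof block);        // replace with your received bytes

    int corrected = decode_rs_char(rs, block, nullptr, 0);
    if (corrected < 0)
        std::printf("uncorrectable block (or wrong code parameters)\n");
    else
        std::printf("decoded OK, %d symbol(s) corrected\n", corrected);

    free_rs_char(rs);
    return 0;
}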