I know there is a question about this already but it does not give a solution in code.
I'm trying to load a bitmap image into a GL application using this function:
void glBitmap(GLsizei width,
GLsizei height,
GLfloat xorig,
GLfloat yorig,
GLfloat xmove,
GLfloat ymove,
const GLubyte * bitmap);
Can someone give me a function that returns a GLubyte* given a filename? I've been looking all over the web for a working algorithm but can't seem to get any to work.
The problem with using glBitmap to display the image is that you need to load the image from a file and then interpret the data as a colour index array. Since practically every library and example interprets image data as RGB (or RGBA), and since the color table needs to be set using glColorTable, nobody uses glBitmap to display images loaded from a file, and therefore there are no real examples of how to use glBitmap with data loaded from a file.
By the way, this is from the glBitmap reference page:
The bitmap image is interpreted like image data for the glDrawPixels
command, with width and height corresponding to the width and height
arguments of that command, and with type set to GL_BITMAP and format
set to GL_COLOR_INDEX.
Save yourself trouble, and use glDrawPixels or textures to display the image from a file.
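For reference, a minimal sketch of the glDrawPixels route, assuming you already have a width x height RGBA buffer (called pixels here) filled by whatever image loader you use:
glRasterPos2i(0, 0);                        // where the lower-left corner of the image lands
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // rows are tightly packed, byte-aligned
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);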
This page contains an example of how to display an image using glBitmap, but not from a file.
First you need to understand what glBitmap does: glBitmap is a drawing function. It looks at each bit (hence bit map) of the given data, interpreted as a 2-dimensional array, and, relative to the current raster position (set with glRasterPos), colors the pixel corresponding to each set bit with the current raster color (latched from the current color when glRasterPos is called). Each GLubyte contains 8 bits, so each byte covers 8 pixels of width (the bit ordering is controlled with glPixelStore).
As for reading a bitmap from a file: AFAIK there's only one widespread bitmap format (in contrast to pixmaps, which have channels where each pixel's channel can assume a range of values). That format is PBM: http://en.wikipedia.org/wiki/Netpbm_format
Given the information from the Wikipedia page it is straightforward to write a function that opens and reads a PBM file (functions required are fopen, fread and fclose) into a structure holding the data. Then one can use that structure to feed the data to glBitmap.
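As a minimal sketch of that idea (binary P4 files only, no handling of header comments, and only basic error checking), something along these lines should work using just fopen, fread and fclose; loadPBM is a made-up name and the GL headers are assumed to be included:
#include <cstdio>
#include <cstdlib>

GLubyte* loadPBM(const char* filename, int* width, int* height)
{
    FILE* f = fopen(filename, "rb");
    if (!f) return NULL;

    // Header: magic number "P4", then width and height in ASCII.
    if (fscanf(f, "P4 %d %d", width, height) != 2) { fclose(f); return NULL; }
    fgetc(f);                                   // single whitespace byte before the raster data

    // Each row is padded to a whole number of bytes, 8 pixels per byte.
    size_t rowBytes = (*width + 7) / 8;
    GLubyte* bits = (GLubyte*)malloc(rowBytes * *height);
    fread(bits, 1, rowBytes * *height, f);
    fclose(f);
    return bits;
}
Keep in mind that PBM stores rows top-to-bottom while glBitmap expects them bottom-to-top, and that in PBM a 1 bit means black, so you may need to flip rows and/or invert bits before handing the buffer to glBitmap.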
But remember, bitmaps are just binary valued images, i.e. black and white, no grays, not to speak of colour.
Now, your notion was "loading a bitmap". As I already said, glBitmap immediately draws the data you give it. So I think what you're actually looking for is textures and an image-file-to-texture loader.
We are writing a piece of software which downloads tiles from the internet from WMS servers (these are map servers that provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they provide just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How to identify whether an image contains only 1 color (white), or not.
What we have tried till now is the following:
Create a QImage and loop over every pixel of it to see if it differs from white. This is extremely slow, and since we want this to be a more or less realtime application, this idea sadly does not work.
Check if the image size is the same as the size of a known empty image, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the user's perspective, as tiles just appear and disappear ...
Request transparent images from the WMS servers, but due to some OpenGL mishaps when rendering, these images appear as solid black on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit for those suggesting to sample pixels only from a few points in the map:
[two example tile images; the second is linked below]
The two images above (yes, the left image contains a very tiny piece of Norway in the corner) would be wrongly eliminated if we assumed the image is entirely white based on sampling only a few points, in case none of those points happened to hit any color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way would be to uncompress the PNG bytes and check each pixel in a tight loop.
The most common reason an image-processing routine is "slow" is making a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you raw image bytes via the scanLine method or the bits method:
Something like this might work:
#include <cstring>   // std::memcmp
#include <vector>

const int bytes_per_line = qimage.bytesPerLine();

// One reference row of 0xFF bytes -- an all-white row for 8-bit-per-channel formats.
std::vector<unsigned char> white_row(bytes_per_line, 0xff);

bool allWhite = true;
for (int row = 0; allWhite && (row < qimage.height()); row++)
{
    // constScanLine() exposes the raw bytes of one row without a per-pixel call.
    const unsigned char* row_data = qimage.constScanLine(row);
    allWhite = !std::memcmp(row_data, white_row.data(), bytes_per_line);
}
The above loop terminates pretty fast: it stops at the first row that contains a non-white byte.
So, the question looks simple, but I still can't understand how to properly compute the vertical alignment of glyphs when using SDF bitmaps generated with the stb_truetype library.
In a nutshell, I have my own texture packer system that generates a texture atlas with all the needed SDF-represented glyphs. There is also a data type that contains the following parameters per code point: width, height, xoff and yoff, which I get from the stbtt_GetCodepointSDF function.
I've checked a few listings including this one, but it didn't help me. So what's the right formula?
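Not a definitive answer, but here is the usual stb_truetype convention as a sketch, assuming a y-down screen coordinate system and that scale comes from stbtt_ScaleForPixelHeight; font, pen_x and line_top_y are placeholder names, and width, height, xoff, yoff are the values stbtt_GetCodepointSDF returned for the glyph:
int ascent, descent, lineGap;
stbtt_GetFontVMetrics(&font, &ascent, &descent, &lineGap);   // unscaled font units

float baseline_y = line_top_y + ascent * scale;   // baseline of the current text line

// Top-left / bottom-right corners of the glyph quad in screen space.
float x0 = pen_x + xoff;        // the SDF padding is already included in xoff/yoff
float y0 = baseline_y + yoff;   // yoff is negative for parts above the baseline
float x1 = x0 + width;
float y1 = y0 + height;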
I'm currently struggling with the conversion of an image captured from ARCore in YUV_420_888 format to an OpenCV Mat object. I use ARCore as a plugin for Unity, from which I'm able to capture the bytes that represent the image.
I've furthermore written a marker detection in c++ using OpenCV that I planned on using as a native plugin. So far so good.
ARCore has the functionality of returning a GoogleARCore.CameraImageBytes object that contains the following information (taken from a single frame transmitted to the native plugin):
Width: 640
Height: 480
Y Row Stride: 640
UV Row Stride: -1008586736
UV Pixel Stride: 307200
as well as the image data as a System.IntPtr, which I'm able to convert to a byte[] and thus receive on the C++ side as an unsigned char *.
This information is passed to C++ with the following function signature:
extern "C" void findMarkersInImage(int width, int height, int y_row_stride, int uv_row_stride, int uv_pixel_stride,
unsigned char *image_data, int buffer_size)
I realize that there are many answers on this and various other platforms that suggest an algorithm for the conversion. However, all of them gather the image plane information directly from the image; that is, every solution relies on calling some function like getImageplanes(image, plane_number). Furthermore, all of the other solutions rely on an AImage, which is not available to me. Also, converting the gathered image bytes to an AImage and then to a cv::Mat seems like computational overkill for an algorithm that is supposed to run in realtime without negatively affecting overall performance.
So finally three main questions occur to me:
Is there a way to find out the plane positions in the unsigned char * buffer I get, which contains the whole image?
The UV Row Stride value differs in every single frame, not by much, but it does. Is that normal behaviour?
The before-mentioned value is also negative.
Maybe someone can suggest further literature that might help me, or has a direct answer to one of the questions.
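Not a definitive answer, but one common shortcut, sketched below, is to assume the buffer is tightly packed in an NV21/NV12-style layout (the full-resolution Y plane followed immediately by the interleaved chroma plane), which you can sanity-check against buffer_size == width * height * 3 / 2 before trusting it; the reported strides would then simply be ignored. The function name yuvToBgr is made up for the sketch:
#include <opencv2/opencv.hpp>

cv::Mat yuvToBgr(int width, int height, unsigned char* image_data)
{
    // One single-channel Mat wrapping the whole buffer: height rows of Y
    // followed by height/2 rows of interleaved chroma.
    cv::Mat yuv(height + height / 2, width, CV_8UC1, image_data);

    cv::Mat bgr;
    // Swap to COLOR_YUV2BGR_NV12 if the chroma order turns out to be U-then-V.
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);
    return bgr;   // cvtColor allocates its own buffer, so bgr outlives image_data
}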
I'm trying to work with this camera SDK, and let's say the camera has a function called CameraGetImageData(BYTE* data), which I assume takes a byte array, fills it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrary large size, I'm not
// sure what the exact size needs to be so I
// made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code with breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is: is there a standard way for cameras to output data? How should I start using this data, and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million bytes, not 10 million, if you've got the memory, at least initially. A 10-megapixel camera using 24 bits per pixel is going to produce 30 million bytes, bigger than your array. If it does something crazy like storing 16 bits per colour, it could take up to 60 or 80 million bytes.
You could fill this big array with data before passing it, for example with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
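A rough sketch of that probing idea, assuming the image data itself doesn't happen to end with the same repeating pattern (BYTE and CameraGetImageData are taken from the question's snippet):
#include <cstddef>

const size_t kBufSize = 100000000;            // 100 MB, as suggested above
BYTE* data = new BYTE[kBufSize];

// Pre-fill with a known repeating pattern.
const char pattern[] = "01234567";
for (size_t i = 0; i < kBufSize; ++i)
    data[i] = pattern[i % 8];

CameraGetImageData(data);

// Scan backwards for the first byte the SDK actually overwrote.
size_t written = kBufSize;
while (written > 0 && data[written - 1] == (BYTE)pattern[(written - 1) % 8])
    --written;
// 'written' is now an upper-bound estimate of how many bytes were returned.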
I don't think there is a standard, but you can try to identify which values are what by putting some solid-color images in front of the camera, so that all pixels are approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green and blue images.
But also consider finding a better SDK that has documentation, because just allocating an arbitrarily big array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with it.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits, so each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for the 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
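For illustration, reading such a buffer back as pixel intensities might look like the sketch below; width, height and data are placeholders for whatever your own SDK reports, and little-endian 16-bit samples are an assumption taken from that particular camera:
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<std::uint16_t> frame(width * height);
// The SDK filled 'data' with width*height 16-bit samples, two bytes each.
std::memcpy(frame.data(), data, width * height * sizeof(std::uint16_t));

std::uint16_t intensity = frame[y * width + x];   // pixel (x, y); 0..4095 for 12-bit data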
If you have the dimensions of the result image, try dumping your byte array into a file and loading the result in Python or Matlab to visualize the content. Another possibility is to load this raw file into an image editor such as ImageJ and hope to get something out of it.
Good luck!
I hope this question's solution will help you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore the bitmap image from your byte array.
I need to create a program that loads a .raw image (a generic 100x100 image), asks the user to select an (x, y) coordinate within the range, and displays the red, green, and blue values for said pixel using the seekg function. I'm at a loss as to how to get the RGB values from the pixel. I've gone through every chapter of the textbook that we've covered so far, and there is nothing about retrieving RGB values.
The code asking for the coordinates and giving an error message if they're outside the range is working fine. Only when I try to come up with the code for using seekg / getting the RGB values do I run into trouble. I've looked at different questions on the site, and there is good information here, but I've not seen any answers using seekg in order to get the RGB values.
I'm not looking for anyone to produce the code for me, just looking for some guidance and a push in the right direction.
loc = (y * 100 + x) * 3; // code given by professor with 100 being the width of the image
imageRaw.seekg(loc, ios::beg);
And then I'm at a loss.
Any help would be greatly appreciated.
From there, you probably need to read three bytes, which will represent the red, green, and blue values. You haven't told us enough to be sure of the order; green is almost always in the middle, but RGB and BGR are both fairly common.
From a practical viewpoint, for a picture of this size you don't normally want to use seekg at all, though. You'd read the entire image into memory and look up the values in the vector (or array, if you insist) that stores the data.
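As a minimal sketch of that approach, assuming an RGB byte order and a tightly packed 100x100 raw file (the filename is just a placeholder):
#include <fstream>
#include <vector>

std::ifstream imageRaw("picture.raw", std::ios::binary);
std::vector<unsigned char> pixels(100 * 100 * 3);
imageRaw.read(reinterpret_cast<char*>(pixels.data()), pixels.size());

std::size_t loc = (y * 100 + x) * 3;          // same indexing the professor gave
unsigned char red   = pixels[loc];
unsigned char green = pixels[loc + 1];
unsigned char blue  = pixels[loc + 2];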