OpenGL pixel coordinate manipulation - C++

I've been looking around the internet for a while now, trying to find out whether I can DIRECTLY manipulate pixels, like the ones that make up a triangle in a mesh, rather than vertex coordinates.
When a mesh is formed in OpenGL, the vertex coordinates that define each independent triangle enclose an area that gets filled with pixels, which give it color.
Those pixels are what I'm trying to manipulate. So far every tutorial only shows how to alter vertex coordinates; even in the fragment-shader parts of GLSL tutorials I'm not finding anything on the pixels directly. I'm only shown texture and vertex coordinates, no direct pixel manipulation.
So far, what I understand is that each vertex is assigned some color value, all the pixel processing happens during execution, and you see the result.
So can pixels be directly altered in OpenGL for each triangle, or what would you recommend? I've heard it might be possible in OpenCV, but that is about textures.

If I understand it right, you have a high-poly mesh and want to simplify it by creating a normal map for faces with a smaller polygon count ...
I have never done this, but I would attack the problem like this:
create UV mapping of high poly mesh
create low poly mesh
so you need to merge smaller adjacent faces into bigger ones, merging only faces that are not too angled relative to the starting face (the absolute dot product between their normals stays above some threshold)... You also need to remember the original mesh of each merged face.
for each merged face render normal
so render the merged face's original polygons to a texture, but use the UV as the 2D vertex coordinates and output the actual triangle normal as the color (see the sketch after these steps).
This copies the normals into the normal map at the correct positions. Do not use any depth buffering, blending, lighting or the like. Also the 2D view must be scaled and translated so the UV mapping covers your texture (no perspective). Do not forget that the normal map (if RGB float is used) is clamped, so you should first normalize the normal and then convert it to the range <0,1>, for example:
n = 0.5 * (vec3(1.0,1.0,1.0) + normalize(n));
read back the rendered texture
now it should hold the whole normal map. In case you do not have Render to texture available (older Intel HD) you can render to screen instead and then just use glReadPixels.
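Here is a rough sketch of that render pass (legacy fixed-function GL; the MergedFace/Triangle members are made-up names just for illustration):
// render the original triangles of one merged face into UV space, writing the encoded normal as color
glViewport(0,0,texSize,texSize);            // resolution of the target texture / FBO
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0,1.0,0.0,1.0,-1.0,1.0);          // UV range <0,1> covers the whole target, no perspective
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);                   // no depth test, blending or lighting
glDisable(GL_BLEND);
glDisable(GL_LIGHTING);
glBegin(GL_TRIANGLES);
for (const Triangle &t : face.originalTriangles)    // original high-poly triangles of this merged face
{
    // encode the unit normal from <-1,+1> into <0,1> so the clamp does not destroy it
    glColor3f(0.5f*(t.normal.x+1.0f),0.5f*(t.normal.y+1.0f),0.5f*(t.normal.z+1.0f));
    for (int i=0;i<3;i++)
        glVertex2f(t.uv[i].x,t.uv[i].y);             // UV used as the 2D vertex position
}
glEnd();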
As you want to save this to an image, here is a small VCL example of saving to a 24-bit BMP:
//---------------------------------------------------------------------------
void screenshot(int xs,int ys)          // xs,ys is GL screen resolution
{
    // just in case your environment does not know basic programming datatypes
    typedef unsigned __int8  BYTE;
    typedef unsigned __int16 WORD;
    typedef unsigned __int32 DWORD;

    xs&=0xFFFFFFFC;                     // crop resolution down so it is divisible by 4
    ys&=0xFFFFFFFC;                     // so glReadPixels does not crash on some implementations
    BYTE *dat,zero[4]={0,0,0,0};
    int hnd,x,y,a,align,xs3=3*xs;

    // allocate memory for pixel data
    dat=new BYTE[xs3*ys];
    if (dat==NULL) return;

    // copy GL screen to dat
    glReadPixels(0,0,xs,ys,GL_BGR,GL_UNSIGNED_BYTE,dat);
    glFinish();

    // BMP header structure
    #pragma pack(push,1)
    struct _hdr
    {
        char ID[2];
        DWORD size;
        WORD reserved1[2];
        DWORD offset;
        DWORD reserved2;
        DWORD width,height;
        WORD planes;
        WORD bits;
        DWORD compression;
        DWORD imagesize;
        DWORD xresolution,yresolution;
        DWORD ncolors;
        DWORD importantcolors;
    } hdr;
    #pragma pack(pop)

    // BMP header extracted from an uncompressed 24 bit BMP
    const BYTE bmp24[sizeof(hdr)]={0x42,0x4D,0xE6,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x36,0x0,0x0,0x0,0x28,0x0,0x0,0x0,0xF4,0x1,0x0,0x0,0xF4,0x1,0x0,0x0,0x1,0x0,0x18,0x0,0x0,0x0,0x0,0x0,0xB0,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0};

    // init hdr with the 24 bit BMP header
    for (x=0;x<sizeof(hdr);x++) ((BYTE*)(&hdr))[x]=bmp24[x];

    // update hdr stuff with our image properties
    align=0;                            // (4-(xs3&3))&3;
    hdr.size=sizeof(hdr)+(ys*(xs3+align));
    hdr.width=xs;
    hdr.height=ys;
    hdr.imagesize=ys*xs3;

    // save BMP file (using VCL file functions, exchange them with whatever you've got)
    hnd=FileCreate("screenshot.bmp");   // create screenshot image file (binary)
    if (hnd!=-1)                        // if file created
    {
        FileWrite(hnd,&hdr,sizeof(hdr));    // write bmp header
        for (a=0,y=0;y<ys;y++,a+=xs3)       // loop through all scan lines
        {
            FileWrite(hnd,&dat[a],xs3);     // write scan line pixel data
            if (align)                      // write scan line align zeropad if needed
                FileWrite(hnd,zero,align);
        }
        FileClose(hnd);                     // close file
    }

    // cleanup before exit
    delete[] dat;                       // release dat
}
//---------------------------------------------------------------------------
The only things used from the VCL are the binary file access routines, so just swap them for whatever you have at your disposal. Now you can open this BMP in any image software and convert it to whatever format you want, like PNG... without the need to encode it yourself.
The bmp header structure was taken from this QA:
Importing BMP file in Turboc++ issue:BMP file is not being displayed properly in the output screen
Also beware of using char/int instead of BYTE/WORD/DWORD; it usually leads to data corruption for tasks like this if you do not know what you are doing...
You can do the same with color if the mesh is textured ... That way the normal map and color map would have the same UV mapping, even if the original mesh uses more than a single texture ...

Related

How to get a pixel array from TBitmap?

In a camera application bitmap pixel arrays are retrieved from a streaming camera.
The pixel arrays are captured by writing them to a named pipe, where on the other end of the pipe, ffmpeg retrieves them and creates an AVI file.
I will need to create one custom frame (with custom text on), and pipe its pixels as the first frame in the resulting movie.
The question is how I can use a TBitmap (for convenience) to
1. Create an X by Y monochrome (8-bit) bitmap from scratch, with custom text on it. I want the background to be white and the text to be black. (Mostly figured this step out, see below.)
2. Retrieve the pixel array that I can send/write to the pipe.
Step 1: The following code creates a TBitmap and writes text on it:
int w = 658;
int h = 492;
TBitmap* bm = new TBitmap();
bm->Width = w;
bm->Height = h;
bm->HandleType = bmDIB;
bm->PixelFormat = pf8bit;
bm->Canvas->Font->Name = "Tahoma";
bm->Canvas->Font->Size = 8;
int textY = 10;
string info("some Text");
bm->Canvas->TextOut(10, textY, info.c_str());
The above basically concludes step 1.
The writing/piping code expects a byte array with the bitmap's pixels; e.g.
unsigned long numWritten;
WriteFile(mPipeHandle, pImage, size, &numWritten, NULL);
where pImage is a pointer to an unsigned char buffer (the bitmap's pixels), and size is the length of this buffer.
Update:
Using the generated TBitmap and a TMemoryStream for transferring data to the ffmpeg pipeline does not generate the proper result. I get a distorted image with 3 diagonal lines on it.
The buffer size for the camera frame buffers that I receive is exactly 323736, which is equal to the number of pixels in the image, i.e. 658x492.
NOTE I have concluded that this 'bitmap' is not padded. 658 is not divisible by four.
The buffer size I get after dumping my generated bitmap to a memory stream, however, is 325798, which is 2062 bytes larger than it is supposed to be. As @Spektre pointed out below, this discrepancy may be padding?
I am using the following code for getting the pixel array:
ByteBuffer CustomBitmap::getPixArray()
{
    // --- Local variables --- //
    unsigned int iInfoHeaderSize=0;
    unsigned int iImageSize=0;
    BITMAPINFO *pBitmapInfoHeader;
    unsigned char *pBitmapImageBits;

    // First we call GetDIBSizes() to determine the amount of
    // memory that must be allocated before calling GetDIB()
    // NB: GetDIBSizes() is a part of the VCL.
    GetDIBSizes(mTheBitmap->Handle,
                iInfoHeaderSize,
                iImageSize);

    // Next we allocate memory according to the information
    // returned by GetDIBSizes()
    pBitmapInfoHeader = new BITMAPINFO[iInfoHeaderSize];
    pBitmapImageBits = new unsigned char[iImageSize];

    // Call GetDIB() to convert a device dependent bitmap into a
    // Device Independent Bitmap (a DIB).
    // NB: GetDIB() is a part of the VCL.
    GetDIB(mTheBitmap->Handle,
           mTheBitmap->Palette,
           pBitmapInfoHeader,
           pBitmapImageBits);

    delete []pBitmapInfoHeader;

    ByteBuffer buf;
    buf.buffer = pBitmapImageBits;
    buf.size = iImageSize;
    return buf;
}
So the final challenge seems to be getting a byte array that has the same size as the ones coming from the camera. How do I find and remove the padding bytes from the TBitmap code?
TBitmap has a PixelFormat property to set the bit depth.
TBitmap has a HandleType property to control whether a DDB or a DIB is created. DIB is the default.
Since you are passing BMPs around between different systems, you really should be using DIBs instead of DDBs, to avoid any corruption/misinterpretation of the pixel data.
Also, this line of code:
Image1->Picture->Bitmap->Handle = bm->Handle;
Should be changed to this instead:
Image1->Picture->Bitmap->Assign(bm);
// or:
// Image1->Picture->Bitmap = bm;
Or this:
Image1->Picture->Assign(bm);
Either way, don't forget to delete bm; afterwards, since the TPicture makes a copy of the input TBitmap, it does not take ownership.
To get the BMP data as a buffer of bytes, you can use the TBitmap::SaveToStream() method, saving to a TMemoryStream. Or, if you just want the pixel data, not the complete BMP data (i.e., without BMP headers - see Bitmap Storage), you can use the Win32 GetDIBits() function, which outputs the pixels in DIB format. You can't obtain a byte buffer of the pixels for a DDB, since they depend on the device they are rendered to. DDBs are only usable in-memory in conjunction with HDCs, you can't pass them around. But you can convert a DIB to a DDB once you have a final device to render it to.
In other words, get the pixels from the camera, save them to a DIB, pass that around as needed (i.e., over the pipe), and then do whatever you need with it - save to a file, convert to a DDB to render onscreen, etc.
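For the byte-buffer part, here is a minimal sketch of the TBitmap::SaveToStream() route mentioned above (C++Builder VCL; note that this gives you the complete BMP data, headers and row padding included, which is why the size differs from the raw camera frames):
// dump the complete BMP (file header + info header + palette + padded pixel rows) to memory
TMemoryStream *ms = new TMemoryStream;
bm->SaveToStream(ms);
unsigned char *pImage = static_cast<unsigned char*>(ms->Memory); // start of the BMP data
int size = static_cast<int>(ms->Size);                           // total BMP size in bytes
// WriteFile(mPipeHandle, pImage, size, &numWritten, NULL);      // headers/padding still present!
delete ms;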
This is just an addon to the existing answer (with additional info after the OP's edit).
The bitmap file format aligns each pixel row up to a multiple of 4 bytes (so there are usually a few bytes at the end of each line that are not pixels), and the stream you dump also starts with the BMP header. Those extra bytes create the skew and the diagonal-looking lines. In your case (pf8bit, so 658 bytes of pixel data per row) each row is padded by 2 bytes, and the header of an 8-bit BMP is 1078 bytes (14-byte file header + 40-byte info header + 1024-byte palette):
(xs + align)*ys + header = size
(658 + 2)*492 + 1078 = 325798
but beware that the align size depends on the image width, and the header size depends on the pixel format (the 8-bit palette) ...
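For reference, the padded row stride of such a DIB/BMP can be computed like this (plain C++; the numbers in the comments match the case above):
int width         = 658;                   // pixels per row
int bytesPerPixel = 1;                     // pf8bit
int rowSize       = width*bytesPerPixel;   // 658 bytes of real pixel data per row
int stride        = (rowSize + 3) & ~3;    // 660: rounded up to a multiple of 4
int padding       = stride - rowSize;      // 2 padding bytes at the end of each row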
Try this instead:
// create bmp
Graphics::TBitmap *bmp=new Graphics::TBitmap;
// bmp->Assign(???);            // a) copy image from ???
bmp->SetSize(658,492);          // b) in case you use Assign do not change resolution
bmp->HandleType=bmDIB;
bmp->PixelFormat=pf8bit;
// bmp->Canvas->Draw(0,0,???);  // b) copy image from ???
// here render your text
bmp->Canvas->Brush->Style=bsSolid;
bmp->Canvas->Brush->Color=clWhite;
bmp->Canvas->Font->Color=clBlack;
bmp->Canvas->Font->Name = "Tahoma";
bmp->Canvas->Font->Size = 8;
bmp->Canvas->TextOutA(5,5,"Text");
// Byte data
for (int y=0;y<bmp->Height;y++)
{
    BYTE *p=(BYTE*)bmp->ScanLine[y]; // pf8bit -> BYTE*
    // here send/write/store ... bmp->Width bytes from p[]
}
// Canvas->Draw(0,0,bmp); // just render it on the Form
delete bmp; bmp=NULL;
Mixing GDI WinAPI calls for pixel array access (BitBlt etc...) with a VCL bmDIB bitmap might cause problems and resource leaks (hence the error on exit), and it is also slower than using ScanLine[] (if coded right), so I strongly advise using the native VCL functions (as I did in the above example) instead of the GDI/WinAPI calls where you can.
for more info see:
#4. GDI Bitmap
Delphi / C++ builder Windows 10 1709 bitmap operations extremely slow
Draw tbitmap with scale and alpha channel faster
Also, you mention your image source is a camera. If you use pf8bit, that means palette-indexed color, which is relatively slow and ugly if the native GDI algorithm is used (to convert from a true/hi-color camera image); for a better transform see:
Effective gif/image color quantization?
simple dithering

SDL putting lots of pixel data onto the screen

I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(a one-dimensional array of RGB color values)
I know I have the proper data because previously I just set individual points and it worked...
for(int i = 0;i < height * width;++i) {
    //setStroke and point are functions that I made that together just draw a colored point
    r.setStroke(data[i*3],data[i*3+1],data[i*3+2]);
    r.point(i % r.window.w,i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be).
Is there any faster way to just put all the data onto the screen?
I tried doing something like this
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren,
SDL_GetWindowPixelFormat(win),SDL_TEXTUREACCESS_STREAMING,window.w,window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing so please have mercy
Edit (thank you for comments :))
So here is what I do now
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,SDL_TEXTUREACCESS_STREAMING,window.w,window.h);
SDL_UpdateTexture(img,NULL,&data[0],window.w * 3);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
But I get this image... which is not what it should look like.
I am thinking that my data is just formatted wrong; right now it is formatted as an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note I do not need an alpha channel.)
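For what it is worth, SDL_PIXELFORMAT_RGB888 is a padded 32-bit format (an alias of XRGB8888), so tightly packed 3-byte RGB triplets do not match it; SDL_PIXELFORMAT_RGB24 does. A minimal sketch under that assumption (SDL2, reusing the names from the snippets above):
// create a streaming texture whose memory layout matches the packed {r,g,b} byte array
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
// pitch = bytes per source row = width * 3 for packed 24-bit RGB
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_RenderPresent(ren);
SDL_DestroyTexture(img);   // better: create the texture once and only update it each frame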

Unable to create image from compressed texture data (S3TC)

I've been trying to load compressed images with S3TC (BC/DXT) compression in Vulkan, but so far I haven't had much luck.
Here is what the Vulkan specification says about compressed images:
https://www.khronos.org/registry/dataformat/specs/1.1/dataformat.1.1.html#S3TC:
Compressed texture images stored using the S3TC compressed image formats are represented as a collection of 4×4 texel blocks, where each block contains 64 or 128 bits of texel data. The image is encoded as a normal 2D raster image in which each 4×4 block is treated as a single pixel.
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#resources-images:
For images created with linear tiling, rowPitch, arrayPitch and depthPitch describe the layout of the subresource in linear memory. For uncompressed formats, rowPitch is the number of bytes between texels with the same x coordinate in adjacent rows (y coordinates differ by one). arrayPitch is the number of bytes between texels with the same x and y coordinate in adjacent array layers of the image (array layer values differ by one). depthPitch is the number of bytes between texels with the same x and y coordinate in adjacent slices of a 3D image (z coordinates differ by one). Expressed as an addressing formula, the starting byte of a texel in the subresource has address:
// (x,y,z,layer) are in texel coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*texelSize + offset
For compressed formats, the rowPitch is the number of bytes between compressed blocks in adjacent rows. arrayPitch is the number of bytes between blocks in adjacent array layers. depthPitch is the number of bytes between blocks in adjacent slices of a 3D image.
// (x,y,z,layer) are in block coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*blockSize + offset;
arrayPitch is undefined for images that were not created as arrays. depthPitch is defined only for 3D images.
For color formats, the aspectMask member of VkImageSubresource must be VK_IMAGE_ASPECT_COLOR_BIT. For depth/stencil formats, aspect must be either VK_IMAGE_ASPECT_DEPTH_BIT or VK_IMAGE_ASPECT_STENCIL_BIT. On implementations that store depth and stencil aspects separately, querying each of these subresource layouts will return a different offset and size representing the region of memory used for that aspect. On implementations that store depth and stencil aspects interleaved, the same offset and size are returned and represent the interleaved memory allocation.
My image is a normal 2D image (0 layers, 1 mipmap), so there's no arrayPitch or depthPitch. Since S3TC compression is directly supported by the hardware, it should be possible to use the image data without decompressing it first. In OpenGL this can be done using glCompressedTexImage2D, and this has worked for me in the past.
In OpenGL I've used GL_COMPRESSED_RGBA_S3TC_DXT1_EXT as image format, for Vulkan I'm using VK_FORMAT_BC1_RGBA_UNORM_BLOCK, which should be equivalent.
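For comparison, the OpenGL path mentioned above looks roughly like this (BC1/DXT1 stores 8 bytes per 4x4 texel block; srcData is assumed to point at the raw compressed blocks):
// upload pre-compressed DXT1 blocks directly, no decompression on the CPU
GLsizei blocksWide = (w + 3) / 4;
GLsizei blocksHigh = (h + 3) / 4;
GLsizei imageSize  = blocksWide * blocksHigh * 8;        // 8 bytes per BC1 block
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                       w, h, 0, imageSize, srcData);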
Here's my code for mapping the image data:
auto dds = load_dds("img.dds");
auto *srcData = static_cast<uint8_t*>(dds.data());
auto *destData = static_cast<uint8_t*>(vkImageMapPtr); // Pointer to mapped memory of VkImage
destData += layout.offset(); // layout = VkSubresourceLayout of the mapped image subresource
assert((w %4) == 0);
assert((h %4) == 0);
assert(blockSize == 8); // S3TC BC1
auto wBlocks = w /4;
auto hBlocks = h /4;
for(auto y=decltype(hBlocks){0};y<hBlocks;++y)
{
    auto *rowDest = destData +y *layout.rowPitch(); // rowPitch is 0
    auto *rowSrc = srcData +y *(wBlocks *blockSize);
    for(auto x=decltype(wBlocks){0};x<wBlocks;++x)
    {
        auto *pxDest = rowDest +x *blockSize;
        auto *pxSrc = rowSrc +x *blockSize; // 4x4 image block
        memcpy(pxDest,pxSrc,blockSize); // 64Bit per block
    }
}
And here's the code for initializing the image:
vk::Device device = ...; // Initialization
vk::AllocationCallbacks allocatorCallbacks = ...; // Initialization
[...] // Load the dds data
uint32_t width = dds.width();
uint32_t height = dds.height();
auto format = dds.format(); // = vk::Format::eBc1RgbaUnormBlock;
vk::Extent3D extent(width,height,1);
vk::ImageCreateInfo imageInfo(
vk::ImageCreateFlagBits(0),
vk::ImageType::e2D,format,
extent,1,1,
vk::SampleCountFlagBits::e1,
vk::ImageTiling::eLinear,
vk::ImageUsageFlagBits::eSampled | vk::ImageUsageFlagBits::eColorAttachment,
vk::SharingMode::eExclusive,
0,nullptr,
vk::ImageLayout::eUndefined
);
vk::Image img = nullptr;
device.createImage(&imageInfo,&allocatorCallbacks,&img);
vk::MemoryRequirements memRequirements;
device.getImageMemoryRequirements(img,&memRequirements);
uint32_t typeIndex = 0;
get_memory_type(memRequirements.memoryTypeBits(),vk::MemoryPropertyFlagBits::eHostVisible,typeIndex); // -> typeIndex is set to 1
auto szMem = memRequirements.size();
vk::MemoryAllocateInfo memAlloc(szMem,typeIndex);
vk::DeviceMemory mem;
device.allocateMemory(&memAlloc,&allocatorCallbacks,&mem); // Note: Using the default allocation (nullptr) doesn't change anything
device.bindImageMemory(img,mem,0);
uint32_t mipLevel = 0;
vk::ImageSubresource resource(
vk::ImageAspectFlagBits::eColor,
mipLevel,
0
);
vk::SubresourceLayout layout;
device.getImageSubresourceLayout(img,&resource,&layout);
auto *srcData = device.mapMemory(mem,0,szMem,vk::MemoryMapFlagBits(0));
[...] // Map the dds-data (See code from first post)
device.unmapMemory(mem);
The code runs without issues, however the resulting image isn't correct. This is the source image:
And this is the result:
I'm certain that the problem lies in the first code snippet I've posted; however, in case it doesn't, I've written a small adaptation of the triangle demo from the Vulkan SDK which produces the same result. It can be downloaded here. The source code is included; all I've changed from the triangle demo are the "demo_prepare_texture_image" function in tri.c (lines 803 to 903) and the "dds.cpp" and "dds.h" files. "dds.cpp" contains the code for loading the dds and mapping the image memory.
I'm using gli to load the dds data (which is supposed to "work perfectly with Vulkan"), which is also included in the download above. To build the project, the Vulkan SDK include directory has to be added to the "tri" project, and the path to the dds has to be changed (tri.c, line 809).
The source image ("x64/Debug/test.dds" in the project) uses DXT1 compression. I've tested it on different hardware as well, with the same result.
Any example code for initializing/mapping compressed images would also help a lot.
Your problem is actually quite simple - in the demo_prepare_textures function, the first line, there is a variable tex_format, which is set to VK_FORMAT_B8G8R8A8_UNORM (which is what it is in the original sample). This eventually gets used to create the VkImageView. If you just change this to VK_FORMAT_BC1_RGBA_UNORM_BLOCK, it displays the texture correctly on the triangle.
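In other words, the image view must be created with the same block-compressed format the image was created with; a minimal sketch of that part (plain C Vulkan API as used by tri.c; variable names are illustrative):
// the view format has to match the compressed image format
VkImageViewCreateInfo viewInfo = {};
viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image    = image;                              // the VkImage created with the BC1 format
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format   = VK_FORMAT_BC1_RGBA_UNORM_BLOCK;     // was VK_FORMAT_B8G8R8A8_UNORM in the demo
viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel   = 0;
viewInfo.subresourceRange.levelCount     = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount     = 1;
VkImageView view;
vkCreateImageView(device, &viewInfo, NULL, &view);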
As an aside - you can verify that your texture loaded correctly with RenderDoc, which comes with the Vulkan SDK installation. Doing a capture and looking at the Texture Viewer, the Inputs tab shows that your texture looks identical to the one on disk, even with the incorrect format.

How can I generate a screenshot when glReadPixels is empty?

I have a program which runs in a window using OpenGL (VS2012 with freeglut 2.8.1). Basically at every time step (run via a call to glutPostRedisplay from my glutIdleFunc hook) I call my own draw function followed by a call to glFlush to display the result. Then I call my own screenShot function which uses the glReadPixels function to dump the pixels to a tga file.
The problem with this setup is that the files are empty when the window gets minimised. That is to say, the output from glReadPixels is empty. How can I avoid this?
Here is a copy of the screenShot function I am using (I am not the copyright holder):
//////////////////////////////////////////////////
// Grab the OpenGL screen and save it as a .tga //
// Copyright (C) Marius Andra 2001 //
// http://cone3d.gz.ee EMAIL: cone3d#hot.ee //
//////////////////////////////////////////////////
// (modified by me a little)
int screenShot(int const num)
{
    typedef unsigned char uchar;
    // we will store the image data here
    uchar *pixels;
    // the thingy we use to write files
    FILE * shot;
    // we get the width/height of the screen into this array
    int screenStats[4];

    // get the width/height of the window
    glGetIntegerv(GL_VIEWPORT, screenStats);

    // generate an array large enough to hold the pixel data
    // (width*height*bytesPerPixel)
    pixels = new unsigned char[screenStats[2]*screenStats[3]*3];

    // read in the pixel data, TGA's pixels are BGR aligned
    glReadPixels(0, 0, screenStats[2], screenStats[3], 0x80E0, // 0x80E0 == GL_BGR
                 GL_UNSIGNED_BYTE, pixels);

    // open the file for writing. If unsuccessful, return 1
    std::string filename = kScreenShotFileNamePrefix + Function::Num2Str(num) + ".tga";
    shot=fopen(filename.c_str(), "wb");
    if (shot == NULL)
        return 1;

    // this is the tga header, it must be at the beginning of
    // every (uncompressed) .tga
    uchar TGAheader[12]={0,0,2,0,0,0,0,0,0,0,0,0};
    // the header that is used to get the dimensions of the .tga
    // header[1]*256+header[0] - width
    // header[3]*256+header[2] - height
    // header[4] - bits per pixel
    // header[5] - ?
    uchar header[6]={((int)(screenStats[2]%256)),
                     ((int)(screenStats[2]/256)),
                     ((int)(screenStats[3]%256)),
                     ((int)(screenStats[3]/256)),24,0};

    // write out the TGA header
    fwrite(TGAheader, sizeof(uchar), 12, shot);
    // write out the header
    fwrite(header, sizeof(uchar), 6, shot);
    // write the pixels
    fwrite(pixels, sizeof(uchar),
           screenStats[2]*screenStats[3]*3, shot);

    // close the file
    fclose(shot);
    // free the memory
    delete [] pixels;
    // return success
    return 0;
}
So how can I print the screenshot to a TGA file regardless of whether Windows decides to actually display the content on the monitor?
Note: Because I am trying to keep a visual record of the progress of a simulation, I need to print every frame, regardless of whether it is being rendered. I realise that last statement is a bit of a contradiction, since I need to render the frame in order to produce the screengrab. To rephrase: I need glReadPixels (or some alternative function) to produce the updated state of my program at every step so that I can print it to a file, regardless of whether Windows will choose to display it.
Sounds like you're running afoul of the pixel ownership problem.
Render to an FBO and use glReadPixels() to slurp images out of that instead of the front buffer.
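A minimal sketch of that approach (GL 3.0-style FBOs; drawScene, width, height and pixels stand in for your own code):
// create an offscreen framebuffer with a color renderbuffer, draw into it,
// then read the pixels back regardless of the window's visibility
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    drawScene();                                               // your existing draw call
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);                          // back to the default framebuffer
glDeleteRenderbuffers(1, &rbo);
glDeleteFramebuffers(1, &fbo);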
I would suggest keeping the last rendered frame stored in memory and updating that memory's contents whenever an update is called and there is actual pixel data in the new render. Either that, or you could perhaps use the accumulation buffer, though I can't quite recall how it stores older frames (it may just end up updating so fast that it stores no render data either).
Another solution might be to use a shader to manually render each frame and write the result to a file.

opengl 3d point cloud render from x,y,z 2d array

I need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x,y,z of each point. I think I am on the right track with
glBegin(GL_POINTS);
for(int i=0; i<1024; i++)
{
    for(int j=0; j<512; j++)
    {
        glVertex3f(x[i][j], y[i][j], z[i][j]);
    }
}
glEnd();
Now this loads all the vertices into the buffer (I think), but from here I am not sure how to proceed. Or maybe I am completely wrong here.
Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display.
The point drawing code is fine as is.
(Long term, you may run into performance problems if you have to draw these points repeatedly, say in response to the user rotating the view. Rearranging the data from 3 arrays into 1 with x, y, z values next to each other would allow you to use faster vertex arrays/VBOs. But for now, if it ain't broke, don't fix it.)
To color the points, you need glColor before each glVertex. It has to be before, not after, because in OpenGL glVertex loosely means that's a complete vertex, draw it. You've described the data as a point cloud, so don't change glBegin to GL_POLYGON, leave it as GL_POINTS.
OK, you have another array with one byte color index values. You could start by just using that as a greyscale level with
glColor3ub(color[i][j], color[i][j], color[i][j]);
which should show the points varying from black to white.
To get the true color for each point, you need a color lookup table - I assume there's either one that comes with the data, or one you're creating. It should be declared something like
static GLfloat ctab[256][3] = {
1.0, 0.75, 0.33, /* Color for index #0 */
...
};
and used before the glVertex with
glColor3fv(ctab[color[i][j]]);
I've used floating point colors because that's what OpenGL uses internally these days. If you prefer 0..255 values for the colors, change the array to GLubyte and the glColor3fv to glColor3ub.
Hope this helps.
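If you later do need the faster path mentioned at the start of this answer, here is a rough sketch of rearranging the three arrays into one interleaved client-side vertex array with per-point colors (legacy GL, so it still fits VS2008; x, y, z, color and ctab are the arrays from above, and <vector> is assumed to be included):
std::vector<GLfloat> verts;                 // x,y,z,r,g,b per point -> 6 floats each
verts.reserve(1024*512*6);
for (int i = 0; i < 1024; i++)
    for (int j = 0; j < 512; j++)
    {
        verts.push_back(x[i][j]);
        verts.push_back(y[i][j]);
        verts.push_back(z[i][j]);
        const GLfloat *c = ctab[color[i][j]];   // color lookup table from above
        verts.push_back(c[0]);
        verts.push_back(c[1]);
        verts.push_back(c[2]);
    }
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 6*sizeof(GLfloat), &verts[0]);      // stride = 6 floats per point
glColorPointer (3, GL_FLOAT, 6*sizeof(GLfloat), &verts[0] + 3);  // colors start after x,y,z
glDrawArrays(GL_POINTS, 0, (GLsizei)(verts.size()/6));
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);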