SDL2 reading RGB values of a pixel from psdl_surface

How can I read the RGB values of a pixel at a specified x, y position on an sdl_surface with Pascal SDL2? I tried to find a solution to this already and found nothing that worked.
I tried
function get_pixel32(surface: psdl_surface; location: vector2): uInt32;
var pixels: ^uInt32;
begin
  if sdl_mustLock(surface) then sdl_lockSurface(surface);
  pixels^:= uInt32(surface^.pixels);
  get_pixel32:= pixels[(location.y * surface^.w) + location.x];
  sdl_unlockSurface(surface);
end;

begin
  pD:= get_pixel32(surface1, vector2.new(1, 1));
  sdl_getRGBA(pD, surface1^.format, @r, @g, @b, @a);
end.
but that returned random colors in a non-random pattern (black, random dark color, random bright color, random dark color, black, etc.) when I looped through 32 pixels along the X coordinate of the surface.

The values of local variables are random (garbage) until they are initialized. The pixels variable is a pointer, and the assignment pixels^:= uInt32(surface^.pixels); does not initialize it; it writes data to whatever random memory location pixels happens to point at.
The correct initialization is
pixels := PuInt32(surface^.pixels);
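For comparison, a minimal sketch of the same lookup through the SDL2 C API (C++, assuming a 32-bit-per-pixel surface). Indexing rows by pitch / 4 rather than by w is also safer, since a surface's rows can be padded:
#include <SDL.h>

// Sketch: read the RGBA components of the pixel at (x, y) from a surface
// whose format uses 4 bytes per pixel.
void read_pixel_rgba(SDL_Surface* surface, int x, int y,
                     Uint8& r, Uint8& g, Uint8& b, Uint8& a)
{
    if (SDL_MUSTLOCK(surface)) SDL_LockSurface(surface);

    const Uint32* pixels = static_cast<const Uint32*>(surface->pixels);
    // pitch is the row length in bytes, so divide by 4 for a Uint32 stride
    Uint32 p = pixels[y * (surface->pitch / 4) + x];

    if (SDL_MUSTLOCK(surface)) SDL_UnlockSurface(surface);

    SDL_GetRGBA(p, surface->format, &r, &g, &b, &a);
}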

Related

How do you convert a vector of 16 bit unsigned integers to a larger vector of 8 bit unsigned integers?

I have a function that needs to return a 16 bit unsigned int vector, but another function that calls it needs the output as an 8 bit unsigned int vector. For example, if I start out with:
std::vector<uint16_t> myVec(640*480);
How might I convert it to the format of:
std::vector<uint8_t> myVec2(640*480*4);
UPDATE (more information):
I am working with libfreenect and its getDepth() method. I have modified it to output a 16 bit unsigned integer vector so that I can retrieve the depth data in millimeters. However, I would also like to display the depth data. I am working with some example C++ code from the freenect installation, which uses GLUT and requires an 8 bit unsigned int vector to display the depth; however, I need the 16 bit data to retrieve the depth in millimeters and log it to a text file. Therefore, I was looking to retrieve the data as a 16 bit unsigned int vector in GLUT's draw function, and then convert it so that I can display it with the GLUT function that's already written.
As per your update, assuming the 8-bit unsigned int is going to be displayed as a gray scale image, what you need is akin to a Brightness Transfer Function. Basically, your output function is looking to map the data to the values 0-255, but you don't necessarily want those to correspond directly to millimeters. What if all of your data was from 0-3mm? Then your image would look almost completely black. What if it was all 300-400mm? Then it'd be completely white because it was clipped to 255.
A rudimentary way to do it would be to find the minimum and maximum values, and do this:
double scale = 255.0 / (double)(maxVal - minVal);
for( int i = 0; i < std::min(myVec.size(), myVec2.size()); ++i )
{
    myVec2.at(i) = (unsigned int)((double)(myVec.at(i)-minVal) * scale);
}
Depending on the distribution of your data, you might need to do something a little more complex to get the most out of your dynamic range.
Edit: This assumes your glut function is creating an image; if it is using the 8-bit value as an input to a graph then you can disregard this.
Edit 2: An update after your other update. If you want to fill a 640x480x4 vector, you are clearly building an image. You need to do what I outlined above, but the 4 components it is looking for are Red, Green, Blue, and Alpha. The Alpha channel needs to be 255 at all times (it controls how transparent the pixel is, and you don't want it to be transparent). For the other 3 channels, use the scaled value you got from the function above: if you set all 3 channels (red, green, and blue) to the same value, the pixel will appear as grayscale. For example, if my data ranged from 0-25mm, then for a pixel whose value is 10mm I would compute 255/(25-0) * 10 = 102, and the pixel would be (102, 102, 102, 255).
Edit 3: Adding wikipedia link about Brightness Transfer Functions - https://en.wikipedia.org/wiki/Color_mapping
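Putting Edit 2 together, a minimal sketch of the depth-to-grayscale RGBA fill (minVal and maxVal are assumed to have been precomputed from the 16-bit data):
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: map 16-bit depth values onto an 8-bit grayscale RGBA buffer.
void depthToRgba(const std::vector<uint16_t>& depth,
                 std::vector<uint8_t>& rgba,
                 uint16_t minVal, uint16_t maxVal)
{
    rgba.resize(depth.size() * 4);
    // guard against a flat data set so we never divide by zero
    double scale = 255.0 / std::max(1, maxVal - minVal);
    for (std::size_t i = 0; i < depth.size(); ++i)
    {
        uint8_t v = static_cast<uint8_t>((depth[i] - minVal) * scale);
        rgba[i * 4 + 0] = v;    // red
        rgba[i * 4 + 1] = v;    // green
        rgba[i * 4 + 2] = v;    // blue
        rgba[i * 4 + 3] = 255;  // alpha stays fully opaque
    }
}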
How might I convert it to the format of:
std::vector<uint8_t> myVec2;
such that myVec2.size() will be twice as big as myVec.size()?
myVec2.reserve(myVec.size() * 2);
for (auto it = begin(myVec); it != end(myVec); ++it)
{
    uint8_t val = static_cast<uint8_t>(*it);  // isolate the low 8 bits
    myVec2.push_back(val);
    val = static_cast<uint8_t>((*it) >> 8);   // isolate the upper 8 bits
    myVec2.push_back(val);
}
Or you can change the order of the push_back()s if it matters which byte comes first (the upper or the lower).
Straightforward way:
std::vector<std::uint8_t> myVec2(myVec.size() * 2);
std::memcpy(myVec2.data(), myVec.data(), myVec.size() * sizeof(std::uint16_t));
or, with the standard library, by copying the underlying bytes:
std::copy(reinterpret_cast<const std::uint8_t*>(myVec.data()),
          reinterpret_cast<const std::uint8_t*>(myVec.data()) + myVec2.size(),
          begin(myVec2));

How to evade color stripes

Here is a conversion from 32-bit float per channel to unsigned byte per channel color normalization, to save some PCI Express bandwidth for other things. Sometimes there can be stripes of color, and they look unnatural.
How can I avoid this, especially on the edges of the spheres?
Float color channels:
Unsigned byte channels:
Here, the yellow edge on the blue sphere and the blue edge on the red one should not exist.
The normalization I used (from the OpenCL kernel):
// multiplying with r doesnt help as picture color gets too bright and reddish.
float r=rsqrt(pixel0.x*pixel0.x+pixel0.y*pixel0.y+pixel0.z*pixel0.z+0.001f);
unsigned char rgb0=(unsigned char)(pixel0.x*255.0);
unsigned char rgb1=(unsigned char)(pixel0.y*255.0);
unsigned char rgb2=(unsigned char)(pixel0.z*255.0);
rgba_byte[i*4+0]=rgb0>255?255:rgb0;
rgba_byte[i*4+1]=rgb1>255?255:rgb1;
rgba_byte[i*4+2]=rgb2>255?255:rgb2;
rgba_byte[i*4+3]=255;
Binding to buffer:
GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, id);
GL11.glColorPointer(4, GL11.GL_UNSIGNED_BYTE, 4, 0);
I am using LWJGL (GLFW context) in a Java environment.
As Andon M. said, I clamped before casting (I couldn't see it; I badly needed sleep) and that solved it.
Color quality is not great, by the way, but using a smaller color buffer helped performance.
Your original data set contains floating-point values outside the normalized [0.0, 1.0] range, which after multiplying by 255.0 and casting to unsigned char produces overflow. The false coloring you experienced occurs in areas of the scene that are exceptionally bright in one or more color components.
It seems you knew to expect this overflow when you wrote rgb0>255?255:rgb0, but that logic will not work because when an unsigned char overflows it wraps around to 0 instead of a number larger than 255.
The minimal solution to this would be to clamp the floating-point colors into the range [0.0, 1.0]
before converting to fixed-point 0.8 (8-bit unsigned normalized) color, to avoid overflow.
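A minimal sketch of that clamp-then-cast step, written as C++ for illustration (inside the OpenCL kernel, the built-in clamp() function does the same job):
#include <algorithm>  // std::clamp (C++17)

// Sketch: clamp each normalized float channel to [0, 1] before the cast,
// so overly bright pixels saturate at 255 instead of wrapping around.
inline unsigned char to_byte(float channel)
{
    return static_cast<unsigned char>(std::clamp(channel, 0.0f, 1.0f) * 255.0f);
}

// rgba_byte[i*4+0] = to_byte(pixel0.x);
// rgba_byte[i*4+1] = to_byte(pixel0.y);
// rgba_byte[i*4+2] = to_byte(pixel0.z);
// rgba_byte[i*4+3] = 255;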
However, if this is a frequent problem, you may be better off implementing an HDR to LDR post-process. You would identify the brightest pixel in some region (or all) of your scene and then normalize all of the colors into that range. You were sort of implementing this to begin with (with r = sqrt (...)), but it was only using the magnitude of the current pixel to normalize color.

How to calculate the optimal glyph bounds in a character map for OpenGL text rendering

To render text with OpenGL I take the textured-quad approach: I draw a quad for every character that needs to be represented. I store all the character texture information in a single texture, and use glScalef and glTranslatef on the texture matrix to select the correct character in the texture. At first I only put a few characters in the image used to create the texture, and it worked well.
Since then I needed to render more characters, like lower case letters. I tried adding more characters, but now my text ends up unaligned and smaller.
Is there a proper way to create character maps, or is my approach all together wrong?
Note: I am using a mono-style font so font dimensions should not be the issue, however I would like to add support for fonts with non-uniform size characters as well.
EDIT: I'm using a vertex buffer for drawing rather than immediate mode.
EDIT 2: The texture containing the character map has 9 rows, each with 11 characters. Since the characters are the same size, I use glScalef on the texture matrix to 1/9 the width of the texture, and 1/11 the height. The VBO defines the quad (0,0),(1,0),(0,1),(1,1) and tex coords (0,0),(1,0),(0,1),(1,1). The misalignment seems to be due to my transformations not fitting each glyph exactly. How are the optimal bounds for each glyph calculated?
In hopes that this may be useful to others. The optimal glyph bounds can be calculated by first normalizing the pixel offsets of each letter so that they are numbers within the range of 0 and 1. The widths and heights can also be normalized to determine the correct bounding box. If the widths and heights are uniform, like in mono fonts, static width and height values may be used for computing the glyph bounds.
Saving an array of pixel position values for each glyph would be tedious to calculate by hand, so it is better to start the first glyph at the first pixel of the character map and keep no spacing in between each letter. This makes calculating the bottom-left UV coordinates easy with for loops:
void GetUVs(Vector2* us, Vector2* vs, float charWidth, float charHeight, int cols, int rows)
{
    for (int x = 0; x < cols; x++)
    {
        for (int y = 0; y < rows; y++)
        {
            int index = x + cols * y;
            us[index].x = x * charWidth;
            vs[index].y = y * charHeight;
        }
    }
}
The rest of the bounds could be calculated by adding the width, the height, and the width and height respectively.
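For illustration, a small sketch (assuming the same Vector2 type from the snippet above, with public x and y members) that expands a glyph's bottom-left UV into all four corners of its quad:
// Sketch: given a glyph's bottom-left UV (u, v) and the normalized glyph
// size, fill in the four corners of its quad in the character map.
void GetGlyphBounds(float u, float v, float charWidth, float charHeight,
                    Vector2 corners[4])
{
    corners[0].x = u;             corners[0].y = v;              // bottom left
    corners[1].x = u + charWidth; corners[1].y = v;              // add the width
    corners[2].x = u;             corners[2].y = v + charHeight; // add the height
    corners[3].x = u + charWidth; corners[3].y = v + charHeight; // add both
}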

How can I create a 3d texture with negative values and read it from a shader

I have written a volume rendering program that turns some 2d images into a 3d volume that can be rotated around by a user. I need to calculate a normal for each point in the 3d texture (for lighting) by taking the gradient in each direction around the point.
Calculating the normal requires six extra texture accesses within the fragment shader. The program is much faster without these extra texture accesses, so I am trying to precompute the gradients for each direction (x,y,z) in bytes and store them in the GBA channels of the original texture. My bytes seem to contain the right values when I test on the CPU, but when I get to the shader it comes out looking wrong. It's hard to tell why it fails from the shader; I think it is because some of the gradient values are negative. However, when I specify the texture type as GL_BYTE (as opposed to GL_UNSIGNED_BYTE) it is still wrong, and that screws up how the original texture should look. I can't tell exactly what's going wrong just by rendering the data as colors. What is the right way to put negative values into a texture? How can I know that values are negative when I read from it in the fragment shader?
The following code shows how I run the operation to compute the gradients from a byte array (byte[] all) and then turn it into a byte buffer (ByteBuffer bb) that is read in as a 3D texture. The function toLoc(x,y,z,w,h,l) simply returns (x + w*(y + z*h)) * 4, converting 3D subscripts to a 1D index. The image is grayscale, so I discard gba and only use the r channel to hold the original value. The remaining channels (gba) store the gradient.
int pixelDiffxy=5;
int pixelDiffz=1;
int count=0;
Float r=0f;
byte t=r.byteValue();
for (int i = 0; i < w; i++) {
    for (int j = 0; j < h; j++) {
        for (int k = 0; k < l; k++) {
            count += 4;
            if (i < pixelDiffxy || i >= w - pixelDiffxy || j < pixelDiffxy || j >= h - pixelDiffxy || k < pixelDiffz || k >= l - pixelDiffz) {
                // set these all to zero since they are out of bounds
                all[toLoc(i,j,k,w,h,l)+1] = t; // green = 0
                all[toLoc(i,j,k,w,h,l)+2] = t; // blue = 0
                all[toLoc(i,j,k,w,h,l)+3] = t; // alpha = 0
            }
            else {
                int ri = (int) all[toLoc(i,j,k,w,h,l)+0] & 0xff;
                // find the values on the sides of this pixel in each direction (use red channel)
                int xgrad1 = (all[toLoc(i-pixelDiffxy,j,k,w,h,l)]) & 0xff;
                int xgrad2 = (all[toLoc(i+pixelDiffxy,j,k,w,h,l)]) & 0xff;
                int ygrad1 = (all[toLoc(i,j-pixelDiffxy,k,w,h,l)]) & 0xff;
                int ygrad2 = (all[toLoc(i,j+pixelDiffxy,k,w,h,l)]) & 0xff;
                int zgrad1 = (all[toLoc(i,j,k-pixelDiffz,w,h,l)]) & 0xff;
                int zgrad2 = (all[toLoc(i,j,k+pixelDiffz,w,h,l)]) & 0xff;
                // find the difference between the values on each side and divide by the distance between them
                int xgrad = (xgrad1 - xgrad2) / (2 * pixelDiffxy);
                int ygrad = (ygrad1 - ygrad2) / (2 * pixelDiffxy);
                int zgrad = (zgrad1 - zgrad2) / (2 * pixelDiffz);
                Vec3f grad = new Vec3f(xgrad, ygrad, zgrad);
                Integer xg = (int) (grad.x);
                Integer yg = (int) (grad.y);
                Integer zg = (int) (grad.z);
                //System.out.println("gs are: "+xg+", "+yg+", "+zg);
                byte gby = (byte) (xg.byteValue()); // green channel
                byte bby = (byte) (yg.byteValue()); // blue channel
                byte aby = (byte) (zg.byteValue()); // alpha channel
                //System.out.println("gba is: "+(int)gby+", "+(int)bby+", "+(int)aby);
                all[toLoc(i,j,k,w,h,l)+1] = gby; // green
                all[toLoc(i,j,k,w,h,l)+2] = bby; // blue
                all[toLoc(i,j,k,w,h,l)+3] = aby; // alpha
            }
        }
    }
}
ByteBuffer bb=ByteBuffer.wrap(all);
final GL gl = drawable.getGL();
final GL2 gl2 = gl.getGL2();
final int[] bindLocation = new int[1];
gl.glGenTextures(1, bindLocation, 0);
gl2.glBindTexture(GL2.GL_TEXTURE_3D, bindLocation[0]);
gl2.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);//-byte alignment
gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP);
gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP);
gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL2.GL_TEXTURE_WRAP_R, GL2.GL_CLAMP);
gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl2.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA,
w, h, l, 0,
GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb );//GL_UNSIGNED_BYTE
Is there a better way to get a large array of signed data into the shader?
gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA,
w, h, l, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb );
Well, there are two ways to go about doing this, depending on how much work you want to do in the shader vs. what OpenGL version you want to limit things to.
The version that requires more shader work also requires a bit more out of your code. See, what you want to do is have your shader take unsigned bytes, then reinterpret them as signed bytes.
The way that this would typically be done is to pass unsigned normalized bytes (as you're doing), which produces floating-point values on the [0, 1] range, then simply expand that range by multiplying by 2 and subtracting 1, yielding numbers on the [-1, 1] range. This means that your uploading code needs to take its [-128, 127] signed bytes and convert them into [0, 255] unsigned bytes by adding 128 to them.
I have no idea how to do this in Java, which does not appear to have an unsigned byte type at all. You can't just pass a 2's complement byte and expect it to work in the shader; that's not going to happen. The byte value -128 would map to the floating-point value 1, which isn't helpful.
If you can manage to convert the data properly as I described above, then your shader access would have to unpack from the [0, 1] range to the [-1, 1] range.
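A minimal sketch of that packing step, shown in C++ for clarity since Java has no unsigned byte type (in Java, (byte)(value + 128) produces the same bit pattern, which is all the GL_UNSIGNED_BYTE upload cares about):
#include <cstdint>

// Sketch: bias a signed gradient component from [-128, 127] into the
// [0, 255] range expected by a GL_UNSIGNED_BYTE upload.
inline uint8_t packSignedGradient(int8_t gradient)
{
    return static_cast<uint8_t>(gradient + 128);
}

// In the fragment shader the sampled channel arrives on [0, 1], so it is
// expanded back to [-1, 1] with:  float g = texel.g * 2.0 - 1.0;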
If you have access to GL 3.x, then you can do this quite easily, with no shader changes:
gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA8_SNORM,
w, h, l, 0, GL.GL_RGBA, GL.GL_BYTE, bb );
The _SNORM in the image format means that it is a signed, normalized format. So your bytes on the range [-128, 127] will be mapped to floats on the range [-1, 1]. Exactly what you want.

opengl 3d point cloud render from x,y,z 2d array

I need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y, z of each point. I think I am on the right track with
glBegin(GL_POINTS);
for(int i = 0; i < 1024; i++)
{
    for(int j = 0; j < 512; j++)
    {
        glVertex3f(x[i][j], y[i][j], z[i][j]);
    }
}
glEnd();
Now this loads all the vertices into the buffer (I think), but from here I am not sure how to proceed. Or maybe I am completely wrong here.
Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display.
The point drawing code is fine as is.
(Long term, you may run into performance problems if you have to draw these points repeatedly, say in response to the user rotating the view. Rearranging the data from 3 arrays into 1 with x, y, z values next to each other would allow you to use faster vertex arrays/VBOs. But for now, if it ain't broke, don't fix it.)
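If you do get to that point, here is a minimal sketch of the interleaving, kept in the same legacy client-array style as the code above (per-point colors would go into a parallel glColorPointer array the same way):
#include <GL/gl.h>
#include <vector>

// Sketch: pack the three 1024x512 coordinate arrays into one interleaved
// x,y,z array so the whole cloud is drawn with a single call.
void drawCloud(float x[1024][512], float y[1024][512], float z[1024][512])
{
    std::vector<GLfloat> verts;
    verts.reserve(1024 * 512 * 3);
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 512; j++)
        {
            verts.push_back(x[i][j]);
            verts.push_back(y[i][j]);
            verts.push_back(z[i][j]);
        }

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts.data());
    glDrawArrays(GL_POINTS, 0, 1024 * 512);
    glDisableClientState(GL_VERTEX_ARRAY);
}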
To color the points, you need glColor before each glVertex. It has to be before, not after, because in OpenGL glVertex loosely means that's a complete vertex, draw it. You've described the data as a point cloud, so don't change glBegin to GL_POLYGON, leave it as GL_POINTS.
OK, you have another array with one-byte color index values. You could start by just using that as a greyscale level with
glColor3ub(color[i][j], color[i][j], color[i][j]);
which should show the points varying from black to white.
To get the true color for each point, you need a color lookup table - I assume there's either one that comes with the data, or one you're creating. It should be declared something like
static GLfloat ctab[256][3] = {
1.0, 0.75, 0.33, /* Color for index #0 */
...
};
and used before the glVertex with
glColor3fv(ctab[color[i][j]]);
I've used floating point colors because that's what OpenGL uses internally these days. If you prefer 0..255 values for the colors, change the array to GLubyte and the glColor3fv to glColor3ub.
Hope this helps.