OpenGL: Textures are not aligned correctly

I have a problem with OpenGL (in LWJGL) and texture mapping. I'm loading an ARGB image using:
public static ByteBuffer toByteArray(BufferedImage image) {
    int[] pixels = new int[image.getWidth() * image.getHeight()];
    image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());
    ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            int pixel = pixels[y * image.getWidth() + x];
            buffer.put((byte) ((pixel >> 16) & 0xFF)); // red component
            buffer.put((byte) ((pixel >> 8) & 0xFF));  // green component
            buffer.put((byte) (pixel & 0xFF));         // blue component
            buffer.put((byte) ((pixel >> 24) & 0xFF)); // alpha component
        }
    }
    buffer.flip();
    return buffer;
}
I'm uploading the textures using:
int[] textureIds = new int[textures.size()];
GL11.glGenTextures(textureIds);
int i = 0;
for (Texture texture : textures.values()) {
    int textureId = textureIds[i++];
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 4);
    BufferedImage data = texture.load();
    ByteBuffer bytes = Texture.toByteArray(data);
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, data.getWidth(), data.getHeight(), 0, GL11.GL_RGBA,
            GL11.GL_UNSIGNED_BYTE, bytes);
    // GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
    texture.setTextureId(textureId);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
}
Unfortunately I'm ending up with a distorted result, whereas in Blender the model looks correct (screenshots and the texture image were linked in the original post). Everything is crooked and somewhat follows diagonals. The model was made with Blender and therefore has proper texture coordinates. I also managed to load the model plus texture in another engine, but not in mine. Does anyone have an idea how to fix this?

Related

DirectX: Drawing a bitmap image scaled up in the viewport causes low quality?

I'm using DirectX to draw images from RGB data in a buffer. The following is a summary of the code:
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;             // write access by CPU, read access by GPU
bd.ByteWidth = sizeOfOurVertices;           // size is the VERTEX struct * pW*pH
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;    // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer);    // create the buffer

// create sampler for texture
D3D11_SAMPLER_DESC desc;
desc.Filter = D3D11_FILTER_ANISOTROPIC;
desc.MaxAnisotropy = 16;
ID3D11SamplerState *ppSamplerState = NULL;
dev->CreateSamplerState(&desc, &ppSamplerState);
devcon->PSSetSamplers(0, 1, &ppSamplerState);

// create list of vertices from RGB data buffer
pW = bitmapSource->PixelWidth;
pH = bitmapSource->PixelHeight;
OurVertices = new VERTEX[pW*pH];
vIndex = 0;
unsigned char* curP = rgbPixelsBuff;
for (y = 0; y < pH; y++)
{
    for (x = 0; x < pW; x++)
    {
        OurVertices[vIndex].Color.b = *curP++;
        OurVertices[vIndex].Color.g = *curP++;
        OurVertices[vIndex].Color.r = *curP++;
        OurVertices[vIndex].Color.a = *curP++;
        OurVertices[vIndex].X = x;
        OurVertices[vIndex].Y = y;
        OurVertices[vIndex].Z = 0.0f;
        vIndex++;
    }
}
sizeOfOurVertices = sizeof(VERTEX) * pW*pH;

// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeOfOurVertices);                // copy the data
devcon->Unmap(pVBuffer, NULL);                                   // unmap the buffer

// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));

// select which vertex buffer to display
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);

// select which primitive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);

// draw the vertex buffer to the back buffer
devcon->Draw(pW*pH, 0);

// switch the back buffer and the front buffer
swapchain->Present(0, 0);
When the viewport is smaller than or equal to the image, everything is OK. But when the viewport is larger than the image, the image quality is very bad.
I've searched and tried desc.Filter = D3D11_FILTER_ANISOTROPIC; as in the code above (I've also tried D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR and D3D11_FILTER_MIN_LINEAR_MAG_MIP_POINT), but the result is no better. (Screenshots of the result were attached to the original post.)
Can someone tell me how to fix this? Many thanks!
You are drawing each pixel as a point using DirectX. It is normal that when the screen gets bigger, your points move apart and the quality drops. You should draw a textured quad instead, using a texture that you fill with your RGB data and sample in a pixel shader.
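For illustration, here is a minimal sketch of the same idea in LWJGL/OpenGL terms (the API used by the other questions on this page), since the principle is identical in D3D11: upload the pixel buffer to a texture once, then draw a single quad and let the sampler's linear filtering handle the scaling. The names pW, pH, and pixels (a direct RGBA ByteBuffer) are placeholders, and legacy immediate-mode calls are used only to keep the sketch short:

// Upload the raw pixels as a texture once (linear filtering interpolates on scale-up).
int tex = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, pW, pH, 0,
        GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);

// Each frame, draw one quad covering the viewport instead of one point per pixel.
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(-1, -1);
GL11.glTexCoord2f(1, 0); GL11.glVertex2f( 1, -1);
GL11.glTexCoord2f(1, 1); GL11.glVertex2f( 1,  1);
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(-1,  1);
GL11.glEnd();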

LWJGL - Shader for a 'manually loaded' NPOT texture

I am trying to display an NPOT (non-power-of-two) texture in my LWJGL window. The result (screenshot omitted from the original post): the texture is repeated 4 times, upside down, and distorted by horizontal lines. Obviously this is not the intended result. Here is what I feel to be the relevant source code:
Utility method that loads a texture:
// load the image
BufferedImage image = null;
try {
    image = ImageIO.read(new File(path));
}
// exit on error
catch (IOException exception) {
    Utility.errorExit(exception);
}
// add the image's data to a bytebuffer
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for (int x = 0; x < image.getWidth(); x++) {
    for (int y = 0; y < image.getHeight(); y++) {
        int pixel = image.getRGB(x, y);
        buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
        buffer.put((byte) ((pixel >> 8) & 0xFF));  // green
        buffer.put((byte) (pixel & 0xFF));         // blue
        buffer.put((byte) 0xFF);                   // alpha
    }
}
// flip the buffer
buffer.flip();
// generate and bind the texture
int handle = GL11.glGenTextures();
GL11.glBindTexture(GL31.GL_TEXTURE_RECTANGLE, handle);
// set up wrap mode
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
// set up texture scaling filtering
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL31.GL_TEXTURE_RECTANGLE, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
// set the texture data
GL11.glTexImage2D(GL31.GL_TEXTURE_RECTANGLE, 0, GL11.GL_RGBA8, image.getWidth(), image.getHeight(), 0,
        GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, buffer);
// return the handle
return handle;
Utility method to bind the texture to the sampler:
// set the sampler's texture unit
GL20.glUniform1i(samplerLocation, GL13.GL_TEXTURE0 + textureUnit);
// bind the texture to the texture unit
GL13.glActiveTexture(GL13.GL_TEXTURE0 + textureUnit);
GL11.glBindTexture(GL31.GL_TEXTURE_RECTANGLE, textureID);
Fragment shader:
#version 150
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect sampler;

in vec2 vTexture;
out vec4 color;

void main()
{
    color = texture2DRect(sampler, vTexture);
}
The last piece of information that I feel would be relevant is what my texture coordinates are:
Bottom Left Point: (0, 0)
Top Left Point: (0, 600)
Top Right Point: (800, 600)
Bottom Right Point: (800, 0)
I am guessing I am doing multiple things wrong. Post in comments section if you feel there is more information that I could provide. Thanks!
P.S. The reason I say the texture is manually loaded is because I am used to using Slick-Util for loading textures, but I was not able to use it for this particular texture, as I hear Slick-Util does not support NPOT textures.
You're pushing texels to the buffer in the wrong order.
ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
for (int x = 0; x < image.getWidth(); x++) {
    for (int y = 0; y < image.getHeight(); y++) {
        int pixel = image.getRGB(x, y);
        buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
        buffer.put((byte) ((pixel >> 8) & 0xFF));  // green
        buffer.put((byte) (pixel & 0xFF));         // blue
        buffer.put((byte) 0xFF);                   // alpha
    }
}
You are iterating over the height in the inner loop. glTexImage2D expects the data to be scanline-based, not column-based, so swap your x and y loops.
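A minimal sketch of the swapped loops (only the nesting order changes; texels are now written one scanline at a time, which is what glTexImage2D expects):

for (int y = 0; y < image.getHeight(); y++) {      // rows (scanlines) in the outer loop
    for (int x = 0; x < image.getWidth(); x++) {   // columns in the inner loop
        int pixel = image.getRGB(x, y);
        buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
        buffer.put((byte) ((pixel >> 8) & 0xFF));  // green
        buffer.put((byte) (pixel & 0xFF));         // blue
        buffer.put((byte) 0xFF);                   // alpha
    }
}
buffer.flip();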

FBO textures get flipped/rotated

I am capturing a couple of images through FBOs. I then reuse these images, adding something to them (using FBOs and shaders). Now, for some reason, the images get rotated and I have no idea where it happens.
Below some of the code the bug may be connected with. I can supply more code on request.
I save the images like this:
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
int bpp = 4; // assuming a 32-bit display with a byte each for red, green, blue, and alpha
ByteBuffer buffer = BufferUtils.createByteBuffer(SAVE_WIDTH * SAVE_HEIGHT * bpp);
glReadPixels(0, 0, SAVE_WIDTH, SAVE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

File file = new File("picture" + k + ".png"); // the file to save to
String format = "png"; // example: "PNG" or "JPG"
BufferedImage image = new BufferedImage(SAVE_WIDTH, SAVE_HEIGHT, BufferedImage.TYPE_INT_ARGB);

for (int x = 0; x < SAVE_WIDTH; x++)
    for (int y = 0; y < SAVE_HEIGHT; y++)
    {
        int i = (x + (SAVE_WIDTH * y)) * bpp;
        int r = buffer.get(i) & 0xFF;
        int g = buffer.get(i + 1) & 0xFF;
        int b = buffer.get(i + 2) & 0xFF;
        int a = buffer.get(i + 3) & 0xFF;
        image.setRGB(x, SAVE_HEIGHT - (y + 1), (a << 24) | (r << 16) | (g << 8) | b);
    }

try {
    ImageIO.write(image, format, file);
} catch (IOException e) {
    e.printStackTrace();
}
And I load them like this:
ByteBuffer buf = null;
File file = new File(filename);
if (file.exists()) {
    try {
        BufferedImage image = ImageIO.read(file);
        buf = Util.getImageDataFromImage(image);
    } catch (IOException ex) {
        Logger.getLogger(SkyBox.class.getName()).log(Level.SEVERE, null, ex);
    }
} else {
    int length = SAVE_WIDTH * SAVE_HEIGHT * 4;
    buf = ByteBuffer.allocateDirect(length);
    for (int i = 0; i < length; i++)
        buf.put((byte) 0xFF);
    buf.rewind();
}

// Create a new texture object in memory and bind it
glBindTexture(GL_TEXTURE_2D, pictureTextureId);
// All RGB bytes are aligned to each other and each component is 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Upload the texture data and generate mipmaps (for scaling)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SAVE_WIDTH, SAVE_HEIGHT, 0,
        GL_RGBA, GL_UNSIGNED_BYTE, buf);
// Set up what to do when the texture has to be scaled
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
getImageDataFromImage():
WritableRaster wr = bufferedImage.getRaster();
DataBuffer db = wr.getDataBuffer();
DataBufferByte dbb = (DataBufferByte) db;
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(dbb.getData().length);
byte[] bytes = dbb.getData();
for (int i = 0; i < bytes.length; i += 4) {
    byteBuffer.put(bytes[i + 3]);
    byteBuffer.put(bytes[i + 2]);
    byteBuffer.put(bytes[i + 1]);
    byteBuffer.put(bytes[i]);
}
byteBuffer.flip();
return byteBuffer;
Rotated, or flipped vertically? If they're flipped, that's because OpenGL and image file formats don't necessarily agree on the origin of the coordinate system. With OpenGL and the usual projection setups, the origin is in the lower left. Most image file formats and I/O libraries assume the origin is in the upper left.
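If that is the case here, a minimal sketch of one common fix, assuming the 4-byte ABGR layout that getImageDataFromImage() above already relies on, is to flip the row order while copying, so that the buffer's first scanline is the image's bottom row:

// Hypothetical variant of getImageDataFromImage() that also flips rows vertically,
// so the buffer's first scanline is the image's bottom row (OpenGL's convention).
// Assumes a TYPE_4BYTE_ABGR BufferedImage, i.e. 4 bytes per pixel in ABGR order.
byte[] bytes = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
int width = bufferedImage.getWidth();
int height = bufferedImage.getHeight();
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bytes.length);
for (int y = height - 1; y >= 0; y--) {          // walk rows bottom-up
    for (int x = 0; x < width; x++) {
        int i = (y * width + x) * 4;             // ABGR byte index of this pixel
        byteBuffer.put(bytes[i + 3]);            // R
        byteBuffer.put(bytes[i + 2]);            // G
        byteBuffer.put(bytes[i + 1]);            // B
        byteBuffer.put(bytes[i]);                // A
    }
}
byteBuffer.flip();
return byteBuffer;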

Possible to blit images with alpha mask onto transparent surface?

I am trying to do just that. I have an image containing the various tiles of an explosion in my game. I am trying to preprocess the explosion tiles into a composite image and then blit that onto the screen.
Here is the tile sheet with the alpha mask (image linked in the original post).
Now, I want to blit these and have them maintain their alpha transparency onto a surface which I can then render.
Here is my code:
SDL_Surface* SpriteManager::buildExplosion(int id, SDL_Surface* image, int size)
{
    // Create the surface that will hold the explosion image
    SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32, size * 32, 32, 0, 0, 0, 255);

    // Our source and destination rectangles
    SDL_Rect srcrect;
    SDL_Rect dstrect;

    int parentX = sprites[id].x;
    int parentY = sprites[id].y;
    int middle = size / 2;

    // Create the first image
    srcrect.x = sprites[id].imgBlockX * 32; // default for now
    srcrect.y = sprites[id].imgBlockY * 32; // default for now
    srcrect.w = 32;
    srcrect.h = 32;

    // Get the location it should be applied to
    dstrect.x = middle * 32;
    dstrect.y = middle * 32;
    dstrect.w = 32;
    dstrect.h = 32;

    // Apply the texture
    SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    errorLog.writeError("Applying surface from x: %i y: %i to x: %i y: %i", srcrect.x, srcrect.y, dstrect.x, dstrect.y);

    // Iterate through each child explosion
    for (int i = 0; i < sprites[id].children.size(); i++)
    {
        // Get the texture source
        srcrect.x = 0; // default for now
        srcrect.y = 0; // default for now
        srcrect.w = 32;
        srcrect.h = 32;

        // Get the location it should be applied to
        dstrect.x = sprites[id].children[i].x - parentX * 32;
        dstrect.y = sprites[id].children[i].y - parentY * 32;
        dstrect.w = 32;
        dstrect.h = 32;

        // Apply the texture
        SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    }

    //return img;
    return explosion;
}
I suspect it has to do with this line but I am really at a loss:
SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32, size * 32, 32, 0, 0, 0, 255);
The SDL_Surface called image is the image I linked above, just to make that clear. If anyone sees the error of my ways, many thanks!
My problem: the code above blits either a completely invisible surface or a black surface with the images on it.
I guess I am curious whether it is possible to do what I described above, and whether I can modify this code to make it work.

OpenGL: Pixels read from a framebuffer for picking get rounded up to 255 (0xFF)

I am trying to implement object picking by packing the VAO id into RGBA, rendering with it to an off-screen buffer, and then reading that buffer back through a buffer object.
I render to the off-screen buffer by creating a texture and a depth renderbuffer object, then attaching both to a framebuffer object, like this:
/* create a framebuffer object */
glGenFramebuffers(1, &fbo);
/* bind the frame buffer */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* generate a texture id */
glGenTextures(1, &tex);
/* bind the texture */
glBindTexture(GL_TEXTURE_2D, tex);
/* create the texture in the GPU */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WINDOW_SIZE_X, WINDOW_SIZE_Y,
        0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
/* set texture parameters */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
/* unbind the texture */
glBindTexture(GL_TEXTURE_2D, 0);

/* create a renderbuffer object for the depth buffer */
glGenRenderbuffers(1, &rbo);
/* bind the render buffer */
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
/* create the render buffer in the GPU */
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
        WINDOW_SIZE_X, WINDOW_SIZE_Y);
/* unbind the render buffer */
glBindRenderbuffer(GL_RENDERBUFFER, 0);

/* attach the texture and the render buffer to the frame buffer */
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_RENDERBUFFER, rbo);

/* check the frame buffer, handle an error: frame buffer incomplete */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    std::cout << "Framebuffer status not complete" << '\n';
}

/* return to the default frame buffer */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
I also generate a pixel buffer object to read from the FrameBuffer after rendering is complete:
/* generate the pixel buffer object */
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, WINDOW_SIZE_X * WINDOW_SIZE_Y * 4, nullptr, GL_STREAM_READ);
/* prime the buffer with a read to avoid weird behaviour on the first frame */
glReadPixels(0, 0, WINDOW_SIZE_X, WINDOW_SIZE_Y, GL_BGRA, GL_UNSIGNED_BYTE, 0);
Then in my render loop I bind it and render to the off-screen framebuffer:
GLubyte red, green, blue, alpha;

/* bind the frame buffer */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* clear the frame buffer */
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* select the shader program */
glUseProgram(pickProgram);

/* set the object color */
/* alpha = house.vaoId & 0xFF;
   blue  = (house.vaoId >> 8) & 0xFF;
   green = (house.vaoId >> 16) & 0xFF;
   red   = (house.vaoId >> 24) & 0xFF; */
GLuint objectId = 5;
alpha = objectId & 0xFF;
blue  = (objectId >> 8) & 0xFF;
green = (objectId >> 16) & 0xFF;
red   = (objectId >> 24) & 0xFF;

// upload the packed RGBA values to the shader
glUniform4f(baseColorUniformLocation, red, green, blue, alpha);

// prepare to draw the object
pvm = projectionMatrix * viewMatrix * house.modelMatrix;
glUniformMatrix4fv(offScreenMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr(pvm));

/* draw the object */
glBindVertexArray(house.getVaoId());
glDrawRangeElements(GL_TRIANGLES, 0, 42, 42, GL_UNSIGNED_SHORT, NULL);
glBindVertexArray(0);

// check that our framebuffer is OK
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    std::cout << "Framebuffer Error" << '\n';
}

GLuint temp;
// get the object id from the read pixels
temp = get_object_id(); // <-- this function is explained below

/* return to the default frame buffer */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
The fragment shader that renders to the offscreen buffer is very simple:
#version 420

uniform vec4 BaseColor;

layout(location = 0) out vec4 fragColor;

void main()
{
    fragColor = BaseColor;
}
This is the function called during the off-screen rendering to extract the packed RGBA from the framebuffer:
GLuint Engine::get_object_id()
{
    static int frame_event = 0;
    GLuint object_id;
    int x, y;
    GLuint red, green, blue, alpha, pixel_index;
    //GLuint read_pbo, map_pbo;
    GLubyte* ptr;

    /* bind the pixel buffer */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo_a);
    /* start the read into the pixel buffer, then map it */
    // NOTE: 5th argument, BGRA or RGBA, doesn't make a difference, right?
    glReadPixels(0, 0, WINDOW_SIZE_X, WINDOW_SIZE_Y, GL_BGRA, GL_UNSIGNED_BYTE, 0);
    ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_WRITE);

    /* get the mouse coordinates */
    /* OpenGL has {0,0} at the bottom-left corner of the screen */
    glfwGetMousePos(&x, &y);
    y = WINDOW_SIZE_Y - y;

    object_id = -1;
    if (x >= 0 && x < WINDOW_SIZE_X && y >= 0 && y < WINDOW_SIZE_Y) {
        //////////////////////////////////////////
        // I have to admit I don't understand what he does here
        //////////////////////////////////////////
        pixel_index = (x + y * WINDOW_SIZE_X) * 4;
        blue = ptr[pixel_index];
        green = ptr[pixel_index + 1];
        red = ptr[pixel_index + 2];
        alpha = ptr[pixel_index + 3];
        object_id = alpha + (red << 24) + (green << 16) + (blue << 8);
    }
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    return object_id;
}
The problem is that this final bit of code, which is supposed to read the pixels from the off-screen framebuffer and give me the vao_id, has the following weird behavior: if ANY of the 4 bytes packed into the RGBA (sent via the shader) is anything other than 0, that byte comes out as 0xFF at the other end.
So if I send
00000000 00000000 00000000 00000001
I will get
00000000 00000000 00000000 11111111
Or if I send
00000000 00010000 00000000 00000000
I will get
00000000 11111111 00000000 00000000
...when I read the pixels with get_object_id().
If I bind the texture and render it to a quad in my normal framebuffer, the colors I pass to the off-screen rendering come out correct on the quad. But in the pixels read by get_object_id(), every non-zero byte sent in is rounded up to 255 (0xFF). So my guess is there's a problem with that final function.
The fragment shader outputs values in the [0, 1] range, which are remapped to [0, 255] when written to the framebuffer. glUniform4f takes floating-point values, so you need to send your ids as [0, 1] values instead of [0, 255] values; any component >= 1.0 saturates to 255 in the framebuffer, which is exactly the 0xFF you are seeing:
glUniform4f(baseColorUniformLocation, red / 255.0f, green / 255.0f, blue / 255.0f, alpha / 255.0f);
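For illustration, a short sketch of the full round trip under this scheme, in LWJGL syntax (baseColorUniformLocation and the mouse coordinates x, y are placeholders): pack the id into four normalized floats, then reassemble it from the bytes read back.

// Pack a 32-bit id into four [0, 1] floats for the vec4 uniform.
int objectId = 5;
float r = ((objectId >> 24) & 0xFF) / 255.0f;
float g = ((objectId >> 16) & 0xFF) / 255.0f;
float b = ((objectId >> 8) & 0xFF) / 255.0f;
float a = (objectId & 0xFF) / 255.0f;
GL20.glUniform4f(baseColorUniformLocation, r, g, b, a); // the shader writes these as bytes

// ... render, then read back the pixel under the cursor (RGBA byte order) ...
ByteBuffer pixel = BufferUtils.createByteBuffer(4);
GL11.glReadPixels(x, y, 1, 1, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixel);
int id = ((pixel.get(0) & 0xFF) << 24)  // red
       | ((pixel.get(1) & 0xFF) << 16)  // green
       | ((pixel.get(2) & 0xFF) << 8)   // blue
       |  (pixel.get(3) & 0xFF);        // alpha -> reassembles to 5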