SFML leaves part of texture blank if I create it from PixelPtr - c++

I have code that converts plain BGR data to an sf::Texture. "ifs" is an opened ifstream to a file that contains byte triplets of BGR colors (the header of the source file is omitted), and the width and height are 100% valid. In my example the image is 800x600.
struct h3pcx_color_bgr { uint8_t b, g, r; };
sf::Uint8* pixels = new sf::Uint8[width * height * 4];
h3pcx_color_bgr* fileData = new h3pcx_color_bgr[width * height];
ifs.read((char*)fileData, width * height * sizeof(h3pcx_color_bgr));
for (uint32_t i = 0; i < width * height; ++i) {
    pixels[i * 4]     = fileData[i].r;
    pixels[i * 4 + 1] = fileData[i].g;
    pixels[i * 4 + 2] = fileData[i].b;
    pixels[i * 4 + 3] = 255;
}
This code works fine; the problem comes afterwards. Once I draw my texture:
m_tex.update(pixels); //sf::Texture
m_sprite.setTexture(m_tex); //sf::Sprite
m_window->draw(m_sprite); // m_window is sf::RenderWindow
I get this annoying grey line, shown in the image below:
What I did:
Verified that pixels contains valid values
The code snippet below (index 700 * 595 is inside the "grey area") shows that both pixels and fileData contain valid data (not the grey color, which appears to be just uninitialized memory).
auto f = fileData[700 * 595]; // 32, 31, 38
auto r = pixels[700 * 595 * 4]; // 38
auto g = pixels[700 * 595 * 4 + 1]; // 31
auto b = pixels[700 * 595 * 4 + 2]; // 32
"Grey" color is 204, 204, 204.
Tried to use sf::Image
If we do something like this:
img.create(width, height, pixels); // img is sf::Image
img.setPixel(700, 595, sf::Color::Blue);
Then convert it to an sf::Texture and draw it. The result is the same image with the grey area, but pixel 700, 595 is blue!
If I get the color value from the "grey area":
auto clr = img.getPixel(700,600); //sf::Color(204,204,204)
So it looks like there is some hard limit (???) on the number of pixels (though I doubt it, since I've looked at the actual SFML code and did not find anything suspicious), or it's a stupid mistake of mine. I would be very grateful if someone could point out why this grey line appears.

In the code:
auto f = fileData[700 * 595];
You are accessing pixel (500, 520). To access pixel (700, 595) you have to use:
auto f = fileData[700 + 595 * 800]; // x + y * width
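A tiny usage sketch of that row-major indexing (the helper name is hypothetical, not from the original post), assuming width == 800 as in the question:
// Hypothetical helper: element index of pixel (x, y) in a row-major buffer.
inline uint32_t pixelIndex(uint32_t x, uint32_t y, uint32_t width) {
    return x + y * width;
}
// Usage with the buffers from the question:
auto f = fileData[pixelIndex(700, 595, 800)];   // one BGR triplet
auto r = pixels[pixelIndex(700, 595, 800) * 4]; // red byte of that RGBA pixel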
I would write this as a comment, but I lack the necessary reputation.

In case anyone is wondering: the file itself is simply wrong and contains this exact grey color at the end. The code is correct.

Related

How to access every pixel?

I have a PixelBuffer object of size (200 * 200 * 3), where each pixel has three consecutive spots for the RGB colors. How can I index the pixels if I am trying to implement the DDA line drawing algorithm? I have seen a lot of examples on the web that use PutPixel(x, y), but I'm not sure how I can access the pixels that way.
The pixels will be arranged row by row, with each pixel using 3 bytes. To address a point (x, y), you basically just need to multiply the y value by the size of a row (which is the width multiplied by 3), multiply the x value by the size of a pixel (3), and add the two to get the byte offset.
With a few constants for readability, the code for the function could look like this:
const int IMG_WIDTH = 200;
const int IMG_HEIGHT = 200;
const int BYTES_PER_PIXEL = 3;
const int BYTES_PER_ROW = IMG_WIDTH * BYTES_PER_PIXEL;
void PutPixel(uint8_t* pImgData, int x, int y, const uint8_t color[3])
{
    uint8_t* pPixel = pImgData + y * BYTES_PER_ROW + x * BYTES_PER_PIXEL;
    for (int iByte = 0; iByte < BYTES_PER_PIXEL; ++iByte)
    {
        pPixel[iByte] = color[iByte];
    }
}
Example of how this function could be used:
// Allocate image data.
uint8_t* pImgData = new uint8_t[IMG_HEIGHT * BYTES_PER_ROW];
// Initialize image data, unless you are planning to set all pixels.
...
// Set pixel (50, 30) to yellow.
uint8_t yellow[3] = {255, 255, 0};
PutPixel(pImgData, 50, 30, yellow);
Once you have your image built in memory, you can store the content in a pixel buffer object using glBufferData():
GLuint bufId = 0;
glGenBuffers(1, &bufId);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, IMG_HEIGHT * BYTES_PER_ROW,
             pImgData, GL_STATIC_DRAW);
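As a follow-up sketch (not part of the original answer, reusing the constants above), the filled pixel-unpack buffer can then serve as the source of a texture upload; while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, the data pointer passed to glTexImage2D is interpreted as a byte offset into that buffer:
// Upload the PBO contents into a texture; the last argument is a byte offset
// into the bound GL_PIXEL_UNPACK_BUFFER, not a client-memory pointer.
GLuint texId = 0;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 3-byte pixels, so don't assume 4-byte row alignment
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, IMG_WIDTH, IMG_HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, (const void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // unbind so later uploads read client memory again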

Bilinear re-sizing with C++ and vector of RGBA pixels

I am trying to re-size an image using the bilinear technique I found here, but I don't see anything but a black image.
So, first of all, I have my image decoded with LodePNG and the pixels go into a vector<unsigned char> variable. The documentation says they are stored as RGBARGBA, but when I tried to apply the image to an X11 window I realized they were stored as BGRABGRA. I don't know if it is the X11 API that changes the order or the LodePNG decoder. Anyway, before anything else, I convert the BGR to RGB:
// Here is where I have the pixels stored
vector<unsigned char> Image;
// Converting BGRA to RGBA, or vice-versa, I don't know, but it's how it is shown
// correctly on the window
unsigned char red, blue;
unsigned int i;
for(i = 0; i < Image.size(); i += 4)
{
    red = Image[i + 2];
    blue = Image[i];
    Image[i] = red;
    Image[i + 2] = blue;
}
So now I am trying to change the size of the image before applying it to the window. The new size would be the size of the window (stretching the image).
First I convert the RGBA values to packed int values, like this:
vector<int> IntImage;
for(unsigned i = 0; i < Image.size(); i += 4)
{
    IntImage.push_back(256*256*Image[i+2] + 256*Image[i+1] + Image[i]);
}
Now I have this function from the link I specified above, which is supposed to do the interpolation:
vector<int> resizeBilinear(vector<int> pixels, int w, int h, int w2, int h2) {
    vector<int> temp(w2 * h2);
    int a, b, c, d, x, y, index;
    float x_ratio = ((float)(w - 1)) / w2;
    float y_ratio = ((float)(h - 1)) / h2;
    float x_diff, y_diff, blue, red, green;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = (y * w + x);
            a = pixels[index];
            b = pixels[index + 1];
            c = pixels[index + w];
            d = pixels[index + w + 1];
            // blue element
            // Yb = Ab(1-w)(1-h) + Bb(w)(1-h) + Cb(h)(1-w) + Db(wh)
            blue = (a & 0xff) * (1 - x_diff) * (1 - y_diff) + (b & 0xff) * (x_diff) * (1 - y_diff) +
                   (c & 0xff) * (y_diff) * (1 - x_diff) + (d & 0xff) * (x_diff * y_diff);
            // green element
            // Yg = Ag(1-w)(1-h) + Bg(w)(1-h) + Cg(h)(1-w) + Dg(wh)
            green = ((a >> 8) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 8) & 0xff) * (x_diff) * (1 - y_diff) +
                    ((c >> 8) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 8) & 0xff) * (x_diff * y_diff);
            // red element
            // Yr = Ar(1-w)(1-h) + Br(w)(1-h) + Cr(h)(1-w) + Dr(wh)
            red = ((a >> 16) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 16) & 0xff) * (x_diff) * (1 - y_diff) +
                  ((c >> 16) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 16) & 0xff) * (x_diff * y_diff);
            temp.push_back(
                ((((int)red) << 16) & 0xff0000) |
                ((((int)green) << 8) & 0xff00) |
                ((int)blue) |
                0xff); // hardcode alpha
        }
    }
    return temp;
}
and I use it like this:
vector<int> NewImage = resizeBilinear(IntImage, image_width, image_height, window_width, window_height);
which is supposed to return the re-sized image as a vector of packed ints. Now I am converting back from int to RGBA:
Image.clear();
for(unsigned i = 0; i < NewImage.size(); i++)
{
    Image.push_back(NewImage[i] & 255);
    Image.push_back((NewImage[i] >> 8) & 255);
    Image.push_back((NewImage[i] >> 16) & 255);
    Image.push_back(0xff);
}
and what I get is a black window (the default background color), so I don't know what I am missing. If I comment out the call that produces the new image and just convert IntImage straight back to RGBA, I get the correct values, so I don't know whether the int/RGBA round-trip is what's messed up. I'm just lost now. I know this can be optimized/simplified, but for now I just want to make it work.
The array access in your code is incorrect:
vector<int> temp(w2 * h2); // initializes the array to contain zeros
...
temp.push_back(...); // appends to the array, leaving the zeros unchanged
You should overwrite instead of appending; for that, calculate the array position:
temp[i * w2 + j] = ...;
Alternatively, initialize the array to an empty state, and append your stuff:
vector<int> temp;
temp.reserve(w2 * h2); // reserves some memory; array is still empty
...
temp.push_back(...); // appends to the array
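To make the first option concrete, here is a sketch of how the write at the end of the inner loop in resizeBilinear() would change (the question's packing expression is kept unchanged; only the push_back is replaced with an indexed write):
// Inside the inner j-loop, instead of temp.push_back(...):
temp[i * w2 + j] =
    ((((int)red) << 16) & 0xff0000) |
    ((((int)green) << 8) & 0xff00) |
    ((int)blue) |
    0xff; // same packed value as in the question, now stored at the right index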

Trouble reading (and comparing) pixels in SDL2

I'm loading a PNG file in SDL2 and I'm trying to find 'special' pixel colours to track during a spritesheet animation. I've put these pixels into my image but my code isn't finding them.
I'm using this code to read the pixels (taken from the internet and wrapped in my own Texture class):
Uint32 getpixel(SDL_Surface *surface, int x, int y)
{
    int bpp = surface->format->BytesPerPixel;
    /* Here p is the address to the pixel we want to retrieve */
    Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;
    switch(bpp) {
    case 1:
        return *p;
        break;
    case 2:
        return *(Uint16 *)p;
        break;
    case 3:
        if(SDL_BYTEORDER == SDL_BIG_ENDIAN)
            return p[0] << 16 | p[1] << 8 | p[2];
        else
            return p[0] | p[1] << 8 | p[2] << 16;
        break;
    case 4:
        return *(Uint32 *)p;
        break;
    default:
        return 0; /* shouldn't happen, but avoids warnings */
    }
}
And these are the important bits of code I'm using to compare pixels to the 'special' values I've set before:
// convert special SDL_Color to Uint32
Uint32 spec1 = SDL_MapRGBA(_texture->GetSDLSurface()->format, _spec1.r, _spec1.g, _spec1.b, 255);
Uint32 spec2 = SDL_MapRGBA(_texture->GetSDLSurface()->format, _spec2.r, _spec2.g, _spec2.b, 255);
...and, while looping through all pixels in each sprite frame...
// get pixel at (x, y)
Uint32 pix = _texture->GetPixel(x, y);
// if pixel is a special value, store it in animation
if (pix == spec1)
{
    SDL_Point pt = {x, y};
    anim->Special1.push_back(pt);
    found1 = true;
}
else if (pix == spec2)
{
    SDL_Point pt = {x, y};
    anim->Special2.push_back(pt);
    found2 = true;
}
Now, I'm setting a breakpoint in these if-statements to check if the colour has been found, but the breakpoint is never reached. Does anyone know what the problem is?
P.S. I've tried also using SDL_MapRGB() but that doesn't work either.
[edit]
Okay so I tried putting a pixel at 0,0 of the whole image with RGB values 66, 77 and 88. It read them in as 84, 96 and 107, so obviously the colours are either being changed or not read in properly. However, when I try it with a specific alpha value, it reads it in perfectly. I would change my system to only use alpha values but it seems the pixel editor I'm using removes the alpha value once you put in the pixel and blends it in with the rest of the image.
Your offset formula is not correct; it should be:
Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x
(x does not need to be multiplied by bpp)
From the docs
pitch
The length of a surface scanline in bytes
The pitch, also called the stride, is computed as follows:
pitch = width * bytes per pixel
bytes per pixel = (bits per pixel + 7) / 8
Once you are at the correct byte offset, read a Uint32 (for a 32bpp image) from it and do your comparison.
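As a side note (my own sketch, not part of this answer), one way to make the comparison independent of the surface's alpha handling and channel masks is to decompose each pixel with SDL_GetRGB and compare the components directly:
// Sketch: compare by RGB components so alpha and channel layout don't matter.
// Assumes `surface` is the loaded SDL_Surface, getpixel() is the helper from
// the question, and _spec1 is one of the special SDL_Color values.
Uint8 r, g, b;
Uint32 pix = getpixel(surface, x, y);
SDL_GetRGB(pix, surface->format, &r, &g, &b);
if (r == _spec1.r && g == _spec1.g && b == _spec1.b)
{
    // found the first special colour at (x, y)
}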

array, copy pixels to correct index, algorithm

I have an image of size 2x2, so the pixel count is 4.
One pixel is 4 bytes,
so I have an array of 16 bytes - mas[16] - width * height * 4 = 16.
I want to make the same image but scaled up by a factor of 2, which means each original pixel becomes four pixels.
The new array will have a size of 64 bytes - newMas[64] - width*2 * height*2 * 4.
The problem is that I can't copy the pixels to newMas correctly, so the scaled-up image doesn't come out right.
This code copies the pixels into mas[16]:
size_t width = CGImageGetWidth(imgRef);
size_t height = CGImageGetHeight(imgRef);
const size_t bytesPerRow = width * 4;
const size_t bitmapByteCount = bytesPerRow * height;
size_t mas[bitmapByteCount];
UInt8* data = (UInt8*)CGBitmapContextGetData(bmContext);
for (size_t i = 0; i < bitmapByteCount; i += 4)
{
    UInt8 a = data[i];
    UInt8 r = data[i + 1];
    UInt8 g = data[i + 2];
    UInt8 b = data[i + 3];
    mas[i] = a;
    mas[i + 1] = r;
    mas[i + 2] = g;
    mas[i + 3] = b;
}
In general, using the built-in image drawing API will be faster and less error-prone than writing your own image-manipulation code. There are at least three potential errors in the code above:
It assumes that there's no padding at the end of rows (iOS seems to pad up to a multiple of 16 bytes); you need to use CGImageGetBytesPerRow().
It assumes a fixed pixel format.
It gets the width/height from a CGImage but the data from a CGBitmapContext.
Assuming you have a UIImage,
CGRect r = {{0,0},img.size};
r.size.width *= 2;
r.size.height *= 2;
UIGraphicsBeginImageContext(r.size);
// This turns off interpolation in order to do pixel-doubling.
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationNone);
[img drawInRect:r];
UIImage * bigImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
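If you do want to copy the pixels by hand anyway, a minimal nearest-neighbour doubling sketch (my own, assuming 4 bytes per pixel and tightly packed rows with no padding, which, as noted above, a CGBitmapContext does not guarantee) could look like this:
#include <string.h> // memcpy

// Scale a width x height 4-bytes-per-pixel image up by 2x (nearest neighbour):
// every source pixel is copied into a 2x2 block of the destination buffer.
void doublePixels(const unsigned char* src, unsigned char* dst,
                  size_t width, size_t height)
{
    const size_t srcRowBytes = width * 4;
    const size_t dstRowBytes = width * 2 * 4;
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            const unsigned char* sp = src + y * srcRowBytes + x * 4;
            for (size_t dy = 0; dy < 2; ++dy) {
                for (size_t dx = 0; dx < 2; ++dx) {
                    unsigned char* dp = dst + (y * 2 + dy) * dstRowBytes
                                            + (x * 2 + dx) * 4;
                    memcpy(dp, sp, 4); // copy one 4-byte pixel
                }
            }
        }
    }
}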

Issue with writing YUV image frame in C/C++

I am trying to convert an RGB frame, which is taken from OpenGL glReadPixels(), to a YUV frame, and write the YUV frame to a file (.yuv). Later on I would like to write it to a named pipe as input for FFMPEG, but for now I just want to write it to a file and view the result using a YUV image viewer. So just disregard the "writing to pipe" part for now.
After running my code, I encountered the following errors:
The number of frames shown in the YUV image viewer software is always 1/3 of the number of frames I declared in my program. When I declare fps as 10, I can only view 3 frames; when I declare fps as 30, I can only view 10 frames. However, when I view the file in a text editor, I can see that the correct number of "FRAME" markers is printed in the file.
This is the example output that I got: http://www.bobdanani.net/image.yuv
I could not see the correct image, but just some distorted green, blue, yellow, and black pixels.
I read about the YUV format at http://wiki.multimedia.cx/index.php?title=YUV4MPEG2 and http://www.fourcc.org/fccyvrgb.php#mikes_answer and http://kylecordes.com/2007/pipe-ffmpeg
Here is what I have tried so far. I know that this conversion approach is quite inefficient, and I can optimize it later. For now I just want to get this naive approach to work and have the image shown properly.
int frameCounter = 1;
int windowWidth = 0, windowHeight = 0;
unsigned char *yuvBuffer;
unsigned long bufferLength = 0;
unsigned long frameLength = 0;
int fps = 10;
void display(void) {
    /* clear the color buffers */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* DRAW some OPENGL animation, i.e. cube, sphere, etc
       .......
       .......
    */
    glutSwapBuffers();
    if ((frameCounter % fps) == 1){
        bufferLength = 0;
        windowWidth = glutGet(GLUT_WINDOW_WIDTH);
        windowHeight = glutGet(GLUT_WINDOW_HEIGHT);
        frameLength = (long) (windowWidth * windowHeight * 1.5 * fps) + 100; // YUV 420 length (width*height*1.5) + header length
        yuvBuffer = new unsigned char[frameLength];
        write_yuv_frame_header();
    }
    write_yuv_frame();
    frameCounter = (frameCounter % fps) + 1;
    if ((frameCounter % fps) == 1){
        snprintf(filename, 100, "out/image-%d.yuv", seq_num);
        ofstream out(filename, ios::out | ios::binary);
        if(!out) {
            cout << "Cannot open file.\n";
        }
        out.write(reinterpret_cast<char*>(yuvBuffer), bufferLength);
        out.close();
        bufferLength = 0;
        delete[] yuvBuffer;
    }
}
void write_yuv_frame_header(){
    char *yuvHeader = new char[100];
    sprintf(yuvHeader, "YUV4MPEG2 W%d H%d F%d:1 Ip A0:0 C420mpeg2 XYSCSS=420MPEG2\n", windowWidth, windowHeight, fps);
    memcpy((char*)yuvBuffer + bufferLength, yuvHeader, strlen(yuvHeader));
    bufferLength += strlen(yuvHeader);
    delete[] yuvHeader;
}
void write_yuv_frame() {
    int width = glutGet(GLUT_WINDOW_WIDTH);
    int height = glutGet(GLUT_WINDOW_HEIGHT);
    memcpy((void*) (yuvBuffer + bufferLength), (void*) "FRAME\n", 6);
    bufferLength += 6;
    long length = windowWidth * windowHeight;
    long yuv420FrameLength = (float)length * 1.5;
    long lengthRGB = length * 3;
    unsigned char *rgb = (unsigned char *) malloc(lengthRGB * sizeof(unsigned char));
    unsigned char *yuvdest = (unsigned char *) malloc(yuv420FrameLength * sizeof(unsigned char));
    glReadPixels(0, 0, windowWidth, windowHeight, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    int r, g, b, y, u, v, ypos, upos, vpos;
    for (int j = 0; j < windowHeight; ++j){
        for (int i = 0; i < windowWidth; ++i){
            r = (int)rgb[(j * windowWidth + i) * 3 + 0];
            g = (int)rgb[(j * windowWidth + i) * 3 + 1];
            b = (int)rgb[(j * windowWidth + i) * 3 + 2];
            y = (int)(r * 0.257 + g * 0.504 + b * 0.098) + 16;
            u = (int)(r * 0.439 + g * -0.368 + b * -0.071) + 128;
            v = (int)(r * -0.148 + g * -0.291 + b * 0.439 + 128);
            ypos = j * windowWidth + i;
            upos = (j/2) * (windowWidth/2) + i/2 + length;
            vpos = (j/2) * (windowWidth/2) + i/2 + length + length/4;
            yuvdest[ypos] = y;
            yuvdest[upos] = u;
            yuvdest[vpos] = v;
        }
    }
    memcpy((void*) (yuvBuffer + bufferLength), (void*)yuvdest, yuv420FrameLength);
    bufferLength += yuv420FrameLength;
    free(yuvdest);
    free(rgb);
}
This is just the very basic approach, and I can optimize the conversion algorithm later.
Can anyone tell me what is wrong with my approach? My guess is that one of the issues is the outstream.write() call, because I cast the unsigned char* data to char*, which may lose precision. But if I don't cast it to char* I get a compile error. However, that doesn't explain why the output frames are corrupted (and only account for 1/3 of the total number of frames).
It looks to me like you have too many bytes per frame for 4:2:0 data. According to the spec you linked to, the number of bytes for a 200x200 pixel 4:2:0 frame should be 200 * 200 * 3 / 2 = 60,000, but you have ~90,000 bytes. Looking at your code, I don't see where you convert from 4:4:4 to 4:2:0. So you have two choices - either set the header to 4:4:4, or convert the YCbCr data to 4:2:0 before writing it out.
I compiled your code, and there is indeed a problem when computing the upos and vpos values.
For me this worked (RGB to YUV NV12):
vpos = length + (windowWidth * (j/2)) + (i/2)*2;
upos = vpos + 1;