SDL putting lots of pixel data onto the screen - C++

I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(a one-dimensional array of RGB color values)
I know I have the proper data because before I just set individual points and it worked...
for (int i = 0; i < height * width; ++i) {
    // setStroke and point are functions I made that together draw a colored point
    r.setStroke(data[i*3], data[i*3+1], data[i*3+2]);
    r.point(i % r.window.w, i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be).
Is there any faster way to just put all the data onto the screen?
I tried doing something like this
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren, SDL_GetWindowPixelFormat(win),
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing so please have mercy
Edit (thank you for comments :))
So here is what I do now
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
But the image I get is garbled, not what it should look like.
I am thinking that my data is just formatted wrong; right now it is an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note: I do not need an alpha channel.)
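For what it's worth, SDL_PIXELFORMAT_RGB888 is a 4-bytes-per-pixel format (the top byte is unused), so it doesn't match a tightly packed 3-bytes-per-pixel buffer; SDL_PIXELFORMAT_RGB24 does. A minimal sketch, assuming data really is packed R,G,B bytes and ren/window are as above:

// A minimal sketch, assuming data is tightly packed RGB (3 bytes per pixel).
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);   // pitch = bytes per source row
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_RenderPresent(ren);
SDL_DestroyTexture(img);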

Related

How to quickly scan and analyze large groups of pixels?

I am trying to build an autoclicker using C++ to beat a 2D videogame in which the following situation appears:
The main character is in the center of the screen, the background is completely black and enemies are coming from all directions. I want my program to be capable of clicking on enemies just as they appear on the screen.
What I came up with at first is this: the enemies have a minimum size of 15 px, so I tried sampling every 15 pixels and checking whether any pixel differs from the background's RGB, using GetPixel(). It looks something like this:
COLORREF color;
int R, G, B;
for (int i = 0; i < SCREEN_SIZE_X; i += 15) { // These SCREEN_SIZE values are #defined with the ones of my screen
    for (int j = 0; j < SCREEN_SIZE_Y; j += 15) {
        // The following conditional excludes the center, which is the player's position
        if ((i < PLAYER_MIN_EDGE_X or i > PLAYER_MAX_EDGE_X) and (j < PLAYER_MIN_EDGE_Y or j > PLAYER_MAX_EDGE_Y)) {
            color = GetPixel(GetDC(nullptr), i, j);
            R = GetRValue(color);
            G = GetGValue(color);
            B = GetBValue(color);
            if (R != 0 or G != 0 or B != 0) cout << "Enemy Found" << endl;
        }
    }
}
It turns out that, as expected, the GetPixel() function is extremely slow, as it has to check about 4000 pixels to cover just one screen scan. While thinking about a faster way to do this and looking at the keyboard, I noticed the "Prt Scr" button and realized that whatever that button does, it can almost instantly save the information of millions of pixels.
I am sure there is a proper, different technique for approaching this kind of problem.
What kind of theory or technique for pixel analysis should I investigate and read about so that this can be considered respectable code, actually works, and runs much faster?
The GetPixel() routine is slow because it fetches the data from video-card (device) memory one pixel at a time. So to optimize your loop, fetch the entire screen at once and put it into an array of pixels. Then you can iterate over that array much faster, because you'll be operating on data in RAM (host memory).
For a better optimization, I also recommend clearing the pixels of your player (in the center of the screen) after fetching the screen into your pixel array. This way, you can eliminate that if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)) condition inside the loop.
CImage image;
// Save the screen DC into image (not shown; one possible way is sketched below)
int R, G, B;
BYTE* pRealData = (BYTE*)image.GetBits();
int pit = image.GetPitch();          // bytes from one row to the next (can be negative for bottom-up bitmaps)
int bitCount = image.GetBPP() / 8;   // bytes per pixel
int w = image.GetWidth();
int h = image.GetHeight();
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        // Pixels are stored in BGR(A) order in memory
        B = *(pRealData + pit*i + j*bitCount);
        G = *(pRealData + pit*i + j*bitCount + 1);
        R = *(pRealData + pit*i + j*bitCount + 2);
    }
}
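The "save DC to image" step isn't shown above. Here's a minimal sketch of one way to do it with ATL/MFC's CImage; CaptureScreen is a hypothetical helper name and error handling is omitted:

void CaptureScreen(CImage& image)   // hypothetical helper; requires <atlimage.h>
{
    HDC screenDC = GetDC(nullptr);                        // DC for the whole screen
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);
    image.Create(w, h, 32);                               // 32-bpp destination bitmap
    HDC imgDC = image.GetDC();
    BitBlt(imgDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);   // copy the whole screen in one call
    image.ReleaseDC();
    ReleaseDC(nullptr, screenDC);
}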

GLbyte Data in Strange Format -- NPR Technique

I'm working on an edge-detection algorithm for an NPR technique. I plan on just using difference of Gaussians to find the edges.
I thought that I would take a copy of the current screen, then analyze and recolor the pixels so that I have a map to draw the edges with.
This is my screen copy logic so far:
int width = rd->width();
int height = rd->height();
GLbyte* data = (GLbyte*)malloc(width * height * 3);
if (data) {
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
float color = 0;
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        color = data[i*width+j];
    }
}
Seeing as I'm just grabbing everything, I didn't think that the alpha component was necessary to copy. rd is my render device, and data is being output like this:
2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Vy2Vy2Vy2Vx2Vx2Vx2Vx2Vx2Vx2Vx2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy3Vy3Vy3Vy3Vy3Vy2Vy2Vy1Vy1Uy0Uy1Vy1Vy1Vy1Vy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy1Vy1Vy0Vy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vy0Vy0Vy0Vy0Vy0Vz0Vz0Vz0Vz0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Ux0Ux0Ux0Tx0Tx0Tx0Tx0Tx0Ux0Ux0Ux0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx/Tx/Tw/Tw/Tx/Tx0Tx0Tx0Tx/Tx/Tw.Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw-Tw.Tw.Tw.Tw.Tw/Tw/Tw/Tw/Tx/Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Tx0Ux0Ux0Ux0Ux0Ux0Ux0Ux0Ux0Ux0Ux0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vz0Vy0Vy0Vy0Vy0Vy0Vy0Vy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Uy0Vy1Vy1Vy2Vy2Vz2Vz3Wz3Wz3Vz3Vz3Vz3Vz...
And I have no idea how to handle that. I tried reading a value as shown above with the float color, but that didn't really help me, as I don't really know what it means. Is each value I'm reading an intensity value of the pixel, or do I need to read three data points in a row to get all the channels?
What is a good way to get the data displayed on the screen, modify it, and redraw it?
You are telling glReadPixels that you want RGB values as 3 bytes per pixel, but you are reading them into a single float value. This cannot work.
Try the following instead:
unsigned char color[3];
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        // Row-major layout: j is the row here, so the pixel starts at 3*(j*width + i)
        color[0] = data[3*(j*width + i)];      // R
        color[1] = data[3*(j*width + i) + 1];  // G
        color[2] = data[3*(j*width + i) + 2];  // B
    }
}
I haven't tried it so there might be some mistakes. But you get the idea.
You could also tell glReadPixels that you only want GL_RED as GL_FLOAT and put it in a float buffer, if you are processing black-and-white images and only want the intensity. Or GL_LUMINANCE; it's really up to you, but you need to be consistent between the parameters you pass to glReadPixels and the way you parse that data.
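A minimal sketch of that intensity-only variant, assuming width and height are as above:

#include <vector>
// Read only the red channel as floats in [0, 1] (intensity-style processing).
std::vector<GLfloat> intensity(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, intensity.data());
// intensity[j*width + i] is the value of the pixel at column i, row j.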

Vertically flipping a char array: is there a more efficient way?

Let's start with some code:
QByteArray OpenGLWidget::modifyImage(QByteArray imageArray, const int width, const int height){
    if (vertFlip){
        /* Each pixel consists of four unsigned chars: Red, Green, Blue, Alpha.
         * The field is normally 640*480, which means the whole picture is in fact 640*4 uChars wide.
         * The whole ByteArray is one-dimensional, so index 640*4 is the red of the first pixel of the second row.
         * This function is EXTREMELY SLOW
         */
        QByteArray tempArray = imageArray;
        for (int h = 0; h < height; ++h){
            for (int w = 0; w < width/2; ++w){
                for (int i = 0; i < 4; ++i){
                    imageArray.data()[h*width*4 + 4*w + i] = tempArray.data()[h*width*4 + (4*width - 4*w) + i];
                    imageArray.data()[h*width*4 + (4*width - 4*w) + i] = tempArray.data()[h*width*4 + 4*w + i];
                }
            }
        }
    }
    return imageArray;
}
This is the code I use right now to vertically flip an image which is 640*480 (the image is actually not guaranteed to be 640*480, but it mostly is). The color encoding is RGBA, which means the total array size is 640*480*4. I get the images at 30 FPS, and I want to show them on the screen at the same FPS.
On an older CPU (Athlon X2) this code is just too much: the CPU is racing to keep up with the 30 FPS, so the question is: can I do this more efficiently?
I am also working with OpenGL; does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
According to this question, you can flip an image in OpenGL by scaling it by (1,-1,1). This question explains how to do transformations and scaling.
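A minimal fixed-function sketch of that idea; drawTexturedQuad() here is a hypothetical stand-in for however you draw the image:

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f);   // mirror vertically around the origin
drawTexturedQuad();            // hypothetical helper: draw the image as a textured quad
glPopMatrix();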
You can improve at least by doing it blockwise, making use of the cache architecture. In your example one of the accesses (either the read OR the write) will be off-cache.
For a start it can help to "capture scanlines" if you're using two loops to loop through the pixels of an image, like so:
for (int y = 0; y < height; ++y)
{
    // Capture scanline.
    char* scanline = imageArray.data() + y*width*4;
    for (int x = 0; x < width/2; ++x)
    {
        const int flipped_x = width - x - 1;
        for (int i = 0; i < 4; ++i)
            swap(scanline[x*4 + i], scanline[flipped_x*4 + i]);
    }
}
Another thing to note is that I used swap instead of a temporary image. That'll tend to be more efficient since you can just swap using registers instead of loading pixels from a copy of the entire image.
It also generally helps to work with a 32-bit integer per pixel instead of one byte at a time when doing anything like this. If your pixels use 8-bit channels but you know each pixel is 32 bits, as in your case, you can generally get away with a cast to uint32_t*, e.g.:
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}
At this point you might parallelize the y loop. Flipping an image horizontally (it should be "horizontal" if I understood your original code correctly) in this way is a little bit tricky with the access patterns, but you should be able to get quite a decent boost using the above techniques.
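For what it's worth, a minimal sketch of parallelizing the y loop with OpenMP (assuming you compile with -fopenmp); each row is independent, so rows can be reversed concurrently:

#pragma omp parallel for
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}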
I am also working with OpenGL, does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
Naturally the fastest way to flip images is to not touch their pixels at all and just save the flipping for the final part of the pipeline when you render the result. For this you might render a texture in OGL with negative scaling instead of modifying the pixels of a texture.
Another thing that's really useful in video and image processing is to represent the image you're processing like this for all your image operations:
struct Image32
{
    uint32_t* pixels;
    int32_t width;
    int32_t height;
    int32_t x_stride;   // step, in pixels, from one column to the next
    int32_t y_stride;   // step, in pixels, from one scanline (row) to the next
};
The stride fields are what you use to get from one scanline (row) of an image to the next vertically, and from one column to the next horizontally. With this representation you can use negative stride values and offset the pixels pointer accordingly. You can also use the strides to, say, render only every other scanline for fast interactive half-res previews by using y_stride=width*2 and height/=2. You can quarter-res an image by setting the x stride to 2 and the y stride to 2*width and then halving the width and height. And you can render a cropped image without making your blit functions accept a boatload of parameters, just by modifying these fields and keeping the y stride at the full image width so it still steps from one row of the cropped section to the next:
// Using the stride representation of Image32, this can now
// blit a cropped source, a horizontally flipped source,
// a vertically flipped source, a source flipped both ways,
// a half-res source, a quarter-res source, a quarter-res
// source that is horizontally flipped and cropped, etc,
// and all without modifying the source image in advance
// or having to accept all kinds of extra drawing parameters.
void blit(int dst_x, int dst_y, Image32 dst, Image32 src);
// We don't have to do things like this (and I think I lost
// some capabilities with this version below but it hurts my
// brain too much to think about what capabilities were lost):
void blit_gross(int dst_x, int dst_y, int dst_w, int dst_h, uint32_t* dst,
                int src_x, int src_y, int src_w, int src_h,
                const uint32_t* src, bool flip_x, bool flip_y);
By using negative stride values and passing the image to an operation (e.g., a blit), the result will naturally come out flipped without having to actually flip the image. It ends up being "drawn flipped", so to speak, just as in the case of using OGL with a negative scaling transformation matrix.
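A minimal sketch of that idea, assuming the strides are measured in pixels as described above; flip_vertical_view is a hypothetical helper, not part of the answer:

// Build a vertically flipped *view* of an image without copying any pixels.
Image32 flip_vertical_view(Image32 img)
{
    Image32 v = img;
    v.pixels   = img.pixels + (img.height - 1) * img.y_stride;  // start at the last row
    v.y_stride = -img.y_stride;                                  // step backwards through rows
    return v;
}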

C++/SDL: Fading out a surface already having per-pixel alpha information

Suppose we have a 32-bit PNG file of some ghostly/incorporeal character, which is drawn in a semi-transparent fashion. It is not equally transparent in every place, so we need the per-pixel alpha information when loading it to a surface.
For fading in/out, setting the alpha value of an entire surface is a good way; but not in this case, as the surface already has the per-pixel information and SDL doesn't combine the two.
What would be an efficient workaround (instead of asking the artist to provide some awesome fade in/out animation for the character)?
I think the easiest way to achieve the result you want is to start by loading the source surface containing your character sprites and then, for every instance of your ghost, create a working copy of that surface. Every time the alpha value of an instance changes, SDL_BlitSurface your source into the working copy, then apply your transparency (which you should probably keep as a float between 0 and 1) to every pixel's alpha channel.
In the case of a 32-bit surface, assuming that you initially loaded source and allocated working SDL_Surfaces, you can probably do something along the lines of:
SDL_BlitSurface(source, NULL, working, NULL);

if (SDL_MUSTLOCK(working))
{
    if (SDL_LockSurface(working) < 0)
    {
        return -1;
    }
}

Uint8* pixels = (Uint8*)working->pixels;
int pitch_padding = (working->pitch - (4 * working->w));
pixels += 3; // Big Endian will have an offset of 0, otherwise it's 3 (R, G and B come first)
for (int row = 0; row < working->h; ++row)
{
    for (int col = 0; col < working->w; ++col)
    {
        *pixels = (Uint8)(*pixels * character_transparency); // Could be optimized but probably not worth it
        pixels += 4;
    }
    pixels += pitch_padding;
}

if (SDL_MUSTLOCK(working))
{
    SDL_UnlockSurface(working);
}
This code was inspired by SDL_gfx, but if this is all you're doing, I wouldn't bother linking against a library just for that.

Setting individual pixels of an RGB frame for ffmpeg encoding

I'm trying to change the test pattern of an ffmpeg streamer (see Trouble syncing libavformat/ffmpeg with x264 and RTP) into a familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.
So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html .
To stuff its pixel array called data, I've tried many variations on
for (int y = 0; y < HEIGHT; ++y) {
    for (int x = 0; x < WIDTH; ++x) {
        uint8_t* rgb = data + ((y*WIDTH + x) * 3);
        const double i = x / double(WIDTH);
        // const double j = y / double(HEIGHT);
        rgb[0] = 255 * i;
        rgb[1] = 0;
        rgb[2] = 255 * (1 - i);
    }
}
At HEIGHTxWIDTH = 80x60, this version yields a striped four-column pattern (image omitted here), when I expect a single blue-to-red horizontal gradient.
640x480 yields the same 4-column pattern, but with far more horizontal stripes.
640x640, 160x160, etc, yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.
Vertical gradients behave even more weirdly.
Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 not 3 bytes per pixel, alpha=255). Also unaffected by a port from C to C++.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Access each Pixel of AVFrame asks the same question in less detail, so far unanswered.
The streamer emits one warning, which I doubt affects appearance:
[rtp @ 0x269c0a0] Encoder did not produce proper pts, making some up.
So. How do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?
Use avpicture_fill() as described in Encoding a screenshot into a video using FFMPEG.
Instead of passing data directly to sws_scale(), do this:
AVFrame* pic = avcodec_alloc_frame();
avpicture_fill((AVPicture *)pic, data, AV_PIX_FMT_RGB24, WIDTH, HEIGHT);
and then replace the 2nd and 3rd args of sws_scale() with
pic->data, pic->linesize,
Then the gradients above work properly, at many resolutions.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Stride (AKA linesize) is the distance in bytes between two lines. For various reasons having mostly to do with optimization it is often larger than simply width in bytes, so there is padding on the end of each line.
In your case, without any padding, stride should be width * 3.
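For reference, a minimal sketch of passing the buffer directly with an explicit stride; swsCtx and dstFrame are assumed to be set up elsewhere:

const uint8_t* srcSlice[1]  = { data };
int            srcStride[1] = { WIDTH * 3 };   // bytes per row for packed RGB24, not HEIGHT
sws_scale(swsCtx, srcSlice, srcStride, 0, HEIGHT,
          dstFrame->data, dstFrame->linesize);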