How to most efficiently modify R / G / B values? - c++

So I wanted to implement lighting in my pixel-based rendering system. After some googling I found out that to display R/G/B values darker or lighter, I have to multiply each red, green and blue value by a number < 1 to make it darker and by a number > 1 to make it lighter.
So I implemented it like this, but it's really dragging down my performance, since I have to do this for each pixel:
void PixelRenderer::applyLight(Uint32& color){
    Uint32 alpha = color >> 24;
    alpha = alpha << 24;
    alpha = alpha >> 24;
    Uint32 red = color >> 16;
    red = red << 24;
    red = red >> 24;
    Uint32 green = color >> 8;
    green = green << 24;
    green = green >> 24;
    Uint32 blue = color;
    blue = blue << 24;
    blue = blue >> 24;
    red = red * 0.5;
    green = green * 0.5;
    blue = blue * 0.5;
    color = alpha << 24 | red << 16 | green << 8 | blue;
}
Any ideas or examples on how to improve the speed?

Try this: (EDIT: as it turns out, this is only a readability improvement, but read on for more insights.)
void PixelRenderer::applyLight(Uint32& color)
{
    Uint32 alpha = color >> 24;
    Uint32 red = (color >> 16) & 0xff;
    Uint32 green = (color >> 8) & 0xff;
    Uint32 blue = color & 0xff;
    red = red * 0.5;
    green = green * 0.5;
    blue = blue * 0.5;
    color = alpha << 24 | red << 16 | green << 8 | blue;
}
That having been said, you should understand that performing operations of that sort using a general-purpose processor such as the CPU of your computer is bound to be extremely slow. That's why hardware-accelerated graphics cards were invented.
EDIT
If you insist on operating this way, then you will probably have to resort to hacks in order to improve efficiency. One type of hack which is very often used when dealing with 8-bit channel values is lookup tables. With a lookup table, instead of multiplying each individual channel value by a float, you precompute an array of 256 values, where the index into the array is a channel value and the value at that index is the precomputed result of multiplying that channel value by the float. Then, when converting your image, you just use channel values to look up entries of the array instead of performing actual float multiplication. This is much, much faster. (But still not nearly as fast as programming dedicated, massively parallel hardware to do that stuff for you.)
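For instance, here is a minimal sketch of the lookup-table idea (the table and function names are just illustrative, not from the question):
#include <cstdint>
// Precompute channel * factor once for all 256 possible channel values.
uint8_t lightTable[256];
void buildLightTable(float factor){ // call once per factor, e.g. buildLightTable(0.5f)
    for (int i = 0; i < 256; ++i)
        lightTable[i] = (uint8_t)(i * factor); // factor <= 1.0, so no clamping needed
}
void applyLight(uint32_t& color){
    uint32_t alpha = color & 0xff000000;
    uint32_t red   = lightTable[(color >> 16) & 0xff];
    uint32_t green = lightTable[(color >> 8) & 0xff];
    uint32_t blue  = lightTable[color & 0xff];
    color = alpha | (red << 16) | (green << 8) | blue;
}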
EDIT
As others have already pointed out, if you are not planning to operate on the alpha channel, then you do not need to extract it and then later apply it, you can just leave it unaltered. So, you can just do color = (color & 0xff000000) | red << 16 | green << 8 | blue;
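Putting those two observations together, a sketch of the whole function for the fixed 0.5 factor (an integer halving instead of a float multiply) might look like this:
void PixelRenderer::applyLight(Uint32& color)
{
    Uint32 red   = ((color >> 16) & 0xff) >> 1; // * 0.5 as an integer shift
    Uint32 green = ((color >> 8) & 0xff) >> 1;
    Uint32 blue  = (color & 0xff) >> 1;
    color = (color & 0xff000000) | (red << 16) | (green << 8) | blue;
}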

Shifts and masks like this are generally very fast on a modern processor. I might look at a few other things:
Follow the first rule of optimisation - profile your code. You can do this simply by calling the method millions of times and timing it. Are your calculations slow, or is it something else? What is slow? Try omitting part of the method - do things speed up?
Make sure that this function is declared inline (and make sure it has actually been inlined). The function call overhead will massively outweigh the pixel manipulations (particularly if it is virtual).
Consider declaring your method Uint32 PixelRenderer::applyLight(Uint32 color) and returning the modified value, that may help avoid some dereferences and give the compiler some additional optimisation opportunities.
Avoid floating-point to integer conversions, they can be very expensive. If a plain integer divide is insufficient, look at using fixed-point math (see the sketch after this list).
Finally, look at the assembler to see what the compiler has generated (with optimisations on). Are there any branches or conversions? Has your method actually been inlined?
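As an illustration of the fixed-point idea above (the 8.8 format and the function name are assumptions made for this example, not part of the original code):
// Represent the light factor as 8.8 fixed point: 256 == 1.0, 128 == 0.5.
// Each channel then needs only an integer multiply and a shift.
void applyLightFixed(Uint32& color, Uint32 factor256)
{
    Uint32 red   = (((color >> 16) & 0xff) * factor256) >> 8;
    Uint32 green = (((color >> 8) & 0xff) * factor256) >> 8;
    Uint32 blue  = ((color & 0xff) * factor256) >> 8;
    color = (color & 0xff000000) | (red << 16) | (green << 8) | blue;
}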

To preserve the alpha value in the front use:
(color>>1)&0x7F7F7F | (color&0xFF000000)
(A tweak on what Wimmel offered in the comments).
I think the lesson here is that you were using a shift followed by a shift back to mask out bits. You should use & with a masking value instead.
For a more general solution (where 0.0 <= factor <= 1.0):
void PixelRenderer::applyLight(Uint32& color, double factor){
    Uint32 alpha = color & 0xFF000000;
    Uint32 red   = (color & 0x00FF0000) * factor;
    Uint32 green = (color & 0x0000FF00) * factor;
    Uint32 blue  = (color & 0x000000FF) * factor;
    color = alpha | (red & 0x00FF0000) | (green & 0x0000FF00) | (blue & 0x000000FF);
}
Notice there is no need to shift the components down to the low order bits before performing the multiplication.
Ultimately you may find that the bottleneck is floating point conversions and arithmetic.
To reduce that you should consider either:
Reduce it to a scaling factor for example in the range 0-256.
Precompute factor*component as a 256 element array and 'pick' the components out of it.
I'm proposing a range of 257 values (0-256) so that a factor of 256 corresponds exactly to 1.0; you can then apply the factor as follows:
For a more general solution (where 0 <= factor <= 256):
void PixelRenderer::applyLight(Uint32& color, Uint32 factor){
    Uint32 alpha = color & 0xFF000000;
    Uint32 red   = ((color & 0x00FF0000) * factor) >> 8;
    Uint32 green = ((color & 0x0000FF00) * factor) >> 8;
    Uint32 blue  = ((color & 0x000000FF) * factor) >> 8;
    color = alpha | (red & 0x00FF0000) | (green & 0x0000FF00) | (blue & 0x000000FF);
}
Here's a runnable program illustrating the first example:
#include <stdio.h>
#include <inttypes.h>

typedef uint32_t Uint32;

Uint32 make(Uint32 alpha, Uint32 red, Uint32 green, Uint32 blue){
    return (alpha << 24) | (red << 16) | (green << 8) | blue;
}

void output(Uint32 color){
    printf("alpha=%" PRIu32 " red=%" PRIu32 " green=%" PRIu32 " blue=%" PRIu32 "\n",
           color >> 24, (color & 0xFF0000) >> 16, (color & 0xFF00) >> 8, color & 0xFF);
}

Uint32 applyLight(Uint32 color, double factor){
    Uint32 alpha = color & 0xFF000000;
    Uint32 red   = (color & 0x00FF0000) * factor;
    Uint32 green = (color & 0x0000FF00) * factor;
    Uint32 blue  = (color & 0x000000FF) * factor;
    return alpha | (red & 0x00FF0000) | (green & 0x0000FF00) | (blue & 0x000000FF);
}

int main(void) {
    Uint32 color1 = make(156, 100, 50, 20);
    Uint32 result1 = applyLight(color1, 0.9);
    output(result1);
    Uint32 color2 = make(255, 255, 255, 255);
    Uint32 result2 = applyLight(color2, 0.1);
    output(result2);
    Uint32 color3 = make(78, 220, 200, 100);
    Uint32 result3 = applyLight(color3, 0.05);
    output(result3);
    return 0;
}
Expected Output is:
alpha=156 red=90 green=45 blue=18
alpha=255 red=25 green=25 blue=25
alpha=78 red=11 green=10 blue=5

One thing that I don't see anyone else mentioning is parallelizing your code. There are at least 2 ways to do this: SIMD instructions, and multiple threads.
SIMD instructions (like SSE, AVX, etc.) perform the same math on multiple pieces of data at the same time. So you could, for example, multiply the red, green, blue, and alpha of a pixel by the same values in 1 instruction, like this:
vec4 lightValue = vec4(0.5, 0.5, 0.5, 1.0);
vec4 result = vec_Mult(inputPixel, lightValue);
That's the equivalent of:
lightValue.red = 0.5;
lightValue.green = 0.5;
lightValue.blue = 0.5;
lightValue.alpha = 1.0;
result.red = inputPixel.red * lightValue.red;
result.green = inputPixel.green * lightValue.green;
result.blue = inputPixel.blue * lightValue.blue;
result.alpha = inputPixel.alpha * lightValue.alpha;
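As a concrete sketch of that idea with SSE2 intrinsics (the function name and the fixed-point factor are assumptions made for this example; it darkens four packed pixels at once and, unlike the pseudocode above, also scales the alpha byte):
#include <emmintrin.h> // SSE2 intrinsics
#include <cstdint>
// Multiply every byte of four packed 32-bit pixels by factor/256 using
// 16-bit lanes (factor = 128 means * 0.5).
void applyLight4(uint32_t* pixels, uint16_t factor)
{
    __m128i p    = _mm_loadu_si128((__m128i*)pixels);      // 4 pixels = 16 bytes
    __m128i zero = _mm_setzero_si128();
    __m128i f    = _mm_set1_epi16(factor);
    __m128i lo = _mm_unpacklo_epi8(p, zero);               // widen bytes to 16-bit lanes
    __m128i hi = _mm_unpackhi_epi8(p, zero);
    lo = _mm_srli_epi16(_mm_mullo_epi16(lo, f), 8);        // (channel * factor) >> 8
    hi = _mm_srli_epi16(_mm_mullo_epi16(hi, f), 8);
    _mm_storeu_si128((__m128i*)pixels, _mm_packus_epi16(lo, hi)); // narrow back to bytes
}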
You can also cut your image into tiles and perform the lightening operation on several tiles at once using threads run on multiple cores. If you're using C++11, you can use std::thread to start multiple threads. Otherwise your OS probably has functionality for threading, such as WinThreads, Grand Central Dispatch, pthreads, boost threads, Threading Building Blocks, etc.
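Here is a minimal sketch of the tiling idea with std::thread; the band-per-thread split and the halving operation in the loop body are just examples of a per-pixel workload:
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>
// Split the pixel buffer into one horizontal band per hardware thread and
// process the bands concurrently.
void applyLightParallel(uint32_t* pixels, int width, int height)
{
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    int rowsPerBand = (height + (int)numThreads - 1) / (int)numThreads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        int firstRow = (int)t * rowsPerBand;
        int lastRow  = std::min(height, firstRow + rowsPerBand);
        workers.emplace_back([=] {
            for (int y = firstRow; y < lastRow; ++y)
                for (int x = 0; x < width; ++x) {
                    uint32_t& c = pixels[y * width + x];
                    // halve R, G and B, keep alpha (same operation as discussed above)
                    c = (c & 0xff000000) | ((c >> 1) & 0x007f7f7f);
                }
        });
    }
    for (auto& w : workers)
        w.join();
}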
You can combine both of the above and have multithreaded code that operates on whole pixels at a time.
If you want to take it even further, you can do your processing on the GPU of your machine using OpenGL, OpenCL, DirectX, Metal, Mantle, CUDA, or one of the other GPGPU technologies. GPUs are generally hundreds of cores that can very quickly process many tiles in parallel, each of which processes whole pixels (rather than just channels) at a time.
But an even better option may be not to write any code at all. It's extremely likely that someone has already done this work and you can leverage it. For example, on MacOS there's CoreImage and the Accelerate framework. On iOS you also have CoreImage, and there's also GPUImage. I'm sure there are similar libraries on Windows, Linux, and other OSes you might be working with.

Another solution that avoids bit shifts is to convert your 32-bit uint into a struct.
Try to keep your implementation in the .h include file, so that it can be inlined.
If you don't want the implementation inlined (see above), modify your applyLight method to accept an array of pixels, since method call overhead can be significant for such a small method.
Enable "loop unroll" optimisation on your compiler, which may enable the use of SIMD instructions.
Implementation:
#include <cstdint>
#include <algorithm> // std::min / std::max

class brightness {
private:
    struct pixel { uint8_t b, g, r, a; };
    float factor;
    static inline void apply(uint8_t& p, float f) {
        // scale the channel and clamp the result to [0, 255]
        p = std::max(std::min(int(p * f), 255), 0);
    }
public:
    brightness(float factor) : factor(factor) { }
    void apply(uint32_t& color){
        pixel& p = (pixel&)color; // reinterpret the packed pixel as a struct
        apply(p.b, factor);
        apply(p.g, factor);
        apply(p.r, factor);
    }
};
Implementation with a lookup table (slower when you use "loop unroll"):
class brightness {
    struct pixel { uint8_t b, g, r, a; };
    uint8_t table[256];
public:
    brightness(float factor) {
        // precompute the scaled value for every possible channel value
        for(int i = 0; i < 256; i++)
            table[i] = std::max(std::min(int(i * factor), 255), 0);
    }
    void apply(uint32_t& color){
        pixel& p = (pixel&)color;
        p.b = table[p.b];
        p.g = table[p.g];
        p.r = table[p.r];
    }
};
// usage
brightness half_bright(0.5);
uint32_t pixel = 0xffffffff;
half_bright.apply(pixel);

Related

Efficient Way to draw many individual pixels to a screen in SDL2

I'm currently working on something in C++ using SDL2 that requires being able to draw a lot of individual pixels with specific color values to the screen every update. I'm using SDL_RenderDrawPoint just to make sure my program works but I'm sure the performance on that is terrible. From a cursory search it seems like using a texture that is the size of my window would be fastest by using SDL_UpdateTexture and updating it with a vector of pixels with my desired pixel values with a default of {0,0,0,0} RGBA value for any pixel not changed.
However every attempt I've had at writing it fails and I'm not sure where my misunderstandings lie. This is my current code that attempts to draw a specific RGBA color value to a specific x,y coordinate in my texture. I assume the part of the buffer I'm accessing using my x,y values is incorrect but I'm unsure how to make it correct if so.
Any help is appreciated including suggestions on how to efficiently do this without a texture if there's a better way.
SDL_Texture* windowTexture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, screenWidth, screenHeight);
unsigned int* lockedPixels = nullptr;
std::vector<int> pixels(screenHeight*screenWidth*4, 0);
int pitch = 0;
int start = (y * screenWidth + x) * 4;
pixels[start + 0] = B;
pixels[start + 1] = G;
pixels[start + 2] = R;
pixels[start + 3] = A;
SDL_UpdateTexture(windowTexture, nullptr, pixels.data(), screenWidth * 4);
The pixel format RGBA8888 means that each pixel is a 32 bit element with each channel (i.e. red, green, blue or alpha) taking up 8 bits, in that order.
You may want to declare pixels as containing the type "32 bit unsigned integer". An unsigned int is typically 32 bits, but it may also be larger.
std::vector<Uint32> pixels(screenHeight*screenWidth, 0); // Note: no *4
The individual R, G, B, A values (which should each be 8 bit unsigned integers) can be combined into one pixel by using shifts and bit-wise ORs:
int start = y * screenWidth + x; // Note: no *4
pixels[start] = (R << 24U) | (G << 16U) | (B << 8U) | A;
Lastly, you may want to not hardcode the last parameter of SDL_UpdateTexture (i.e. pitch). Instead, use screenWidth * sizeof(Uint32).
The implementation above is basically a direct implementation of "RGBA8888" and allows you to access individual pixels.
Alternatively, you could also declare an array of four times the size containing 8 bit unsigned integers. Then, the first four indices would correspond to the R, G, B, A values of the first pixel, the next four indices would correspond to the R, G, B, A values of the second pixel, etc.
Which one is faster would depend on the exact system and use-case (whether the most common operations are on pixels or individual channels).
PS. Instead of Uint32 you could also use C++'s own std::uint32_t from the cstdint header.
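For illustration, a rough sketch combining these points might look like the following (the helper names setPixel and presentFrame are made up for the example; the texture is assumed to have been created with SDL_PIXELFORMAT_RGBA8888 and SDL_TEXTUREACCESS_STREAMING as in the question):
#include <SDL.h>
#include <vector>
// Fill one pixel of the CPU-side buffer. RGBA8888 puts R in the most
// significant byte of each 32-bit pixel.
void setPixel(std::vector<Uint32>& pixels, int screenWidth,
              int x, int y, Uint8 r, Uint8 g, Uint8 b, Uint8 a)
{
    pixels[y * screenWidth + x] =
        (Uint32(r) << 24) | (Uint32(g) << 16) | (Uint32(b) << 8) | a;
}
// Once per frame, push the whole buffer into the streaming texture and draw it.
// The last argument is the pitch: the size of one row in bytes.
void presentFrame(SDL_Renderer* render, SDL_Texture* windowTexture,
                  const std::vector<Uint32>& pixels, int screenWidth)
{
    SDL_UpdateTexture(windowTexture, nullptr, pixels.data(),
                      screenWidth * (int)sizeof(Uint32));
    SDL_RenderCopy(render, windowTexture, nullptr, nullptr);
    SDL_RenderPresent(render);
}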

How can I make the faces of my cube smoothly transition between all colors of the rainbow?

I have a program in Visual Studio that is correctly rendering a 3D cube that is slowly spinning. I have a working FillTriangle() function that fills in the faces of the cube with any color whose hex code I enter as a parameter (for example, 0x00ae00ff for purple). I have set the color of each face to start at red (0xFF000000), and then I have a while loop in main() that updates the scene and draws new pixels every frame. I also have a Timer class that handles all sorts of time-related things, including the Update() method that updates things every frame.
I want to make it so that the colors of the faces smoothly transition from one color to the next, through every color of the rainbow, and I want it to loop and do it as long as the program is running. Right now, it is smoothly transitioning between a few colors before suddenly jumping to another color. For example, it might smoothly transition from yellow to orange to red, but then suddenly jump to green. Here is the code that is doing that right now:
...
main()
{
    ...
    float d = 0.0f; //float for the timer to increment
    //screenPixels is the array of all pixels on the screen, numOfPixels is the number of pixels being displayed
    while(Update(screenPixels, numOfPixels))
    {
        ...
        timer.Signal(); //change in time between the last 2 signals
        d += timer.Delta(); //timer.Delta() is the average current time
        if(d > (1/30)) // 1 divided by number of frames
        {
            //Reset timer
            d = 0.0f;
            //Add to current pixel color being displayed
            pixelColor += 0x010101FF;
        }
        ...
    }
    ...
}
Is there a better way to approach this? Adding to the current pixel color was the first thing that came to my mind, and it's kind of working, but it keeps skipping colors for some reason.
That constant is going to overflow with each addition. Not just as a whole number, but across each component of the color spectrum: R, G, and B.
You need to break your pixelColor into separate Red, Green, and Blue colors and do math on each byte independently. And leave Alpha fixed at 255 (fully opaque). And check for overflow/underflow along the way. When you reach an overflow or underflow moment, just change direction from incrementing to decrementing.
Also, I wouldn't increment each component by the same value (1) on each step. With the same increment on R,G, and B, you'd just be adding "more white" to the color. If you want a more natural rainbow loop, we can do something like the following:
Change this:
pixelColor += 0x010101FF;
To this:
// I'm assuming pixelColor is RGBA
int r = (pixelColor >> 24) & 0x0ff;
int g = (pixelColor >> 16) & 0x0ff;
int b = (pixelColor >> 8) & 0x0ff;
r = Increment(r, &redInc);
g = Increment(g, &greenInc);
b = Increment(b, &blueInc);
pixelColor = (r << 24) | (g << 16) | (b << 8) | 0x0ff;
Where redInc, greenInc, and blueInc are defined and initialized outside your main while loop as follows:
int redInc = -1;
int greenInc = 2;
int blueInc = 4;
And the increment function is something like this:
int Increment(int color, int* increment) {
    color += *increment;
    if (color < 0) {
        color = 0;
        *increment = (rand() % 4 + 1);   // bounce back up with a new random step
    } else if (color > 255) {
        color = 255;
        *increment = -(rand() % 4 + 1);  // bounce back down with a new random step
    }
    return color;
}
That should cycle through the colors in a more natural fashion (from darker to brighter to darker again) with a bit of randomness so it's never the same pattern twice. You can play with the randomness by adjusting the initial colorInc constants at initialization time as well as how the *increment value gets updated in the Increment function.
If you see any weird color flickering, it's quite possible that you have the alpha byte in the wrong position. It might be the high byte, not the low byte. Similarly, some systems order the colors in the integer as RGBA. Others do ARGB. And quite possibly RGB is flipped with BGR.

Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale and although it's a better result than bilinear or bicubic interpolation, it also takes quite a hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100

// Variables
float fracx,fracy;
int Xnew,Ynew,p,q,Calc;
int x,y,z,p1,q1,i,j;

//New image dimensions
Xnew=image->width*ctl(0)/100;
Ynew=image->height*ctl(0)/100;

for (y=0; y<image->height; y++){ // rows
    for (x=0; x<image->width; x++){ // columns
        p1=(int)x*image->width/Xnew;
        q1=(int)y*image->height/Ynew;
        for (z=0; z<3; z++){ // channels
            Calc=0;
            for (i=-3;i<=3;i++) {
                for (j=-3;j<=3;j++) {
                    Calc += (int)(src(p1-i,q1-j,z));
                } //j
            } //i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexes at every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done behind the scenes? Use pointers to the data and manipulate those directly.
Second: your algorithm is wrong. You don't want an average filter, but you want to make a grid on your source image and for every grid cell compute the average and put it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /*pointer to input data*/;
uint8_t* pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew*4);
    memset(pcount = count.data(), 0, Xnew*4);
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin;   // accumulate the source pixel into the current cell
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but later I changed the variables names in order to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32 bit ints which can hold the partial sum of all pixels in the rows which fall in a row of the output image. Then you divide by the cell count and save the output to the final image.
The first thing you should do is to set up a performance evaluation system to measure how much any change impacts on the performance.
As said previously, you should not use indexes but pointers, for a (probably) substantial speed up, and you should not simply average, as a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is the matrix of weights applied to each source pixel. That way, you will be able to test different strategies and optimize quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note, from the code it seems you apply a 3x3 kernel but initially done on a 7x7 kernel. The equivalent 3x3 kernel as posted would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
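To make the kernel idea concrete, here is a minimal single-channel sketch (the row-major 8-bit buffer layout and the names are assumptions, and the one-pixel border is skipped to keep the example short):
#include <cstdint>
#include <algorithm>
// Apply a 3x3 kernel to one 8-bit channel stored row-major in src, writing to dst.
void apply3x3(const uint8_t* src, uint8_t* dst, int width, int height,
              const float kernel[3][3])
{
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += kernel[ky + 1][kx + 1] * src[(y + ky) * width + (x + kx)];
            dst[y * width + x] = (uint8_t)std::max(0, std::min(255, (int)(sum + 0.5f)));
        }
    }
}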

Transparent spectrogram selection overlays

I'm trying to create transparent selection overlays on top of a spectrogram but it doesn't quite work. I mean the result is not really satisfactory. In contrast, the overlays painted on top of a waveform work well but I need to support both the waveform as well as the spectrogram view (and maybe other views in the future)
The selection overlay works fine in the waveform view
Here's the selection overlay in the spectrogram view (the selection looks really bad and obscures parts of the spectrogram)
The code (VCL) is the same for both views
void TWaveDisplayContainer::DrawSelectedRegion(){
    if(selRange.selStart.x == selRange.selEnd.x){
        DrawCursorPosition(selRange.selStart.x);
        return;
    }
    Graphics::TBitmap *pWaveBmp = eContainerView == WAVEFORM ? pWaveBmpLeft : pSfftBmpLeft;
    TRect selRect(selRange.selStart.x, 0, selRange.selEnd.x, pWaveLeft->Height);
    TCanvas *pCanvas = pWaveLeft->Canvas;
    int copyMode = pCanvas->CopyMode;

    pCanvas->Draw(0, 0, pWaveBmp);
    pCanvas->Brush->Color = clActiveBorder;
    pCanvas->CopyMode = cmSrcAnd;
    pCanvas->Rectangle(selRect);
    pCanvas->CopyRect(selRect, pWaveBmp->Canvas, selRect);
    pCanvas->CopyMode = copyMode;

    if(numChannels == 2){
        TCanvas* pOtherCanvas = pWaveRight->Canvas;
        pWaveBmp = eContainerView == WAVEFORM ? pWaveBmpRight : pSfftBmpRight;
        pOtherCanvas->Draw(0, 0, pWaveBmp);
        pOtherCanvas->Brush->Color = clActiveBorder;
        pOtherCanvas->CopyMode = cmSrcAnd;
        pOtherCanvas->Rectangle(selRect);
        pOtherCanvas->CopyRect(selRect, pWaveBmp->Canvas, selRect);
        pOtherCanvas->CopyMode = copyMode;
    }
}
So, I'm using the cmSrcAnd copy mode and the CopyRect method to do the actual painting/drawing (TCanvas corresponds to a device context, HDC on Windows). I think that, since a spectrogram, unlike a waveform, doesn't really have a single background colour, simple mixing copy modes aren't going to work well in most cases.
Note that I can still accomplish what I want, but that would require messing with the individual pixels, which is something I'd like to avoid if possible.
I'm basically looking for an API (VCL wraps GDI so even WINAPI is fine) able to do this.
Any help is much appreciated
I'm going to answer my own question and hopefully this will prove to be useful to some people. Since there's apparently no way this can be achieved
in either plain VCL or using WINAPI (except in some situations), I've written a simple function that blends a bitmap (32bpp / 24bpp) with an overlay colour (any colour).
The actual result will also depend on the weights (w0,w1) given to the red, green and blue components of an individual pixel. Changing these will produce
an overlay that leans more toward the spectrogram colour or the overlay colour respectively.
The code
Graphics::TBitmap *TSelectionOverlay::GetSelectionOverlay(Graphics::TBitmap *pBmp, TColor selColour,
                                                          TRect &rect, EChannel eChannel){
    Graphics::TBitmap *pSelOverlay = eChannel==LEFT ? pSelOverlayLeft : pSelOverlayRight;
    const unsigned cGreenShift = 8;
    const unsigned cBlueShift = 16;
    const unsigned overlayWidth = abs(rect.right-rect.left);
    const unsigned overlayHeight = abs(rect.bottom-rect.top);
    pSelOverlay->Width = pBmp->Width;
    pSelOverlay->Height = pBmp->Height;
    const unsigned startOffset = rect.right>rect.left ? rect.left : rect.right;
    pSelOverlay->Assign(pBmp);

    unsigned char cRed0, cGreen0, cBlue0, cRed1, cGreen1, cBlue1, bRedColor0, bGreenColor0, bBlueColor0;
    cBlue0 = selColour >> cBlueShift;
    cGreen0 = selColour >> cGreenShift & 0xFF;
    cRed0 = selColour & 0xFF;

    unsigned *pPixel;
    for(int i=0;i<overlayHeight;i++){
        pPixel = (unsigned*)pSelOverlay->ScanLine[i]; //provides access to the pixel array
        for(int j=0;j<overlayWidth;j++){
            unsigned pixel = pPixel[startOffset+j];
            cBlue1 = pixel >> cBlueShift;
            cGreen1 = pixel >> cGreenShift & 0xFF;
            cRed1 = pixel & 0xFF;
            //blend the current bitmap pixel with the overlay colour
            const float w0 = 0.5f; //these weights influence the appearance of the overlay (here we use 50%)
            const float w1 = 0.5f;
            bRedColor0 = cRed0*w0+cRed1*w1;
            bGreenColor0 = cGreen0*w0+cGreen1*w1;
            bBlueColor0 = cBlue0*w0+cBlue1*w1;
            pPixel[startOffset+j] = ((bBlueColor0 << cBlueShift) | (bGreenColor0 << cGreenShift)) | bRedColor0;
        }
    }
    return pSelOverlay;
}
Note that for some reason, CopyRect used with a CopyMode value of cmSrcCopy didn't work well so I used Draw instead.
pCanvas->CopyMode = cmSrcCopy;
pCanvas->CopyRect(dstRect, pSelOverlay->Canvas, srcRec);//this still didn't work well--possibly a bug
so I used
pCanvas->Draw(0,0, pSelOverlay);
The result

adjust bitmap image brightness/contrast using c++

adjust image brightness/contrast using c++ without using any other 3rd party library or dependency
Image brightness is here - use the mean of the RGB values and shift them.
Contrast is here with other languages solutions available as well.
Edit in case the above links die:
The answer given by Jerry Coffin below covers the same topic and has links that still live.
But, to adjust brightness, you add a constant value to each of the R, G, B fields of an image. Make sure to use saturated math - don't allow values to go below 0 or above the maximum allowed in your bit depth (8 bits per channel for 24-bit color).
RGB_struct color = GetPixelColor(x, y);
int newRed   = truncate(color.red   + brightAdjust);
int newGreen = truncate(color.green + brightAdjust);
int newBlue  = truncate(color.blue  + brightAdjust);
For contrast, I have taken and slightly modified code from this website:
float factor = (259.0 * (contrast + 255.0)) / (255.0 * (259.0 - contrast));
RGB_struct color = GetPixelColor(x, y);
int newRed   = truncate((int)(factor * (color.red   - 128) + 128));
int newGreen = truncate((int)(factor * (color.green - 128) + 128));
int newBlue  = truncate((int)(factor * (color.blue  - 128) + 128));
Where truncate(int value) makes sure the value stays between 0 and 255 for 8-bit color. Note that many CPUs have intrinsic functions to do this in a single cycle.
int truncate(int value)
{
    if(value < 0) return 0;
    if(value > 255) return 255;
    return value;
}
Read in the image with a library such as the Independent JPEG library. When you have the raw data, you can convert it from RGB to HSL or (preferably) CIE L*a*b*. Both contrast and brightness will basically just involve adjustments to the L channel -- to adjust brightness, just adjust all the L values up or down by an appropriate amount. To adjust contrast, you basically adjust the difference between a particular value and the center value. You'll generally want to do this non-linearly, so values near the middle of the range are adjusted quite a bit, but values close to the ends of the range aren't affected nearly as much (and any that are at the very ends aren't changed at all).
Once you've done that, you can convert back to RGB, and then back to a normal format such as JPEG.
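As a rough sketch of that non-linear idea (this particular curve is just one possibility chosen for illustration, not a standard formula), operating on a lightness value L in [0, 1]:
// Push values near mid-grey away from (or toward) 0.5 while leaving the
// extremes almost untouched. An 'amount' of roughly -0.5..0.5 keeps the
// result in [0, 1]; positive values increase contrast.
float adjustContrastL(float L, float amount)
{
    float d = L - 0.5f;                   // signed distance from mid-grey
    float falloff = 1.0f - 4.0f * d * d;  // 1 at the centre, 0 at the ends
    return L + amount * d * falloff;      // endpoints 0 and 1 are unchanged
}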