Vertically flipping a char array: is there a more efficient way? - c++

Let's start with some code:
QByteArray OpenGLWidget::modifyImage(QByteArray imageArray, const int width, const int height){
    if (vertFlip){
        /* Each pixel consists of four unsigned chars: Red Green Blue Alpha.
         * The field is normally 640*480, which means the whole picture is in fact 640*4 uChars wide.
         * The whole ByteArray is one-dimensional, so index 640*4 is the red of the first pixel of the second row.
         * This function is EXTREMELY SLOW.
         */
        QByteArray tempArray = imageArray;
        for (int h = 0; h < height; ++h){
            for (int w = 0; w < width/2; ++w){
                for (int i = 0; i < 4; ++i){
                    imageArray.data()[h*width*4 + 4*w + i] = tempArray.data()[h*width*4 + (4*width - 4*w) + i];
                    imageArray.data()[h*width*4 + (4*width - 4*w) + i] = tempArray.data()[h*width*4 + 4*w + i];
                }
            }
        }
    }
    return imageArray;
}
This is the code I use right now to vertically flip an image which is 640*480 (the image is not actually guaranteed to be 640*480, but it mostly is). The color encoding is RGBA, which means the total array size is 640*480*4. I receive the images at 30 FPS, and I want to show them on screen at the same rate.
On an older CPU (Athlon X2) this code is just too much: the CPU races to keep up with the 30 FPS, so the question is: can I do this more efficiently?
I am also working with OpenGL; does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?

According to this question, you can flip an image in OpenGL by scaling it by (1,-1,1). This question explains how to do transformations and scaling.
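If it helps, here is a minimal fixed-function sketch of that approach (assuming the frame has already been uploaded as a bound 2D texture; the quad coordinates are placeholders):
// Draw the textured quad mirrored vertically via a negative y scale.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f); // the (1, -1, 1) scale mentioned above
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glPopMatrix();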

You can improve things at least by doing it blockwise, making use of the cache architecture. In your example, one of the accesses (either the read or the write) will be off-cache.

For a start it can help to "capture scanlines" if you're using two loops to loop through the pixels of an image, like so:
for (int y = 0; y < height; ++y)
{
    // Capture the scanline.
    char* scanline = imageArray.data() + y*width*4;
    for (int x = 0; x < width/2; ++x)
    {
        const int flipped_x = width - x - 1;
        for (int i = 0; i < 4; ++i)
            std::swap(scanline[x*4 + i], scanline[flipped_x*4 + i]);
    }
}
Another thing to note is that I used swap instead of a temporary image. That'll tend to be more efficient, since you can just swap using registers instead of loading pixels from a copy of the entire image.
It also generally helps to use a 32-bit integer instead of working one byte at a time if you're going to be doing anything like this. If you're working with pixels in 8-bit types but know that each pixel is 32 bits, as in your case, you can generally get away with a cast to uint32_t*, e.g.
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}
At this point you might parallelize the y loop. Flipping an image horizontally (it should be "horizontal" if I understood your original code correctly) in this way is a little bit tricky with the access patterns, but you should be able to get quite a decent boost using the above techniques.
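As a sketch, parallelizing that per-row loop can be as simple as one OpenMP pragma (assuming the build enables OpenMP, e.g. -fopenmp; each row is independent, so there are no races):
#pragma omp parallel for
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}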
I am also working with OpenGL, does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
Naturally the fastest way to flip images is to not touch their pixels at all and just save the flipping for the final part of the pipeline when you render the result. For this you might render a texture in OGL with negative scaling instead of modifying the pixels of a texture.
Another thing that's really useful in video and image processing is to represent an image to process like this for all your image operations:
struct Image32
{
    uint32_t* pixels;
    int32_t width;
    int32_t height;
    int32_t x_stride;
    int32_t y_stride;
};
The stride fields are what you use to get from one scanline (row) of an image to the next vertically, and from one column to the next horizontally. When you use this representation, you can use negative values for the strides and offset the pixels pointer accordingly. You can also use the stride fields to, say, render only every other scanline of an image for fast interactive half-res previews, by setting y_stride to width*2 and halving height. You can quarter-res an image by setting x_stride to 2 and y_stride to 2*width and then halving the width and height. And you can render a cropped image without making your blit functions accept a boatload of parameters: just adjust the pixels, width, and height fields while keeping y_stride at the full image's width, so that stepping one row inside the cropped section still lands on the next row.
// Using the stride representation of Image32, this can now
// blit a cropped source, a horizontally flipped source,
// a vertically flipped source, a source flipped both ways,
// a half-res source, a quarter-res source, a quarter-res
// source that is horizontally flipped and cropped, etc,
// and all without modifying the source image in advance
// or having to accept all kinds of extra drawing parameters.
void blit(int dst_x, int dst_y, Image32 dst, Image32 src);
// We don't have to do things like this (and I think I lost
// some capabilities with this version below but it hurts my
// brain too much to think about what capabilities were lost):
void blit_gross(int dst_x, int dst_y, int dst_w, int dst_h, uint32_t* dst,
                int src_x, int src_y, int src_w, int src_h,
                const uint32_t* src, bool flip_x, bool flip_y);
By using negative stride values and passing the image to an operation (ex: a blit operation), the result will naturally come out flipped without you having to actually flip the image. It ends up being "drawn flipped", so to speak, just as with the case of using OGL with a negative scaling transformation matrix.
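As a minimal sketch of that idea, using the Image32 struct above (with strides measured in pixels; flip_y_view is just an illustrative name), a vertically flipped view costs two field changes and no pixel copies:
// Build a vertically flipped *view* of an image without touching its pixels.
Image32 flip_y_view(Image32 img)
{
    Image32 v = img;
    v.pixels += (img.height - 1) * img.y_stride; // start at the last row
    v.y_stride = -img.y_stride;                  // walk the rows upward
    return v;
}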

Related

Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried, as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale and, although it gives a better result than bilinear or bicubic interpolation, it also takes quite a performance hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100
// Variables
float fracx,fracy;
int Xnew,Ynew,p,q,Calc;
int x,y,z,p1,q1,i,j;
// New image dimensions
Xnew = image->width*ctl(0)/100;
Ynew = image->height*ctl(0)/100;
for (y=0; y<image->height; y++){ // rows
    for (x=0; x<image->width; x++){ // columns
        p1 = (int)x*image->width/Xnew;
        q1 = (int)y*image->height/Ynew;
        for (z=0; z<3; z++){ // channels
            Calc = 0; // reset the accumulator for each channel
            for (i=-3; i<=3; i++){
                for (j=-3; j<=3; j++){
                    Calc += (int)(src(p1-i,q1-j,z));
                } //j
            } //i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never compute an index at every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done? Use pointers to the data and manipulate those.
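As an illustrative sketch (src_data, width and height are hypothetical names for a packed 3-channel 8-bit buffer and its dimensions), a walking pointer costs one increment per byte instead of a full index computation per access:
const uint8_t* p = src_data;
uint64_t sum = 0;
for (int n = 0; n < width * height * 3; ++n)
    sum += *p++; // touch every byte with no per-pixel index math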
Second: your algorithm is wrong. You don't want an average filter: you want to lay a grid over your source image and, for every grid cell, compute the average and store it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /* pointer to input data */;
uint8_t* pout = /* pointer to output data */;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew*4);
    memset(pcount = count.data(), 0, Xnew*4);
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin;
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I changed the variable names afterwards to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32-bit ints which can hold the partial sums of all the source pixels that fall into one row of the output image. Then you divide by the cell count and save the result to the final image.
The first thing you should do is set up a performance evaluation system to measure how much any change impacts performance.
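A minimal sketch of such a harness using std::chrono (downscale() stands in for whatever operation you are measuring):
#include <chrono>
#include <cstdio>

auto t0 = std::chrono::steady_clock::now();
downscale(); // the operation under test
auto t1 = std::chrono::steady_clock::now();
std::printf("%.3f ms\n", std::chrono::duration<double, std::milli>(t1 - t0).count());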
As said previously, you should use pointers rather than indexes for a (probably) substantial speed-up, and you shouldn't simply average: a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is a matrix giving the weight of each sampled pixel. That way, you will be able to test different strategies and optimize for quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code, you are applying a 7x7 box kernel (i and j both run from -3 to 3, and the sum is divided by 49). Written out, the equivalent kernel is a 7x7 matrix of ones scaled by 1/49:
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1] * 1/49
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
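A minimal sketch of applying such a kernel (assuming a single-channel 8-bit image; borders are skipped and the result is clamped to 0..255; apply_kernel is just an illustrative name):
void apply_kernel(const uint8_t* src, uint8_t* dst, int w, int h,
                  const float* k, int n) // n is odd; k holds n*n weights
{
    const int r = n / 2;
    for (int y = r; y < h - r; ++y)
        for (int x = r; x < w - r; ++x) {
            float acc = 0.0f;
            for (int i = -r; i <= r; ++i)
                for (int j = -r; j <= r; ++j)
                    acc += k[(i + r)*n + (j + r)] * src[(y + i)*w + (x + j)];
            const int v = (int)(acc + 0.5f);
            dst[y*w + x] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }
}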

GLbyte Data in Strange Format -- NPR Technique

I'm working on an edge detection algorithm for an NPR technique. I plan on just using a difference of Gaussians to find the edges.
I thought that I would take a copy of the current screen, then analyze and recolor the pixels so that I have a map to draw the edges with.
This is my screen copy logic so far:
int width = rd->width();
int height = rd->height();
GLbyte* data = (GLbyte*)malloc(width * height * 3);
if (data) {
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
float color = 0;
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        color = data[i*width+j];
    }
}
Seeing as I'm just grabbing everything, I didn't think the alpha component was necessary to copy. rd is my render device, and data comes out looking like this:
2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Vy2Vy2Vy2Vx2Vx2Vx2Vx2Vx2Vx2Vx2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy...
And I have no idea how to handle that. I tried reading a value into the float color as shown above, but that didn't really help me, as I don't know what the value means. Is each value I'm reading an intensity for the pixel, or do I need to read three values in a row to get all the channels?
What is a good way to get the data displayed on the screen, modify it, and redraw it?
You are telling glReadPixels that you want to read RGB values as 3 bytes per pixel, and then you are reading the result into a single float value. That cannot work.
Try the following instead:
unsigned char color[3];
for (int i = 0; i < height; i++) {     // rows
    for (int j = 0; j < width; j++) {  // columns
        color[0] = data[3*(i*width+j)];   // R
        color[1] = data[3*(i*width+j)+1]; // G
        color[2] = data[3*(i*width+j)+2]; // B
    }
}
I haven't tried it so there might be some mistakes. But you get the idea.
You could also tell glReadPixels that you only want GL_RED as GL_FLOAT and read into a float buffer if you are processing black-and-white images and only want the intensity. Or GL_LUMINANCE; it's really up to you, but you need to be consistent between the parameters you pass to glReadPixels and the way you parse that data.
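For example, a minimal sketch of the single-channel float read-back (assuming the framebuffer you want to read is currently bound):
std::vector<float> intensity(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, intensity.data());
// intensity[y*width + x] is now the red value of pixel (x, y) in [0, 1].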

C++/SDL: Fading out a surface already having per-pixel alpha information

Suppose we have a 32-bit PNG file of some ghostly/incorporeal character, which is drawn in a semi-transparent fashion. It is not equally transparent in every place, so we need the per-pixel alpha information when loading it to a surface.
For fading in/out, setting the alpha value of an entire surface is the usual way; but not in this case, as the surface already carries the per-pixel information and SDL doesn't combine the two.
What would be an efficient workaround (instead of asking the artist to provide some awesome fade in/out animation for the character)?
I think the easiest way to achieve the result you want is to start by loading the source surface containing your character sprites and then, for every instance of your ghost, create a working copy of the surface. Every time the alpha value of an instance changes, SDL_BlitSurface (doc) your source into your working copy and then apply your transparency (which you should probably keep as a float between 0 and 1) to every pixel's alpha channel.
In the case of a 32-bit surface, assuming that you initially loaded the source and allocated the working SDL_Surfaces, you can probably do something along the lines of:
SDL_BlitSurface(source, NULL, working, NULL);
if(SDL_MUSTLOCK(working))
{
    if(SDL_LockSurface(working) < 0)
    {
        return -1;
    }
}
Uint8* pixels = (Uint8*)working->pixels;
int pitch_padding = working->pitch - (4 * working->w);
pixels += 3; // Big endian will have an offset of 0, otherwise it's 3 (R, G and B)
for(int row = 0; row < working->h; ++row)
{
    for(int col = 0; col < working->w; ++col)
    {
        *pixels = (Uint8)(*pixels * character_transparency); // could be optimized, but probably not worth it
        pixels += 4;
    }
    pixels += pitch_padding;
}
if(SDL_MUSTLOCK(working))
{
    SDL_UnlockSurface(working);
}
This code was inspired by SDL_gfx (here), but if this is all you're doing, I wouldn't bother linking against a library just for that.

Setting individual pixels of an RGB frame for ffmpeg encoding

I'm trying to change the test pattern of an ffmpeg streamer (see Trouble syncing libavformat/ffmpeg with x264 and RTP) into a familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.
So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html .
To stuff its pixel array called data, I've tried many variations on
for (int y=0; y<HEIGHT; ++y) {
    for (int x=0; x<WIDTH; ++x) {
        uint8_t* rgb = data + ((y*WIDTH + x) * 3);
        const double i = x/double(WIDTH);
        // const double j = y/double(HEIGHT);
        rgb[0] = 255*i;
        rgb[1] = 0;
        rgb[2] = 255*(1-i);
    }
}
At HEIGHTxWIDTH = 80x60, this version yields a striped four-column pattern (inline image omitted), when I expect a single blue-to-red horizontal gradient.
640x480 yields the same 4-column pattern, but with far more horizontal stripes.
640x640, 160x160, etc, yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.
Vertical gradients behave even more weirdly.
Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 bytes per pixel instead of 3, with alpha = 255). It was also unaffected by a port from C to C++.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Access each Pixel of AVFrame asks the same question in less detail, so far unanswered.
The streamer emits one warning, which I doubt affects appearance:
[rtp # 0x269c0a0] Encoder did not produce proper pts, making some up.
So. How do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?
Use avpicture_fill() as described in Encoding a screenshot into a video using FFMPEG.
Instead of passing data directly to sws_scale(), do this:
AVFrame* pic = avcodec_alloc_frame();
avpicture_fill((AVPicture *)pic, data, AV_PIX_FMT_RGB24, WIDTH, HEIGHT);
and then replace the 2nd and 3rd args of sws_scale() with
pic->data, pic->linesize,
Then the gradients above work properly, at many resolutions.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Stride (AKA linesize) is the distance in bytes between two lines. For various reasons, mostly having to do with optimization, it is often larger than simply the width in bytes, so there is padding at the end of each line.
In your case, without any padding, stride should be width * 3.
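Putting the two answers together, a sketch of the corrected call might look like this (ctx and enc_frame are hypothetical names for the SwsContext and the destination frame from the original streamer code):
// pic wraps the packed RGB24 buffer, as filled above; with no padding,
// pic->linesize[0] == WIDTH * 3.
sws_scale(ctx, pic->data, pic->linesize, 0, HEIGHT,
          enc_frame->data, enc_frame->linesize);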

OpenCV: Accessing And Taking The Square Root Of Pixels

I'm using OpenCV for object detection and one of the operations I would like to be able to perform is a per-pixel square root. I imagine the loop would be something like:
IplImage* img_;
...
for (int y = 0; y < img_->height; y++) {
    for (int x = 0; x < img_->width; x++) {
        // Take the pixel square root here
    }
}
My question is how can I access the pixel value at coordinates (x, y) in an IplImage object?
Assuming img_ is of type IplImage*, and assuming 16-bit unsigned integer data, I would say:
unsigned short pixel_value = ((unsigned short *)&(img_->imageData[img_->widthStep * y]))[x];
See also here for IplImage definition.
An OpenCV IplImage stores its data as a one-dimensional array, so you must compute a single index to get at the image data. The position of your pixel is based on the color depth and the number of channels in your image.
// width step (bytes per row)
int ws = img_->widthStep;
// the number of channels (colors)
int nc = img_->nChannels;
// the depth of a color, in bytes
int d = (img_->depth & 0x0000ffff) >> 3;
// assuming the depth is the size of a short,
// this gives you a pointer to the first color of the pixel
unsigned short* pixel_value = (unsigned short*)(img_->imageData + (y*ws) + (x*nc*d));
// if you are rolling grayscale, just dereference the pointer
You can pick a channel (color) by advancing the pixel pointer: pixel_value++. I would suggest using a lookup table for the square roots of pixels if this is going to be any sort of real-time application.
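For 8-bit data, a minimal sketch of such a lookup table (256 entries, computed once up front) might be:
#include <cmath>

unsigned char sqrt_lut[256];
for (int v = 0; v < 256; ++v)
    sqrt_lut[v] = (unsigned char)(std::sqrt((double)v) + 0.5); // rounded
// later, per pixel: *p = sqrt_lut[*p];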
Please use the CV_IMAGE_ELEM macro.
Also, consider using cvPow with power=0.5 instead of working on the pixels yourself, which should be avoided anyway.
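A minimal sketch of that (assuming the image first needs converting to float, since cvPow with a fractional power expects floating-point data):
// Per-pixel square root via cvPow on a float copy of the image.
IplImage* f = cvCreateImage(cvGetSize(img_), IPL_DEPTH_32F, img_->nChannels);
cvConvertScale(img_, f); // widen to 32-bit float
cvPow(f, f, 0.5);        // element-wise square root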
You may find several ways of reaching image elements in Gady Agam's nice OpenCV tutorial here.