Expression must be a modifiable lvalue error - C++

I am writing code that reads a PPM file and stores the width, the height, and the pixels of the file in an image object. In my image class I have a pointer that holds the image data. I also get an error in a method that sets the RGB values for an (x, y) pixel.
typedef float component_t;
class Image
{
public:
    enum channel_t { RED = 0, GREEN, BLUE };

protected:
    component_t * buffer;   //! Holds the image data.
    unsigned int width,     //! The width of the image (in pixels)
                 height;    //! The height of the image (in pixels)
    // data mutators
    /*! Sets the RGB values for an (x,y) pixel.
     *
     * The method should perform any necessary bounds checking.
     *
     * \param x is the (zero-based) horizontal index of the pixel to set.
     * \param y is the (zero-based) vertical index of the pixel to set.
     * \param value is the new color for the (x,y) pixel.
     */
    void setPixel(unsigned int x, unsigned int y, Color & value) {
        if (x > 0 && x < width && y > 0 && y < height) {
            size_t locpixel = y*width + x;
            size_t componentofpixel = locpixel * 3;
            *buffer + componentofpixel = value.r;
            *buffer + componentofpixel + 1 = value.g;
            *buffer + componentofpixel + 2 = value.b;
        }
        else {
            cout << "Pixel out of bounds" << endl;
        }
    }
    Image(unsigned int width, unsigned int height, component_t * data_ptr)
        : width(width), height(height), buffer(data_ptr) {}
So in the setPixel method, when I try to find the correct spot in the buffer to set the RGB values, the compiler gives me the error: "expression must be a modifiable lvalue".
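The problem is operator precedence: `*buffer + componentofpixel` parses as `(*buffer) + componentofpixel`, which is a temporary value, not a memory location, so it cannot appear on the left of `=`. Writing `buffer[componentofpixel]` (equivalent to `*(buffer + componentofpixel)`) fixes it. A minimal, self-contained sketch of the corrected method, assuming `component_t` is `float` and a simple RGB `Color` struct (stand-ins for the asker's real definitions):

```cpp
#include <cstddef>
#include <cstdio>

// Assumed stand-ins for the types in the question.
typedef float component_t;
struct Color { component_t r, g, b; };

struct Image {
    component_t *buffer;
    unsigned int width, height;

    void setPixel(unsigned int x, unsigned int y, const Color &value) {
        // x and y are unsigned, so only the upper bounds need checking.
        if (x < width && y < height) {
            std::size_t componentofpixel = (std::size_t)(y * width + x) * 3;
            // buffer[i] is *(buffer + i) and names a memory location,
            // so it is a modifiable lvalue.
            buffer[componentofpixel]     = value.r;
            buffer[componentofpixel + 1] = value.g;
            buffer[componentofpixel + 2] = value.b;
        } else {
            std::printf("Pixel out of bounds\n");
        }
    }
};
```

Note the original condition `x > 0` would also wrongly reject column 0; since `x` and `y` are unsigned, `x < width && y < height` is the complete check.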

Pixel data unpacking to smaller sections

I'm trying to write a function that unpacks an image into separate quads. But for some reason the results are distorted (they look stretched about 45 degrees), so I must be reading the pixel array incorrectly, though I can't see the problem with my function...
The function takes two unsigned char arrays, "source" and "target", and two unsigned int values, the "width" and "height" of the source image. Width is divisible by 4 and height is divisible by 3 (both divisions return the same value, because the texture is 600 * 450), so each face is 150*150 px. So the w/h values are correct. It also takes two ints, "xIt" and "yIt", which determine the offset, i.e. which 150*150 block should be read.
Here's the function:
const unsigned int trgImgWidth = width / 4;
const unsigned int trgImgHeight = height / 3;
unsigned int trgBufferOffset = 0;

// Compute pixel offset to start reading from
unsigned int Yoffset = yIt * trgImgHeight * width * 3;
unsigned int Xoffset = xIt * trgImgWidth * 3;

for (unsigned int y = 0; y < trgImgHeight; y++)
{
    unsigned int o = Yoffset + Xoffset; // Offset of current line of pixels
    for (unsigned int x = 0; x < trgImgWidth * 3; x++) // for each pixel component (rgb) in the line
    {
        target[trgBufferOffset] = source[o + x];
        trgBufferOffset++;
    }
    Yoffset += width * 3;
}
Anyone see where I might be going wrong here?
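The loop arithmetic above is correct for a tightly packed RGB buffer, so a common cause of this kind of diagonal shear is a source row stride that differs from `width * 3` (for example, rows padded to a 4-byte boundary by the image loader). A self-contained sketch of the same extraction with the stride made explicit (`extractTile` is a hypothetical name; tightly packed rows are the assumption unless `rowStride` says otherwise):

```cpp
#include <cstddef>

// Copies the (xIt, yIt) tile of an RGB image into target.
// rowStride is the number of bytes per source row. If the source uses
// padded/aligned rows, it is larger than width * 3, and assuming
// width * 3 instead produces exactly the sheared output described.
void extractTile(const unsigned char *source, unsigned char *target,
                 unsigned int width, unsigned int height,
                 unsigned int rowStride, int xIt, int yIt)
{
    const unsigned int tileW = width / 4;
    const unsigned int tileH = height / 3;
    std::size_t out = 0;
    for (unsigned int y = 0; y < tileH; y++) {
        // Offset of the first byte of this tile row in the source buffer.
        std::size_t o = (std::size_t)(yIt * tileH + y) * rowStride
                      + (std::size_t)xIt * tileW * 3;
        for (unsigned int x = 0; x < tileW * 3; x++)
            target[out++] = source[o + x];
    }
}
```

If the image really is tightly packed, this reduces to the original code, which would point the bug at how the source buffer is produced or how the target is later interpreted rather than at this loop.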

How to access every pixel?

If I have a PixelBuffer object of size (200 * 200 * 3), where each pixel has three consecutive spots for the RGB colors, how can I index the pixels so that I can implement the DDA line drawing algorithm? I have seen a lot of examples on the web that use PutPixel(x, y), but I'm not sure how I can access the pixels in this method.
The pixels will be arranged row by row, with each pixel using 3 bytes. To address a point (x, y), you multiply the y value by the size of a row (which is the width multiplied by 3), multiply the x value by the size of a pixel (3), and add the two.
With a few constants for readability, the code for the function could look like this:
const int IMG_WIDTH = 200;
const int IMG_HEIGHT = 200;
const int BYTES_PER_PIXEL = 3;
const int BYTES_PER_ROW = IMG_WIDTH * BYTES_PER_PIXEL;

void PutPixel(uint8_t* pImgData, int x, int y, const uint8_t color[3])
{
    uint8_t* pPixel = pImgData + y * BYTES_PER_ROW + x * BYTES_PER_PIXEL;
    for (int iByte = 0; iByte < BYTES_PER_PIXEL; ++iByte)
    {
        pPixel[iByte] = color[iByte];
    }
}
Example how this function could be used:
// Allocate image data.
uint8_t* pImgData = new uint8_t[IMG_HEIGHT * BYTES_PER_ROW];
// Initialize image data, unless you are planning to set all pixels.
...
// Set pixel (50, 30) to yellow.
uint8_t yellow[3] = {255, 255, 0};
PutPixel(pImgData, 50, 30, yellow);
Once you have your image built in memory, you can store the content in a pixel buffer object using glBufferData():
GLuint bufId = 0;
glGenBuffers(1, &bufId);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, IMG_HEIGHT * BYTES_PER_ROW,
             pImgData, GL_STATIC_DRAW);
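Since the goal is the DDA line algorithm, here is a minimal sketch of how a PutPixel of this shape slots into standard DDA (self-contained; the constants and addressing mirror the answer, the DrawLineDDA helper is illustrative, not from the question):

```cpp
#include <cstdint>
#include <cmath>
#include <algorithm>

const int IMG_WIDTH = 200;
const int IMG_HEIGHT = 200;
const int BYTES_PER_PIXEL = 3;
const int BYTES_PER_ROW = IMG_WIDTH * BYTES_PER_PIXEL;

// Same addressing scheme as above: row-major, 3 bytes per pixel.
void PutPixel(uint8_t* pImgData, int x, int y, const uint8_t color[3])
{
    uint8_t* pPixel = pImgData + y * BYTES_PER_ROW + x * BYTES_PER_PIXEL;
    for (int i = 0; i < BYTES_PER_PIXEL; ++i)
        pPixel[i] = color[i];
}

// Standard DDA: step one pixel at a time along the longer axis and
// advance the other axis by the matching fractional increment.
void DrawLineDDA(uint8_t* img, int x0, int y0, int x1, int y1,
                 const uint8_t color[3])
{
    int dx = x1 - x0, dy = y1 - y0;
    int steps = std::max(std::abs(dx), std::abs(dy));
    if (steps == 0) { PutPixel(img, x0, y0, color); return; }
    float xInc = dx / (float)steps, yInc = dy / (float)steps;
    float x = (float)x0, y = (float)y0;
    for (int i = 0; i <= steps; ++i) {
        PutPixel(img, (int)std::lround(x), (int)std::lround(y), color);
        x += xInc;
        y += yInc;
    }
}
```

The point is that once PutPixel hides the `y * BYTES_PER_ROW + x * BYTES_PER_PIXEL` arithmetic, the line algorithm itself never has to know the buffer layout.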

Extracting raw data from template for use in CUDA

The following code is a snippet from the PCL (Point Cloud Library). It calculates the integral sum of an image.
template <class DataType, unsigned Dimension> class IntegralImage2D
{
public:
    static const unsigned dim_fst = Dimension;
    typedef cv::Vec<typename TypeTraits<DataType>::IntegralType, dim_fst> FirstType;
    std::vector<FirstType> img_fst;

    //.... lots of methods missing here that actually calculate the integral sum

    /** \brief Compute the first order sum within a given rectangle
      * \param[in] start_x x position of rectangle
      * \param[in] start_y y position of rectangle
      * \param[in] width width of rectangle
      * \param[in] height height of rectangle
      */
    inline FirstType getFirstOrderSum(unsigned start_x, unsigned start_y, unsigned width, unsigned height) const
    {
        // wdt is the image width, a member variable elided from this excerpt
        const unsigned upper_left_idx  = start_y * (wdt + 1) + start_x;
        const unsigned upper_right_idx = upper_left_idx + width;
        const unsigned lower_left_idx  = (start_y + height) * (wdt + 1) + start_x;
        const unsigned lower_right_idx = lower_left_idx + width;
        return (img_fst[lower_right_idx] + img_fst[upper_left_idx] - img_fst[upper_right_idx] - img_fst[lower_left_idx]);
    }
Currently the results are obtained using the following code:
IntegralImage2D<float,3> iim_xyz;
IntegralImage2D<float, 3>::FirstType fo_elements;
IntegralImage2D<float, 3>::SecondType so_elements;
fo_elements = iim_xyz.getFirstOrderSum(pos_x - rec_wdt_2, pos_y - rec_hgt_2, rec_wdt, rec_hgt);
so_elements = iim_xyz.getSecondOrderSum(pos_x - rec_wdt_2, pos_y - rec_hgt_2, rec_wdt, rec_hgt);
However, I'm trying to parallelise the code (write getFirstOrderSum as a CUDA device function). Since CUDA doesn't recognise these FirstType and SecondType objects (or any OpenCV objects, for that matter), I'm struggling (I'm new to C++) to extract the raw data from the template.
If possible I would like to cast the img_fst object to some kind of vector or array that I can allocate on the cuda kernel.
It seems img_fst is of type std::vector<cv::Matx<double, 3, 1>>.
As it turns out, you can pass the raw data just as you would with a normal vector.
void computation(ps::IntegralImage2D<float, 3> iim_xyz)
{
    cv::Vec<double, 3>* d_img_fst = 0;
    cudaErrorCheck(cudaMalloc((void**)&d_img_fst,
                   sizeof(cv::Vec<double, 3>) * iim_xyz.img_fst.size()));
    cudaErrorCheck(cudaMemcpy(d_img_fst, &iim_xyz.img_fst[0],
                   sizeof(cv::Vec<double, 3>) * iim_xyz.img_fst.size(),
                   cudaMemcpyHostToDevice));
    //..
}
__device__ double* getFirstOrderSum(unsigned start_x, unsigned start_y,
                                    unsigned width, unsigned height,
                                    int wdt, cv::Vec<double, 3>* img_fst)
{
    const unsigned upper_left_idx  = start_y * (wdt + 1) + start_x;
    const unsigned upper_right_idx = upper_left_idx + width;
    const unsigned lower_left_idx  = (start_y + height) * (wdt + 1) + start_x;
    const unsigned lower_right_idx = lower_left_idx + width;

    double* result = new double[3];
    result[0] = img_fst[lower_right_idx].val[0] + img_fst[upper_left_idx].val[0] - img_fst[upper_right_idx].val[0] - img_fst[lower_left_idx].val[0];
    result[1] = img_fst[lower_right_idx].val[1] + img_fst[upper_left_idx].val[1] - img_fst[upper_right_idx].val[1] - img_fst[lower_left_idx].val[1];
    result[2] = img_fst[lower_right_idx].val[2] + img_fst[upper_left_idx].val[2] - img_fst[upper_right_idx].val[2] - img_fst[lower_left_idx].val[2];
    return result; // the caller has to delete[] this pointer, otherwise it leaks
}
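Allocating with `new` inside a `__device__` function is easy to leak; a leak-free alternative is to have the caller supply the output array. A host-side C++ sketch of the same four-corner rectangle-sum arithmetic on a flat array of plain doubles (standing in for `cv::Vec<double, 3>`, an assumption for illustration):

```cpp
// Same integral-image lookup as getFirstOrderSum, but the caller provides
// the 3-element output buffer, so nothing is heap-allocated. img_fst points
// to (wdt + 1) rows of (wdt + 1) entries, 3 doubles per entry.
void firstOrderSum(const double* img_fst, int wdt,
                   unsigned start_x, unsigned start_y,
                   unsigned width, unsigned height,
                   double out[3])
{
    const unsigned ul = start_y * (wdt + 1) + start_x;
    const unsigned ur = ul + width;
    const unsigned ll = (start_y + height) * (wdt + 1) + start_x;
    const unsigned lr = ll + width;
    for (int c = 0; c < 3; ++c)
        out[c] = img_fst[3 * lr + c] + img_fst[3 * ul + c]
               - img_fst[3 * ur + c] - img_fst[3 * ll + c];
}
```

The same out-parameter shape works for the `__device__` version: pass a pointer into a preallocated device buffer instead of returning `new`-allocated memory from the kernel.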

How to set a pixel in a SDL_surface?

I need to use the following function from this page. The SDL_Surface structure is defined as:
typedef struct SDL_Surface {
Uint32 flags; /* Read-only */
SDL_PixelFormat *format; /* Read-only */
int w, h; /* Read-only */
Uint16 pitch; /* Read-only */
void *pixels; /* Read-write */
SDL_Rect clip_rect; /* Read-only */
int refcount; /* Read-mostly */
} SDL_Surface;
The function is:
void set_pixel(SDL_Surface *surface, int x, int y, Uint32 pixel)
{
    Uint8 *target_pixel = (Uint8 *)surface->pixels + y * surface->pitch + x * 4;
    *(Uint32 *)target_pixel = pixel;
}
I have a few doubts here, maybe due to the lack of a real picture:
1. Why do we need to multiply surface->pitch by y, and x by 4?
2. Why is target_pixel declared as an 8-bit integer pointer first, then cast into a 32-bit integer pointer later?
3. How does target_pixel retain the pixel value after the set_pixel function returns?
1. Since each pixel has size 4 (the surface is using Uint32-valued pixels), but the computation is being made in Uint8. The 4 is ugly; see below.
2. To make the address calculation be in bytes.
3. Since the pixel to be written really is 32-bit, the pointer must be 32-bit to make it a single write.
The calculation has to be in bytes since the surface's pitch field is in bytes.
Here's a (less aggressive than my initial attempt) rewrite:
void set_pixel(SDL_Surface *surface, int x, int y, Uint32 pixel)
{
    Uint32 * const target_pixel = (Uint32 *) ((Uint8 *) surface->pixels
                                             + y * surface->pitch
                                             + x * surface->format->BytesPerPixel);
    *target_pixel = pixel;
}
Note how we use surface->format->BytesPerPixel to factor out the 4. Magic constants are not a good idea. Also note that the above assumes that the surface really is using 32-bit pixels.
You can use the code below:
unsigned char* pixels = (unsigned char*)surface->pixels;
pixels[4 * (y * surface->w + x) + c] = 255;
x and y are the coordinates of the point you want, and c selects which channel (the exact order depends on the surface's pixel format):
c = 0 corresponds to blue
c = 1 corresponds to green
c = 2 corresponds to red
c = 3 corresponds to alpha (opacity)

SDL return code 3 from SDL at strange place in code

I am getting error code 3 from an SDL executable, and it seems to occur in a place where I pass an SDL color by value; I don't understand the reason.
void Map::draw(SDL_Surface *surface, int level){
    //the surface is locked
    if ( SDL_MUSTLOCK(surface) )
        SDL_LockSurface(surface);

    long start = (long)level * this->xmax * this->ymax;
    long end   = (long)(level+1) * this->xmax * this->ymax;
    for(long n = start; n < end; ++n){
        Node *pn = this->nodes + n;
        //exit(18); //exit code is 18
        draw_pixel_nolock(surface, pn->location.x, pn->location.y, colors[pn->content]);
    }

    //the surface is unlocked
    if ( SDL_MUSTLOCK(surface) )
        SDL_UnlockSurface(surface);
}
And the graphics function called is:
SDL_Color colors[]= { {0,0,0}, {0xFF,0,0}, {0,0xFF,0}, {0,0,0xFF} };
void PutPixel32_nolock(SDL_Surface * surface, int x, int y, Uint32 color)
{
    Uint8 * pixel = (Uint8*)surface->pixels;
    pixel += (y * surface->pitch) + (x * sizeof(Uint32));
    *((Uint32*)pixel) = color;
}

void PutPixel24_nolock(SDL_Surface * surface, int x, int y, Uint32 color)
{
    Uint8 * pixel = (Uint8*)surface->pixels;
    pixel += (y * surface->pitch) + (x * sizeof(Uint8) * 3);
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    pixel[0] = (color >> 24) & 0xFF;
    pixel[1] = (color >> 16) & 0xFF;
    pixel[2] = (color >> 8) & 0xFF;
#else
    pixel[0] = color & 0xFF;
    pixel[1] = (color >> 8) & 0xFF;
    pixel[2] = (color >> 16) & 0xFF;
#endif
}

void PutPixel16_nolock(SDL_Surface * surface, int x, int y, Uint32 color)
{
    Uint8 * pixel = (Uint8*)surface->pixels;
    pixel += (y * surface->pitch) + (x * sizeof(Uint16));
    *((Uint16*)pixel) = color & 0xFFFF;
}

void PutPixel8_nolock(SDL_Surface * surface, int x, int y, Uint32 color)
{
    Uint8 * pixel = (Uint8*)surface->pixels;
    pixel += (y * surface->pitch) + (x * sizeof(Uint8));
    *pixel = color & 0xFF;
}
//this function draws a pixel of the wanted color on a surface at the (x,y) coordinate
void draw_pixel_nolock(SDL_Surface *surface, int x, int y, SDL_Color s_color)
{
    exit(19); //exit code is 3
    //SDL_MapRGB returns a color mapped for the surface's pixel format
    Uint32 color = SDL_MapRGB(surface->format, s_color.r, s_color.g, s_color.b);
    //bytes per pixel
    int bpp = surface->format->BytesPerPixel;
    //here we check the number of bytes used by our surface
    switch (bpp)
    {
    case 1: // 1 byte => 8-bpp
        PutPixel8_nolock(surface, x, y, color);
        break;
    case 2: // 2 bytes => 16-bpp
        PutPixel16_nolock(surface, x, y, color);
        break;
    case 3: // 3 bytes => 24-bpp
        PutPixel24_nolock(surface, x, y, color);
        break;
    case 4: // 4 bytes => 32-bpp
        PutPixel32_nolock(surface, x, y, color);
        break;
    }
}
The code exits with code 18 when I exit there, but it never exits with code 19; it gives error code 3 instead. What could possibly be going wrong?
Without seeing the entire code it's hard to tell, but as a general practice:
Validate that
long start = (long)level * this->xmax * this->ymax;
long end   = (long)(level+1) * this->xmax * this->ymax;
produce valid offsets into your node array; otherwise this->nodes + n will return a garbage pointer.
Validate that
Node *pn = this->nodes + n;
is not null and is a valid pointer to a Node object.
Validate that
pn->content
is within the bounds of your colors array.
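The checks above can be sketched as a small helper run before the draw loop (the member names come from the question; `Node`, `NUM_COLORS`, and `validateRange` are illustrative stand-ins, since the real class isn't shown):

```cpp
#include <cstddef>

// Minimal stand-in for the question's Node type.
struct Node { int content; };
const std::size_t NUM_COLORS = 4; // size of the colors[] array in the question

// Returns true if every node the draw loop would touch is safe to use:
// the [start, end) range stays inside the allocation, and every content
// value is a valid index into colors[].
bool validateRange(const Node *nodes, std::size_t nodeCount,
                   long start, long end)
{
    if (start < 0 || end < start || (std::size_t)end > nodeCount)
        return false; // nodes + n would run past the allocation
    for (long n = start; n < end; ++n) {
        if (nodes[n].content < 0 ||
            (std::size_t)nodes[n].content >= NUM_COLORS)
            return false; // colors[pn->content] would be out of bounds
    }
    return true;
}
```

Running a check like this once at the top of Map::draw (and logging which condition failed) usually pins down whether the crash comes from the node range or from a bad content index.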