Using glDrawArrays to upload frame buffers? - opengl

I'm attempting to create a software renderer for my tile-based game and got stuck at the most important step: copying pixel chunks into the frame buffer.
First off, here's my class:
struct pixel_buffer
{
    int width, height, bytesPerPixel;
    unsigned char* pixels;

    pixel_buffer()
    {
        pixels = NULL;
    }
    pixel_buffer(const char* imageName)
    {
        Image i = LoadImage(imageName);
        width = i.width();
        height = i.height();
        bytesPerPixel = i.bpp();
        pixels = i.duplicate();
        i.destroy();
    }
    pixel_buffer(int w, int h, int bpp)
    {
        width = w;
        height = h;
        bytesPerPixel = bpp;
        pixels = new unsigned char[w * h * bpp];
    }
    ~pixel_buffer()
    {
        delete[] pixels;
    }
    void attach(int x, int y, const pixel_buffer& other)
    {
        //How to copy the contents of "other" at x,y offset of pixel buffer ?
    }
};
//This is the screen buffer and the contents get uploaded with glDrawArrays
pixel_buffer screen(640,480,4);
//This is a simple tile
pixel_buffer tile("files\\tiles\\brick.bmp");
//Attach tile to screen at offset 50,10
screen.attach(50,10,tile);
And this is the part that confuses me:
void attach(int x, int y, const pixel_buffer& other)
{
    //How to copy the contents of "other" at x,y offset of pixel buffer ?
}
I'm not sure how I'm supposed to attach ("copy") the pixel buffer of the tile to the screen buffer.
I tried this, but it didn't work:
void attach(int x, int y, const pixel_buffer& other)
{
    memcpy(&pixels[x + y * (this->width * this->bytesPerPixel)], other.pixels, other.width * other.height * other.bytesPerPixel);
}
So I would like to ask for some help :-D
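Is a row-by-row copy the right direction here? The tile's rows aren't contiguous inside the screen buffer (each destination row starts width * bytesPerPixel bytes after the previous one), so I imagine something like the sketch below, assuming both buffers share the same bytesPerPixel and the tile fits entirely inside the screen:
void attach(int x, int y, const pixel_buffer& other)
{
    // Copy one row at a time: the tile's rows are not contiguous inside the
    // screen buffer, so a single memcpy over the whole tile cannot work.
    // Assumes both buffers use the same bytesPerPixel and that the tile
    // fits entirely inside this buffer at offset (x, y).
    const int rowBytes = other.width * other.bytesPerPixel;
    for (int row = 0; row < other.height; ++row)
    {
        unsigned char* dst = pixels + ((y + row) * width + x) * bytesPerPixel;
        const unsigned char* src = other.pixels + row * rowBytes;
        memcpy(dst, src, rowBytes);
    }
}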

Related

Get certain pixels with their coordinates from an Image with Qt/QML

We are creating a game with maps. Players can walk on those maps, but to know whether they can walk somewhere, we have another image in which the walkable path is painted.
The player moves by clicking on the map: if the click falls on the collider image, the character should go to the clicked point using a pathfinder. If not, the character doesn't move.
For example, here is a map and its collision path image:
How can I know if I've clicked on the collider (this is a PNG with one color and transparency) in Qt?
I'm using QML and Felgo for rendering, so if there is already a way to do it with QML, even better, but I can implement it in C++ too.
My second question is: how can I do a pathfinder? I know the algorithms for that, but should I move using pixels?
I've seen the QPainterPath class, which could be what I'm looking for. How can I read all pixels of a certain color in my image and get their coordinates?
Thanks
The QML interface doesn't provide an efficient way to solve this task; it should be done on the C++ side.
To get the image data you can use:
QImage to load the image
QImage::constScanLine, called N times and reading K pixels each time, where N is the image height in pixels and K is the width
How do you deal with the uchar* returned by QImage::constScanLine?
You can call QImage::format() to determine the pixel format hidden behind the uchar*. Or you can call QImage::convertToFormat(QImage::Format_RGB32) and always cast the pixel data from uchar* to a custom struct like PixelData:
#pragma pack(push, 1)
struct PixelData {
    uint8_t padding;
    uint8_t r;
    uint8_t g;
    uint8_t b;
};
#pragma pack(pop)
The struct layout follows this documentation: https://doc.qt.io/qt-5/qimage.html#Format-enum
Here is a compilable solution for loading an image into RAM so you can work with its data efficiently:
#include <QImage>
#include <cstring>

#pragma pack(push, 1)
struct PixelData {
    uint8_t padding;
    uint8_t r;
    uint8_t g;
    uint8_t b;
};
#pragma pack(pop)

void loadImage(const char* path, int& w, int& h, PixelData** data) {
    Q_ASSERT(data);
    QImage initialImage;
    initialImage.load(path);
    auto image = initialImage.convertToFormat(QImage::Format_RGB32);
    w = image.width();
    h = image.height();
    *data = new PixelData[w * h];
    PixelData* outData = *data;
    for (int y = 0; y < h; y++) {
        auto scanLine = image.constScanLine(y);
        memcpy(outData, scanLine, sizeof(PixelData) * w);
        outData += w;
    }
}

void pathfinder(const PixelData* data, int w, int h) {
    // Your algorithm here
}

void cleanupData(PixelData* data) {
    delete[] data;
}

int main(int argc, char *argv[])
{
    int width, height;
    PixelData* data;
    loadImage("D:\\image.png", width, height, &data);
    pathfinder(data, width, height);
    cleanupData(data);
    return 0;
}
You can access each pixel by calling this function:
inline const PixelData& getPixel(int x, int y, const PixelData* data, int w) {
    return *(data + (w * y) + x);
}
... or use this formula somewhere in your pathfinding algorithm, where it could be more efficient.
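If all you need for the click test is to know whether a clicked map coordinate lies on the collider, a simpler route is QImage::pixelColor. This is only a sketch: it assumes the click has already been mapped into image coordinates, and that the collider PNG is loaded without discarding its alpha channel (so not converted to Format_RGB32).
#include <QImage>

// Hypothetical helper: returns true when the clicked point lies on the
// painted path of the collider image. clickX/clickY are assumed to be
// image coordinates already.
bool isOnCollider(const QImage& collider, int clickX, int clickY) {
    if (clickX < 0 || clickY < 0 ||
        clickX >= collider.width() || clickY >= collider.height()) {
        return false;
    }
    // The collider is a single color plus transparency, so any
    // non-transparent pixel counts as part of the path.
    return collider.pixelColor(clickX, clickY).alpha() > 0;
}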

"Expression must be a modifiable lvalue" error

I am writing code that reads a PPM file and stores the file's width, height, and pixels in an image object. In my image class I have a pointer that holds the image data. I get an error in the method that sets the RGB values for an (x, y) pixel.
typedef float component_t;
class Image
{
public:
    enum channel_t { RED = 0, GREEN, BLUE };
protected:
    component_t * buffer;    //! Holds the image data.
    unsigned int width,      //! The width of the image (in pixels)
                 height;     //! The height of the image (in pixels)
    // data mutators
    /*! Sets the RGB values for an (x,y) pixel.
     *
     * The method should perform any necessary bounds checking.
     *
     * \param x is the (zero-based) horizontal index of the pixel to set.
     * \param y is the (zero-based) vertical index of the pixel to set.
     * \param value is the new color for the (x,y) pixel.
     */
    void setPixel(unsigned int x, unsigned int y, Color & value) {
        if (x > 0 && x < width && y > 0 && y < height) {
            size_t locpixel = y*width + x;
            size_t componentofpixel = locpixel * 3;
            *buffer + componentofpixel = value.r;
            *buffer + componentofpixel + 1 = value.g;
            *buffer + componentofpixel + 2 = value.b;
        }
        else {
            cout << "Pixel out of bounds" << endl;
        }
    }

    Image(unsigned int width, unsigned int height, component_t * data_ptr): width(width), height(height), buffer(data_ptr) {}
So in the setPixel method, when I try to find the correct spot in the buffer to set the RGB values, it shows me the error: "expression must be a modifiable lvalue".
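The error comes from operator precedence: *buffer + componentofpixel parses as (*buffer) + componentofpixel, which is a temporary value rather than something you can assign to. A minimal sketch of the indexing the method presumably intends, assuming three components per pixel as in the original:
// *(buffer + i) = ...   or, equivalently,   buffer[i] = ...
buffer[componentofpixel]     = value.r;
buffer[componentofpixel + 1] = value.g;
buffer[componentofpixel + 2] = value.b;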

Blitting a surface onto another surface (SDL2, C++, VS2015)

I'm working on a small game for school. I tiled an image on screen, but every time my character moves I have to re-tile it (the tiles are behind the character, because it's a grid and the character moves between cells). I tried to tile everything onto a different surface and then blit that surface onto my screen surface, to avoid having to re-tile every single time and save on processing time.
It didn't really work; it's as if the surface I tile onto forgets what was tiled onto it. There's no error, it just doesn't display the tiled surface on my window surface.
Here's my code (the relevant part at least):
void postaviTiles() {
    SDL_BlitSurface(cell, NULL, polje, &offsetcell); //cell
    for (int i = 0; i < 89; i++) {
        SDL_Delay(5);
        if (offsetcell.x < 450) {
            offsetcell.x += 50;
            SDL_BlitSurface(cell, NULL, polje, &offsetcell);
        }
        else {
            offsetcell.x = 9;
            offsetcell.y += 50;
            SDL_BlitSurface(cell, NULL, polje, &offsetcell);
        }
        SDL_UpdateWindowSurface(okno);
    }
    poljezrisano = true;
}
//--------------------------------------------------------------//
void tileCells() {
    if (poljezrisano == false) {
        postaviTiles();
    }
    SDL_BlitSurface(polje, NULL, oknoSurface, NULL); //cell
    SDL_UpdateWindowSurface(okno);
}
//--------------------------------------------------------------//
Worth mentioning: tiling everything every single time works fine, but I want to tile it once, keep that on a surface, and then just blit that surface onto my screen surface.
P.S.: Sorry about most of the variables and function names not being in English
SDL_BlitSurface takes a source surface, a clip of that source surface, then the destination surface and a position where you want to display (blit) the source.
The last parameter passed to SDL_BlitSurface ignores the width and height; it only uses the x and y.
Here is a quote from the documentation:
The width and height in srcrect determine the size of the copied rectangle. Only the position is used in the dstrect (the width and height are ignored).
And the prototype for the function:
int SDL_BlitSurface(SDL_Surface*    src,
                    const SDL_Rect* srcrect,
                    SDL_Surface*    dst,
                    SDL_Rect*       dstrect)
That's one thing to keep in mind; I'm not sure it applies to your case, since your variable names aren't in English.
But essentially with this line:
SDL_BlitSurface(cell, NULL, polje, &offsetcell);
You are telling SDL that you want all of cell placed inside polje at the position offsetcell.x and offsetcell.y with the width of cell.w and the height of cell.h.
If you wanted to place cell inside polje using the width and height of offsetcell, then you would have to use another blit function, namely SDL_BlitScaled.
Here is how I would blit tiles inside a grid (map).
SDL_Surface* grid;  // assuming this is the new grid surface that holds the blitted tiles
SDL_Surface* tile;  // assuming this is a single tile of width 50, height 50
SDL_Surface* windowSurface;
SDL_Window* window;

int TileWidth = 50;
int TileHeight = 50;
int NumTiles = 90;
int TileColumns = 450 / TileWidth; // = 9, so we have a grid of 9x10

bool isFillGridTiles = false;

void FillGridTiles()
{
    for (int i = 0; i < NumTiles; i++)
    {
        auto y = i / TileColumns; // divide to get the y position and...
        auto x = i % TileColumns; // the remainder is the x position inside the grid

        SDL_Rect srcClip;
        srcClip.x = 0;
        srcClip.y = 0;
        srcClip.w = TileWidth;
        srcClip.h = TileHeight;

        SDL_Rect dstClip;
        dstClip.x = x * TileWidth;
        dstClip.y = y * TileHeight;
        dstClip.w = TileWidth;
        dstClip.h = TileHeight;

        SDL_BlitSurface(tile, &srcClip, grid, &dstClip); // since we have the same width and height, we can use SDL_BlitSurface instead of SDL_BlitScaled
    }
    isFillGridTiles = true;
}

void BlitOnScreen()
{
    if (!isFillGridTiles)
    {
        FillGridTiles();
    }
    SDL_BlitSurface(grid, NULL, windowSurface, NULL);
    SDL_UpdateWindowSurface(window);
}
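The code above assumes grid already exists. If it doesn't, it has to be created once before FillGridTiles runs; a sketch of that setup (passing 0 for the masks lets SDL pick defaults for a 32-bit surface):
// One-time setup, before FillGridTiles() is first called.
// 9 columns x 10 rows of 50x50 tiles -> 450x500 pixels.
grid = SDL_CreateRGBSurface(0,
                            TileColumns * TileWidth,
                            (NumTiles / TileColumns) * TileHeight,
                            32, 0, 0, 0, 0);
if (grid == NULL) {
    SDL_Log("SDL_CreateRGBSurface failed: %s", SDL_GetError());
}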
Not sure if the code is complete as posted, but it seems you are not initializing offsetcell. That fits the symptom of nothing showing up. Explicitly setting offsetcell might be better than the incremental method you've used. For example:
for (offsetcell.x = 0; offsetcell.x < 450; offsetcell.x += 50) {
    for (offsetcell.y = 0; offsetcell.y < 450; offsetcell.y += 50) {
        ...
    }
}

No member named idleMovie in ofVideoPlayer

Any C++ gurus able to tell me why I get the following error?
I've tried recreating the header file several times.
Is there possibly something wrong with my header?
No member named idleMovie in ofVideoPlayer
void testApp::setup(){
    // setup video dimensions
    videoWidth = 320;
    videoHeight = 240;
    outputWidth = 320;
    outputHeight = 240;
    // masked image ofTexture
    maskedImg.allocate(outputWidth, outputHeight, GL_RGBA);
    // videograbber init
    vidGrabber.setVerbose(true);
    vidGrabber.initGrabber(videoWidth, videoHeight);
    // background quicktime
    vidPlayer.loadMovie("gracht5000.mov");
    vidPlayer.play();
    // video source
    colorImg.allocate(outputWidth, outputHeight);
    // grayscale source
    grayImage.allocate(outputWidth, outputHeight);
    // static difference image
    grayBg.allocate(outputWidth, outputHeight);
    // difference (mask) between grayscale source and static image
    grayDiff.allocate(outputWidth, outputHeight);
    bLearnBakground = true;
    threshold = 80;
}
//--------------------------------------------------------------
void testApp::update()
{
    ofBackground(100,100,100);
    bool bNewFrame = false;
    vidPlayer.idleMovie();
    vidGrabber.grabFrame();
    bNewFrame = vidGrabber.isFrameNew();
    if (bNewFrame)
    {
        colorImg.setFromPixels(vidGrabber.getPixels(), outputWidth, outputHeight);
        grayImage = colorImg;
        // learn new background image
        if (bLearnBakground == true){
            grayBg = grayImage; // the = sign copies the pixels from grayImage into grayBg (operator overloading)
            bLearnBakground = false;
        }
        // take the abs value of the difference between background and incoming and then threshold:
        grayDiff.absDiff(grayBg, grayImage);
        grayDiff.threshold(threshold);
        grayDiff.blur( 3 );
        // pixels array of the mask
        unsigned char * maskPixels = grayDiff.getPixels();
        // pixel array of webcam video
        unsigned char * colorPixels = colorImg.getPixels();
        // numpixels in mask
        int numPixels = outputWidth * outputHeight;
        // masked video image (RGBA) (final result)
        unsigned char * maskedPixels = new unsigned char[outputWidth*outputHeight*4];
        // loop the mask
        for(int i = 0; i < numPixels; i+=1 )
        {
            int basePixelRGBA = 4 * i;
            int basePixelRGB = 3 * i;
            // compose final result
            maskedPixels[ basePixelRGBA + 0 ] = colorPixels[basePixelRGB];   // take pixels from webcam source
            maskedPixels[ basePixelRGBA + 1 ] = colorPixels[basePixelRGB+1]; // take pixels from webcam source
            maskedPixels[ basePixelRGBA + 2 ] = colorPixels[basePixelRGB+2]; // take pixels from webcam source
            maskedPixels[ basePixelRGBA + 3 ] = maskPixels[i];               // alpha channel from mask pixel array
        }
        // load final image into texture
        maskedImg.loadData(maskedPixels, outputWidth, outputHeight, GL_RGBA );
    }
}
//--------------------------------------------------------------
void testApp::draw(){
    ofSetColor(0xffffff);
    // draw bg video
    vidPlayer.draw(0,0);
    // draw masked webcam feed
    ofEnableAlphaBlending();
    maskedImg.draw(20,20);
    ofDisableAlphaBlending();
    // info
    ofSetColor(0xffffff);
    char reportStr[1024];
    sprintf(reportStr, "bg subtraction and blob detection\npress ' ' to capture bg\nthreshold %i (press: +/-)\n, fps: %f", threshold, ofGetFrameRate());
    ofDrawBitmapString(reportStr, 20, 600);
}
HEADER
#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp{
    public:
        void setup();
        void update();
        void draw();
        void keyPressed(int key);
        void keyReleased(int key);
        void mouseMoved(int x, int y );
        void mouseDragged(int x, int y, int button);
        void mousePressed(int x, int y, int button);
        void mouseReleased(int x, int y, int button);
        void windowResized(int w, int h);
        void dragEvent(ofDragInfo dragInfo);
        void gotMessage(ofMessage msg);

        int videoWidth;
        int videoHeight;
        int outputWidth;
        int outputHeight;
        int threshold;

        ofVideoGrabber vidGrabber;
        ofVideoPlayer vidPlayer;
        ofxCvColorImage colorImg;
        ofxCvGrayscaleImage grayImage;
        ofxCvGrayscaleImage grayBg;
        ofxCvGrayscaleImage grayDiff;
        ofTexture maskedImg;
        bool bLearnBakground;
};
vidPlayer.idleMovie() got replaced with vidPlayer.update(), and vidGrabber.grabFrame() got replaced by vidGrabber.update().
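With that change, the top of testApp::update() ends up looking roughly like this (the rest of the function is unchanged); this assumes a current openFrameworks where both classes expose update():
void testApp::update()
{
    ofBackground(100,100,100);
    bool bNewFrame = false;
    vidPlayer.update();    // was: vidPlayer.idleMovie();
    vidGrabber.update();   // was: vidGrabber.grabFrame();
    bNewFrame = vidGrabber.isFrameNew();
    // ... rest unchanged
}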

World to screen space coordinates in OpenSceneGraph

So I've got a class Label that inherits from osg::Geode which I draw in the world space in OpenSceneGraph. After displaying each frame, I then want to read the screen space coordinates of
each Label, so I can find out how much they overlap in screen space. To this end, I created a class ScreenSpace which should calculate this (the interesting function is calc_screen_coords).
I wrote a small subroutine that dumps each frame with some extra information, including the ScreenSpace box which represents what the program thinks the screen space coordinates are:
Now in the above picture, there seems to be no problem; but if I rotate it to the other side (with my mouse), then it looks quite different:
And that is what I don't understand.
Is my world to screen space calculation wrong?
Or am I getting the wrong BoundingBox from the Drawable?
Or maybe it has something to do with the setAutoRotateToScreen(true) directive that I give the osgText::Text object?
Is there a better way to do this? Should I try to use a Billboard instead? How would I do that though? (I tried and it totally didn't work for me — I must be missing something...)
Here is the source code for calculating the screen space coordinates of a Label:
struct Pixel {
    // elided methods...
    int x;
    int y;
};
// Forward declarations:
pair<Pixel, Pixel> calc_screen_coords(const osg::BoundingBox& box, const osg::Camera* cam);
void rearrange(Pixel& left, Pixel& right);
class ScreenSpace {
public:
    ScreenSpace(const Label* label, const osg::Camera* cam)
    {
        BoundingBox box = label->getDrawable(0)->computeBound();
        tie(bottom_left_, upper_right_) = calc_screen_coords(box, cam);
        rearrange(bottom_left_, upper_right_);
    }
    // elided methods...
private:
    Pixel bottom_left_;
    Pixel upper_right_;
};
pair<Pixel, Pixel> calc_screen_coords(const osg::BoundingBox& box, const osg::Camera* cam)
{
    Vec4d vec (box.xMin(), box.yMin(), box.zMin(), 1.0);
    Vec4d veq (box.xMax(), box.yMax(), box.zMax(), 1.0);
    Matrixd transmat
        = cam->getViewMatrix()
        * cam->getProjectionMatrix()
        * cam->getViewport()->computeWindowMatrix();
    vec = vec * transmat;
    vec = vec / vec.w();
    veq = veq * transmat;
    veq = veq / veq.w();
    return make_pair(
        Pixel(static_cast<int>(vec.x()), static_cast<int>(vec.y())),
        Pixel(static_cast<int>(veq.x()), static_cast<int>(veq.y()))
    );
}
inline void swap(int& v, int& w)
{
    int temp = v;
    v = w;
    w = temp;
}

inline void rearrange(Pixel& left, Pixel& right)
{
    if (left.x > right.x) {
        swap(left.x, right.x);
    }
    if (left.y > right.y) {
        swap(left.y, right.y);
    }
}
And here is the construction of Label (I tried to abridge it a little):
// Forward declaration:
Geometry* createLeader(straph::Point pos, double height, Color color);
class Label : public osg::Geode {
public:
    Label(font, fontSize, text, color, position, height, margin, bgcolor, leaderColor)
    {
        osgText::Text* txt = new osgText::Text;
        txt->setFont(font);
        txt->setColor(color.vec4());
        txt->setCharacterSize(fontSize);
        txt->setText(text);

        // Set display properties and height
        txt->setAlignment(osgText::TextBase::CENTER_BOTTOM);
        txt->setAutoRotateToScreen(true);
        txt->setPosition(toVec3(position, height));

        // Create bounding box and leader
        typedef osgText::TextBase::DrawModeMask DMM;
        unsigned drawMode = DMM::TEXT | DMM::BOUNDINGBOX;
        drawMode |= DMM::FILLEDBOUNDINGBOX;
        txt->setBoundingBoxColor(bgcolor.vec4());
        txt->setBoundingBoxMargin(margin);
        txt->setDrawMode(drawMode);
        this->addDrawable(txt);

        Geometry* leader = createLeader(position, height, leaderColor);
        this->addDrawable(leader);
    }
    // elided methods and data members...
};
Geometry* createLeader(straph::Point pos, double height, Color color)
{
    Geometry* leader = new Geometry();
    Vec3Array* array = new Vec3Array();
    array->push_back(Vec3(pos.x, pos.y, height));
    array->push_back(Vec3(pos.x, pos.y, 0.0f));
    Vec4Array* colors = new Vec4Array(1);
    (*colors)[0] = color.vec4();
    leader->setColorArray(colors);
    leader->setColorBinding(Geometry::BIND_OVERALL);
    leader->setVertexArray(array);
    leader->addPrimitiveSet(new DrawArrays(PrimitiveSet::LINES, 0, 2));
    LineWidth* lineWidth = new osg::LineWidth();
    lineWidth->setWidth(2.0f);
    leader->getOrCreateStateSet()->setAttributeAndModes(lineWidth, osg::StateAttribute::ON);
    return leader;
}
Any pointers or help?
I found a solution that works for me, but it is also unsatisfying, so if you have a better solution, I'm all ears.
Basically, I take specific points on the Label whose screen positions I can reason about, and combine them to calculate the screen-space box. For the left and right sides, I take
the bounds of the regular bounding box, and for the top and bottom, I calculate them from the
center of the bounding box and the position of the label.
ScreenSpace::ScreenSpace(const Label* label, const osg::Camera* cam)
{
    const Matrixd transmat
        = cam->getViewMatrix()
        * cam->getProjectionMatrix()
        * cam->getViewport()->computeWindowMatrix();

    auto topixel = [&](Vec3 v) -> Pixel {
        Vec4 vec(v.x(), v.y(), v.z(), 1.0);
        vec = vec * transmat;
        vec = vec / vec.w();
        return Pixel(static_cast<int>(vec.x()), static_cast<int>(vec.y()));
    };

    // Get left-right coordinates
    vector<int> xs; xs.reserve(8);
    vector<int> ys; ys.reserve(8);
    BoundingBox box = label->getDrawable(0)->computeBound();
    for (int i = 0; i < 8; i++) {
        Pixel p = topixel(box.corner(i));
        xs.push_back(p.x);
        ys.push_back(p.y);
    }
    int xmin = *min_element(xs.begin(), xs.end());
    int xmax = *max_element(xs.begin(), xs.end());

    // Get up-down coordinates
    int ymin = topixel(dynamic_cast<const osgText::Text*>(label->getDrawable(0))->getPosition()).y;
    int center = topixel(box.center()).y;
    int ymax = center + (center - ymin);

    bottom_left_ = Pixel(xmin, ymin);
    upper_right_ = Pixel(xmax, ymax);
    z_ = distance_from_camera(label, cam);
}
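With the boxes in pixel space, measuring how much two labels overlap is then a plain rectangle intersection. A small sketch, assuming ScreenSpace exposes its corners through bottom_left() and upper_right() accessors (they would be among the elided methods above):
// Overlapping area (in pixels) of two screen-space boxes; 0 if disjoint.
int overlap_area(const ScreenSpace& a, const ScreenSpace& b)
{
    int dx = std::min(a.upper_right().x, b.upper_right().x)
           - std::max(a.bottom_left().x, b.bottom_left().x);
    int dy = std::min(a.upper_right().y, b.upper_right().y)
           - std::max(a.bottom_left().y, b.bottom_left().y);
    if (dx <= 0 || dy <= 0) {
        return 0;
    }
    return dx * dy;
}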