Get certain pixels with their coordinates from an image with Qt/QML - C++

We are creating a game with maps. On those maps, players can walk, but to know whether they can walk somewhere, we have another image where the walkable path is painted.
The player moves by clicking on the map: if the click matches the collider image, the character should go to the clicked point using a pathfinder. If not, the character doesn't move.
For example, here is a map and its collision path image:
How can I know if I've clicked on the collider (this is a PNG with one color and transparency) in Qt?
I'm using QML and Felgo for rendering, so if there is already a way to do it with QML, that's even better, but I can implement it in C++ too.
My second question is: how can I implement a pathfinder? I know the algorithms for that, but should I move pixel by pixel?
I've seen the QPainterPath class, which could be what I'm looking for. How can I read all pixels of a certain color in my image and get their coordinates?
Thanks

The QML API doesn't provide an efficient way to solve this; it should be done on the C++ side.
To get image data you can use:
QImage to load the image
Call QImage::constScanLine N times, reading K pixels each time, where N is the image height in pixels and K is the width.
How do you deal with the uchar* returned by QImage::constScanLine?
You can call QImage::format() to determine the pixel format behind the uchar*. Or you can call QImage::convertToFormat(QImage::Format_RGB32) and always cast the pixel data from uchar* to a custom struct like PixelData:
#pragma pack(push, 1)
struct PixelData {
    // Byte order of one Format_RGB32 pixel (0xffRRGGBB) on a little-endian machine
    uint8_t b;
    uint8_t g;
    uint8_t r;
    uint8_t padding; // always 0xFF for Format_RGB32
};
#pragma pack(pop)
according to this documentation: https://doc.qt.io/qt-5/qimage.html#Format-enum
Here is a compilable solution for loading the image into RAM so you can work with its data efficiently afterwards:
#include <QImage>
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct PixelData {
    // Byte order of one Format_RGB32 pixel (0xffRRGGBB) on a little-endian machine
    uint8_t b;
    uint8_t g;
    uint8_t r;
    uint8_t padding; // always 0xFF for Format_RGB32
};
#pragma pack(pop)

void loadImage(const char* path, int& w, int& h, PixelData** data) {
    Q_ASSERT(data);
    QImage initialImage;
    initialImage.load(path);
    auto image = initialImage.convertToFormat(QImage::Format_RGB32);
    w = image.width();
    h = image.height();
    *data = new PixelData[w * h];
    PixelData* outData = *data;
    for (int y = 0; y < h; y++) {
        auto scanLine = image.constScanLine(y);
        memcpy(outData, scanLine, sizeof(PixelData) * w);
        outData += w;
    }
}

void pathfinder(const PixelData* data, int w, int h) {
    // Your algorithm here
}

void cleanupData(PixelData* data) {
    delete[] data;
}

int main(int argc, char *argv[])
{
    int width, height;
    PixelData* data;
    loadImage("D:\\image.png", width, height, &data);
    pathfinder(data, width, height);
    cleanupData(data);
    return 0;
}
You can access each pixel by calling this function:
inline const PixelData& getPixel(int x, int y, const PixelData* data, int w) {
    return *(data + (w * y) + x);
}
... or use this formula somewhere in your pathfinding algorithm, where it could be more efficient.
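To answer the original walkability question, a minimal sketch of a click test on top of this data could look like the following; the isWalkable helper and the pure-white path colour are assumptions, so adapt the test to whatever colour your collider image actually uses:
// Assumed helper: true if the pixel at (x, y) has the painted path colour.
// The path colour (pure white) is only an assumption for illustration.
inline bool isWalkable(int x, int y, const PixelData* data, int w) {
    const PixelData& p = getPixel(x, y, data, w);
    return p.r == 255 && p.g == 255 && p.b == 255;
}
The same per-pixel test can then feed any standard grid pathfinder (BFS, Dijkstra, A*), either working directly in pixels or on a coarser tile grid derived from them.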

Related

Converting a depth image into OpenCV Mat format

I am using a PMD camera to capture depth images in the format below:
struct DepthData
{
    int version;
    std::chrono::microseconds timeStamp;
    uint16_t width;
    uint16_t height;
    std::vector<uint16_t> exposureTimes;
    std::vector<DepthPoint> points; //!< array of points
};
The DepthPoint structure looks like this:
struct DepthPoint
{
    float x; //!< X coordinate [meters]
    float y; //!< Y coordinate [meters]
    float z; //!< Z coordinate [meters]
    float noise; //!< noise value [meters]
    uint16_t grayValue; //!< 16-bit gray value
    uint8_t depthConfidence; //!< value 0 = bad, 255 = good
};
I am trying to convert it into an OpenCV Mat data structure. Below is the code, but it is throwing an exception. Kindly help.
const int imageSize = w * h;
Mat out = cv::Mat(h, w, CV_16UC3, Scalar(0, 0, 0));
const Scalar S;
for (int h = 0; h < out.rows; h++)
{
    //printf("%" PRIu64 "\n", point.at(h).grayValue);
    for (int w = 0; w < out.cols; w++)
    {
        //printf("%" PRIu64 "\n", point.at(h).grayValue);
        out.at<cv::Vec3f>(h, w)[0] = point[w].x;
        out.at<cv::Vec3f>(h, w)[1] = point[w].y;
        out.at<cv::Vec3f>(h, w)[2] = point[w].z;
    }
}
imwrite("E:/softwares/1.8.0.71/bin/depthImage1.png", out);
You seem to be doing a couple of things wrong.
You are creating an image of type CV_16UC3, which is a 3-channel image of 16-bit unsigned integers. Then in out.at<cv::Vec3f>(h,w)[0] you try to read part of it as a vector of 3 floats. You should probably create your image as a float image (CV_32FC3) instead.
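For example, a sketch of that float-image variant, assuming the captured points are stored row-major in a container named points (the name follows the struct in the question):
cv::Mat out(h, w, CV_32FC3, cv::Scalar(0, 0, 0));
for (int row = 0; row < out.rows; row++)
{
    for (int col = 0; col < out.cols; col++)
    {
        // The element type (Vec3f) now matches the Mat type (CV_32FC3)
        const DepthPoint& p = points[row * out.cols + col];
        out.at<cv::Vec3f>(row, col) = cv::Vec3f(p.x, p.y, p.z);
    }
}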
For further details please provide the exception. It will be easier to help.
UPD: If you just want a depth image, create an image like this:
Mat out = cv::Mat::zeros(h, w, CV_16UC1);
Then, for every pixel:
out.at<uint16_t>(h, w) = depth_point.grayValue;
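Filling that single-channel image pixel by pixel, under the same row-major assumption about the points container, could look like this sketch:
cv::Mat out = cv::Mat::zeros(h, w, CV_16UC1);
for (int row = 0; row < h; row++)
{
    for (int col = 0; col < w; col++)
    {
        out.at<uint16_t>(row, col) = points[row * w + col].grayValue;
    }
}
// A 16-bit single-channel image can be written out as PNG
cv::imwrite("depthImage1.png", out);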

Using halide with HDR images represented as float array

That's my first post here, so sorry if I do something wrong :). I will try to do my best.
I'm currently working on my HDR image processing program, and I want to implement some basic TMOs (tone mapping operators) using Halide. The problem is that all my images are represented as float arrays (with interleaved order: b1, g1, r1, a1, b2, g2, r2, a2, ...). Processing images with Halide requires the Halide::Image class, and I don't know how to pass my data into it.
Can anyone help, or has anyone had the same problem and found the answer?
Edit:
Finally got it! I needed to set the stride on the input and output buffers in the generator. Thanks all for the help :-)
Edit:
I tried two different ways:
int halideOperations(float data[], int size, int width, int height)
{
    buffer_t input_buf = { 0 };
    input_buf.host = &data[0];
}
or:
int halideOperations(float data[], int size, int width, int height)
{
    Halide::Image(Halide::Type::Float, x, y, 0, 0, data);
}
I was thinking about editing the Halide.h file and changing uint8_t * host to float_t * host, but I don't think it's a good idea.
Edit:
I tried using the code below with my float image (RGBA):
AOT function generation:
int main(int arg, char ** argv)
{
    Halide::ImageParam img(Halide::type_of<float>(), 3);
    Halide::Func f;
    Halide::Var x, y, c;
    f(x, y, c) = Halide::pow(img(x, y, c), 2.f);
    std::vector<Halide::Argument> arguments = { img };
    f.compile_to_file("function", arguments);
    return 0;
}
The calling code:
int halideOperations(float data[], int size, int width, int height)
{
    buffer_t output_buf = { 0 };
    buffer_t buf = { 0 };
    buf.host = (uint8_t *)data;
    float * output = new float[width * height * 4];
    output_buf.host = (uint8_t*)(output);
    output_buf.extent[0] = buf.extent[0] = width;
    output_buf.extent[1] = buf.extent[1] = height;
    output_buf.extent[2] = buf.extent[2] = 4;
    output_buf.stride[0] = buf.stride[0] = 4;
    output_buf.stride[1] = buf.stride[1] = width * 4;
    output_buf.elem_size = buf.elem_size = sizeof(float);
    function(&buf, &output_buf);
    delete[] output;
    return 1;
}
Unfortunately I got a crash with the message:
Error: Constraint violated: f0.stride.0 (4) == 1 (1)
I think something is wrong with this line: output_buf.stride[0] = buf.stride[0] = 4, but I'm not sure what I should change. Any tips?
If you are using buffer_t directly, you must cast the pointer assigned to host to a uint8_t * :
buf.host = (uint8_t *)&data[0]; // Often, can be just "(uint8_t *)data"
This is what you want to do if you are using Ahead-Of-Time (AOT) compilation and the data is not being allocated as part of the code which directly calls Halide. (Other methods discussed below control the storage allocation so they cannot take a pointer that is passed to them.)
If you are using either Halide::Image or Halide::Tools::Image, then the type casting is handled internally. The constructor used above for Halide::Image doesn't exist, as Halide::Image is a template class where the underlying data type is a template parameter:
Halide::Image<float> image_storage(width, height, channels);
Note this will store the data in planar layout. Halide::Tools::Image is similar but has an option for interleaved layout. (Personally, I try not to use either of these outside of small test programs. There is a long-term plan to rationalize all of this, which will proceed after the arbitrary-dimension buffer_t branch is merged. Note also that Halide::Image requires libHalide.a to be linked, whereas Halide::Tools::Image does not and is header-only via including common/halide_image.h.)
There is also the Halide::Buffer class which is a wrapper on buffer_t that is useful in Just-In-Time (JIT) compilation. It can reference passed in storage and is not templated. However my guess is you want to use buffer_t directly and simply need the type cast to assign host. Also be sure to set the elem_size field of buffer_t to "sizeof(float)".
For an interleaved float buffer, you'll end up with something like:
buffer_t buf = {0};
buf.host = (uint8_t *)float_data; // Might also need const_cast
// If the buffer doesn't start at (0, 0), then assign mins
buf.extent[0] = width; // In elements, not bytes
buf.extent[1] = height; // In elements, not bytes
buf.extent[2] = 3; // Assuming RGB
// No need to assign additional extents as they were init'ed to zero above
buf.stride[0] = 3; // RGB interleaved
buf.stride[1] = width * 3; // Assuming no line padding
buf.stride[2] = 1; // Channel interleaved
buf.elem_size = sizeof(float);
You will also need to pay attention to the bounds in the Halide code itself. Probably best to look at the set_stride and bound calls in tutorial/lesson_16_rgb_generate.cpp for information on that.
In addition to Zalman's answer above, you also have to specify the strides for the input and output when defining your Halide function, like below:
int main(int arg, char ** argv)
{
    Halide::ImageParam img(Halide::type_of<float>(), 3);
    Halide::Func f;
    Halide::Var x, y, c;
    f(x, y, c) = Halide::pow(img(x, y, c), 2.f);
    // You need the following
    f.output_buffer().set_stride(0, f.output_buffer().extent(2));
    f.output_buffer().set_stride(1, f.output_buffer().extent(0) * f.output_buffer().extent(2));
    img.set_stride(0, img.extent(2));
    img.set_stride(1, img.extent(2) * img.extent(0));
    // <- up to here
    std::vector<Halide::Argument> arguments = { img };
    f.compile_to_file("function", arguments);
    return 0;
}
then your code should run.

FreeType2 my_draw_bitmap undefined

I am trying to run "7. Simple text rendering", "a. Basic code", from here, but the function my_draw_bitmap seems to be undefined. I tried to use GLEW, but the issue is the same. Then I saw the "pngwriter" library here, but compiling it for Visual Studio 2013 with CMake gives an error.
Please, someone help: where is the my_draw_bitmap function defined?
The tutorial states
The function my_draw_bitmap is not part of FreeType but must be provided by the application to draw the bitmap to the target surface. In this example, it takes a pointer to a FT_Bitmap descriptor and the position of its top-left corner as arguments.
What this means is that you need to implement the function for copying the glyphs to the texture or bitmap to be rendered yourself (assuming there isn't a suitable function available in the libraries you're using).
The below code should be appropriate for copying the pixels of a single glyph to an array that could be copied to a texture.
unsigned char **tex;
void makeTex(const unsigned int width, const unsigned int height)
{
    tex = (unsigned char**)malloc(sizeof(char*)*height);
    tex[0] = (unsigned char*)malloc(sizeof(char)*width*height);
    memset(tex[0], 0, sizeof(char)*width*height);
    for (unsigned int i = 1; i < height; i++)
    {
        tex[i] = tex[0] + i*width;
    }
}
void paintGlyph(FT_GlyphSlot glyph, unsigned int penX, unsigned int penY)
{
    for (int y = 0; y < glyph->bitmap.rows; y++)
    {
        //src ptr maps to the start of the current row in the glyph
        unsigned char *src_ptr = glyph->bitmap.buffer + y*glyph->bitmap.pitch;
        //dst ptr maps to the pens current Y pos, adjusted for the current char row
        unsigned char *dst_ptr = tex[penY + (glyph->bitmap.rows - y - 1)] + penX;
        //copy entire row
        for (int x = 0; x < glyph->bitmap.pitch; x++)
        {
            dst_ptr[x] = src_ptr[x];
        }
    }
}
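As a rough usage sketch (following the tutorial's variable names; the texture size, pen start position, and the string variable text are assumptions, and the bearing adjustments via bitmap_left/bitmap_top are left out for brevity):
// Assumed texture size and pen start, for illustration only
makeTex(512, 64);

FT_GlyphSlot slot = face->glyph;
unsigned int penX = 0, penY = 0;
for (const char* p = text; *p; p++)
{
    // Load and render one glyph as an 8-bit grayscale bitmap
    if (FT_Load_Char(face, *p, FT_LOAD_RENDER))
        continue; // skip glyphs that fail to load

    // Copy the glyph bitmap into the texture at the current pen position
    paintGlyph(slot, penX, penY);

    // advance.x is expressed in 1/64ths of a pixel
    penX += slot->advance.x >> 6;
}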

Using glDrawArrays to upload frame buffers?

I'm attempting to create a software renderer for my tile-based game and got stuck on the most important step: copying pixel chunks to the frame buffer.
First off, here's my class:
struct pixel_buffer
{
    int width, height, bytesPerPixel;
    unsigned char* pixels;

    pixel_buffer()
    {
        pixels = NULL;
    }
    pixel_buffer(const char* imageName)
    {
        Image i = LoadImage(imageName);
        width = i.width();
        height = i.height();
        bytesPerPixel = i.bpp();
        pixels = i.duplicate();
        i.destroy();
    }
    pixel_buffer(int w, int h, int bpp)
    {
        width = w;
        height = h;
        bytesPerPixel = bpp;
        pixels = new unsigned char[w*h*bpp];
    }
    ~pixel_buffer()
    {
        delete[] pixels;
    }
    void attach(int x, int y, const pixel_buffer& other)
    {
        //How to copy the contents of "other" at x,y offset of pixel buffer ?
    }
};
//This is the screen buffer and the contents get uploaded with glDrawArrays
pixel_buffer screen(640,480,4);
//This is a simple tile
pixel_buffer tile("files\\tiles\\brick.bmp");
//Attach tile to screen at offset 50,10
screen.attach(50,10,tile);
And this is the part that confuses me:
void attach(int x, int y, const pixel_buffer& other)
{
    //How to copy the contents of "other" at x,y offset of pixel buffer ?
}
I'm not sure how I'm supposed to attach ("copy") the pixel buffer of the tile to the screen buffer.
I tried this, but it didn't work:
void attach(int x, int y, const pixel_buffer& other)
{
    memcpy(&pixels[x + y * (this->width * this->bytesPerPixel)], other.pixels, other.width * other.height * other.bytesPerPixel);
}
So I would like to ask for some help :-D
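One common way to fill in attach is a row-by-row copy. Here is a minimal sketch, assuming both buffers share the same bytesPerPixel and that the tile fits entirely inside the destination at the given offset (add clipping if it may not):
void attach(int x, int y, const pixel_buffer& other)
{
    // Copy one source row at a time, because the tile's rows are not
    // contiguous inside the destination buffer's wider row pitch
    for (int row = 0; row < other.height; row++)
    {
        unsigned char* dst = pixels + ((y + row) * width + x) * bytesPerPixel;
        const unsigned char* src = other.pixels + row * other.width * other.bytesPerPixel;
        memcpy(dst, src, other.width * other.bytesPerPixel);
    }
}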

Split an image into 64x64 chunks

I'm just wondering how I would go about splitting a pixel array (R,G,B,A,R,G,B,A,R,G,B,A,etc.) into 64x64 chunks. I've tried several methods myself but they all seem really over-complex and a bit too messy.
I have the following variables (obviously filled with information):
int nWidth, nHeight;
unsigned char *pImage;
And basically for each chunk I want to call:
void ProcessChunk(int x, int y, int width, int height)
You may be wondering why there are width and height arguments for the processing function, but if the image is not exactly divisible by 64, then there will be smaller chunks at the edge of the image (right-hand side and the bottom). See this image for a better understanding of what I mean (red chunks are the chunks that will have <64 width or height arguments).
Thanks in advance.
Just define a MIN() macro to determine the minimum of two expressions, and then it's easy:
void ProcessChunk(unsigned char *pImage, int x, int y, int width, int height);

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define CHUNKSIZE 64

void ProcessImage(unsigned char *pImage, int nWidth, int nHeight)
{
    int x, y;
    for (x = 0; x < nWidth; x += CHUNKSIZE)
        for (y = 0; y < nHeight; y += CHUNKSIZE)
            ProcessChunk(pImage, x, y, MIN(nWidth - x, CHUNKSIZE), MIN(nHeight - y, CHUNKSIZE));
}
I will assume that the chunk is a buffer unsigned char chunk[64*64*4], laid out in a similar way to the image itself.
int chunkOffset = 0;
int bufferOffset = y*nWidth*4 + x*4;
memset(chunk, 0, 64*64*4*sizeof(unsigned char));
for (int line = 0; line < height; line++)
{
    memcpy(chunk+chunkOffset, buffer+bufferOffset, width*4*sizeof(unsigned char));
    chunkOffset += 64*4;
    bufferOffset += nWidth*4;
}
In real code I would replace the "4"s with sizeof(PIXEL) and the "64"s with CHUNK_WIDTH and CHUNK_HEIGHT, but you get the idea. I would also validate width & height, though basically you don't really need them (you can easily calculate them inside the function).
Also note that while the "sizeof(unsigned char)" isn't really needed, I like putting it there for clarification. It doesn't affect the runtime since it's evaluated at compile time.
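Putting the two answers together, a ProcessChunk along these lines should work. This is only a sketch: it assumes RGBA (4 bytes per pixel) and passes the full image width as an extra parameter, since the copy needs the source row pitch:
#include <string.h>

#define CHUNKSIZE 64
#define BYTESPERPIXEL 4

void ProcessChunk(unsigned char *pImage, int nWidth, int x, int y, int width, int height)
{
    // Fixed-size scratch chunk; edge chunks leave the unused area zeroed
    static unsigned char chunk[CHUNKSIZE * CHUNKSIZE * BYTESPERPIXEL];
    memset(chunk, 0, sizeof(chunk));
    for (int line = 0; line < height; line++)
    {
        const unsigned char *src = pImage + ((y + line) * nWidth + x) * BYTESPERPIXEL;
        memcpy(chunk + line * CHUNKSIZE * BYTESPERPIXEL, src, width * BYTESPERPIXEL);
    }
    /* ...operate on chunk here... */
}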