I am fairly new to Irrlicht, but I am not new to C++. For the last couple of weeks I have done a lot of Googling, read the Irrlicht API documentation, and so on. For some reason I can't seem to create a 3D plane mesh.
Here is what I have so far.
irr::scene::IMesh* plane = geomentryCreator->createPlaneMesh(irr::core::dimension2d<irr::f32>(100, 100), irr::core::dimension2d<irr::u32>(100, 100));
irr::scene::ISceneNode* ground = sceneManager->addMeshSceneNode(plane);
ground->setPosition(irr::core::vector3df(0, 0, 10));
irr::scene::ICameraSceneNode* cam = sceneManager->addCameraSceneNode();
cam->setTarget(ground->getPosition());
I also tried creating a 3D cube mesh using this method:
irr::scene::ISceneNode* cube = sceneManager->addCubeSceneNode(20);
cube->render();
For some reason the screen remains black with nothing rendered. Nothing seems to work. Any suggestions?
Your problem is that the camera and the plane both have the same Y coordinate. You never specified any position for the camera, so it sits at the origin (0, 0, 0), meaning its Y coordinate is 0. You specified the position of the plane as (0, 0, 10), so its Y coordinate is also 0. Since the Y axis is up in Irrlicht, this means you are looking at the plane edge-on, from within its own plane.
This is why you don't see anything. To see something, you have to place the camera higher up; the point (0, 50, 0) will work.
Also, if you don't have any lights in the scene, the plane will be black just like the background, since it is sensitive to lighting by default. To change this, make the plane insensitive to lighting with the following code:
plane->setMaterialFlag(irr::video::EMF_LIGHTING, false);
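Equivalently, the flag can be set on the scene node instead of the mesh; ISceneNode exposes the same setter:
ground->setMaterialFlag(irr::video::EMF_LIGHTING, false); // same effect, applied at the node level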
The plane's color is black by default, so you would have a black plane on a black background and still see nothing. I therefore suggest making the background white instead, by passing a white color to the beginScene call in the main loop:
driver->beginScene(true, true, irr::video::SColor(255, 255, 255, 255));
With these fixes applied, the plane renders correctly. Here is the complete code:
irr::IrrlichtDevice *device = irr::createDevice(irr::video::EDT_OPENGL);
irr::video::IVideoDriver *driver = device->getVideoDriver();
irr::scene::ISceneManager *sceneManager = device->getSceneManager();
const irr::scene::IGeometryCreator *geomentryCreator = sceneManager->getGeometryCreator();
irr::scene::IMesh* plane = geomentryCreator->createPlaneMesh(irr::core::dimension2d<irr::f32>(100, 100), irr::core::dimension2d<irr::u32>(100, 100));
irr::scene::ISceneNode* cube = sceneManager->addCubeSceneNode(20); // drawn by drawAll(); no manual render() call needed
irr::scene::ISceneNode* ground = sceneManager->addMeshSceneNode(plane); // add the plane to the scene once
ground->setPosition(irr::core::vector3df(0, 0, 10));
plane->setMaterialFlag(irr::video::EMF_LIGHTING, false); //This is important
irr::scene::ICameraSceneNode* cam = sceneManager->addCameraSceneNode();
cam->setPosition(irr::core::vector3df(0, 50, 0)); //This is also important
cam->setTarget(ground->getPosition());
while (device->run()) {
    driver->beginScene(true, true, irr::video::SColor(255, 255, 255, 255)); //Important for the background to be white
    sceneManager->drawAll();
    driver->endScene();
}
device->drop(); // release the device when done
In the following function, bounding-box rectangles are drawn.
// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    //Draw a rectangle displaying the bounding box
    rectangle(frame, Point(left, top), Point(right, bottom), Scalar(255, 178, 50), LINE_4);
    //blurring region
    cout << frame;
    //Get the label for the class name and its confidence
    string label = format("%.2f", conf);
    if (!classes.empty())
    {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ":" + label;
    }
    //Display the label at the top of the bounding box
    int baseLine;
    Size labelSize = getTextSize(label, FONT_ITALIC, 0.5, 1, &baseLine);
    top = max(top, labelSize.height);
    putText(frame, label, Point(left, top), FONT_ITALIC, 0.5, Scalar(255, 255, 255), 1);
}
frame here is the image as a multi-dimensional array.
Point(left, top) is the top-left point of the rectangle.
I would like to censor everything inside this rectangle with a blur.
Since I come from Python, it is a bit difficult for me to define the array ranges for these rectangles.
It would be very nice if you could help me.
Thank you very much and best regards.
Here is the Python equivalent of @HansHirse's answer. The idea is the same, except we use NumPy slicing to obtain the ROI.
import cv2
# Read in image
image = cv2.imread('1.png')
# Create ROI coordinates
topLeft = (60, 40)
bottomRight = (340, 120)
x, y = topLeft[0], topLeft[1]
w, h = bottomRight[0] - topLeft[0], bottomRight[1] - topLeft[1]
# Grab ROI with Numpy slicing and blur
ROI = image[y:y+h, x:x+w]
blur = cv2.GaussianBlur(ROI, (51,51), 0)
# Insert ROI back into image
image[y:y+h, x:x+w] = blur
cv2.imshow('blur', blur)
cv2.imshow('image', image)
cv2.waitKey()
The way to go is setting up a corresponding region of interest (ROI) by using cv::Rect. Since you already have your top-left and bottom-right locations as cv::Points, you get this more or less for free. Afterwards, just use, for example, cv::GaussianBlur only on that ROI. Using the C++ API, this approach works for a lot of OpenCV methods.
The code is quite simple; see the following snippet:
// (Just use your frame instead.)
cv::Mat image = cv::imread("path/to/your/image.png");
// Top left and bottom right cv::Points are already defined.
cv::Point topLeft = cv::Point(60, 40);
cv::Point bottomRight = cv::Point(340, 120);
// Set up proper region of interest (ROI) using a cv::Rect from the two cv::Points.
cv::Rect roi = cv::Rect(topLeft, bottomRight);
// Only blur image within ROI.
cv::GaussianBlur(image(roi), image(roi), cv::Size(51, 51), 0);
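Applied to the drawPred function from the question, the same idea could look like the following sketch; the intersection with the full-frame rectangle is an extra safety check I added, in case a predicted box extends beyond the image:
// Inside drawPred: blur the detected bounding box region.
cv::Rect box(cv::Point(left, top), cv::Point(right, bottom));
box &= cv::Rect(0, 0, frame.cols, frame.rows); // clip the box to the image bounds
if (box.area() > 0)
    cv::GaussianBlur(frame(box), frame(box), cv::Size(51, 51), 0);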
For some exemplary input, the above code produces output in which only the ROI is blurred and the rest of the image is untouched.
Hope that helps!
I want to plot circles on an image, where each previous circle is deleted before the next circle is drawn.
I have the following configuration:
I have several pictures (let's say 10).
For each picture I test several pixels for some condition (let's say 50 pixels).
For each pixel I'm testing (or working on), I want to draw a circle at that pixel for visualization purposes.
To summarize, I have two for loops: one looping over the 10 images and the other looping over the 50 pixels.
I have done the following (see code below). The circles are correctly drawn, but when the next circle is drawn, the previous circle is still visible (at the end all circles are drawn on the same image). What I want instead is to somehow close the picture (or window) after a circle was drawn, reopen a new one, plot the next circle on it, and so on.
for(int imgID = 0; imgID < numbImgs; imgID++)
{
    cv::Mat colorImg = imgVector[imgID];
    for(int pixelID = 0; pixelID < numPixelsToBeTested; pixelID++)
    {
        some_pixel = ... //some pixel
        x = some_pixel(0); y = some_pixel(1);
        cv::Mat colorImg2 = colorImg; //redefine the image for each pixel
        cv::circle(colorImg2, cv::Point(x,y), 5, cv::Scalar(0,0,255), 1, cv::LINE_8, 0);
        // creating a new window each time
        cv::namedWindow("Display", CV_WINDOW_AUTOSIZE);
        cv::imshow("Display", colorImg2);
        cv::waitKey(0);
        cv::destroyWindow("Display");
    }
}
What is wrong in my code?
Thanks guys
cv::circle() draws on the input image in place, so what you need to do is create a clone of the original image, draw circles on the cloned image, and re-clone from the original image at each iteration.
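The re-cloning matters because assigning one cv::Mat to another only copies the header, after which both objects share the same pixel buffer. A minimal standalone illustration (the file name is a placeholder):
cv::Mat a = cv::imread("img.png");
cv::Mat shallow = a;      // shares the same pixel buffer as a
cv::Mat deep = a.clone(); // independent deep copy
cv::circle(shallow, cv::Point(10, 10), 5, cv::Scalar(0, 0, 255)); // also modifies a
cv::circle(deep, cv::Point(10, 10), 5, cv::Scalar(0, 0, 255));    // a stays untouched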
It is also a good idea to break your program into smaller methods, making the code more readable and easier to understand. The following code may give you a starting point.
void visualizePoints(cv::Mat mat) {
    cv::Mat debugMat = mat.clone();
    // Dummy set of points, to be replaced with the 50 points you are using.
    std::vector<cv::Point> points = {cv::Point(30, 30), cv::Point(30, 100), cv::Point(100, 30), cv::Point(100, 100)};
    for (cv::Point p : points) {
        cv::circle(debugMat, p, 5, cv::Scalar(0, 0, 255), 1, cv::LINE_8, 0);
        cv::imshow("Display", debugMat);
        cv::waitKey(800);
        debugMat = mat.clone();
    }
}
int main() {
    std::vector<std::string> imagePaths = {"path/img1.png", "path/img2.png", "path/img3.png"};
    cv::namedWindow("Display", CV_WINDOW_AUTOSIZE);
    for (std::string path : imagePaths) {
        cv::Mat img = cv::imread(path);
        visualizePoints(img);
    }
}
I am trying to draw an arrow with OpenCV 3.2:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main()
{
Mat image(480, 640, CV_8UC3, Scalar(255, 255, 255)); //White background
Point from(320, 240); //Middle
Point to(639, 240); //Right border
arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_AA, 0, 0.1);
imshow("Arrow", image);
waitKey(0);
return 0;
}
An arrow is drawn, but some pixels are missing at the tip. To be more precise, two columns of pixels are not colored correctly (visible when zoomed in).
If I disable antialiasing, i.e., if I use
arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_8, 0, 0.1);
instead (note the LINE_8 instead of LINE_AA), the pixels are there, albeit without antialiasing.
I am aware that antialiasing might rely on neighboring pixels, but it seems strange that pixels are not drawn at all at the borders instead of being drawn without antialiasing. Is there a workaround for this issue?
Increasing the X coordinate (e.g., to 640 or 641) makes the problem worse, i.e., more of the arrowhead pixels disappear, while the tip still lacks nearly two complete pixel columns.
Extending and cropping the image would solve the neighboring pixels issue, but in my original use case, where the problem appeared, I cannot enlarge my image, i.e., its size must remain constant.
After a quick review, I've found that OpenCV draws AA lines using a Gaussian filter, which shrinks the drawn shape slightly near the image borders.
As I've suggested in the comments, you can implement your own function for the AA mode (you can call the original one if AA is disabled), extending the points manually (see the code below for the idea).
Another option may be to increase the line width when using AA.
You may also simulate the AA effect of OpenCV but on the final image (may be slower but helpful if you have many arrows). I'm not an OpenCV expert so I'll write a general scheme:
// Filter radius, the higher the stronger
const int kRadius = 3;
// Image is extended so that border pixels can be blurred too
Mat blurred(480 + kRadius * 2, 640 + kRadius * 2, CV_8UC3, Scalar(255, 255, 255));
// Points shifted according to the filter radius (needs testing, but that's the idea)
Point from(320 + kRadius, 240 + kRadius);
Point to(639 + kRadius, 240 + kRadius);
// Extended non-AA arrow, same parameters as in the question
arrowedLine(blurred, from, to, Vec3b(0, 0, 0), 1, LINE_8, 0, 0.1);
// Simulate AA (the kernel size must be odd)
GaussianBlur(blurred, blurred, Size(2 * kRadius + 1, 2 * kRadius + 1), 0);
// Crop image (be careful, it doesn't copy data)
Mat image = blurred(Rect(kRadius, kRadius, 640, 480));
Another option may be to draw the arrow in an image twice as large and then scale it down with a good smoothing filter.
Obviously, the last two options will work only if you don't have any previous data on the image. If you do, use a transparent image for temporary drawing and overlay it at the end.
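A rough sketch of the draw-larger-then-downscale option, assuming the arrow is drawn on an otherwise empty image; INTER_AREA is used here because it averages pixels when shrinking:
// Draw at twice the resolution, then downscale with a smoothing filter.
Mat big(480 * 2, 640 * 2, CV_8UC3, Scalar(255, 255, 255));
arrowedLine(big, Point(320, 240) * 2, Point(639, 240) * 2,
            Vec3b(0, 0, 0), 2, LINE_8, 0, 0.1);
Mat image;
resize(big, image, Size(640, 480), 0, 0, INTER_AREA);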
I'm new to game programming. I have some sprites, say Mario sprites in a spritesheet, just 32 x 32 pixels for each sprite. One sprite contains one movement frame, the full body of Mario. Unfortunately, I have to work with at least an 800 x 640 display. As you might guess, Mario looks very small on the display.
So far, I have just scaled the spritesheet in GIMP 2 so that Mario doesn't look like an ant on screen.
Is there any way to handle this? Maybe Allegro has something I don't know about.
I have already searched the documentation.
It sounds like you want a way to scale an image within Allegro. There are two ways to achieve this:
Use al_draw_tinted_scaled_rotated_bitmap_region. Pass your scaling factor (e.g. 2.0) as the xscale and yscale arguments.
al_draw_tinted_scaled_rotated_bitmap_region(bitmap, // your spritesheet
    0, 0, 32, 32,              // draw the first 32x32 sprite in the sheet
    al_map_rgb(255, 255, 255), // don't tint the sprite
    16, 16,                    // the origin is 16, 16, half the 32x32 size
    200, 200,                  // draw at the point 200, 200 on the display
    2.0, 2.0,                  // scale by 2 in the x and y directions
    0, 0);                     // don't apply any angle or flags
Use a transform to scale your image.
ALLEGRO_TRANSFORM trans;
al_identity_transform(&trans);
al_scale_transform(&trans, 2, 2); // scale by a factor of 2
al_use_transform(&trans);
// draw here
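Note that the transform stays active for subsequent drawing on the same target, so if only the sprite should be scaled, restore the identity transform afterwards:
al_identity_transform(&trans);
al_use_transform(&trans); // later drawing is unscaled again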
Note that in any case (including your original solution of scaling the spritesheet), scaling an image up will cause it to appear more pixelated.
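If that pixelation is unwanted, one option, assuming a smooth look is acceptable for your art, is to create the bitmap with linear filtering flags so Allegro interpolates when scaling; the file name here is a placeholder:
al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); // set before loading
ALLEGRO_BITMAP *sheet = al_load_bitmap("mario_sprites.png");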
I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera. But my cameras are rotated by 90°, so I have to rotate the image in the program too.
QCamera has no way to rotate its output, so I want to display it in a label rather than in a viewfinder: I grab an image, rotate it, and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The code omits the part where I declare everything, but the camera works fine in the viewfinder, so I think there is no problem there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); // takes the snapshot as a QPixmap
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); // takes the snapshot as a QImage
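One caveat: QPixmap::grabWindow is deprecated in Qt 5, where the equivalent snapshot goes through QScreen (a sketch, assuming a Qt 5 build):
QScreen *screen = QGuiApplication::primaryScreen();
QPixmap img_ = screen->grabWindow(ui->viewfinder->winId()); // Qt 5 replacement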
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation
// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());
int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
    rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
    // Front position, compensate the mirror
    rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}
// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
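In the question's setup, the computed rotation could then replace the hard-coded 90 when preparing the label pixmap:
img_ = img_.transformed(QTransform().rotate(rotation)); // orientation-aware instead of a fixed 90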
It looks like you don't even understand what you are looking at...
What's the purpose of pasting stuff like that to the forum? Did you read ALL the description about this? It's only part of the code that, as I see, you don't understand, yet you try to be smart :)