Converting child to screen space in cocos2d - cocos2d-iphone

Currently I have two images, ImageA and ImageB.
ImageB is a child of ImageA.
Assume ImageA's position is (100, 100)
and ImageB's local position is (10, 0).
How can I retrieve the screen position of ImageB so that it returns (110, 100) instead of (10, 0)?

You can use this API to get the world position of the sprite. Note that convertToWorldSpace: converts a point from the receiver's local coordinate space, and a child's position is expressed in its parent's space, so call it on the sprite's parent (ImageA here):
// self is the sprite's parent node (ImageA in the example above)
CGPoint loc = [self convertToWorldSpace:sprite.position];

Related

Blur content from a rectangle with OpenCV

In the following function, rectangles are drawn:
// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    // Draw a rectangle displaying the bounding box
    rectangle(frame, Point(left, top), Point(right, bottom), Scalar(255, 178, 50), LINE_4);
    // Blurring region
    cout << frame;
    // Get the label for the class name and its confidence
    string label = format("%.2f", conf);
    if (!classes.empty())
    {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ":" + label;
    }
    // Display the label at the top of the bounding box
    int baseLine;
    Size labelSize = getTextSize(label, FONT_ITALIC, 0.5, 1, &baseLine);
    top = max(top, labelSize.height);
    putText(frame, label, Point(left, top), FONT_ITALIC, 0.5, Scalar(255, 255, 255), 1);
}
Here, frame is the image matrix and Point(left, top) is the top-left corner of the rectangle.
I would like to censor everything inside this rectangle with a blur.
Since I come from Python, I find it a bit difficult to work with the matrix regions for these rectangles in C++.
It would be very nice if you could help me.
Thank you very much and best regards.
Here is the Python equivalent of @HansHirse's answer. The idea is the same, except we use NumPy slicing to obtain the ROI:
import cv2
# Read in image
image = cv2.imread('1.png')
# Create ROI coordinates
topLeft = (60, 40)
bottomRight = (340, 120)
x, y = topLeft[0], topLeft[1]
w, h = bottomRight[0] - topLeft[0], bottomRight[1] - topLeft[1]
# Grab ROI with Numpy slicing and blur
ROI = image[y:y+h, x:x+w]
blur = cv2.GaussianBlur(ROI, (51,51), 0)
# Insert ROI back into image
image[y:y+h, x:x+w] = blur
cv2.imshow('blur', blur)
cv2.imshow('image', image)
cv2.waitKey()
The way to go is to set up a corresponding region of interest (ROI) using cv::Rect. Since you already have your top-left and bottom-right locations as cv::Points, you get this more or less for free. Afterwards, just use cv::GaussianBlur, for example, only on that ROI. This approach works for a lot of OpenCV methods in the C++ API.
The code is quite simple, see the following snippet:
// (Just use your frame instead.)
cv::Mat image = cv::imread("path/to/your/image.png");
// Top left and bottom right cv::Points are already defined.
cv::Point topLeft = cv::Point(60, 40);
cv::Point bottomRight = cv::Point(340, 120);
// Set up proper region of interest (ROI) using a cv::Rect from the two cv::Points.
cv::Rect roi = cv::Rect(topLeft, bottomRight);
// Only blur image within ROI.
cv::GaussianBlur(image(roi), image(roi), cv::Size(51, 51), 0);
For some exemplary input, the above code blurs exactly the region inside the ROI and leaves the rest of the image untouched (example images omitted).
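Applied to the drawPred function from the question, the same idea looks roughly like the sketch below; the intersection with the frame rectangle is an extra safeguard (not in the snippet above) that clamps the box to the image bounds:
// Inside drawPred, once left/top/right/bottom are known:
cv::Rect roi = cv::Rect(cv::Point(left, top), cv::Point(right, bottom))
               & cv::Rect(0, 0, frame.cols, frame.rows); // clamp to the image
if (roi.area() > 0)
    cv::GaussianBlur(frame(roi), frame(roi), cv::Size(51, 51), 0);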
Hope that helps!

OpenCV C++: draw a circle on different pixels of an image in a for loop (the image should be opened anew at each loop run)

I want to plot circles on an image where each previous circle is deleted before the next circle is drawn.
I have the following configuration:
I have several pictures (let's say 10).
For each picture I test several pixels for some condition (let's say 50 pixels).
For each pixel I'm testing, I want to draw a circle at that pixel for visualization purposes.
To summarize, I have 2 for loops: one looping over the 10 images and the other looping over the 50 pixels.
I have done the following (see the code below). The circles are drawn correctly, but when the next circle is drawn, the previous circle is still visible (at the end, all circles are drawn on the same image). What I want instead is to somehow close the picture (or window) after each circle is drawn, reopen a new one, plot the next circle on it, and so on.
for (int imgID = 0; imgID < numbImgs; imgID++)
{
    cv::Mat colorImg = imgVector[imgID];
    for (int pixelID = 0; pixelID < numPixelsToBeTested; pixelID++)
    {
        some_pixel = ... // some pixel
        x = some_pixel(0); y = some_pixel(1);
        cv::Mat colorImg2 = colorImg; // redefine the image for each pixel
        cv::circle(colorImg2, cv::Point(x, y), 5, cv::Scalar(0, 0, 255), 1, cv::LINE_8, 0);
        // creating a new window each time
        cv::namedWindow("Display", CV_WINDOW_AUTOSIZE);
        cv::imshow("Display", colorImg2);
        cv::waitKey(0);
        cv::destroyWindow("Display");
    }
}
What is wrong in my code?
Thanks guys
cv::circle() draws on the input image in place. Moreover, cv::Mat colorImg2 = colorImg; does not copy any pixel data; it only copies the Mat header, so colorImg2 and colorImg share the same pixels. What you need to do is create a clone of the original image with clone(), draw the circle on the cloned image, and at each iteration reset the clone from the original.
It is also a good idea to break your program into smaller methods, making the code more readable and easier to understand. The following code may give you a starting point:
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

void visualizePoints(cv::Mat mat) {
    cv::Mat debugMat = mat.clone();
    // Dummy set of points, to be replaced with the 50 points you are using.
    std::vector<cv::Point> points = {cv::Point(30, 30), cv::Point(30, 100), cv::Point(100, 30), cv::Point(100, 100)};
    for (cv::Point p : points) {
        cv::circle(debugMat, p, 5, cv::Scalar(0, 0, 255), 1, cv::LINE_8, 0);
        cv::imshow("Display", debugMat);
        cv::waitKey(800);
        // Start from a fresh copy so the previous circle disappears.
        debugMat = mat.clone();
    }
}

int main() {
    std::vector<std::string> imagePaths = {"path/img1.png", "path/img2.png", "path/img3.png"};
    cv::namedWindow("Display", cv::WINDOW_AUTOSIZE);
    for (std::string path : imagePaths) {
        cv::Mat img = cv::imread(path);
        visualizePoints(img);
    }
}

Allegro 5 : Handling small sprite dimension in big resolution display

I'm new to game programming. I have some sprites, say Mario sprites in a spritesheet, just 32 x 32 pixels per sprite. One sprite contains one movement, the full body of Mario. Unfortunately, I have to work with at least an 800 x 640 display. As you might guess, Mario looks very small on the display.
So far, I've just scaled the spritesheet in GIMP 2 so that Mario doesn't look like an ant on screen.
Is there a better way to handle this? Maybe Allegro has something I don't know about.
I already searched the documentation.
It sounds like you want a way to scale an image within Allegro. There are two ways to achieve this:
Use al_draw_tinted_scaled_rotated_bitmap_region. Pass your scaling factor (e.g. 2.0) as the xscale and yscale arguments (spritesheet below stands for your loaded ALLEGRO_BITMAP):
al_draw_tinted_scaled_rotated_bitmap_region(spritesheet,
    0, 0, 32, 32,              // draw the first 32x32 sprite in the sheet
    al_map_rgb(255, 255, 255), // don't tint the sprite
    16, 16,                    // the origin is 16, 16, half the 32x32 size
    200, 200,                  // draw at the point 200, 200 on the display
    2.0, 2.0,                  // scale by 2 in the x and y directions
    0, 0);                     // don't apply any angle or flags
Use a transform to scale your image.
ALLEGRO_TRANSFORM trans;
al_identity_transform(&trans);
al_scale_transform(&trans, 2, 2); // scale by a factor of 2
al_use_transform(&trans);
// draw here
Note that in any case (including your original solution of scaling the spritesheet), scaling an image up will cause it to appear more pixelated.
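A side note on the pixelated look: by default Allegro samples scaled bitmaps with nearest-neighbour filtering. If you prefer a smoothed result, you can request linear filtering via the new-bitmap flags before loading the sheet. A minimal sketch, assuming the image addon is initialized; the file name is a placeholder:
// Bitmaps loaded after this call are sampled with linear filtering when
// scaled, instead of the default blocky nearest-neighbour look.
al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR);
ALLEGRO_BITMAP *sheet = al_load_bitmap("mario_sheet.png"); // placeholder path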

Irrlicht - Creating 3D plane/cube mesh

I am fairly new to Irrlicht, but I am not new to C++. For the last couple of weeks I did a lot of Googling, reading the Irrlicht API documentation, etc. For some reason I can't seem to be able to create a 3D plane mesh.
Here is what I got so far.
irr::scene::ISceneNode* ground = sceneManager->addMeshSceneNode(plane);
ground->setPosition(irr::core::vector3df(0, 0, 10));
irr::scene::ICameraSceneNode* cam = sceneManager->addCameraSceneNode();
cam->setTarget(ground->getPosition());
sceneManager->addMeshSceneNode(plane);
I also tried creating a 3D cube mesh using this method:
irr::scene::IMesh* plane = geomentryCreator->createPlaneMesh(irr::core::dimension2d<irr::f32>(100, 100), irr::core::dimension2d<irr::u32>(100, 100));
irr::scene::ISceneNode* cube = sceneManager->addCubeSceneNode(20);
cube->render();
For some reason the screen remains black with nothing rendered. Nothing seems to work. Any suggestions?
Your problem is that the camera and the plane both have the same Y coordinate. You never specified any position for the camera, so it sits at (0, 0, 0), and its Y coordinate is 0. You specified the plane's position as (0, 0, 10), so its Y coordinate is also 0. Since Y is the up axis in Irrlicht, this means you are looking at the plane exactly edge-on.
This is why you don't see anything. To see something, you have to place the camera higher up; the point (0, 50, 0) will work.
Also, if you don't have any lights in the scene, the plane will be black just like the background, since it is sensitive to lighting by default. To change this, you need to make the plane insensitive to lighting with the following code:
plane->setMaterialFlag(irr::video::EMF_LIGHTING, false);
If the plane's color is black, which it is by default, you will have a black plane on a black background, so you won't see anything. I suggest you make the background white instead, by passing a white clear color to beginScene in the main loop:
driver->beginScene(true, true, irr::video::SColor(255, 255, 255, 255));
Putting it all together, with the following code you should see the plane rendered against the white background:
irr::IrrlichtDevice *device = irr::createDevice(irr::video::EDT_OPENGL);
irr::video::IVideoDriver *driver = device->getVideoDriver();
irr::scene::ISceneManager *sceneManager = device->getSceneManager();
const irr::scene::IGeometryCreator *geomentryCreator = sceneManager->getGeometryCreator();
irr::scene::IMesh* plane = geomentryCreator->createPlaneMesh(irr::core::dimension2d<irr::f32>(100, 100), irr::core::dimension2d<irr::u32>(100, 100));
irr::scene::ISceneNode* cube = sceneManager->addCubeSceneNode(20);
irr::scene::ISceneNode* ground = sceneManager->addMeshSceneNode(plane);
ground->setPosition(irr::core::vector3df(0, 0, 10));
plane->setMaterialFlag(irr::video::EMF_LIGHTING, false); // This is important
irr::scene::ICameraSceneNode* cam = sceneManager->addCameraSceneNode();
cam->setPosition(irr::core::vector3df(0, 50, 0)); // This is also important
cam->setTarget(ground->getPosition());
while (device->run()) {
    driver->beginScene(true, true, irr::video::SColor(255, 255, 255, 255)); // Important for the background to be white
    sceneManager->drawAll();
    driver->endScene();
}
device->drop();

OpenCV feature detectors - ROI mask for better performance?

In OpenCV it is possible to specify a region of interest via a mask as input to a feature detector algorithm. From my perspective, I would expect a huge performance gain, but a simple test with a small ROI could not confirm that.
Is it reasonable to expect better performance when using masks in OpenCV? Or is it necessary to trim the images?
Most likely the mask simply removes all keypoints outside the mask afterwards, so OpenCV still has to process the entire image.
You can reduce the size of your image (i.e. crop it to the region you care about) to improve the speed.
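A minimal sketch of that trimming idea, assuming an ORB detector as a stand-in (any cv::Feature2D subclass works the same way): detect on a cropped view of the image, then shift the keypoints back into full-image coordinates.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE); // placeholder path
    cv::Rect roi(100, 100, 200, 200); // the region you actually care about

    // Detect only on the ROI view; the detector never touches the rest of the image.
    cv::Ptr<cv::ORB> detector = cv::ORB::create();
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(image(roi), keypoints);

    // Shift the keypoints back into the coordinate frame of the full image.
    for (cv::KeyPoint& kp : keypoints)
        kp.pt += cv::Point2f(static_cast<float>(roi.x), static_cast<float>(roi.y));
    return 0;
}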
I'm not sure if this is something you're looking for (especially since this is in Java), but check out this file, specifically the function at line 121.
Here it is for your convenience:
MatOfRect diceDetections = new MatOfRect(); // Essentially an array of locations where our dice features were detected. (Stupid wrappers)
// Note that detectMultiScale has thrown an unknown exception before (literally, unknown). This is to prevent crashing.
try {
    diceCascade.detectMultiScale(image, diceDetections, 1.1, 4, 0, new Size(20, 20), new Size(38, 38));
} catch (Exception e) {
    e.printStackTrace();
}
// Debug, used for console output
String curDetect = "";
// Iterates over every dice ROI
for (int i = 0; i < diceDetections.toArray().length; i++) {
    Rect diceRect = diceDetections.toArray()[i];
    // Draws rectangles around our detected ROI
    Point startingPoint = new Point(diceRect.x, diceRect.y);
    Point endingPoint = new Point(diceRect.x + diceRect.width, diceRect.y + diceRect.height);
    Imgproc.rectangle(image, startingPoint, endingPoint, new Scalar(255, 255, 0));
    MatOfRect pipDetections = new MatOfRect();
    try {
        /*
         * Now this is interesting. We essentially create a sub-array of the image, with our dice ROI as the image.
         * Then we perform the detection on that sub-image, which gives us the positions of the pip ROIs relative to
         * the dice ROI. Later on, we can draw circles around the pip ROIs, with the centers adjusted by adding the
         * dice ROI position, so that everything renders properly. This is an amazing trick: it not only eliminates
         * false positives in non-dice ROIs, but it also reduces how many pixels the classifier has to analyze to at
         * most 38 x 38 (because of the size constraints provided while detecting dice ROIs). This means we can set
         * the precision to an insane level without performance loss.
         */
        pipCascade.detectMultiScale(image.submat(diceRect), pipDetections, 1.01, 4, 0, new Size(2, 2), new Size(10, 10));
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Gets the number of detected pips and draws a circle around each pip ROI
    int numPips = 0;
    for (int y = 0; y < pipDetections.toArray().length; y++) {
        Rect pipRect = pipDetections.toArray()[y]; // Position of the pip relative to the dice ROI
        /*
         * Finds the absolute center of a pip. diceRect.x and diceRect.y provide the top-left position of the dice
         * ROI; pipRect.x and pipRect.y provide the top-left position of the pip ROI. Normally, to find the center
         * of an object of size (w, h) with top-left point (x, y), we add half the width to x and half the height
         * to y. Since pipDetections only provide positions relative to the dice ROI, we also need to add the dice
         * position to find the absolute center (i.e. relative to the entire image).
         */
        Point center = new Point(diceRect.x + pipRect.x + pipRect.width / 2, diceRect.y + pipRect.y + pipRect.height / 2);
        Imgproc.ellipse(image, center, new Size(pipRect.width / 2, pipRect.height / 2), 0, 0, 360, new Scalar(255, 0, 255), 1, 0, 0);
        numPips++;
    }
}
In a nutshell, I have two classifiers: one to recognize dice (line 129) and one to recognize the pips (black dots) on the dice. It gets an array of dice ROIs, and then for each item in the array, takes a submatrix of the image (located at the ROI) and has the pip classifier scan over that matrix instead of the whole image (line 156). However, if you're trying to display the detections (pips in my example), you'll need to offset them by the position of the ROI you're in, hence the work at lines 171 and 172.
I'm certain that this achieves the same performance gain you're looking for, just not necessarily in the same fashion (subimaging vs. masking).
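For reference, here is a rough C++ sketch of the same subimage trick; the cascade file names and image path are placeholders, and the detection parameters simply mirror the Java code above:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Placeholder cascade files and image path.
    cv::CascadeClassifier diceCascade("dice_cascade.xml");
    cv::CascadeClassifier pipCascade("pip_cascade.xml");
    cv::Mat image = cv::imread("dice_photo.png");
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> dice;
    diceCascade.detectMultiScale(gray, dice, 1.1, 4, 0, cv::Size(20, 20), cv::Size(38, 38));

    for (const cv::Rect& d : dice) {
        // Detect pips only inside the dice sub-image; results are relative to d.
        std::vector<cv::Rect> pips;
        pipCascade.detectMultiScale(gray(d), pips, 1.01, 4, 0, cv::Size(2, 2), cv::Size(10, 10));

        for (const cv::Rect& p : pips) {
            // Offset the pip coordinates back into full-image space before drawing.
            cv::Point center(d.x + p.x + p.width / 2, d.y + p.y + p.height / 2);
            cv::circle(image, center, p.width / 2, cv::Scalar(255, 0, 255));
        }
    }
    cv::imshow("detections", image);
    cv::waitKey();
    return 0;
}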