I have a function that detects motion between two frames and stores a cropped image of only the moving object in the variable cv::Mat result_cropped. Now I want to add a function that checks result_cropped for black pixels. I wrote the code for that easily, but I'm completely stuck trying to implement it in my class.
For some reason my blackDetection(Mat & cropped) can't access the cropped image, which results in the program crashing.
Here's my simplified code:
void ActualRec::run(){
    while (isActive){
        //...code to check for motion
        //if there was motion a cropped image will be stored in result_cropped
        number_of_changes = detectMotion(motion, result, result_cropped, region, max_deviation, color);
        if(number_of_changes>=there_is_motion) {
            if(number_of_sequence>0){
                // there was motion detected, store cropped image - this works
                saveImg(pathnameThresh, result_cropped);
                if (blackDetection(result_cropped)==true){
                    //the cropped image has black pixels
                }
                else {
                    //the cropped image has no black pixels
                }
                number_of_sequence++;
            }
            else
            {
                // no motion was detected
            }
        }
    }
}
bool ActualRec::blackDetection(Mat & result_cropped){
    //...check for black pixels, program crashes since result_cropped is empty
    //if I add imshow("test", result_cropped) I keep getting an empty window
    if (blackPixelCounter>0){
        return true;
    }
    else return false;
}
Again, the problem is that I can't manage to access result_cropped in blackDetection(Mat & result_cropped).
Edit: my complete code for this class: http://pastebin.com/3i0WdLG0 . Please, someone help me; this problem doesn't make any sense to me.
You don't have a cv::waitKey() in blackDetection(), so you will crash before you get to the cvWaitKey() in run(). You are jumping to conclusions that result_cropped is "empty".
You have not allocated croppedBlack anywhere, so you will crash on the assignment croppedBlack.at<Vec3b>(y,x)[c] = ....
Add this at the start of blackDetection() (e.g.):
croppedBlack.create(result_cropped.size(), result_cropped.type());
To make it faster, see How to scan images ... with OpenCV: The efficient way.
bool ActualRec::blackDetection(Mat& result_cropped)
{
    croppedBlack.create(result_cropped.size(), result_cropped.type());
    int blackCounter = 0;
    for (int y = 0; y < result_cropped.rows; ++y)
    {
        Vec3b* croppedBlack_row = croppedBlack.ptr<Vec3b>(y);
        Vec3b* result_cropped_row = result_cropped.ptr<Vec3b>(y);
        for (int x = 0; x < result_cropped.cols; ++x)
        {
            for (int c = 0; c < 3; ++c)
            {
                croppedBlack_row[x][c] =
                    saturate_cast<uchar>(alpha * result_cropped_row[x][c] + beta);
            }
            // count pixels that are black in all three channels
            if (result_cropped_row[x] == Vec3b(0, 0, 0))
                ++blackCounter;
        }
    }
    return blackCounter > 0; // the original version fell off the end without returning a value
}
I'm creating a waveform widget in TouchGFX, but I'm unsure how best to loop the waveform back to zero at the end: there are three frame buffers, so you have to invalidate over an area three times or you get flickering. How would you handle looping the array back to the start (x = 0)?
The main issue is that my code originally assumed there was only one frame buffer. I think my code needs to be refactored for three frame buffers, or needs the ability to write directly to the frame buffer. Any hints would be greatly appreciated.
bool Graph::drawCanvasWidget(const Rect& invalidatedArea) const
{
    if (numPoints < 3)
    {
        // A graph line with a single (or not even a single) point is invisible
        return true;
    }
    else
    {
        Canvas canvas(this, invalidatedArea);
        for (int index = 0; index < (numPoints - 1); index++)
        {
            canvas.moveTo(points[index].x, points[index].y);
            canvas.lineTo(points[index].x, points[index + 1].y);
            canvas.lineTo(points[index + 1].x, points[index + 1].y);
            canvas.lineTo(points[index + 1].x, points[index].y);
        }
        return canvas.render(); // Shape above automatically closed
    }
}
void Graph::newPoint(int y)
{
    if (numPoints == 501){
        numPoints = 0;
    } else if ((maxPoints - numPoints) <= 20){
        points[numPoints].x = numPoints;
        points[numPoints].y = y;
        Rect minimalRect(480, 0, 20, 100);
        invalidateRect(minimalRect);
        numPoints++;
    } else {
        points[numPoints].x = numPoints;
        points[numPoints].y = y;
        Rect minimalRect(numPoints - 3, 0, 20, 100);
        invalidateRect(minimalRect);
        numPoints++;
    }
}
With TouchGFX 4.15.0 (just out), the TouchGFX Designer now supports a Graph widget (previously only found as source code in the demos), which can be used to produce your waveforms. It has some more elegant ways of inserting points which may suit your needs.
I have an extremely strange situation in which the code adds the same frame to a vector over and over, yet it doesn't when there is a rotation before the addition. Let me show you:
#include <chrono>
#include <opencv2/opencv.hpp>
#include <vector>
/* Write all the images to a certain directory. All the images with the same name present
   in the directory will be overwritten. */
void CameraThread::writeAllFrames(std::vector<cv::Mat> vectorFrame) {
    std::string path;
    for (size_t i = 0; i < vectorFrame.size(); ++i) {
        path = "./Images/image" + std::to_string(i) + ".png";
        imwrite(path, vectorFrame.at(i));
    }
    capturing = 0;
}
int main(){
    std::string window_name = "Left Cam";
    cv::VideoCapture* videoCapture = new cv::VideoCapture(0);
    cv::namedWindow(window_name, CV_WINDOW_NORMAL); // create a window for the camera
    std::vector<cv::Mat> capturedFrame; // Vector in which the frames are going to be saved
    cv::Mat frame;
    int i = 0; // Counts how many images are saved
    bool capturing = 0;
    int amountFramesCaptured = 10;
    int periodCapture = 250; // ms
    while(1){
        bool bSuccess = videoCapture->read(frame); // It captures the frame.
        /* The next 2 lines take around 25 ms. They turn the frame 90° to the left. */
        cv::transpose(frame, frame);
        cv::flip(frame, frame, 0);
        if (capturing == 0) {
            /* If there is no frame capture, we just display the frames in a window. */
            imshow(window_name, frame);
        } else if (capturing == 1) { // We capture the frames here.
            capturedFrame.push_back(frame);
            Sleep(periodCapture);
            ++i;
            if (i == amountFramesCaptured) {
                writeAllFrames(capturedFrame); // Write all frames to a directory.
                puts("Frames copied to the directory.");
                capturedFrame.clear(); // Clear the vector in case we recapture another time.
                i = 0;
                capturing = 0;
            }
        }
    }
    return 0;
}
Here, we capture a frame thanks to videoCapture->read(frame). I wanted to rotate the frame, so I used the next two lines. Then I tested the capture of the images and it worked well (I know it because I had a moving object in front of the camera). Lastly, after some tests, I decided not to rotate the frames, because the rotation takes too many resources (around 25 ms) and I needed to synchronize the capture with some blinking LEDs. So I removed the two lines that performed the rotation, and that's when, suddenly, the code started adding the same frame to the vector.
In conclusion, the writing to the hard drive works well when there is a rotation and it doesn't when there isn't (because of the vector). It confuses me so much; tell me if you see something I don't.
I am forced to calculate the pixel difference, pixel by pixel, between frames of a video.
I initialize the pixeldifference variable to 0 (assume all the frames in the video are valid).
The problem is that lastFrame and frame are always identical, which means the code with the cout statement "Pixel count incremented" is never triggered. I know a couple of frames can be identical, but I never see that output statement even once, which leads me to believe the two frames are always identical. Should I do something else? I'd appreciate any guidance. I'm very new to OpenCV (excuse a little bad coding practice inside; it was for debugging purposes).
Mat lastFrame;
Mat frame;
capture.read(lastFrame);
capture.read(frame);
while (counter < tofind)
{
    for (int cur_row = 0; cur_row < max_PX; cur_row++)
    {
        for (int cur_cols = 0; cur_cols < frame.cols; cur_cols++)
        {
            Vec3b pixels_currentFrame = frame.at<cv::Vec3b>(cur_row, cur_cols);
            Vec3b pixels_lastFrame = lastFrame.at<cv::Vec3b>(cur_row, cur_cols);
            bCur = int(pixels_currentFrame.val[0]);
            gCur = int(pixels_currentFrame.val[1]);
            rCur = int(pixels_currentFrame.val[2]);
            bPrev = int(pixels_lastFrame.val[0]);
            gPrev = int(pixels_lastFrame.val[1]);
            rPrev = int(pixels_lastFrame.val[2]);
            bDiff = abs(bCur - bPrev);
            gDiff = abs(gCur - gPrev);
            rDiff = abs(rCur - rPrev);
            if (abs(bCur - bPrev) > checkval ||
                abs(rCur - rPrev) > checkval ||
                abs(gCur - gPrev) > checkval)
            {
                pixeldifference++;
                cout << "Pixel count incremented" << endl;
            }
        }
    }
    lastFrame = frame;
    capture.read(frame);
    /*
    some other stuff happens here
    */
    counter++;
}
How can I set up an if statement so that after every frame the sprite shows the next frame, and once it has gone through all the frames the animation is over?
I tried using if statements and it has never worked for me; could anyone give an example?
Edit:
After demand for code I have decided to add a sample.
int frame4 = 1;
if(frame4 = 1)
{
    WalkDownFrame1(); //Renders frame 1
}
else if(frame4 = 2)
{
    WalkDownFrame2(); //Renders frame 2
}
else if(frame4 = 3)
{
    WalkDownFrame3(); //Renders frame 3
}
else if(frame4 = 4)
{
    WalkDownFrame4(); //Renders frame 4
}
else if(frame4 = 5)
{
    frame4 = 1;
}
frame4++;
No matter what modifications I apply, it stays stuck on one frame.
I'm assuming you mean that if the condition is true the animation occurs and if it's false it stops, in which case it would look like:
/*Rest of your model transformation code*/
if(shouldbeanimating){
    /*animation code*/
}else{
    /*default state, or nothing if you want the model to
      freeze at that point in the animation*/
}
Then, whenever the program should stop the animation, you just set shouldbeanimating to false.
Well, you need to know how many frames your animation has. Then you proceed to draw frame after frame; when you hit the last frame, you go back to the first or you stop.
Here's a link that will help you. It doesn't matter that it's SDL rather than any other lib; the approach is always the same.
http://lazyfoo.net/SDL_tutorials/lesson20/index.php
As an example:
while(isAnimating)
{
    framecounter++;
    isAnimating = (framecounter < MAX_FRAMES);
}
There are many solutions. You can write a Sprite class and give it a _tick attribute, storing your textures in a container like a std::vector. I'll give you a short hint: add a tick() method to increment your tick number, and an nbFrames attribute that contains the number of frames for the current sprite.
int _tick;
int nbFrames; //equivalent of textures.size()
std::vector<SDL_Texture*> textures;
SDL_Texture *idle_frame;
bool moving;
int x;
int y;

Sprite::Sprite()
{
    _tick = 0;
    //init your textures
    moving = false;
    x = 0;
    y = 0;
}

void Sprite::tick(void)
{
    // Consider that _tick can reach the max value of an int32
    _tick++;
}

void Sprite::render(void)
{
    SDL_Texture *texture;
    if(moving)
    {
        int fr_index = _tick % nbFrames;
        texture = textures[fr_index];
    }
    else
    {
        texture = idle_frame;
    }
    //then you do your render code with SDL_RenderCopy, or OpenGL code
}
Some other things still need handling, but that's a hint toward your solution.
Is it possible that you are just assigning in the if statement rather than testing? Or did you just mistype it here?
Because if your code reads if(frame = 0), it should be if(frame == 0).
I'm working on some object detection using OpenCV 2.4.4 with C++. Unfortunately, the objects I must detect don't have a unique shape, but they do have a really specific color range in the HSV color space (right now I'm detecting some red objects).
Also, the object should be present only in some part of the image, so I'm not scanning the whole image; moreover, within this ROI I have sub-ROIs (little rectangles), and I need to know which of these rectangles the object is in for each frame.
So, first I tried to detect whether something was found in the large ROI, and if it was, I set it to some color alien to my environment. I did that with the following code:
cvtColor(frame(Range(0,espv),Range::all()), framhsv, CV_BGR2HSV);
MatIterator_<Vec3b> it, ith, end, endh;
ith = framhsv.begin<Vec3b>();
endh = framhsv.end<Vec3b>();
it = (frame(Range(0,espv),Range::all())).begin<Vec3b>();
for (; ith != endh; ++it, ++ith){
    if ((((*ith)[0] <= 7 && (*ith)[0] >= 0) || ((*ith)[0] > 170)) && (*ith)[1] >= 160){
        (*it)[0] = 63;
        (*it)[1] = 0;
        (*it)[2] = 255;
    }
}
Everything worked and the object was "detected", so I moved on to the next step: detecting whether the object is in each sub-ROI. To do so, I created the following function, which returns the number of "detected" pixels in each sub-ROI, so I can decide which is the correct sub-ROI.
Here is the function:
Vector<int> CountRed(cv::Mat &frame, int hspa, int vspa, int nlines, int nblocks){
    Vector<int> count(nblocks*nlines, 0);
    for (int i = 1; i <= nlines; i++){
        for (int j = 1; j <= nblocks; j++){
            Mat framhsv;
            cvtColor(frame(Range((i-1)*vspa,i*vspa),Range((j-1)*hspa,j*hspa)), framhsv, CV_BGR2HSV); // Convert only the area of interest to HSV
            MatIterator_<Vec3b> it, ith, endh;
            ith = framhsv.begin<Vec3b>();
            endh = framhsv.end<Vec3b>();
            it = (frame(Range((i-1)*vspa,i*vspa),Range((j-1)*hspa,j*hspa))).begin<Vec3b>();
            for (; ith != endh; ++it, ++ith){
                if ((((*ith)[0] <= 7 && (*ith)[0] >= 0) || ((*ith)[0] > 170)) && (*ith)[1] >= 160){
                    (*it)[0] = 63;
                    (*it)[1] = 0;
                    (*it)[2] = 255;
                    count[(j)+nblocks*(i-1)-1]++;
                }
            }
        }
    }
    return count;
}
The code compiles without warnings or errors and the video shows up, but if I insert the object that I must detect into any part of the ROI, the code stops working. This also happens if I remove the
count[(j)+nblocks*(i-1)-1]++;
line. I get the following error:
Access violation writing location 0xFFFFF970.
I really think the problem must be in accessing the iterator when not using Range::all().
To try to clarify what I'm doing here is an image of a frame:
http://i.imgur.com/Irvr8bk.jpg
the purple region is the frame, the red region is the ROI, and each of the black ones are the sub-ROIs.
I've also tried using frame(Rect(0,0,frame.cols,2*vspa)) to define the ROI and it also worked, but I got the same error when trying to work with the sub-ROIs using cv::Rect. So I really think this must be a MatIterator_ error when not accessing the full-row structure.
So, what should I do to work with those sub-ROIs ?