std::thread and ros::ok() do not work in ROS - c++

I have a function that is executed by a std::thread. I want it to keep running until the user closes the terminal running roscore by pressing Ctrl+C. For that reason I use this inside the thread:
void publish_camera_on_topic(std::vector<Camera> cameras, const std::vector<ros::Publisher> publishers, const int camera_index)
{
    int frameSize;
    BYTE *imagePtr;
    // frame id
    int frame_id = 0;
    cv_bridge::CvImage img_bridge;
    sensor_msgs::Image img_msg;
    while (ros::ok()) {
        // Grab and display a single image from each camera
        imagePtr = cameras[camera_index].getRawImage();
        frameSize = cameras[camera_index].getFrameSize();
        cameras[camera_index].createRGBImage(imagePtr, frameSize);
        unsigned char* pImage = cameras[camera_index].getImage();
        if (NULL != pImage) {
            Mat image(cameras[camera_index].getMatSize(), CV_8UC3, pImage, Mat::AUTO_STEP);
            // release asap
            cameras[camera_index].releaseImage();
            //cvtColor(image, image, CV_BGR2RGB, 3);
            // publish on ROS topic
            std_msgs::Header header;          // empty header
            header.seq = frame_id;            // user defined counter
            header.stamp = ros::Time::now();  // time
            img_bridge = cv_bridge::CvImage(header, sensor_msgs::image_encodings::RGB8, image);
            img_bridge.toImageMsg(img_msg);   // from cv_bridge to sensor_msgs::Image
            publishers[camera_index].publish(img_msg); // ros::Publisher pub_img = node.advertise<sensor_msgs::Image>("topic", queuesize);
        }
        // increase frame id
        frame_id = frame_id + 1;
    }
    std::cout << "ROS closing for thread of camera " << camera_index << " received." << std::endl;
}
Also, I create the threads like this:
// image publisher
// for each camera create a publisher
std::vector<ros::Publisher> publishers;
for (size_t i = 0; i < cameras.size(); i++) {
    char topic_name[200];
    sprintf(topic_name, "/lumenera_camera_package/%d", static_cast<int>(i + 1));
    publishers.push_back(nh.advertise<sensor_msgs::Image>(topic_name, 10));
}
// work with each camera on a separate thread
std::vector<std::thread> thread_vector;
for (size_t i = 0; i < cameras.size(); i++) {
    thread_vector.push_back(std::thread(publish_camera_on_topic, cameras, publishers, i));
}
ros::spin();
std::for_each(thread_vector.begin(), thread_vector.end(), [](std::thread &t){ t.join(); });
for (size_t i = 0; i < cameras.size(); i++) {
    cameras[i].stopStreaming();
}
ROS_INFO("Node: [lumenera_camera_node] has been Ended.");
However, when I press Ctrl+C in the terminal and stop the roscore, the threads keep running, and the value of ros::ok() does not change.

The problem is solved. The issue is that ros::ok() does not check whether the ROS master is still running. Instead of this line:
while (ros::ok()) { //do sth}
This line should be used:
while (ros::ok() && ros::master::check()) { // do sth}
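For completeness, here is a minimal sketch of a worker-thread loop using that combined check; the function name worker and the sleep call are placeholders standing in for the per-frame grab-and-publish code from the question:
#include <iostream>
#include <ros/ros.h>
#include <ros/master.h>   // for ros::master::check()

// Exits both when the node itself is shut down (Ctrl+C on the node)
// and when the ROS master disappears (Ctrl+C on the roscore terminal).
void worker(int camera_index)
{
    while (ros::ok() && ros::master::check()) {
        // ... grab a frame and publish it, as in publish_camera_on_topic() ...
        ros::Duration(0.01).sleep();   // placeholder for the real per-frame work
    }
    std::cout << "ROS closing for thread of camera " << camera_index << " received." << std::endl;
}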

Related

How to correctly use a mutex in OpenCV-ROS threads?

I have just started using C++ for some image processing tasks. I want to integrate my RGB (OpenCV Mat) and depth (PCL) data, which I get from ROS subscribers, into colored point cloud data.
I use the cv::Mat acquiredImage to hold the image received from the subscriber, and then acquiredImage is used by other processing in other threads, but I am facing a segmentation fault. Or the error shown is this:
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that. Viewtest: ../../src/xcb_io.c:260:
poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Aborted (core dumped)
I have tried using std::mutex but it still doesn't work. Could anyone tell me how to properly manage the cv::Mat in two different threads?
typedef pcl::PointXYZRGB XYZRGB;
typedef pcl::PointCloud<XYZRGB> pclXYZRGB;
typedef pcl::PointCloud<XYZRGB>::Ptr pclXYZRGBptr;
typedef pcl::PointCloud<XYZRGB>::ConstPtr pclXYZRGBcptr;

pclXYZRGBptr acquiredCloud (new pclXYZRGB());
pclXYZRGBptr acquiredCloud2 (new pclXYZRGB());
cv::Mat acquiredImage, acquiredImageRotated;
std::thread thread_RunThread;
std::mutex mutexMutex;
bool stopThread, has_data1, has_data2;

inline float PackRGB(uint8_t r, uint8_t g, uint8_t b) {
    uint32_t color_uint = ((uint32_t)r << 16 | (uint32_t) g << 8 | (uint32_t)b);
    return *reinterpret_cast<float *> (&color_uint);
}

void RunThread(){
    while(ros::ok()){
        ros::spinOnce();
    }
}

void imageReceive(const sensor_msgs::ImageConstPtr& msg){
    mutexMutex.lock();
    acquiredImage = cv::Mat(cv_bridge::toCvShare(msg, "bgr8")->image);
    mutexMutex.unlock();
    has_data1 = true;
}

void cloudReceive(const sensor_msgs::PointCloud2ConstPtr& cloudInMsg){
    //mutexMutex.lock();
    pcl::fromROSMsg(*cloudInMsg, *acquiredCloud);
    has_data2 = true;
    //mutexMutex.unlock();
}

void StartThread(){
    stopThread = false;
    has_data1 = has_data2 = false;
    thread_RunThread = std::thread(RunThread);
    while(!has_data1 && !has_data2){
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        std::cout << has_data1 << "-" << has_data2 << std::endl;
    }
}

void CloseThread(){
    stopThread = true;
    thread_RunThread.join();
}

int main(int argc, char **argv){
    ros::init(argc, argv, "Viewtest");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    cv::startWindowThread();
    image_transport::Subscriber sub = it.subscribe("/rsCamera/image", 1, imageReceive);
    ros::Subscriber pclsubAcquirer = nh.subscribe("/rsCamera/cloud", 1, cloudReceive);
    StartThread();
    while (ros::ok()){
        if(!has_data1 && !has_data2){
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            std::cout << has_data1 << "-" << has_data2 << std::endl;
        }
        else {
            mutexMutex.lock();
            cv::rotate(acquiredImage, acquiredImageRotated, cv::ROTATE_180);
            mutexMutex.unlock();
            copyPointCloud(*acquiredCloud, *acquiredCloud2);
            int i = 640, j = 480, k;
            for (auto& it : acquiredCloud2->points){
                it.x = it.x; it.y = it.y; it.z = it.z;
                it.rgb = PackRGB(
                    acquiredImageRotated.at<cv::Vec3b>(j,i)[2], // r
                    acquiredImageRotated.at<cv::Vec3b>(j,i)[1], // g
                    acquiredImageRotated.at<cv::Vec3b>(j,i)[0]  // b
                );
                i--;
                if(i <= 0) { i = 640; j--; }
                if(j < 0) { break; }
            }
        }
    }
    CloseThread();
    return 0;
}
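For reference, a minimal sketch of the mutex-plus-deep-copy technique this question is aiming at: the callback clones the bridged image while holding the lock, and the consumer takes its own copy before working on it. The clone() calls and the processOnce() helper are additions for illustration, not a diagnosis of the crash above.
#include <mutex>
#include <opencv2/opencv.hpp>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>

cv::Mat acquiredImage;     // shared between the ROS callback and the main loop
std::mutex mutexMutex;

void imageReceive(const sensor_msgs::ImageConstPtr& msg)
{
    // Deep copy under the lock, so acquiredImage owns its own pixel buffer
    // instead of pointing into the (temporary) message data.
    std::lock_guard<std::mutex> lock(mutexMutex);
    acquiredImage = cv_bridge::toCvShare(msg, "bgr8")->image.clone();
}

void processOnce(cv::Mat& rotated)
{
    cv::Mat local;
    {
        std::lock_guard<std::mutex> lock(mutexMutex);
        local = acquiredImage.clone();   // private copy; lock is released right after
    }
    if (!local.empty())
        cv::rotate(local, rotated, cv::ROTATE_180);
}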

When calling a function containing a while loop two times, it does not delete the first call from memory

There is a function that has a while loop and takes an int parameter. For example, I first called the function with 4. Then I want to call it with 16.
void Widget::on_fourCameras_clicked(){
    CamerasInitialize(4);
}
void Widget::on_sixteenCameras_clicked(){
    isNewCameraSelected = true;
    CamerasInitialize(16);
}
When I call it with 16, the call I previously started with 4 is not deleted from memory. I'm trying to break the previous loop by setting a flag and then starting the second one. But since the click handler does not change the flag before it finishes, it starts the function I call with 16 first, and only then changes the flag. I can't do it with multiple threads because of the functions I use. What do I need to do so that I can repeatedly call the same function containing a while loop and delete the previous calls from memory?
Ubuntu 20.04.02
Qt / C++
void Widget::CamerasInitialize(int camNumber) {
    namedWindow( "Output", cv::WINDOW_OPENGL );
    setWindowProperty( "Output", WND_PROP_FULLSCREEN, WINDOW_FULLSCREEN );
    cv::cuda::GpuMat *inputFrames = new cv::cuda::GpuMat[camNumber];
    cv::cuda::GpuMat *inputFramesConverted = new cv::cuda::GpuMat[camNumber];
    cv::cuda::GpuMat *sphericalDistortionOutput = new cv::cuda::GpuMat[camNumber];
    cv::Ptr<cv::cudacodec::VideoReader> videoReader[camNumber];
    cv::cuda::PtrStepSz<int32_t> *d_ptrs = NULL;
    cv::cuda::PtrStepSz<int32_t> *h_ptrs = new cv::cuda::PtrStepSz<int32_t>[camNumber];
    cudaMalloc(&d_ptrs, camNumber * sizeof(cv::cuda::PtrStepSz<int32_t>));
    std::time_t timeBegin = std::time(0);
    int tick = 0;
    long frameCounter = 0;
    setCurrentMatrixCpuToGpu(camNumber);
    while (true)
    {
        for(int i = 0; i < camNumber; i++){
            videoReader[i]->nextFrame(inputFrames[i]);
            h_ptrs[i] = inputFrames[i];
        }
        if(isMovedCamera){
            pBackSub.release();
            pBackSub = cuda::createBackgroundSubtractorMOG();
            isMovedCamera = false;
        }
        cudaMemcpy(d_ptrs, h_ptrs, camNumber * sizeof(cv::cuda::PtrStepSz<int32_t>), cudaMemcpyHostToDevice);
        GpuMat outval;
        Mat outputCPU;
        if(camNumber == 4){
            arrayMult(d_ptrs, outval, cameraMatrixOnGpu4Camera, pixelW, pixelH);
        }else if(camNumber == 16){
            arrayMult(d_ptrs, outval, cameraMatrixOnGpu16Camera, pixelW, pixelH);
        }
        //cuda::bilateralFilter(videoFrames[0], colorImages[0], 90, 30, 30);
        frameCounter++;
        std::time_t timeNow = std::time(0) - timeBegin;
        if (timeNow - tick >= 1)
        {
            tick++;
            //cout << "Frames per second: " << frameCounter << endl;
            frameCounter = 0;
        }
        outval = outval(Rect(cropPositionsX, cropPositionsY, cropPositionsW, cropPositionsH));
        cuda::resize(outval, outval, Size(SCREENRESOLUTION_WITDH, SCREENRESOLUTION_HEIGHT), 0, 0, INTER_AREA);
        imshow( "Output", outval);
        setMouseCallback("Output", CallBackFunc, NULL);
        if (waitKey(1) == 'q' || isNewCameraSelected)
        {
            isNewCameraSelected = false;
            break;
        }
    }
    //destroyWindow("Output");
}

Debug glibc free(): invalid pointer

I'm trying to debug code that eventually throws
*** glibc detected *** ./build/smonitor: free(): invalid pointer:
It's challenging because I don't call free directly... I've seen the other SO posts that have examples replicating the issue. I need help with how to debug this. First off, I'm a C/C++ n00b, so my pointer skills are still developing, but I'm not doing much dynamic memory allocation (I think).
I'm starting to write my own 'security' application where I take snapshots from cameras and write them to an NFS share; I'll eventually have a display of each camera's snapshot. Right now, I take captures from one camera and cycle them through my display window (using OpenCV). It can run for a while (~an hour) before I get the core dump. I create a thread to run the window; it should loop until my run flag gets set to false, and then I call cvReleaseImage. I have no idea why this is failing; any guidance is greatly appreciated!
// will be replaced with camera X filename on NFS share
std::string generate_filename()
{
static const char alphanum[] =
"0123456789"
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz";
std::string filename = "";
std::stringstream ss;
for (int i = 0; i < 10; i++)
{
ss << alphanum[rand() % (sizeof(alphanum) - 1)];
}
ss << ".jpg";
printf("Generated filename: %s\n", ss.str().c_str());
return ss.str();
}
std::string generate_file_path()
{
std::stringstream ss;
ss << CAPTURES_PATH << generate_filename();
return ss.str();
}
void capture_photo(std::string& filepath)
{
time_t now;
time_t end;
double seconds;
bool cancelCapture = false;
int count = 0;
CvCapture* capture = cvCreateCameraCapture(0);
printf("Opened camera capture\n");
IplImage* frame;
while(1)
{
frame = cvQueryFrame(capture);
if (!frame)
{
fprintf(stderr, "Could not read frame from video stream\n\n");
} else
{
cvShowImage(WINDOW, frame);
cvWaitKey(100);
if (get_snapshot_enabled()) // simulate delay between snapshots
{
filepath = generate_file_path();
printf("Saving image\n");
cvSaveImage(filepath.c_str(), frame);
break;
}
}
}
printf("Ending camera capture\n");
cvReleaseCapture(&capture);
}
void* manage_window(void* arg)
{
time_t now;
time_t end;
double seconds = 0;
double stateSec;
int i = 0;
int rem = 0;
IplImage* images[10];
time_t lastUpdate;
time_t tDiff; // time diff
cvNamedWindow(WINDOW, CV_WINDOW_FREERATIO);
cvSetWindowProperty(WINDOW, CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
std::string filepath;
time(&now);
int lastPos = 0;
while (1)
{
if (get_snapshot_enabled())
{
write_console_log("Going to capture photo\n");
// camera was selected
filepath = generate_file_path();
printf("Generated filepath: %s\n", filepath.c_str());
capture_photo(filepath);
if (!filepath.empty())
{
printf("Received filepath: %s\n", filepath.c_str());
time(&now);
images[lastPos] = cvLoadImage(filepath.c_str());
cvShowImage(WINDOW, images[lastPos]);
cvWaitKey(100);
if (lastPos == 10) lastPos = 0;
else lastPos++;
}
}
time(&end);
seconds = difftime(end, now);
if (seconds >= 5)
{
cvShowImage(WINDOW, images[ i % 10]);
cvWaitKey(100);
i++;
time(&now);
}
// check if we're running
if (!get_running())
{
// log some error we're not running...
write_logs("Window thread exiting, not running...");
break;
}
}
for (i=0; i < 10; i++)
cvReleaseImage(&images[i]);
pthread_exit(NULL);
}
In void* manage_window(void* arg) there are lines
IplImage* images[10];
and
images[lastPos] = cvLoadImage(filepath.c_str());
if (lastPos == 10) lastPos = 0;
else lastPos++;
where lastPos can be incremented to 10, leading to
images[10] = cvLoadImage(filepath.c_str());
i.e. an invalid write beyond the end of the array. I think something like valgrind would have detected this.
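A minimal sketch of the corrected indexing, based on the answer above; only the wrap-around of lastPos changes, while the surrounding capture and display logic stays as in the question:
// images has 10 slots, so valid indices are 0..9.
images[lastPos] = cvLoadImage(filepath.c_str());
cvShowImage(WINDOW, images[lastPos]);
cvWaitKey(100);
lastPos = (lastPos + 1) % 10;   // wraps 9 -> 0, so images[10] is never written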

Pushing & Popping a frame to a Mat vector

I am trying to grab multiple frames from 2 USB cameras using threads. The issue is that I need to push frames into the frame_queue vector and then pop one frame at a time from the vector for further image processing.
I followed this link
https://putuyuwono.wordpress.com/2015/05/29/multi-thread-multi-camera-capture-using-opencv/
The code in this link uses concurrent_queue.h for queuing up Mat frames, but instead I want to use a vector of Mat to store the incoming frames.
Header file CameraStreamer.h:
#include <iostream>
#include <string>
#include <thread>
#include <vector>
//#include <concurrent_queue.h>
#include <boost/thread.hpp>
#include "opencv2/videoio.hpp"
using namespace std;
using namespace cv;
//using namespace concurrency;
class CameraStreamer{
public:
//this holds camera stream urls
vector<string> camera_source;
//this holds usb camera indices
vector<int> camera_index;
//this holds OpenCV VideoCapture pointers
vector<VideoCapture*> camera_capture;
//this holds queue(s) which hold images from each camera
vector<Mat*> frame_queue;
//this holds thread(s) which run the camera capture process
vector<thread*> camera_thread;
//Constructor for IP Camera capture
CameraStreamer(vector<string> source);
//Constructor for USB Camera capture
CameraStreamer(vector<int> index);
//Destructor for releasing resource(s)
~CameraStreamer();
private:
bool isUSBCamera;
int camera_count;
//initialize and start the camera capturing process(es)
void startMultiCapture();
//release all camera capture resource(s)
void stopMultiCapture();
//main camera capturing process which will be done by the thread(s)
void captureFrame(int index);
};
Here is my CameraStreamer.cpp:
#include "camerastreamer.h"
CameraStreamer::CameraStreamer(vector<string> stream_source)
{
camera_source = stream_source;
camera_count = camera_source.size();
isUSBCamera = false;
startMultiCapture();
}
CameraStreamer::CameraStreamer(vector<int> capture_index)
{
camera_index = capture_index;
camera_count = capture_index.size();
isUSBCamera = true;
startMultiCapture();
}
CameraStreamer::~CameraStreamer()
{
stopMultiCapture();
}
void CameraStreamer::captureFrame(int index)
{
VideoCapture *capture = camera_capture[index];
while (true)
{
Mat frame;
//Grab frame from camera capture
(*capture) >> frame;
//Put frame to the queue
frame_queue[index]->push(frame);
//relase frame resource
frame.release();
}
}
void CameraStreamer::startMultiCapture()
{
VideoCapture *capture;
thread *t;
vector<Mat> *q;
for (int i = 0; i < camera_count; i++)
{
//Make VideoCapture instance
if (!isUSBCamera){
string url = camera_source[i];
capture = new VideoCapture(url);
cout << "Camera Setup: " << url << endl;
}
else{
int idx = camera_index[i];
capture = new VideoCapture(idx);
cout << "Camera Setup: " << to_string(idx) << endl;
}
//Put VideoCapture to the vector
camera_capture.push_back(capture);
//Make thread instance
t = new thread(&CameraStreamer::captureFrame, this, i);
//Put thread to the vector
camera_thread.push_back(t);
//Make a queue instance
q = new vector<Mat>;
//Put queue to the vector
frame_queue.push_back(q);
}
}
void CameraStreamer::stopMultiCapture()
{
VideoCapture *cap;
for (int i = 0; i < camera_count; i++)
{
cap = camera_capture[i];
if (cap->isOpened()){
//Relase VideoCapture resource
cap->release();
cout << "Capture " << i << " released" << endl;
}
}
}
Main.cpp:
int main()
{
//IP camera URLs
vector<string> capture_source = {
"rtsp://192.168.2.100/profile2/media.smp",
"rtsp://192.168.0.100/profile2/media.smp"
};
//USB Camera indices
vector<int> capture_index = { 0, 1 };
//Highgui window titles
vector<string> label;
for (size_t i = 0; i < capture_index.size(); i++)
{
string title = "CCTV " + to_string(i);
label.push_back(title);
}
//Make an instance of CameraStreamer
CameraStreamer cam(capture_index);
while (waitKey(20) != 27)
{
//Retrieve frames from each camera capture thread
for (size_t i = 0; i < capture_index.size(); i++)
{
Mat *frame;
//Pop frame from queue and check if the frame is valid
cam.frame_queue[i].pushback(frame);
//Show frame on Highgui window
imshow(label[i], frame);
}
}
}
I tried using
cam.frame_queue[i].pushback(frame);
but it doesn't work.
So I am not sure how to push a frame into, and pop a frame from, the frame_queue vector for a given camera index.
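One way to get the push/pop behaviour described here is a per-camera std::vector<cv::Mat> guarded by a mutex; the following is only a sketch under that assumption (the names frame_queues, queue_mutex, push_frame, and pop_frame are hypothetical, and the clone() keeps each stored frame independent of the capture buffer):
#include <mutex>
#include <vector>
#include <opencv2/opencv.hpp>

static const int CAMERA_COUNT = 2;                             // the two USB cameras

std::vector<std::vector<cv::Mat>> frame_queues(CAMERA_COUNT);  // one buffer per camera
std::mutex queue_mutex;                                        // guards all buffers, for simplicity

// Called from the capture thread of camera `index`.
void push_frame(int index, const cv::Mat& frame)
{
    std::lock_guard<std::mutex> lock(queue_mutex);
    frame_queues[index].push_back(frame.clone());              // deep copy so the queue owns its pixels
}

// Called from the display/processing loop; returns false if no frame is waiting.
bool pop_frame(int index, cv::Mat& out)
{
    std::lock_guard<std::mutex> lock(queue_mutex);
    if (frame_queues[index].empty())
        return false;
    out = frame_queues[index].front();
    frame_queues[index].erase(frame_queues[index].begin());    // O(n); a std::deque avoids this cost
    return true;
}
In the main loop, the cam.frame_queue[i].pushback(frame) line would then become something like: cv::Mat frame; if (pop_frame(i, frame)) imshow(label[i], frame);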

Connection between cpp files

How can I pass a variable from one C++ program to another?
The variable that I need to pass is a string.
This is the C++ file that has to receive the string:
#include <string>
#include <iostream>
#include <ros/ros.h>
#include <json_prolog/prolog.h>

using namespace std;
using namespace json_prolog;

int main(int argc, char *argv[])
{
    ros::init(argc, argv, "Info");
    Prolog pl;
    int c = 0;
    do
    {
        int i = 0;
        std::string M;
        cout << "Declare the name of the class of interest" << "\t";
        cin >> M;
        if (M == "knife")
        .........
In this program I type the string M myself, but I want M to come from the output of another C++ file, which obviously has to produce a string as output.
This is the C++ file that has to send me the string:
#include<aruco_marker_detector/arucoMarkerDetector.h>
namespace MarkerDetector {
void FilterButter(Vector3d &s, Vector3d &sf, Vector3d &bButter, Vector3d &aButter)
{
double r,rf;
r=bButter.transpose()*s;
rf=aButter.transpose()*sf;
sf(0)=r-rf;
s(2)=s(1);
s(1)=s(0);
sf(2)=sf(1);
sf(1)=sf(0);
}
MarkerTracker::MarkerTracker(ros::NodeHandle &n)
{
this->nh = n;//after this instruction, the address containing the id of node n is saved in nh
this->it = new image_transport::ImageTransport(this->nh);//stores in it the address of the ImageTransport(this->nh) object belonging to the image_transport namespace
// ros::Duration r(1);
XmlRpc::XmlRpcValue my_list;
nh.getParam("marker_ids", my_list);
for (int32_t i = 0; i < my_list.size(); ++i)
{
this->markerIDs.push_back(-1);
//ROS_ASSERT(my_list[i].getType() == XmlRpc::XmlRpcValue::TypeDouble);
this->markerIDs[i]=static_cast<int>(my_list[i]);
//ROS_ERROR_STREAM("markerIDs["<<i<<"]: "<<this->markerIDs[i]);
}
//r.sleep();
nh.getParam("marker_labels", my_list);
for (int32_t i = 0; i < my_list.size(); ++i)
{
this->markerLables.push_back("NotSet");
//ROS_ASSERT(my_list[i].getType() == XmlRpc::XmlRpcValue::TypeDouble);
this->markerLables[i]=static_cast<std::string>(my_list[i]);
//ROS_ERROR_STREAM("markerLables["<<i<<"]: "<<this->markerLables[i]);
}
//r.sleep();
this->markerTrackerID=-1;
//
//Load Parameters (rosparameters)
nh.getParam("marker_tracker_id",this->markerTrackerID);
//nh.getParam("marker_id",this->markerID);
nh.getParam("camera_info_url",this->cameraParametersFile);
nh.getParam("marker_size",this->markerSize);
nh.getParam("block_size",this->thresParam1);
nh.getParam("sub_constant",this->thresParam2);
nh.getParam("camera_reference_frame",this->cameraReferenceFrame);
nh.getParam("filter_coefficient_B", my_list);
for (int32_t i = 0; i < my_list.size(); ++i)
{
//ROS_ASSERT(my_list[i].getType() == XmlRpc::XmlRpcValue::TypeDouble);
this->B(i)=static_cast<double>(my_list[i]);
}
nh.getParam("filter_coefficient_A", my_list);
for (int32_t i = 0; i < my_list.size(); ++i)
{
//ROS_ASSERT(my_list[i].getType() == XmlRpc::XmlRpcValue::TypeDouble);
this->A(i)=static_cast<double>(my_list[i]);
}
nh.getParam("image_topic_name_raw",this->imageTopicRaw);
nh.getParam("image_topic_name_proc",this->imageTopicProcessed);
nh.getParam("camera_name_tag",this->cameraNameTag);
nh.getParam("broadcast_tf_flag",this->broadcastTF);
nh.getParam("camera_extrinsics",my_list);
VectorXd in(16);
this->TC_torso.Identity();
for (int32_t i = 0; i < my_list.size(); ++i)
{
in(i)=static_cast<double>(my_list[i]);
}
ROS_WARN_STREAM("in: \n"<<in.transpose());
// r.sleep();
// this->TC_torso.matrix()(0,0)=in(0*4+0);
// this->TC_torso.matrix()(0,1)=in(0*4+1);
// this->TC_torso.matrix()(0,2)=in(0*4+2);
// this->TC_torso.matrix()(0,3)=in(0*4+3);
// this->TC_torso.matrix()(1,0)=in(1*4+0);
// this->TC_torso.matrix()(1,1)=in(1*4+1);
// this->TC_torso.matrix()(1,2)=in(1*4+2);
// this->TC_torso.matrix()(1,3)=in(1*4+3);
// this->TC_torso.matrix()(2,0)=in(2*4+0);
// this->TC_torso.matrix()(2,1)=in(2*4+1);
// this->TC_torso.matrix()(2,2)=in(2*4+2);
// this->TC_torso.matrix()(2,3)=in(2*4+3);
// this->TC_torso.matrix()(3,0)=in(3*4+0);
// this->TC_torso.matrix()(3,1)=in(3*4+1);
// this->TC_torso.matrix()(3,2)=in(3*4+2);
// this->TC_torso.matrix()(3,3)=in(3*4+3);
for(unsigned int i=0;i<4;i++)
{
for(unsigned int j=0;j<4;j++)
{
this->TC_torso.matrix()(i,j)=in(i*4+j);
}
}
// this->TC_torso=Tmp;
// Tmp.matrix()<<in;
//
// this->TC_torso=Tmp.matrix().transpose();
ROS_WARN_STREAM("TC_torso: \n"<<TC_torso.matrix());
//r.sleep();
//ROS_INFO_STREAM("B: "<<this->B.transpose());
//ROS_INFO_STREAM("A: "<<this->A.transpose());
//r.sleep();
//ROS_INFO_STREAM("marker_size: "<<this->markerSize);
//r.sleep();
//ROS_INFO_STREAM("block_size: "<<this->thresParam1);
//ROS_INFO_STREAM("sub_constant: "<<this->thresParam2);
//r.sleep();
//ROS_INFO_STREAM("camera_info_url: "<<this->cameraParametersFile);
//ROS_INFO_STREAM("markerTrackerID: "<<this->markerTrackerID);
//r.sleep();
//ROS_INFO_STREAM("markerID: "<<this->markerID);
std::stringstream label;
label<<"SwitchFilter_"<<this->markerTrackerID;
this->switchFilterService=this->nh.advertiseService(label.str(),&MarkerDetector::MarkerTracker::SwitchFilterCallBack,this);
label.str("");
//this->cameraParameters.readFromXMLFile(this->cameraParametersFile);
this->sub = it->subscribe(this->imageTopicRaw, 1, &MarkerDetector::MarkerTracker::imageCallback,this);
//Publisher for the processed image
this->pub = it->advertise(this->imageTopicProcessed, 1);
// label<<"tfTarget_"<<this->cameraNameTag<<"_"<<this->markerTrackerID;
// this->pubTF = nh.advertise<geometry_msgs::TransformStamped>(label.str(),100);
// label.str("");
label<<"visualPoints_"<<this->cameraNameTag<<"_"<<this->markerTrackerID;
this->pubVisData=nh.advertise<aruco_marker_detector::MarkerDataArray>(label.str(),100);
label.str("");
this->Rz180<<-1,0,0,0,-1,0,0,0,1;
this->setOld=true;
this->filtered=true;
this->cameraConfigured=false;
}
MarkerTracker::~MarkerTracker()
{
delete it;
}
//bool function to switch the filter on/off
bool MarkerTracker::SwitchFilterCallBack(aruco_marker_detector::switch_filter::Request &req,aruco_marker_detector::switch_filter::Response &res)
{
this->filtered=!this->filtered;//req.filtered;
res.confirm=this->filtered;
if(this->filtered)
ROS_INFO_STREAM("Marker Tracker ("<<this->markerTrackerID<<") Filter Switched ON ("<<this->filtered<<")");
else
ROS_INFO_STREAM("Marker Tracker ("<<this->markerTrackerID<<") Filter Switched OFF ("<<this->filtered<<")");
return true;
}
//This function is called every time a new image is published
void MarkerTracker::imageCallback(const sensor_msgs::ImageConstPtr& original_image)
{
//Convert from the ROS image message to a CvImage suitable for working with OpenCV for processing
cv_bridge::CvImagePtr cv_ptr;
static tf::TransformBroadcaster br1;
tf::Transform transform;
double markerPosition[3];
double markerOrientationQ[4];
Matrix3d R,Rfixed;
//Affine3d TC_torso;
Quaterniond q_eigen;
tf::Quaternion q_tf;
//
try
{
//Always copy, returning a mutable CvImage
//OpenCV expects color images to use BGR channel order.
cv_ptr = cv_bridge::toCvCopy(original_image, sensor_msgs::image_encodings::BGR8);
}
catch (cv_bridge::Exception& e)
{
//if there is an error during conversion, display it
ROS_ERROR_STREAM(__FILE__<<"::cv_bridge exception("<<__LINE__<<": "<<e.what());
return;
}
//Get intrinsic parameters of the camera and size from image
if(!this->cameraConfigured)
{
this->cameraParameters.readFromXMLFile(this->cameraParametersFile);
this->cameraParameters.resize(cv_ptr->image.size());
this->cameraConfigured=true;
}
this->MDetector.pyrDown(0);
this->MDetector.setThresholdParams(this->thresParam1,this->thresParam2);
this->MDetector.setCornerRefinementMethod(aruco::MarkerDetector::SUBPIX);
//Detect Markers
this->MDetector.detect(cv_ptr->image,this->Markers,this->cameraParameters,this->markerSize);
std::stringstream s;
//Camera Frame
// Rz180<<-1,0,0,0,-1,0,0,0,1;
//This is the transformation from camera to world and it must be obtained from camera calib
//TC_torso.matrix()<<0,0,1,-1.1,-1,0,0,0.1,0,-1,0,0.0;
tf::transformEigenToTF(TC_torso,transform);
if(this->broadcastTF)
{
br1.sendTransform(tf::StampedTransform(transform, ros::Time::now(), cameraReferenceFrame, this->cameraNameTag));
}
tf::StampedTransform sTransform;
geometry_msgs::Transform msgTransform;
aruco_marker_detector::MarkerDataArray msgVisPointsArray;
aruco_marker_detector::MarkerData aux;
aruco::Marker tmp;
bool publishTF=false;
bool idNotDefined=true;
//for each marker, draw info and its boundaries in the image
for (unsigned int i=0;i<this->Markers.size();i++)
{
idNotDefined=true;
this->Markers[i].draw(cv_ptr->image,cv::Scalar(0,0,255),2);
this->Markers[i].OgreGetPoseParameters(markerPosition,markerOrientationQ);
R = Eigen::Quaterniond(markerOrientationQ[0], markerOrientationQ[1], markerOrientationQ[2], markerOrientationQ[3]);
Rfixed=this->Rz180*R;
q_eigen=Rfixed;
tf::quaternionEigenToTF(q_eigen,q_tf);
transform.setOrigin( tf::Vector3(-markerPosition[0], - markerPosition[1],markerPosition[2]) );
transform.setRotation(q_tf);
for(unsigned int j=0;j<this->markerIDs.size();j++)
{
if(Markers[i].id==this->markerIDs[j])
{
s<<this->markerLables[j]<<"_"<<this->cameraNameTag;
idNotDefined=false;
break;
}
}
//This is what it does if it recognises a marker
//Marker with id 455 represents the target and we need to filter its pose
//If you need to filter any marker then remove the if statement and set publishTF=true
if(Markers[i].id<=40 && Markers[i].id>=20)
{
int z=Markers[i].id;
switch (z){
case 20:
{
publishTF=true;
s<<"Electronics:Phone";
break;
}
case 30:
{
publishTF=true;
s<<"Electronics:Pc";
break;
}
case 40:
{
publishTF=true;
s<<"Electronics:Printer";
break;
}
default:
{
publishTF=true;
s<<"Electronics:Undefined_Object";
}
}
}
else if(Markers[i].id<=935 && Markers[i].id>=915)
{
int z=Markers[i].id;
switch (z){
case 915:
{
publishTF=true;
s<<"Kitchen_utensil:Fork";
break;
}
case 925:
{
publishTF=true;
s<<"Kitchen_utensil:Spoon";
break;
}
case 935:
{
publishTF=true;
s<<"Kitchen_utensil:Knife";
break;
}
default:
{
publishTF=true;
s<<"Kitchen_utensil:Undefined_Object";
}
}
}
else if(Markers[i].id<=220 && Markers[i].id>=200)
{
int z=Markers[i].id;
switch (z){
case 200:
{
publishTF=true;
s<<"Container:Pot";
break;
}
case 210:
{
publishTF=true;
s<<"Container:Basket";
break;
}
case 220:
{
publishTF=true;
s<<"Container:Box";
break;
}
default:
{
publishTF=true;
s<<"Container:Undefined_Object";
}
}
}
else
{
s<<"Unknown_Object";
}
if(publishTF)
{
//Filter Signal
if(filtered)
{ //If the signal is not filtered, filter it and then save the values of position and orientation
tf::Vector3 X=transform.getOrigin();
tf::Quaternion Q=transform.getRotation();
//Orientation
this->qx(0)=Q.getX();
this->qy(0)=Q.getY();
this->qz(0)=Q.getZ();
this->qw(0)=Q.getW();
//Position
this->x(0)=X.getX();
this->y(0)=X.getY();
this->z(0)=X.getZ();
if(setOld)
{
//copy the first transformation to old and vold in both real and filtered
for(unsigned int i=1;i<3;i++)
{
this->qx(i)=qx(0);
this->qy(i)=qy(0);
this->qz(i)=qz(0);
this->qw(i)=qw(0);
this->qxf(i)=qx(0);
this->qyf(i)=qy(0);
this->qzf(i)=qz(0);
this->qwf(i)=qw(0);
this->x(i)=x(0);
this->y(i)=y(0);
this->z(i)=z(0);
this->xf(i)=x(0);
this->yf(i)=y(0);
this->zf(i)=z(0);
}
setOld=false;
}
MarkerDetector::FilterButter(this->qx,this->qxf,this->B,this->A);
MarkerDetector::FilterButter(this->qy,this->qyf,this->B,this->A);
MarkerDetector::FilterButter(this->qz,this->qzf,this->B,this->A);
MarkerDetector::FilterButter(this->qw,this->qwf,this->B,this->A);
MarkerDetector::FilterButter(this->x,this->xf,this->B,this->A);
MarkerDetector::FilterButter(this->y,this->yf,this->B,this->A);
MarkerDetector::FilterButter(this->z,this->zf,this->B,this->A);
transform.setRotation(tf::Quaternion(this->qxf(0),this->qyf(0),this->qzf(0),this->qwf(0)));
transform.setOrigin(tf::Vector3(this->xf(0),this->yf(0),this->zf(0)));
}
sTransform=tf::StampedTransform(transform, ros::Time::now(), this->cameraNameTag, s.str());
if(this->broadcastTF)
{
br1.sendTransform(sTransform);
}
publishTF=false;
}
else
{
sTransform=tf::StampedTransform(transform, ros::Time::now(), this->cameraNameTag, s.str());
if(this->broadcastTF)
{
br1.sendTransform(sTransform);
}
}
//Clear the labels
s.str("");
if (cameraParameters.isValid())
{
// aruco::CvDrawingUtils::draw3dCube(cv_ptr->image,Markers1[i],cameraParameters1);
aruco::CvDrawingUtils::draw3dAxis(cv_ptr->image,Markers[i],cameraParameters);
}
aux.markerID=Markers[i].id;
cv::Point2f cent=Markers[i].getCenter();
for(unsigned int ind=0;ind<4;ind++)
{
aux.points[ind].x=Markers[i][ind].x;
aux.points[ind].y=Markers[i][ind].y;
//Force the visual points to be homogeneous --this is used with the homography transformation --
aux.points[ind].z=1.0;
}
//Plot Marker Center
aux.points[4].x=cent.x;
aux.points[4].y=cent.y;
//Force the visual points to be homogeneous --this is used with the homography transformation --
aux.points[4].z=1.0;
cv::circle(cv_ptr->image,cv::Point2f(aux.points[4].x,aux.points[4].y),1,cv::Scalar(0,255,255),6);
//Copy current transform
tf::transformTFToMsg(transform,msgTransform);
aux.tfMarker=msgTransform;
msgVisPointsArray.header.stamp = ros::Time::now();
msgVisPointsArray.header.frame_id = this->cameraNameTag;
msgVisPointsArray.mTrackerID = this->markerTrackerID;
msgVisPointsArray.markerData.push_back(aux);
//Print the visual position of the marker's center
s<<"("<<msgVisPointsArray.markerData[i].points[4].x<<","<<msgVisPointsArray.markerData[i].points[4].y<<")";
cv::putText(cv_ptr->image,s.str().c_str(),cent,cv::FONT_HERSHEY_COMPLEX,1,cv::Scalar(255,0,0),3);
s.str("");
}
/**
* The publish() function is how you send messages. The parameter
* is the message object. The type of this object must agree with the type
* given as a template parameter to the advertise<>() call, as was done
* in the constructor in main().
*/
//Convert the CvImage to a ROS image message and publish it on the "camera/image_processed" topic.
pub.publish(cv_ptr->toImageMsg());
pubVisData.publish(msgVisPointsArray);
msgVisPointsArray.markerData.clear();
}
This program recognizes a marker with a specific id. I want to use the second program with this specific marker as input.
What is the best way to do this?
The code below will run a separate process, specified here by the placeholder command, and pipe that process's standard output into the string M.
FILE* p = popen("command", "r");
if (!p)
    return 1;
char buf[100];
std::string M;
while (!feof(p)) {
    if (fgets(buf, 100, p) != NULL)
        M += buf;
}
pclose(p);
If you know that command will print whatever you need on its standard output, this should do what you want. Required includes:
#include <string>
#include <iostream>
#include <stdio.h>
EDIT:
After you posted the code of the other process, it is clear to me that you are approaching the problem the wrong way. You are using ROS, which is basically middleware facilitating interprocess communication in robotic applications. ROS itself provides the tools for exchanging strings between processes, and you should use ROS to perform that exchange. You use topics to exchange data: in your case, one process should subscribe to a topic while the other publishes to it. The receiving process gets a callback whenever a string is published and has access to the data. Check out http://wiki.ros.org/Topics for more info about topics in ROS.
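As an illustration of that topic-based exchange, here is a minimal sketch of the receiving side using std_msgs::String; the topic name /detected_class and the callback are placeholders chosen here, not names from the original code:
#include <ros/ros.h>
#include <std_msgs/String.h>
#include <iostream>
#include <string>

// Called each time the detector node publishes a label on /detected_class.
void classCallback(const std_msgs::String::ConstPtr& msg)
{
    std::string M = msg->data;                 // the class name, e.g. "knife"
    std::cout << "Received class: " << M << std::endl;
    // ... run the json_prolog query for M here instead of reading it from std::cin ...
}

int main(int argc, char *argv[])
{
    ros::init(argc, argv, "Info");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/detected_class", 10, classCallback);
    ros::spin();                               // process callbacks until shutdown
    return 0;
}

// In the marker-detector node, the matching publisher would be along the lines of:
//   ros::Publisher pub = nh.advertise<std_msgs::String>("/detected_class", 10);
//   std_msgs::String out;  out.data = s.str();  pub.publish(out);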
If you are on a Unix/Linux system, you can pass the output of one program to another with a pipe. For example
ls | wc -l
ls prints the names of all the files in a directory, and wc -l takes that output and counts the number of lines.
To accept a pipe, your receiving program needs to read from stdin. For example
std::string s;
while (std::cin >> s) {
    // do something with the string
}
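So, assuming the detector program is changed to print its label to standard output and the receiving program reads it with std::cin as above, the two could be chained with a shell pipe just like the ls example (the executable names here are only placeholders):
./marker_detector | ./info_program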