How to capture a camera's video in real time? - C++

I'm using the USB 3.0 Basler camera acA640-750uc to capture video with two cameras.
The problem is that when I ran the program below, my computer captured the video from both cameras, but the video lagged about 2 seconds behind my actual movement. In other words, the video is slower than real time, and I want to capture it in real time.
How can I solve this problem?
I've tried changing ++i to i++ in for (size_t i = 0; i < cameras.GetSize(); ++i), but it doesn't help.
Here is the program that uses the two cameras and grabs the frames:
#include <pylon/PylonIncludes.h>
#ifdef PYLON_WIN_BUILD
#include <pylon/PylonGUI.h>
#endif
// Namespace for using pylon objects.
using namespace Pylon;
// Namespace for using cout.
using namespace std;
// Number of images to be grabbed.
static const uint32_t c_countOfImagesToGrab = 1000;
// Limits the amount of cameras used for grabbing.
// It is important to manage the available bandwidth when grabbing with
// multiple cameras.
// This applies, for instance, if two GigE cameras are connected to the
// same network adapter via a switch.
// To manage the bandwidth, the GevSCPD interpacket delay parameter and
// the GevSCFTD transmission delay
// parameter can be set for each GigE camera device.
// The "Controlling Packet Transmission Timing with the Interpacket and
// Frame Transmission Delays on Basler GigE Vision Cameras"
// Application Notes (AW000649xx000)
// provide more information about this topic.
// The bandwidth used by a FireWire camera device can be limited by
// adjusting the packet size.
static const size_t c_maxCamerasToUse = 2;
int main(int argc, char* argv[])
{
// The exit code of the sample application.
int exitCode = 0;
// Before using any pylon methods, the pylon runtime must be initialized.
PylonInitialize();
try
{
// Get the transport layer factory.
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Get all attached devices and exit application if no device is found.
DeviceInfoList_t devices;
if (tlFactory.EnumerateDevices(devices) == 0)
{
throw RUNTIME_EXCEPTION("No camera present.");
}
// Create an array of instant cameras for the found devices and avoid
// exceeding a maximum number of devices.
CInstantCameraArray cameras(min(devices.size(), c_maxCamerasToUse));
// Create and attach all Pylon Devices.
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
cameras[i].Attach(tlFactory.CreateDevice(devices[i]));
// Print the model name of the camera.
cout << "Using device " << cameras[i].GetDeviceInfo().GetModelName() <<
endl;
}
// Starts grabbing for all cameras starting with index 0. The grabbing
// is started for one camera after the other. That's why the images of all
// cameras are not taken at the same time.
// However, a hardware trigger setup can be used to cause all cameras to
// grab images synchronously.
// According to their default configuration, the cameras are
// set up for free-running continuous acquisition.
cameras.StartGrabbing();
// This smart pointer will receive the grab result data.
CGrabResultPtr ptrGrabResult;
// Grab c_countOfImagesToGrab from the cameras.
for (uint32_t i = 0; i < c_countOfImagesToGrab && cameras.IsGrabbing();
++i)
{
cameras.RetrieveResult(5000, ptrGrabResult,
TimeoutHandling_ThrowException);
// When the cameras in the array are created the camera context value
// is set to the index of the camera in the array.
// The camera context is a user settable value.
// This value is attached to each grab result and can be used
// to determine the camera that produced the grab result.
intptr_t cameraContextValue = ptrGrabResult->GetCameraContext();
#ifdef PYLON_WIN_BUILD
// Show the image acquired by each camera in the window related to each
// camera.
Pylon::DisplayImage(cameraContextValue, ptrGrabResult);
#endif
// Print the index and the model name of the camera.
cout << "Camera " << cameraContextValue << ": " <<
cameras[cameraContextValue].GetDeviceInfo().GetModelName() << endl;
// Now, the image data can be processed.
cout << "GrabSucceeded: " << ptrGrabResult->GrabSucceeded() << endl;
cout << "SizeX: " << ptrGrabResult->GetWidth() << endl;
cout << "SizeY: " << ptrGrabResult->GetHeight() << endl;
const uint8_t* pImageBuffer = (uint8_t*)ptrGrabResult->GetBuffer();
cout << "Gray value of first pixel: " << (uint32_t)pImageBuffer[0] <<
endl <<
endl;
}
}
catch (const GenericException& e)
{
// Error handling
cerr << "An exception occurred." << endl
<< e.GetDescription() << endl;
exitCode = 1;
}
// Comment the following two lines to disable waiting on exit.
cerr << endl << "Press Enter to exit." << endl;
while (cin.get() != '\n');
// Releases all pylon resources.
PylonTerminate();
return exitCode;
}

I am not experienced in this field, but changing ++i to i++ obviously does not solve your problem, since they are equivalent in this for loop (for (size_t i = 0; i < cameras.GetSize(); ++i)).
I am not sure, but according to the comments in the code you may need to configure the cameras manually (the cameras may be configured differently):
// According to their ***default configuration***, the cameras are
// set up for free-running continuous acquisition.
cameras.StartGrabbing();
Also, please read these comments from the code carefully and check whether you have configured your network and parameters correctly. I suggest you try with one camera first:
// Limits the amount of cameras used for grabbing.
// It is important to manage the available bandwidth when grabbing with
// multiple cameras.
// This applies, for instance, if two GigE cameras are connected to the
// same network adapter via a switch.
// To manage the bandwidth, the GevSCPD interpacket delay parameter and
// the GevSCFTD transmission delay
// parameter can be set for each GigE camera device.
// The "Controlling Packet Transmission Timing with the Interpacket and
// Frame Transmission Delays on Basler GigE Vision Cameras"
// Application Notes (AW000649xx000)
// provide more information about this topic.
// The bandwidth used by a FireWire camera device can be limited by
// adjusting the packet size.
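Lag like this often means frames are queuing up in the grab engine faster than they are displayed, so you end up looking at old buffered images. As a minimal sketch, and assuming buffering really is the cause here (the question does not confirm it), you could tell pylon to always hand back only the newest frame by passing a grab strategy to StartGrabbing:
// Sketch: skip queued frames and always retrieve the most recent image.
// GrabStrategy_LatestImageOnly is part of the pylon API; the rest of the
// program from the question stays unchanged.
cameras.StartGrabbing(GrabStrategy_LatestImageOnly);
With this strategy older images are discarded, so the displayed frame stays close to real time at the cost of not seeing every grabbed frame.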

Related

Getting audio sound level from FLTP audio stream

I need to get the audio level or, even better, EQ data from an NDI audio stream in C++. Here's the struct of an audio packet:
// This describes an audio frame.
typedef struct NDIlib_audio_frame_v3_t {
// The sample-rate of this buffer.
int sample_rate;
// The number of audio channels.
int no_channels;
// The number of audio samples per channel.
int no_samples;
// The timecode of this frame in 100-nanosecond intervals.
int64_t timecode;
// What FourCC describing the type of data for this frame.
NDIlib_FourCC_audio_type_e FourCC;
// The audio data.
uint8_t* p_data;
union {
// If the FourCC is not a compressed type and the audio format is planar, then this will be the
// stride in bytes for a single channel.
int channel_stride_in_bytes;
// If the FourCC is a compressed type, then this will be the size of the p_data buffer in bytes.
int data_size_in_bytes;
};
// Per frame metadata for this frame. This is a NULL terminated UTF8 string that should be in XML format.
// If you do not want any metadata then you may specify NULL here.
const char* p_metadata;
// This is only valid when receiving a frame and is specified as a 100-nanosecond time that was the exact
// moment that the frame was submitted by the sending side and is generated by the SDK. If this value is
// NDIlib_recv_timestamp_undefined then this value is not available and is NDIlib_recv_timestamp_undefined.
int64_t timestamp;
#if NDILIB_CPP_DEFAULT_CONSTRUCTORS
NDIlib_audio_frame_v3_t(
int sample_rate_ = 48000, int no_channels_ = 2, int no_samples_ = 0,
int64_t timecode_ = NDIlib_send_timecode_synthesize,
NDIlib_FourCC_audio_type_e FourCC_ = NDIlib_FourCC_audio_type_FLTP,
uint8_t* p_data_ = NULL, int channel_stride_in_bytes_ = 0,
const char* p_metadata_ = NULL,
int64_t timestamp_ = 0
);
#endif // NDILIB_CPP_DEFAULT_CONSTRUCTORS
} NDIlib_audio_frame_v3_t;
The problem is that, unlike video frames, I have absolutely no idea how the binary audio is packed, and there's much less information about it online. The best information I've found so far is this project:
https://github.com/gavinnn101/fishing_assistant/blob/7f5fcd73de1e39336226b5969cd1c5ca84c8058b/fishing_main.py#L124
However, it uses PyAudio, which I'm not familiar with, and a 16-bit audio format, while mine seems to be 32-bit. I can't figure out the struct.unpack part either: "%dh" % (count) passes it some number followed by h for short, and I don't understand how that would be interpreted.
Is there any C++ library that can take a pointer to the data and its type and then provides functions to extract the sound level, the sound level at certain frequencies, and so on?
Or just some good information on how I would extract this myself? :)
I've searched the web a lot and found very little. I placed a breakpoint where the audio frame is populated but gave up once I realized there are too many variables I don't have a clue about, like sample rate, channels, sample count, etc.
Got it working using
// This function calculates the RMS value of an audio frame
float calculateRMS(const NDIlib_audio_frame_v2_t& frame)
{
// Calculate the number of samples in the frame
int numSamples = frame.no_samples * frame.no_channels;
// Get a pointer to the start of the audio data
const float* data = frame.p_data;
// Calculate the sum of the squares of the samples
float sumSquares = 0.0f;
for (int i = 0; i < numSamples; ++i)
{
float sample = data[i];
sumSquares += sample * sample;
}
// Calculate the RMS value and return it
return std::sqrt(sumSquares / numSamples);
}
called as
// Keep receiving audio frames and printing their RMS values
NDIlib_audio_frame_v2_t audioFrame;
while (true)
{
// Wait for the next audio frame to be received
if (NDIlib_recv_capture_v2(pNDI_recv, NULL, &audioFrame, NULL, 0) != NDIlib_frame_type_audio)
continue;
// Print the RMS value of the audio frame
std::cout << "RMS: " << calculateRMS(audioFrame) << std::endl;
NDIlib_recv_free_audio_v2(pNDI_recv, &audioFrame);
}
Shoutout to ChatGPT for explaining things and feeding me possible solutions until I managed to get a working one :--)
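For the planar FLTP layout described in the v3 struct above, a per-channel variant could look like the sketch below (it needs &lt;cmath&gt; and &lt;vector&gt;). It only illustrates how channel_stride_in_bytes separates the channel planes and is not part of the solution above:
// Sketch: per-channel RMS for an NDIlib_audio_frame_v3_t whose FourCC is
// NDIlib_FourCC_audio_type_FLTP (planar 32-bit float). Each channel is a
// contiguous plane of no_samples floats, channel_stride_in_bytes apart.
std::vector<float> calculatePerChannelRMS(const NDIlib_audio_frame_v3_t& frame)
{
    std::vector<float> rms(frame.no_channels, 0.0f);
    for (int c = 0; c < frame.no_channels; ++c)
    {
        const float* plane = reinterpret_cast<const float*>(
            frame.p_data + c * frame.channel_stride_in_bytes);
        float sumSquares = 0.0f;
        for (int s = 0; s < frame.no_samples; ++s)
            sumSquares += plane[s] * plane[s];
        rms[c] = std::sqrt(sumSquares / frame.no_samples);
    }
    return rms;
}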

cv::dnn::Net forward function is very slow for YOLO object detection

this->net.forward(outs, getOutputsNames(this->net));
The forward function of cv::dnn::Net is very slow. Is there any way to optimize it? I am using it for object detection with YOLOv3, and it takes around 2 seconds per image.
void ObjectDetector::runInference(cv::Mat& frame, std::string item, std::string output_path, std::string imagename)
{
cv::Mat blob;
cv::dnn::blobFromImage(frame, blob, 1/255.0, cv::Size(416, 416), cv::Scalar(0,0,0), true, false);
//Sets the input to the network
this->net.setInput(blob);
// Runs the forward pass to get output of the output layers
std::vector<cv::Mat> outs;
clock_t start, end;
/* Recording the starting clock tick.*/
start = clock();
/* below code is very slow */
this->net.forward(outs, getOutputsNames(this->net));
/* above code is very slow */
end = clock();
double time_taken = double(end - start) / double(CLOCKS_PER_SEC);
std::cout << "Time taken for getOutputnames is : " << std::fixed
<< time_taken << std::setprecision(5);
std::cout << " sec " << std::endl;
postprocess(frame, outs, item, output_path, imagename);
}
It seems you are running your code on the CPU backend. Since you are loading a large weights file, it is normal to get a low frame rate on today's CPUs. There are two ways you can increase your speed:
1. This solution comes with a trade-off: you gain speed but lose accuracy. Decrease the size of the network's input image. Your current values are (416, 416); lowering them will speed things up, but accuracy will go down.
2. The cleanest solution is using the GPU. If you have CUDA-capable GPU hardware, you can switch the network backend to the GPU and get the speedup with the same accuracy, as sketched after this list.
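A minimal sketch of both options, assuming OpenCV 4.2+ built with the CUDA DNN module (the smaller input size is only an example value):
// Sketch: prefer the CUDA backend/target (requires an OpenCV build with CUDA
// support), and optionally shrink the input blob to trade accuracy for speed.
this->net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
this->net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
// e.g. 320x320 instead of 416x416
cv::dnn::blobFromImage(frame, blob, 1/255.0, cv::Size(320, 320), cv::Scalar(0,0,0), true, false);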

PCL: Normal estimation or visualization not working

I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex, so just for the sake of resolving this issue I am using this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view in the built-in PCL point cloud visualizer:
However, after estimating the surface normals and trying to visualize them, the result is very crude:
I would have expected a view similar to the plain point cloud visualization, since I passed the original point cloud AND the normals; however, I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and there is no visualization of the original point cloud either.
The point cloud has 1024 points in it and the estimated normal set also has 1024 estimated normals, so I am not sure what is going wrong here.
I ran this program with different values for the radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
int main()
{
PointCloud<PointXYZ> cloud;
PointCloud<PointXYZ>::Ptr cloud_ptr(&cloud);
string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); //My own class which reads Blender ".obj" files
if (readresult != 0)
{
cout << "File could not be read" << endl;
return 1;
}
cout << "Cloud size: " << cloud.size() << endl; // --> 1024
NormalEstimation<PointXYZ, Normal> estimation;
estimation.setInputCloud(cloud_ptr);
KdTree<PointXYZ>::Ptr tree_ptr(new KdTree<PointXYZ>());
estimation.setSearchMethod(tree_ptr);
PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
estimation.setRadiusSearch(0.6);
estimation.compute(*normalCloud_ptr);
cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024
string result;
cin >> result;
//This is used to visualize the original point cloud only
//CloudViewer viewer("Simple Cloud Viewer");
//viewer.showCloud(cloud_ptr);
//while (!viewer.wasStopped())
//{
//}
PCLVisualizer viewer("PCL Viewer");
viewer.setBackgroundColor(0.0, 0.0, 0.5);
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
while (!viewer.wasStopped())
{
viewer.spinOnce();
}
return 0;
}
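Two details in this code are worth checking; the sketch below is only a suggestion based on the snippet above, not a verified fix. First, PCLVisualizer::addPointCloudNormals draws only every level-th normal and its level parameter defaults to 100, which for 1024 points leaves roughly the ten arrows observed. Second, constructing PointCloud<PointXYZ>::Ptr from the address of a stack object is undefined behavior, because the shared pointer will eventually try to delete it. A variant that addresses both, reusing the OBJReader helper from the question, might look like this:
// Sketch: heap-allocate the cloud, use pcl::search::KdTree as in the PCL
// tutorial, add the points themselves, and draw every normal (level = 1).
PointCloud<PointXYZ>::Ptr cloud_ptr(new PointCloud<PointXYZ>());
int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr);

NormalEstimation<PointXYZ, Normal> estimation;
estimation.setInputCloud(cloud_ptr);
search::KdTree<PointXYZ>::Ptr tree_ptr(new search::KdTree<PointXYZ>());
estimation.setSearchMethod(tree_ptr);

PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
estimation.setRadiusSearch(0.6);
estimation.compute(*normalCloud_ptr);

PCLVisualizer viewer("PCL Viewer");
viewer.setBackgroundColor(0.0, 0.0, 0.5);
viewer.addPointCloud<PointXYZ>(cloud_ptr, "cloud");
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr, 1, 0.05f, "normals");
while (!viewer.wasStopped())
{
    viewer.spinOnce();
}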

Intel RealSense programming with librealsense2

I want to get a 1280x720 depth image and a 1280x720 color image.
So I started from this example code:
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2017 Intel Corporation. All Rights Reserved.
#include "librealsense2/rs.hpp" // Include RealSense Cross Platform API
#include "example.hpp" // Include short list of convenience functions for rendering
#include "opencv2/opencv.hpp"
#include <iostream>
#include "stb-master\stb_image_write.h"
using namespace std;
using namespace cv;
// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
int width = 1280;
int height = 720;
rs2::log_to_console(RS2_LOG_SEVERITY_ERROR);
// Create a simple OpenGL window for rendering:
window app(width, height, "RealSense Capture Example");
// Declare two textures on the GPU, one for color and one for depth
texture depth_image, color_image;
// Declare depth colorizer for pretty visualization of depth data
rs2::colorizer color_map;
color_map.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED,1.f);
color_map.set_option(RS2_OPTION_COLOR_SCHEME, 2.f);
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
while (app) // Application still alive?
{
rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
rs2::frame color = data.get_color_frame(); // Find the color data
// For cameras that don't have RGB sensor, we'll render infrared frames instead of color
if (!color)
color = data.get_infrared_frame();
// Render depth on to the first half of the screen and color on to the second
depth_image.render(depth, { 0, 0, app.width() / 2, app.height() });
color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
}
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
I want to do this:
1. Press the "c" key on the keyboard
2. Save the color image and the depth image in PNG format
I can write the code for step 2, but I don't know how to trigger the action when I press "c".
I guess I have to use this code from example.hpp:
GLFWwindow * win = glfwCreateWindow(tile_w*cols, tile_h*rows, ss.str().c_str(), 0, 0);
glfwSetWindowUserPointer(win, &dev);
glfwSetKeyCallback(win, [](GLFWwindow * win, int key, int scancode, int action, int mods)
{
auto dev = reinterpret_cast<rs::device *>(glfwGetWindowUserPointer(win));
if (action != GLFW_RELEASE) switch (key)
{
case GLFW_KEY_R: color_rectification_enabled = !color_rectification_enabled; break;
case GLFW_KEY_C: align_color_to_depth = !align_color_to_depth; break;
case GLFW_KEY_D: align_depth_to_color = !align_depth_to_color; break;
case GLFW_KEY_E:
if (dev->supports_option(rs::option::r200_emitter_enabled))
{
int value = !dev->get_option(rs::option::r200_emitter_enabled);
std::cout << "Setting emitter to " << value << std::endl;
dev->set_option(rs::option::r200_emitter_enabled, value);
}
break;
case GLFW_KEY_A:
if (dev->supports_option(rs::option::r200_lr_auto_exposure_enabled))
{
int value = !dev->get_option(rs::option::r200_lr_auto_exposure_enabled);
std::cout << "Setting auto exposure to " << value << std::endl;
dev->set_option(rs::option::r200_lr_auto_exposure_enabled, value);
}
break;
}
});
This code is from librealsense version 1.x. I would like to port it to librealsense 2.0, but I do not know what to do.
How should I change this code?
Thanks for reading!
Useful samples to get you on your way with RealSense SDK 2.0 and OpenCV are available in the repo under /wrappers/opencv.
Keep in mind that the devices supported by SDK 2.0 are:
Intel® RealSense™ Camera D400-Series
Intel® RealSense™ Developer Kit SR300
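For the key handling itself, here is a minimal sketch built on a few assumptions: the streams are explicitly configured as 1280x720 (RGB8 color, Z16 depth) through rs2::config, the window helper from example.hpp converts to its underlying GLFWwindow* (the stock helper provides such a conversion; otherwise register your own glfwSetKeyCallback), and stb_image_write, which the question already includes, writes the PNG files:
// Sketch: request 1280x720 streams explicitly instead of the default profile.
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
pipe.start(cfg);

// Inside the while (app) loop, after depth and color have been retrieved:
// poll the 'c' key and dump both frames as PNG. Define
// STB_IMAGE_WRITE_IMPLEMENTATION in exactly one .cpp file before including
// stb_image_write.h.
if (glfwGetKey(app, GLFW_KEY_C) == GLFW_PRESS)
{
    auto save_png = [](const char* name, const rs2::video_frame& f)
    {
        // The RGB8 color frame and the colorized depth frame are 3 bytes per pixel.
        stbi_write_png(name, f.get_width(), f.get_height(), 3,
                       f.get_data(), f.get_stride_in_bytes());
    };
    save_png("color.png", color.as<rs2::video_frame>());
    save_png("depth.png", depth.as<rs2::video_frame>());
}
One caveat: glfwGetKey reports the key as pressed on every frame while it is held down, so in practice you may want to remember the previous key state and save only on the transition from released to pressed.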

Reading and processing images (opencv/c++)

My project recognizes faces in images. The images keep the same file name and change occasionally,
so I want to load and process the images every time they change.
How can I modify this:
while (image==NULL) {
image = cvLoadImage("ayman.jpg", 1);
}
cout << endl << "My image was finally loaded!";
Sorry for my English.
OK, let's assume that your images are different at the moment they change (randomly). How do we detect that it's another image, different from the previous one?
We are going to extract three features of your image: the mean of the RED CHANNEL, the GREEN CHANNEL and the BLUE CHANNEL (it's a vector). If the image is the same, the three means are the same, but if they differ, the image has been changed.
So suppose we are in an infinite loop (your while):
while(true) {
// we are going to say here if the image has changed or not
}
Are we OK? So this is the code:
Mat image = imread("ayman.jpg", IMREAD_COLOR);
Scalar meanOfImage = mean(image);
while (true) {
    Scalar meanAtThisMoment = mean(image);
    // These are the three features (really one Scalar holding three values).
    // We compare the three components separately to be clearer.
    if (meanAtThisMoment[0] == meanOfImage[0]
        &&
        meanAtThisMoment[1] == meanOfImage[1]
        &&
        meanAtThisMoment[2] == meanOfImage[2]) {
        cout << endl << "The image hasn't been changed yet.";
        image = imread("ayman.jpg", IMREAD_COLOR); // added!! forgot this, sorry
    } else {
        cout << endl << "The image has been changed!!!";
        // Now we set meanOfImage (the reference mean) to the mean of the new
        // image so that future images are compared against it.
        meanOfImage = meanAtThisMoment;
    }
}
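A different way to detect the change, offered only as a sketch and not part of the answer above: watch the file's modification time with std::filesystem (C++17) and reload only when it changes. Two identical-looking means cannot fool this check, and it avoids re-reading the file on every loop iteration. It assumes "ayman.jpg" already exists when the program starts.
// Sketch (C++17): reload "ayman.jpg" only when its modification time changes.
#include <chrono>
#include <filesystem>
#include <iostream>
#include <thread>
#include <opencv2/opencv.hpp>

namespace fs = std::filesystem;

int main()
{
    const fs::path path = "ayman.jpg";
    fs::file_time_type lastWrite{};
    while (true)
    {
        auto current = fs::last_write_time(path);
        if (current != lastWrite)
        {
            lastWrite = current;
            cv::Mat image = cv::imread(path.string(), cv::IMREAD_COLOR);
            if (!image.empty())
            {
                std::cout << "The image has been changed, processing..." << std::endl;
                // run the face recognition on `image` here
            }
        }
        // Poll a few times per second instead of spinning at full speed.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}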