I am using GStreamer version 1.14.5 with its Python bindings. I have implemented a new plugin that draws geometric shapes (such as rectangles) on video frames, using PyCairo for the drawing.
To implement this plugin I override the do_transform_ip method of the parent class GstBase.BaseTransform:
class MyPlugin(GstBase.BaseTransform):
    def do_transform_ip(self, buffer: Gst.Buffer) -> Gst.FlowReturn:
        (result, mapinfo) = buffer.map(Gst.MapFlags.READ | Gst.MapFlags.WRITE)
        assert result
        try:
            # use mapinfo.data here:
            # modify the buffer using a graphics library such as PyCairo
            pass
        finally:
            buffer.unmap(mapinfo)
        return Gst.FlowReturn.OK
However, I run into this error:
(gst-launch-1.0:638): GStreamer-CRITICAL **: 14:37:41.515: write map requested on non-writable buffer
I am not sure how to fix this error.
I am new to OMNeT++ and I'm trying to implement a drone network in which the drones communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++ using the OsgVisualizer in the inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then, in the omnetpp.ini file, the map file to be used is defined, so map loading and drone mobility work fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D, even though they are properly visualized in 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualization simple modules, and I have also defined the drones as @networkNode and @networkConnectionNode. But the message transmissions are still not visualized.
Any help or hint regarding this would be highly appreciated.
The code used for the visualizations in the simple module drone is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv for the (default) 2D visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all the code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth omnet example, where communication between cows is visualized by inflating bubbles.
So, you have to implement your own visualization effect. There is something in INET which is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication has occurred on the data link or physical layer. This is not the same as message passing, but close enough. Or you can implement your own animation using these visualizers as a sample.
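For example, assuming your network has an IntegratedVisualizer submodule named visualizer, an omnetpp.ini sketch to enable these OSG visualizers could look like this (the submodule and parameter names follow INET's visualizer modules; verify them against your INET version):

*.visualizer.osgVisualizer.dataLinkVisualizer.displayLinks = true
*.visualizer.osgVisualizer.physicalLinkVisualizer.displayLinks = true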
I have trained an SSD ResNet V1 model using the Tensorflow 2 Object Detection API. Then I wanted to use this model with OpenCV in C++ code.
First of all, after training I had three files:
checkpoint
ckpt-101.data-00000-of-00001
ckpt-101.index
Note that I don't have .meta file because it wasn't generated.
Then I created a SavedModel from these files using the exporter_main_v2.py script from the Object Detection API:
python3 exporter_main_v2.py --input_type=image_tensor --pipeline_config_path /path/to/pipeline.config --trained_checkpoint_dir=/path/to/checkouts --output_directory=/path/to/output/directory
After running this script I got saved_model.pb.
I tried to use this file in OpenCV in the following way:
cv::dnn::Net net = cv::dnn::readNetFromTensorflow("/path/to/saved_model.pb");
But I got the following error:
OpenCV(4.2.0) /home/andrew/opencv/modules/dnn/src/tensorflow/tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: /home/andrew/Documents/tensorflow_detection/workspace/pb_model/saved_model/saved_model.pb in function 'ReadTFNetParamsFromBinaryFileOrDie'
Then I tried to freeze saved_model.pb. But, as I understand it, this is impossible in TF 2.x because TF 2.x doesn't support Sessions and Graphs. I also don't have a .pbtxt file.
My question: is it possible to use models trained with the TF2 Object Detection API in OpenCV C++?
I would be grateful if you could help me solve this problem or give any useful advice.
It is possible to use Tensorflow 2 models with the Object Detection API and OpenCV, as described in the dedicated wiki: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
So far there are more models compatible with Tensorflow 1, but it should be okay for an SSD.
To freeze your graph, you have to do the following:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

loaded = tf.saved_model.load('my_model')
infer = loaded.signatures['serving_default']

f = tf.function(infer).get_concrete_function(
    input_1=tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32))
f2 = convert_variables_to_constants_v2(f)
graph_def = f2.graph.as_graph_def()

# Export the frozen graph
with tf.io.gfile.GFile('frozen_graph.pb', 'wb') as out_file:
    out_file.write(graph_def.SerializeToString())
As said in this comment in the OpenCV GitHub issues: https://github.com/opencv/opencv/issues/16582#issuecomment-603819498
You will then probably need to use the tf_text_graph_ssd.py script provided in the OpenCV wiki to generate the text graph representation of the frozen model, and that'd be it!
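For illustration, assuming the pipeline.config used for training and the frozen_graph.pb produced above, that step would look roughly like this:

python3 tf_text_graph_ssd.py --input frozen_graph.pb --config pipeline.config --output frozen_graph.pbtxt

The resulting .pbtxt can then be passed as the second argument to cv::dnn::readNetFromTensorflow, together with the frozen .pb file.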
Tensorflow 2 no longer supports sessions, so you can't easily export your model as a frozen graph. I found this, which solved the issues I had with using Tensorflow Object Detection models with OpenCV. Hopefully this will help.
I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image I use ITK's bridge to OpenCV, and I have checked that the data is properly sent to ITK.
I am even able to display it thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is an OpenCV Mat of type CV_8UC1.
A fundamental concept of ITK is its pipeline architecture. You must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline but you have not executed it. You must call filter->Update().
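Using the types from the question, a minimal sketch of the corrected sequence looks like this (note that Update() can throw an itk::ExceptionObject on failure):

filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update(); // executes the pipeline; without this, GetThresholds() returns an empty vector
filter_type::ThresholdVectorType thresholds = filter->GetThresholds();
std::cout << "CHECK: " << thresholds.size() << std::endl;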
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
I want to get the shape/mesh object under the active transform node in Maya.
If I select an object (e.g. a poly sphere) in Maya, calling the getActiveSelectionList method returns a transform node, not a shape/mesh one.
I'm going crazy reading the API classes (MDagPath, MSelectionList, MFnDependencyNode) and methods that should achieve this, but I can't find a way to do it.
So, I want to get the info (vertex coordinates) of a selected/active poly object in the Maya GUI through the C++ API.
You want to get an MDagPath leading to the transform and then use .extendToShape() or .extendToShapeDirectlyBelow() to get the shape node. Then you need to get an MFnMesh from the shape and use that to get to the vertices.
Here's the Python version, which is all I have handy. Apart from syntax, it will work the same way in C++:
from maya.OpenMaya import MSelectionList, MGlobal, MDagPath, MFnMesh

# make a selection list object and populate it
sel_list = MSelectionList()
MGlobal.getActiveSelectionList(sel_list)

# make a dag path, fill it using the first selected item
d = MDagPath()
sel_list.getDagPath(0, d)
print(d.fullPathName())
# '|pCube1' <- this is the transform

d.extendToShape()
print(d.fullPathName())
# '|pCube1|pCubeShape1' <- now it points at the shape

# get the dependency node as an MFnMesh:
mesh = MFnMesh(d.node())

# now you can call MFnMesh methods to work on the object:
print(mesh.numVertices())
# 8
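For completeness, here is an untested C++ sketch of the same steps; it uses only standard Maya API calls, and getPoints() is one way to read the vertex coordinates asked about in the question:

#include <maya/MGlobal.h>
#include <maya/MSelectionList.h>
#include <maya/MDagPath.h>
#include <maya/MFnMesh.h>
#include <maya/MPointArray.h>

MSelectionList selList;
MGlobal::getActiveSelectionList(selList);

MDagPath dagPath;
selList.getDagPath(0, dagPath);   // the transform, e.g. |pCube1
dagPath.extendToShape();          // now it points at the shape, e.g. |pCube1|pCubeShape1

MFnMesh meshFn(dagPath);          // function set attached to the shape
MPointArray points;
meshFn.getPoints(points, MSpace::kWorld);  // vertex coordinates in world space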
Is there a simple way to save a KNN classifier in OpenCV using the C++ API?
I have tried to save a KNN classifier as described here, after wrapping the CvKNearest class inside another class.
It saves to disk successfully, but when I read it back, running the predict method gives me a segmentation fault (core dumped) error.
My wrapper class is as follows:
class KNNWrapper
{
    CvKNearest knn;

    bool train(Mat& traindata, Mat& trainclasses)
    {
        // ...
    }

    void test(Mat& testdata, Mat& testclasses)
    {
        // ...
    }
};
I've heard that the Boost Serialization library is more robust and safe. Can anyone point me to proper resources where I can get this done with the Boost library?
@tisch is totally right and I'd like to correct myself: CvKNearest doesn't override the load and save functions of CvStatModel.
But since CvKNearest doesn't compute a model, there's no internal state to store. Of course, you want to store the training and test cv::Mat data you have passed. You can use the FileStorage class for this; a great description and tutorial is given at:
http://docs.opencv.org/modules/core/doc/xml_yaml_persistence.html
If you want to offer the same API as the other statistical models in OpenCV (which makes sense), I suggest subclassing CvKNearest and offering save and load functions, which respectively serialize the training/test data and deserialize it using FileStorage.
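A minimal sketch of that idea, using the question's variable names (the file name knn_data.yml is just a placeholder):

#include <opencv2/opencv.hpp>

// save the training data
cv::FileStorage out("knn_data.yml", cv::FileStorage::WRITE);
out << "traindata" << traindata << "trainclasses" << trainclasses;
out.release();

// later: load the data back and retrain, since CvKNearest keeps no serializable model state
cv::FileStorage in("knn_data.yml", cv::FileStorage::READ);
cv::Mat loadedData, loadedClasses;
in["traindata"] >> loadedData;
in["trainclasses"] >> loadedClasses;
in.release();

CvKNearest knn;
knn.train(loadedData, loadedClasses);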