Recently, since the M83 and M84 releases of lib-WebRTC, I have been facing a strange error when I run my host program in the Windows x64 Debug configuration (RTC_DCHECK_IS_ON) in Visual Studio:
When a video channel is being created in the WebRTC library, I get an exception here:
_Deque_const_iterator& operator++() {
#if _ITERATOR_DEBUG_LEVEL != 0
const auto _Mycont = static_cast<const _Mydeque*>(this->_Getcont());
_STL_VERIFY(_Mycont, "cannot increment value-initialized deque iterator");
here----> _STL_VERIFY(this->_Myoff < _Mycont->_Myoff + _Mycont->_Mysize, "cannot increment deque iterator past end");
#endif // _ITERATOR_DEBUG_LEVEL != 0
++_Myoff;
return *this;
}
This is because _Myoff is NULL ...
The operator++ is called from rtc_base/thread.cc in the WebRTC library, here:
void ThreadManager::RegisterSendAndCheckForCycles(Thread* source,
Thread* target) {
CritScope cs(&crit_);
std::deque<Thread*> all_targets({target});
// We check the pre-existing who-sends-to-who graph for any path from target
// to source. This loop is guaranteed to terminate because per the send graph
// invariant, there are no cycles in the graph.
for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
const auto& targets = send_graph_[*it];
all_targets.insert(all_targets.end(), targets.begin(), targets.end());
}
...
The crash comes from the ++it on a std::deque<rtc::Thread*> iterator.
I don't really see what the problem might be, but the iterator is clearly in a bad state.
Perhaps I have some kind of configuration mismatch between the compiled webrtc.lib and my project, but there wasn't any problem with WebRTC M79 or M81, for example.
And since WebRTC is a really huge project, I don't know where to start investigating.
Any ideas?
Please note that I also reported this bug to the WebRTC team: https://bugs.chromium.org/p/webrtc/issues/detail?id=11746
The problem is in the RegisterSendAndCheckForCycles function in rtc_base/thread.cc:
for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
const auto& targets = send_graph_[*it];
all_targets.insert(all_targets.end(), targets.begin(), targets.end());
}
When all_targets.insert is called, the "it" iterator is invalidated, because inserting into a std::deque invalidates all of its iterators (the underlying storage can be reorganized), so the next ++it triggers the assertion failure. Working with indexes solves the problem.
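To illustrate the invalidation itself, here is a minimal standalone sketch (my own example, not WebRTC code); with _ITERATOR_DEBUG_LEVEL enabled, the Debug STL catches the stale iterator exactly as in the trace above:
#include <deque>

int main() {
    std::deque<int> all_targets{1};
    auto it = all_targets.begin();
    // Inserting into a std::deque invalidates every existing iterator into it,
    // even when the insertion happens at the end.
    all_targets.insert(all_targets.end(), {2, 3});
    ++it;  // stale iterator: asserts under _ITERATOR_DEBUG_LEVEL, UB otherwise
    return 0;
}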
Here's the fixed version:
void ThreadManager::RegisterSendAndCheckForCycles(Thread* source, Thread* target) {
CritScope cs(&crit_);
std::deque<Thread*> all_targets({target});
// We check the pre-existing who-sends-to-who graph for any path from target
// to source. This loop is guaranteed to terminate because per the send graph
// invariant, there are no cycles in the graph.
for (size_t i = 0; i < all_targets.size(); i++) {
const auto& targets = send_graph_[all_targets[i]];
all_targets.insert(all_targets.end(), targets.begin(), targets.end());
}
RTC_CHECK_EQ(absl::c_count(all_targets, source), 0)
<< " send loop between " << source->name() << " and " << target->name();
// We may now insert source -> target without creating a cycle, since there
// was no path from target to source per the prior CHECK.
send_graph_[source].insert(target);
}
I will propose a patch directly to the WebRTC team in a few days.
This is basically a continuation of the accepted answer...
If you follow Microsoft's tutorial (https://learn.microsoft.com/en-us/winrtc/getting-started), they have you use the M84 release (still, as of posting this in May '22). Then, they tell you to apply a pile of git patches they put together. To run those, they have you first define an environment variable called WEBRTCM84_ROOT, which is the absolute path to the webrtc\src directory. If you didn't do all that, simply execute this in a Command Prompt window (filling in YOUR actual path):
set WEBRTCM84_ROOT=C:\abs\path\to\webrtc\src
Now, create a git patch file somewhere, containing the following content. I'll just assume you put it on a path directly adjacent to the webrtc repo.
ThreadManager.patch
diff --git a/rtc_base/thread.cc b/rtc_base/thread.cc
index 0fb2e813e0..a8cb022fa9 100644
--- a/rtc_base/thread.cc
+++ b/rtc_base/thread.cc
@@ -168,8 +168,8 @@ void ThreadManager::RegisterSendAndCheckForCycles(Thread* source,
// We check the pre-existing who-sends-to-who graph for any path from target
// to source. This loop is guaranteed to terminate because per the send graph
// invariant, there are no cycles in the graph.
- for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
- const auto& targets = send_graph_[*it];
+ for (size_t i = 0; i < all_targets.size(); i++) {
+ const auto& targets = send_graph_[all_targets[i]];
all_targets.insert(all_targets.end(), targets.begin(), targets.end());
}
RTC_CHECK_EQ(absl::c_count(all_targets, source), 0)
Then, apply it, like so:
pushd "%WEBRTCM84_ROOT%"
git apply "..\..\ThreadManager.patch"
git commit -a -m "Applied ThreadManager patch."
popd
Note: I'm still experiencing some other bad behaviors in debug mode that aren't present in release, but SamT's solution got me beyond this particular issue.
I am trying to fix this issue:
https://github.com/gitahead/gitahead/issues/380
The problem is that the tree used in the model does not contain any untracked files, and therefore the view has nothing to show. When I stage one file, it is shown.
Is there a way to also include the untracked files in the tree?
I created a small test application to narrow down the problem. When one file is staged, the count is non-zero; otherwise it is always zero.
Test setup
new git repository (TestRepository) with the following untracked files:
testfile.txt
testfolder/testfile2.txt
#include <git2.h>
#include <stdio.h>
int main() {
git_libgit2_init();
git_repository *repo = NULL;
int error = git_repository_open(&repo, "/TestRepository");
if (error < 0) {
const git_error *e = git_error_last();
printf("Error %d/%d: %s\n", error, e->klass, e->message);
exit(error);
}
git_tree *tree = nullptr;
git_index* idx = nullptr;
git_repository_index(&idx, repo);
git_oid id;
error = git_index_write_tree(&id, idx);
if (error < 0) {
const git_error *e = git_error_last();
printf("Error %d/%d: %s\n", error, e->klass, e->message);
exit(error);
}
git_tree_lookup(&tree, repo, &id);
size_t count = git_tree_entrycount(tree);
printf("%zu", count);
git_repository_free(repo);
printf("SUCCESS");
return 0;
}
If I understood correctly, what you're seeing is normal: as the file is untracked/new, the index has no knowledge of it, so if you ask the index, it has no "staged" changes to compare with, hence no diff.
If you want a diff for a yet-to-be tracked file, you'll have to provide it another way, usually by asking git_diff to do the work of comparing the worktree version with /dev/null, the empty blob, etc.
Since you're after a libgit2 solution, the way I'm trying to do that in GitX is via the git_status_list_new API, which gives a somewhat filesystem-independent way of generating both viewable diffs (staged & unstaged) on-the-fly, using git_patch_from_blobs/git_patch_from_blobs_and_buffer. In retrospect, maybe that should live in the library as git_status_entry_generate_patch or something…
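For reference, here is a minimal sketch (reusing the /TestRepository path from the test setup above) of how git_status_list_new can be asked to report untracked files, which the index-only approach in the test program cannot see; note that untracked entries only appear if GIT_STATUS_OPT_INCLUDE_UNTRACKED is set:
#include <git2.h>
#include <stdio.h>

int main() {
    git_libgit2_init();

    git_repository *repo = nullptr;
    if (git_repository_open(&repo, "/TestRepository") < 0) {
        printf("Error: %s\n", git_error_last()->message);
        return 1;
    }

    // Untracked files only show up if we ask for them explicitly.
    git_status_options opts = GIT_STATUS_OPTIONS_INIT;
    opts.show  = GIT_STATUS_SHOW_INDEX_AND_WORKDIR;
    opts.flags = GIT_STATUS_OPT_INCLUDE_UNTRACKED |
                 GIT_STATUS_OPT_RECURSE_UNTRACKED_DIRS;

    git_status_list *statuses = nullptr;
    if (git_status_list_new(&statuses, repo, &opts) < 0) {
        printf("Error: %s\n", git_error_last()->message);
        return 1;
    }

    size_t n = git_status_list_entrycount(statuses);
    for (size_t i = 0; i < n; i++) {
        const git_status_entry *entry = git_status_byindex(statuses, i);
        // Entries flagged WT_NEW are files the index knows nothing about.
        if (entry->status & GIT_STATUS_WT_NEW)
            printf("untracked: %s\n", entry->index_to_workdir->new_file.path);
    }

    git_status_list_free(statuses);
    git_repository_free(repo);
    git_libgit2_shutdown();
    return 0;
}
With the test setup above, this should list both testfile.txt and testfolder/testfile2.txt.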
I am trying to serve the following git repo in OpenCV: https://github.com/una-dinosauria/3d-pose-baseline, and the checkpoint data can be found at the following link: https://drive.google.com/file/d/0BxWzojlLp259MF9qSFpiVjl0cU0/view
I have already constructed a frozen graph which I can serve in python and was generated using the following script:
import os
import tensorflow as tf

meta_path = 'checkpoint-4874200.meta' # Your .meta file
output_node_names = ['linear_model/add_1'] # Output nodes
export_dir=os.path.join('export_dir')
graph=tf.Graph()
with tf.Session(graph=graph) as sess:
    # Restore the graph
    loader = tf.train.import_meta_graph(meta_path)
    loader.restore(sess, 'checkpoint-4874200')
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.SERVING],
                                         strip_default_attrs=True)
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)
    # Save the frozen graph
    with open('C:\\Users\\FrozenGraph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
Then I optimized the graph by running:
from tensorflow.python.tools import optimize_for_inference_lib

optimized_graph_def = optimize_for_inference_lib.optimize_for_inference(
    frozen_graph_def,
    ['inputs/enc_in'],
    ['linear_model/add_1'],
    tf.float32.as_datatype_enum)
g = tf.gfile.FastGFile('optimized_inference_graph.pb', 'wb')
g.write(optimized_graph_def.SerializeToString())
and the optimized frozen graph can be found at: https://github.com/alecda573/frozen_graph/blob/master/optimized_inference_graph.pb
When I try to run the following in OpenCV, I get this runtime error:
OpenCV(4.3.0) Error: Unspecified error (More than one input is Const op) in cv::dnn::dnn4_v20200310::`anonymous-namespace'::TFImporter::getConstBlob, file C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\tensorflow\tf_importer.cpp, line 570
Steps to reproduce
To reproduce the problem, you just need to download the frozen graph from the above link (or create it yourself from the checkpoint data) and then call the following in OpenCV, with the headers below:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/dnn.hpp"
std::string pbFilePath = "C:/Users/optimized_inferene_graph.pb";
//Create 3d-pose-baseline model
cv::dnn::Net inputNet;
inputNet = cv::dnn::readNetFromTensorflow(pbFilePath);
Would love to know if anyone has any thoughts on how to address this error.
You can see the frozen graph and optimize graph I generated with tensorboard from the attached photos.
I have a feeling the error is arising from the training flag inputs but I am not certain, and I do not want to go trying to edit the graph if that is not the problem.
I am attaching the function in opencv that is causing the issue:
const tensorflow::TensorProto& TFImporter::getConstBlob(const tensorflow::NodeDef &layer, std::map<String, int> const_layers,
int input_blob_index, int* actual_inp_blob_idx) {
if (input_blob_index == -1) {
for(int i = 0; i < layer.input_size(); i++) {
Pin input = parsePin(layer.input(i));
if (const_layers.find(input.name) != const_layers.end()) {
if (input_blob_index != -1)
CV_Error(Error::StsError, "More than one input is Const op");
input_blob_index = i;
}
}
}
if (input_blob_index == -1)
CV_Error(Error::StsError, "Const input blob for weights not found");
Pin kernel_inp = parsePin(layer.input(input_blob_index));
if (const_layers.find(kernel_inp.name) == const_layers.end())
CV_Error(Error::StsError, "Input [" + layer.input(input_blob_index) +
"] for node [" + layer.name() + "] not found");
if (kernel_inp.blobIndex != 0)
CV_Error(Error::StsError, "Unsupported kernel input");
if(actual_inp_blob_idx) {
*actual_inp_blob_idx = input_blob_index;
}
int nodeIdx = const_layers.at(kernel_inp.name);
if (nodeIdx < netBin.node_size() && netBin.node(nodeIdx).name() == kernel_inp.name)
{
return netBin.node(nodeIdx).attr().at("value").tensor();
}
else
{
CV_Assert_N(nodeIdx < netTxt.node_size(),
netTxt.node(nodeIdx).name() == kernel_inp.name);
return netTxt.node(nodeIdx).attr().at("value").tensor();
}
}
As you pointed out, the error originates in getConstBlob (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L570). getConstBlob is called several times in populateNet (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L706), which is called in all overloaded definitions of readNetFromTensorflow (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L2278). Those may be starting points for where to place breakpoints if you want to step through with a debugger.
The other thing I noticed is that the overload of readNetFromTensorflow I believe you're using (the one taking a std::string: https://docs.opencv.org/master/d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d) takes two arguments - the model path (model) and a configuration (config), which is optional and defaults to an empty string. In the unit tests, it looks like there are both cases - with and without a configuration provided (https://github.com/opencv/opencv/blob/master/modules/dnn/test/test_tf_importer.cpp). I'm not sure if that would have an impact.
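For completeness, a tiny hedged sketch of both call shapes (the .pbtxt path below is just a placeholder; you would only pass a config if you actually have a text graph definition):
// Model only, as in the question:
cv::dnn::Net netA = cv::dnn::readNetFromTensorflow(
    "C:/Users/optimized_inference_graph.pb");

// Model plus the optional text graph "config" (placeholder file name):
cv::dnn::Net netB = cv::dnn::readNetFromTensorflow(
    "C:/Users/optimized_inference_graph.pb",
    "C:/Users/graph_config.pbtxt");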
Lastly, in the script you provided to replicate the results, I believe the model file name is misspelled - it says optimized_inferene_graph.pb, but the file you point to in the github repo is spelled optimized_inference_graph.pb.
Just a few suggestions, I hope this may help!
I am looking for tips on how to debug TensorFlow graph execution written in C++. In particular, calls to Session::Run produce the error "specified in either feed_devices or fetch_devices was not found in the Graph", and I am looking for ways to debug my graph.
// data is a std::vector
std::vector<tf::string> names = {"feature1", "feature2", "feature3", "feature4"};
std::vector<std::pair<tf::string, tf::Tensor>> input_tensor;
std::vector<tf::Tensor> output_tensors;
// create the Run input tensor
for (size_t i = 0; i < data.size(); i++)
{
tf::Tensor tensorValue(tf::DT_FLOAT, tf::TensorShape({1,1}));
tensorValue.flat<float>().data()[0] = data[i];
std::pair <tf::string, tf::Tensor> pair = std::make_pair (names[i],tensorValue);
input_tensor.push_back(pair);
}
//execute the graph
check_status(ReadBinaryProto(tf::Env::Default(),protobuf_model_filepath, &graph));
check_status(tf::NewSession(tf::SessionOptions(), &session));
check_status(session->Create(graph));
check_status(session->Run(input_tensor, {"results"}, {}, &output_tensors));
session->Run fails, and status.ToString() returns "Invalid argument: Tensor xyz:0, specified in either feed_devices or fetch_devices was not found in the Graph"
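One quick way to narrow this down (a sketch only; it assumes `tf` is an alias for the tensorflow namespace, as in the snippet above, and that `graph` is the GraphDef filled in by ReadBinaryProto) is to dump every node name in the loaded graph and check that each feed name ("feature1", ...) and the fetch name ("results") actually exists there:
#include <iostream>
#include "tensorflow/core/framework/graph.pb.h"

namespace tf = tensorflow;  // matching the alias used in the question

// Print the name and op of every node so the feed/fetch names passed to
// Session::Run can be checked against what the graph really contains.
void DumpNodeNames(const tf::GraphDef& graph) {
    for (int i = 0; i < graph.node_size(); ++i) {
        std::cout << graph.node(i).name() << " (" << graph.node(i).op() << ")\n";
    }
}
If "results" (or any of the feature names) does not appear in that list, the names simply don't match what was exported into the protobuf, which is what the error message is complaining about.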
I have a section of code where the debugger, when stepping in, starts stopping on lines containing only { braces, suddenly jumps back to a blank line, and is apparently doing something (variables change), but the reported positions are wrong (there is some kind of weird backwards offset, skipping blank lines) and obviously I cannot see the contents of any variable.
Some facts:
I'm compiling on DEBUG
The code that fails is inside other code that executes perfectly beforehand.
The code does not work properly, but I double-checked it and it should. It is just 10 lines of code, exactly the same as the lines before, only with different variable names.
The debugger stays confused there, then also in the calling function, and only returns to a normal state in the third parent function.
This code uses Qt 4.7 and QDomDocument functionality that works perfectly in other parts of the code. I added it (QtXml) to the precompiled headers.
I tried these, with the same errors:
Cleaning the solution by hand or from Visual Studio.
Cleaning all related Qt files (moc): removing them, recompiling, adding them again.
Moving the function to another part of the class.
Moving that piece of code to another part of the function.
Deleting the file, removing it from the folder, compiling, adding it again and including the MOC.
Checking other threads; it is running on the right thread.
Commenting out the code makes it work perfectly.
Here's the cursed code:
(...loaded file, check if worked)
// Assign file to dom document
QDomDocument doc("XML");
if (!doc.setContent(file)) {
file->close();
return;
}
// Root element (object)
QDomElement root = doc.documentElement();
QDomElement elt;
QDomElement elt2;
QDomElement elt3;
// NAME
elt = root.firstChildElement("name"); //-- works and debugs ok
if (!elt.isNull())
obj->setNameInfo(elt.text());
// TYPE
elt = root.firstChildElement("type"); //-- works and debugs ok
if (!elt.isNull())
obj->setTypeInfo(elt.text());
// REF NUMBER
elt = root.firstChildElement("ref"); //-- works and debugs ok
if (!elt.isNull())
obj->setRefNumberInfo( elt.text() );
// COLLECTION <collection><english>Text</english>...
elt = root.firstChildElement("collection"); //-- works and debugs ok
if (!elt.isNull())
{
elt2 = elt.firstChildElement("english");
if (!elt2.isNull())
obj->setCollectionInfo( elt2.text() );
}
// BRAND <mainBrand><id>id</id><web>url</web></brand>
elt = root.firstChildElement("mainBrand"); //-- works and debugs ok
if (!elt.isNull())
{
elt2 = elt.firstChildElement("id");
if (!elt2.isNull())
obj->setMainBrandIdInfo(elt2.text());
elt2 = elt.firstChildElement("web");
if (!elt2.isNull())
obj->setMainBrandUrlInfo(elt2.text());
}
// BRAND LIST <brands><brand><id>2</id><url>google</url></brand>...</brands>
elt = root.firstChildElement("brands");
{
QDomNodeList brands = elt.childNodes(); // AFTER THIS LINE, STARTS GOING WEIRD
if ( ! brands.isEmpty() )
{
elt2 = brands.at(0).toElement();
for ( ; !elt2.isNull(); elt2 = elt2.nextSiblingElement() )
{
QString id= "";
elt3 = elt2.firstChildElement("id");
if (!elt3.isNull())
id = elt3.text();
QString url= "";
elt3 = elt2.firstChildElement("url");
if (!elt3.isNull())
url = elt3.text();
obj->addBrandInfo(id, url);
}
}
}
// DESCRIPTION // THIS IS EXECUTED PERFECTLY BUT DEBUGGER IS STILL JUMPING AROUND
elt = root.firstChildElement("description");
if (!elt.isNull())
{
elt2 = elt.firstChildElement("english");
if (!elt2.isNull())
obj->setDescriptionInfo( elt2.text() );
}
... MORE CODE HERE. UNTIL THE END THE DEBUGGER WORKS WITH SOME WEIRD OFFSET...
I discovered the answer after long hours of trial and error:
Reading the disassembly of the code, we found that comment lines were generating code, lines were not in the correct place, and so on.
The problem comes from the encoding of the line breaks: the file was not using Windows line endings. We had set it that way because we also work with Mac configurations and had been getting error messages about line-break style; we chose an encoding that hadn't failed since then, but now this error appeared.
The solution is to Save As... the file, open the additional save options, and change the encoding to Windows (or whatever encoding you want to use).
EDIT: I'm using std::tr2::sys, not Boost's filesystem.
I'm working on a simple app that scans directories on the computer and returns statistics about those files. There are two ways that scan can be called: recursively and non-recursively. A path is passed into the scan method along with a flag for whether or not the scan should be done recursively. Given the path folder, the two implementations look something like this:
directory_iterator start( folder ), end;
for( ; start != end; ++start ) {
//Some code in here
// get the path like this:
auto path = start->path();
}
or
recursive_directory_iterator start( folder ), end;
for( ; start != end; ++start ) {
//Some code in here
// get the path like this:
auto path = start->path();
}
The problem I'm encountering is that in the block that uses the regular directory_iterator, when I try to grab the path it returns only the file name, i.e. "myTextFile.txt", whereas getting it from the recursive_directory_iterator returns the full path to the file on the system. In each case the value of folder is the same (a full path as well). Is there a reason the front part of the path is getting chopped off, or is there some way I can get the full file path from the abbreviated one that I'm getting back from directory_iterator?
For the full path you need to use start->path().directory_string():
directory_iterator end;
for (directory_iterator start(p);
start != end;
++start)
{
if (is_directory(start->status()))
{
//A directory
}
if (is_regular_file(start->status()))
{
std::cout << start->path().directory_string()<< "\n";
}
}
Play around with the following based on your needs; refer to the Boost docs for more details, though the names largely speak for themselves.
start->path().branch_path();
start->path().directory_string();
start->path().file_string();
start->path().root_path();
start->path().parent_path();
start->path().string();
start->path().basename();
start->path().filename();
start->path().root_name();
start->path().relative_path();
Use the complete function:
auto full_path = std::tr2::sys::complete(path, folder);
Another simple way to do it would be to simply append the filename to the path (tested on MSVC 2012 with <filesystem> in the std::tr2::sys namespace):
for (; start != end; ++start) {
auto path = start->path();
auto full_path = folder / path;
}