Git tree show untracked files - c++

I am trying to fix this issue:
https://github.com/gitahead/gitahead/issues/380
The problem is that the tree used in the model does not contain any untracked files, and therefore the view has nothing to show. When I stage one file, it is shown.
Is there a way to also include untracked files in the tree?
I created a small test application to find the problem. When one file is staged, count is non-zero; otherwise it is always zero.
Test setup
new git repository (TestRepository) with the following untracked files:
testfile.txt
testfolder/testfile2.txt
#include <git2.h>
#include <stdio.h>
#include <stdlib.h>

int main() {
    git_libgit2_init();

    git_repository *repo = NULL;
    int error = git_repository_open(&repo, "/TestRepository");
    if (error < 0) {
        const git_error *e = git_error_last();
        printf("Error %d/%d: %s\n", error, e->klass, e->message);
        exit(error);
    }

    // Build a tree from what is currently in the index.
    git_index *idx = NULL;
    git_repository_index(&idx, repo);

    git_oid id;
    error = git_index_write_tree(&id, idx);
    if (error < 0) {
        const git_error *e = git_error_last();
        printf("Error %d/%d: %s\n", error, e->klass, e->message);
        exit(error);
    }

    git_tree *tree = NULL;
    git_tree_lookup(&tree, repo, &id);

    // Untracked files are not in the index, so they never appear in this tree.
    size_t count = git_tree_entrycount(tree);
    printf("%zu\n", count);

    git_tree_free(tree);
    git_index_free(idx);
    git_repository_free(repo);
    git_libgit2_shutdown();
    printf("SUCCESS\n");
    return 0;
}

If I understood correctly, what you're seeing is normal: as the file is untracked/new, the index has no knowledge of it, so if you ask the index, it has no "staged" changes to compare with, hence no diff.
If you want a diff for a yet-to-be-tracked file, you'll have to produce it another way, usually by asking git_diff to do the work of comparing the worktree version with /dev/null, the empty blob, etc.
Since you're after a libgit2 solution, the way I'm trying to do that in GitX is via the git_status_list_new API, which gives a somewhat filesystem-independent way of generating both viewable diffs (staged & unstaged) on the fly, using git_patch_from_blobs/git_patch_from_blob_and_buffer. In retrospect, maybe that should live in the library as git_status_entry_generate_patch or something…
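For illustration, here's a minimal sketch (reusing the repo handle from the test program above) of listing untracked files through git_status_list_new; the flags ask libgit2 to include untracked paths and recurse into untracked directories:

git_status_options opts = GIT_STATUS_OPTIONS_INIT;
opts.show  = GIT_STATUS_SHOW_INDEX_AND_WORKDIR;
opts.flags = GIT_STATUS_OPT_INCLUDE_UNTRACKED |
             GIT_STATUS_OPT_RECURSE_UNTRACKED_DIRS;

git_status_list *statuses = NULL;
if (git_status_list_new(&statuses, repo, &opts) == 0) {
    size_t n = git_status_list_entrycount(statuses);
    for (size_t i = 0; i < n; i++) {
        const git_status_entry *entry = git_status_byindex(statuses, i);
        // Untracked files show up as "new in the worktree".
        if (entry->status & GIT_STATUS_WT_NEW)
            printf("untracked: %s\n", entry->index_to_workdir->new_file.path);
    }
    git_status_list_free(statuses);
}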

Related

WebRTC std::deque iterator exception when RTC_DCHECK_IS_ON

Recently, since the M83 and M84 releases of the WebRTC library, I have been facing a strange error when I run my host program in the Windows x64 Debug configuration (RTC_DCHECK_IS_ON) in Visual Studio:
When a video channel is being created in the WebRTC library, I get an exception in
_Deque_const_iterator& operator++() {
#if _ITERATOR_DEBUG_LEVEL != 0
    const auto _Mycont = static_cast<const _Mydeque*>(this->_Getcont());
    _STL_VERIFY(_Mycont, "cannot increment value-initialized deque iterator");
here----> _STL_VERIFY(this->_Myoff < _Mycont->_Myoff + _Mycont->_Mysize, "cannot increment deque iterator past end");
#endif // _ITERATOR_DEBUG_LEVEL != 0
    ++_Myoff;
    return *this;
}
The assertion fires because _Myoff is NULL.
This operator++ is called from rtc_base/thread.cc in the WebRTC library, here:
void ThreadManager::RegisterSendAndCheckForCycles(Thread* source,
                                                  Thread* target) {
  CritScope cs(&crit_);
  std::deque<Thread*> all_targets({target});
  // We check the pre-existing who-sends-to-who graph for any path from target
  // to source. This loop is guaranteed to terminate because per the send graph
  // invariant, there are no cycles in the graph.
  for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
    const auto& targets = send_graph_[*it];
    all_targets.insert(all_targets.end(), targets.begin(), targets.end());
  }
  ...
It comes from the ++it on a std::deque<rtc::Thread*>.
I don't really see what the problem might be, but it seems that the iterator has an issue.
Perhaps I have some kind of configuration mismatch between the compiled webrtc.lib and my project, but there wasn't any problem with WebRTC M79 or M81, for example.
And, as WebRTC is really a huge project, I don't know where to start my investigation.
Any idea?
Please note that I also reported this bug to WebRTC team : https://bugs.chromium.org/p/webrtc/issues/detail?id=11746
The problem is in the function RegisterSendAndCheckForCycles in the rtc_base/thread.cc file:
for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
  const auto& targets = send_graph_[*it];
  all_targets.insert(all_targets.end(), targets.begin(), targets.end());
}
When all_targets.insert is called, "it" is invalidated, because inserting into a std::deque invalidates all of its iterators; the next ++it therefore generates an assertion failure. Working with indexes solves the problem.
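The invalidation is easy to reproduce in isolation (this is just standard container behavior, not WebRTC code; a minimal sketch):

#include <deque>

int main() {
    std::deque<int> d{1};
    auto it = d.begin();
    d.insert(d.end(), 2); // inserting at either end invalidates all iterators into d
    ++it;                 // undefined behavior; MSVC debug iterators catch it with _STL_VERIFY
}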
Here's the fixed version:
void ThreadManager::RegisterSendAndCheckForCycles(Thread* source,
                                                  Thread* target) {
  CritScope cs(&crit_);
  std::deque<Thread*> all_targets({target});
  // We check the pre-existing who-sends-to-who graph for any path from target
  // to source. This loop is guaranteed to terminate because per the send graph
  // invariant, there are no cycles in the graph.
  for (size_t i = 0; i < all_targets.size(); i++) {
    const auto& targets = send_graph_[all_targets[i]];
    all_targets.insert(all_targets.end(), targets.begin(), targets.end());
  }
  RTC_CHECK_EQ(absl::c_count(all_targets, source), 0)
      << " send loop between " << source->name() << " and " << target->name();
  // We may now insert source -> target without creating a cycle, since there
  // was no path from target to source per the prior CHECK.
  send_graph_[source].insert(target);
}
I will propose a patch directly to the WebRTC team in a few days.
This is basically a continuation of the accepted answer...
If you follow Microsoft's tutorial (https://learn.microsoft.com/en-us/winrtc/getting-started), they have you use the M84 release (still, as of posting this in May '22). Then, they tell you to apply a pile of git patches they put together. To run those, they have you first define an environment variable called WEBRTCM84_ROOT, which is the absolute path to the webrtc\src directory. If you didn't do all that, simply execute this in a Command Prompt window (filling in YOUR actual path):
set WEBRTCM84_ROOT=C:\abs\path\to\webrtc\src
Now, create a git patch file somewhere, containing the following content. I'll just assume you put it on a path directly adjacent to the webrtc repo.
ThreadManager.patch
diff --git a/rtc_base/thread.cc b/rtc_base/thread.cc
index 0fb2e813e0..a8cb022fa9 100644
--- a/rtc_base/thread.cc
+++ b/rtc_base/thread.cc
@@ -168,8 +168,8 @@ void ThreadManager::RegisterSendAndCheckForCycles(Thread* source,
   // We check the pre-existing who-sends-to-who graph for any path from target
   // to source. This loop is guaranteed to terminate because per the send graph
   // invariant, there are no cycles in the graph.
-  for (auto it = all_targets.begin(); it != all_targets.end(); ++it) {
-    const auto& targets = send_graph_[*it];
+  for (size_t i = 0; i < all_targets.size(); i++) {
+    const auto& targets = send_graph_[all_targets[i]];
     all_targets.insert(all_targets.end(), targets.begin(), targets.end());
   }
   RTC_CHECK_EQ(absl::c_count(all_targets, source), 0)
Then, apply it, like so:
pushd "%WEBRTCM84_ROOT%"
git apply "..\..\ThreadManager.patch"
git commit -a -m "Applied ThreadManager patch."
popd
Note: I'm still experiencing some other bad behaviors in debug mode that aren't present in release, but SamT's solution got me beyond this particular issue.

More than one input is Const Op

I am trying to serve the following git repo in OpenCV: https://github.com/una-dinosauria/3d-pose-baseline. The checkpoint data can be found at the following link: https://drive.google.com/file/d/0BxWzojlLp259MF9qSFpiVjl0cU0/view
I have already constructed a frozen graph that I can serve in Python; it was generated using the following script:
meta_path = 'checkpoint-4874200.meta'  # Your .meta file
output_node_names = ['linear_model/add_1']  # Output nodes
export_dir = os.path.join('export_dir')
graph = tf.Graph()
with tf.Session(graph=graph) as sess:
    # Restore the graph
    loader = tf.train.import_meta_graph(meta_path)
    loader.restore(sess, 'checkpoint-4874200')
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.SERVING],
                                         strip_default_attrs=True)
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)
    # Save the frozen graph
    with open('C:\\Users\\FrozenGraph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
Then I optimized the graph by running:
optimized_graph_def = optimize_for_inference_lib.optimize_for_inference(
    frozen_graph_def,
    ['inputs/enc_in'],
    ['linear_model/add_1'],
    tf.float32.as_datatype_enum)
g = tf.gfile.FastGFile('optimized_inference_graph.pb', 'wb')
g.write(optimized_graph_def.SerializeToString())
and the optimized frozen graph can be found at: https://github.com/alecda573/frozen_graph/blob/master/optimized_inference_graph.pb
When I try to run the following in OpenCV, I get this runtime error:
OpenCV(4.3.0) Error: Unspecified error (More than one input is Const op) in cv::dnn::dnn4_v20200310::`anonymous-namespace'::TFImporter::getConstBlob, file C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\tensorflow\tf_importer.cpp, line 570
Steps to reproduce
To reproduce the problem, you just need to download the frozen graph from the above link (or create it yourself from the checkpoint data) and then call the following in OpenCV, with the headers below:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/dnn.hpp"
string pbFilePath = "C:/Users/optimized_inferene_graph.pb";
//Create 3d-pose-baseline model
cv::dnn::Net inputNet;
inputNet = cv::dnn::readNetFromTensorflow(pbFilePath);
Would love to know if anyone has any thoughts on how to address this error.
You can see the frozen graph and the optimized graph I generated with TensorBoard in the attached photos.
I have a feeling the error arises from the training-flag inputs, but I am not certain, and I do not want to start editing the graph if that is not the problem.
I am attaching the function in OpenCV that is causing the issue:
const tensorflow::TensorProto& TFImporter::getConstBlob(const tensorflow::NodeDef &layer, std::map<String, int> const_layers,
                                                        int input_blob_index, int* actual_inp_blob_idx) {
    if (input_blob_index == -1) {
        for (int i = 0; i < layer.input_size(); i++) {
            Pin input = parsePin(layer.input(i));
            if (const_layers.find(input.name) != const_layers.end()) {
                if (input_blob_index != -1)
                    CV_Error(Error::StsError, "More than one input is Const op");
                input_blob_index = i;
            }
        }
    }
    if (input_blob_index == -1)
        CV_Error(Error::StsError, "Const input blob for weights not found");
    Pin kernel_inp = parsePin(layer.input(input_blob_index));
    if (const_layers.find(kernel_inp.name) == const_layers.end())
        CV_Error(Error::StsError, "Input [" + layer.input(input_blob_index) +
                                  "] for node [" + layer.name() + "] not found");
    if (kernel_inp.blobIndex != 0)
        CV_Error(Error::StsError, "Unsupported kernel input");
    if (actual_inp_blob_idx) {
        *actual_inp_blob_idx = input_blob_index;
    }
    int nodeIdx = const_layers.at(kernel_inp.name);
    if (nodeIdx < netBin.node_size() && netBin.node(nodeIdx).name() == kernel_inp.name)
    {
        return netBin.node(nodeIdx).attr().at("value").tensor();
    }
    else
    {
        CV_Assert_N(nodeIdx < netTxt.node_size(),
                    netTxt.node(nodeIdx).name() == kernel_inp.name);
        return netTxt.node(nodeIdx).attr().at("value").tensor();
    }
}
As you pointed out, the error originates in getConstBlob (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L570). getConstBlob is called several times in populateNet (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L706), which is called in all overloaded definitions of readNetFromTensorflow (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L2278). Those may be starting points for where to place breakpoints if you want to step through with a debugger.
The other thing I noticed is that the definition of readNetFromTensorflow which I believe you're using (supplying a std::string: https://docs.opencv.org/master/d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d) takes two arguments - both the model path (model) and a configuration (config), which is optional and defaults to an empty string. In the unit tests, it looks like there are both cases - with and without a configuration provided (https://github.com/opencv/opencv/blob/master/modules/dnn/test/test_tf_importer.cpp). I'm not sure if that would have an impact.
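If it helps, a sketch of that two-argument call; the .pbtxt path here is hypothetical and only applies if you have a text graph definition for your model:

// The config argument is optional; passing only the model is also valid.
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
    "C:/Users/optimized_inference_graph.pb",     // binary model
    "C:/Users/optimized_inference_graph.pbtxt"); // hypothetical text graph (config)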
Lastly, in the script you provided to replicate the results, I believe the model file name is misspelled - it says optimized_inferene_graph.pb, but the file you point to in the github repo is spelled optimized_inference_graph.pb.
Just a few suggestions; I hope this helps!

In ArduinoJson how can one check if an error occurred when creating a JSON document?

In the ArduinoJson library, it is easy to create JSON entries as shown below.
StaticJsonDocument<512> json_doc;
String some_string = "Hello there!";
json_doc["some_string"] = some_string;
The question is: what is the best way to check whether the entry was successfully created? This would allow error handling to be implemented and errors to be found quickly as the entries being created change and grow over time.
Simply test whether the added node has a non-null value. If, after you've tried to create a node, that node has a null value, the node was not created.
Here's a simple Sketch to illustrate this test:
#include <ArduinoJson.h>

StaticJsonDocument<100> json_doc;
int nodeNumber = 0;
boolean ranOut = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (ranOut) return;
  String nodeName(nodeNumber++);
  String nodeContent = nodeName + " thing";
  json_doc[nodeName] = nodeContent;
  if (!json_doc[nodeName]) {
    ranOut = true;
    Serial.print("Ran out at ");
    Serial.println(nodeNumber);
  }
}
When I ran this Sketch on my Arduino Uno, it produced:
Ran out at 6
That is, it successfully created nodes json_doc["0"] through json_doc["5"] and ran out of space when it tried to create json_doc["6"].
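As a side note: recent ArduinoJson 6.x releases can also report this condition directly through overflowed() (assuming your version has it; it was added during the 6.x series), which returns true once the document has run out of memory:

#include <ArduinoJson.h>

StaticJsonDocument<100> json_doc;

void setup() {
  Serial.begin(9600);
  json_doc["some_string"] = "Hello there!";
  // overflowed() reports whether any allocation has failed so far
  // (an assumption about your library version - verify before relying on it).
  if (json_doc.overflowed()) {
    Serial.println("json_doc ran out of memory");
  }
}

void loop() {}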

Compare relationship of two folders

SCENARIO
PROCEDURE A gathers files from a webservice, and copies them to a root folder. Sometimes files are copied in a subfolder of the root, for example:
c:\root\file1
c:\root\file2
c:\root\filea
c:\root\<unique random name>\fileA
I suspect (but I'm not sure) that the webservice runs on a Linux system where file names are case-sensitive. The files are copied onto a Windows file system, and when uppercase/lowercase conflicts occur, the conflicting files are copied into a subfolder. Each subfolder has a unique, randomly generated name.
PROCEDURE B scans files in the root and sub-folders in order to archive them. Files correctly archived are deleted. PROCEDURE A and PROCEDURE B don't run simultaneously.
And now my task ... I have to delete the empty subfolders of the root.
FIRST SOLUTION (the easiest one)
When procedure B ends, I can scan for empty subfolders of the root and then delete them. Well ...
DWORD DeleteEmptySubFolder(LPCSTR szRootFolder)
{
    DWORD dwError = 0;
    CString sFolder(szRootFolder);
    sFolder += "*.*";
    CFileFind find_folder;
    BOOL bWorking = find_folder.FindFile(sFolder);
    while (bWorking)
    {
        bWorking = find_folder.FindNextFile();
        if (find_folder.IsDots())
            continue;
        if (find_folder.IsDirectory())
        {
            if (PathIsDirectoryEmpty(find_folder.GetFilePath()))
                if (!RemoveDirectory(find_folder.GetFilePath()))
                    dwError = GetLastError();
        }
    }
    return dwError;
}
And now here are the problems: I haven't got any control over PROCEDURE B, and I don't know when it ends. PROCEDURE B can call a user function after archiving each individual file.
SECOND SOLUTION (adequate but not too efficient)
I can still call the above function,
DWORD DeleteEmptySubFolder(LPCSTR szRootFolder)
from that user function. It's not efficient for sure (it will scan all subfolders of the root for each archived file), but it will delete only empty subfolders.
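A sketch of what that per-file hook could look like (the callback name and signature are hypothetical, since I don't control PROCEDURE B):

// Hypothetical callback invoked by PROCEDURE B after each file is archived.
// Brute force: rescan the root and prune every subfolder that is now empty.
void OnFileArchived(LPCSTR szRootFolder, LPCSTR /*szArchivedFilePath*/)
{
    DeleteEmptySubFolder(szRootFolder);
}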
THIRD SOLUTION (it should work)
When procedure B calls the user function, I know the root folder and the full path of the archived file. So I can check whether the folder of the file is a sub-folder of the root:
#define EQUAL_FOLDER      0
#define A_SUBFOLDER_OF_B  1
#define B_SUBFOLDER_OF_A  2
#define UNRELATED_FOLDER  3

int CompareFolderHiearachy(LPCSTR szFolderA, LPCSTR szFolderB)
{
    if (_stricmp(szFolderA, szFolderB))
    {
        // StrStrI - Windows function (from shlwapi.dll) which finds the first occurrence
        // of a substring within a string (the comparison is not case-sensitive).
        if (StrStrI(szFolderA, szFolderB) == szFolderA)
            return A_SUBFOLDER_OF_B;
        else if (StrStrI(szFolderB, szFolderA) == szFolderB)
            return B_SUBFOLDER_OF_A;
        else
            return UNRELATED_FOLDER;
    }
    else
        return EQUAL_FOLDER;
}
Maybe this solution could work fine in my scenario, but it can only handle cases where folder/file names are consistent. For example:
local disk:
root: C:\folder\
filename: c:\folder\subfolder\fileA
mapped disk:
root: Z:\folder\
filename: Z:\folder\subfolder\fileA
UNC:
root: \\SERVER\folder\
filename: \\SERVER\folder\subfolder\fileA
And now my too-generic and abstract question: can I check the hierarchy/relationship of two folders in the worst scenario?
\\server\folder1\folder2 (UNC)
z:\folder2 (network drive).
or even worse ....
\\MYPC\folder1\folder2
c:\folder2
Maybe I'm asking a bit of a perverse question ... but it's quite challenging and intriguing, isn't it?
Thank you very much.
I improved the third solution; it can handle several situations, but unfortunately it can't handle all possible cases.
#define ERROR_OCCURED    -1
#define EQUAL_FOLDER      0
#define A_SUBFOLDER_OF_B  1
#define B_SUBFOLDER_OF_A  2
#define UNRELATED_FOLDER  3

int CompareFolderHiearachy(LPCSTR szFolderA, LPCSTR szFolderB)
{
    char pBuffer[32767];
    DWORD dwBufferLength = 32767;
    UNIVERSAL_NAME_INFO *unameinfo = reinterpret_cast<UNIVERSAL_NAME_INFO *>(pBuffer);
    // Resolve folder A to its UNC name, if it has one; otherwise keep it as-is.
    DWORD dwRetVal = WNetGetUniversalName(szFolderA, UNIVERSAL_NAME_INFO_LEVEL, reinterpret_cast<LPVOID>(pBuffer), &dwBufferLength);
    if (dwRetVal != NO_ERROR && dwRetVal != ERROR_NOT_CONNECTED && dwRetVal != ERROR_BAD_DEVICE)
        return ERROR_OCCURED;
    CString sFolderA(unameinfo->lpUniversalName ? unameinfo->lpUniversalName : szFolderA);
    // Reset the buffer and its length (WNetGetUniversalName may modify it) before resolving folder B.
    ZeroMemory(pBuffer, sizeof(pBuffer));
    dwBufferLength = 32767;
    dwRetVal = WNetGetUniversalName(szFolderB, UNIVERSAL_NAME_INFO_LEVEL, reinterpret_cast<LPVOID>(pBuffer), &dwBufferLength);
    if (dwRetVal != NO_ERROR && dwRetVal != ERROR_NOT_CONNECTED && dwRetVal != ERROR_BAD_DEVICE)
        return ERROR_OCCURED;
    CString sFolderB(unameinfo->lpUniversalName ? unameinfo->lpUniversalName : szFolderB);
    if (_stricmp(sFolderA, sFolderB))
    {
        // StrStrI - Windows function (from shlwapi.dll) which finds the first occurrence
        // of a substring within a string (the comparison is not case-sensitive).
        if (StrStrI(sFolderA, sFolderB) == static_cast<LPCSTR>(sFolderA))
            return A_SUBFOLDER_OF_B;
        else if (StrStrI(sFolderB, sFolderA) == static_cast<LPCSTR>(sFolderB))
            return B_SUBFOLDER_OF_A;
        else
            return UNRELATED_FOLDER;
    }
    else
        return EQUAL_FOLDER;
}
It can't solve the following:
folder A: \\MY_PC\shared_folder\folderA
folder B: C:\shared_folder
(\\MY_PC\shared_folder and C:\shared_folder are the same folder)
and:
folder A: \\SERVER\shared_folderA\shared_folderB
folder B: \\SERVER\shared_folderB
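For the "same folder under two names" cases, a different technique (not part of the code above; a minimal sketch) is to stop comparing path strings altogether: open both folders and compare the volume serial number and file index reported by GetFileInformationByHandle, which identify the underlying directory regardless of drive letter, mapping, or UNC spelling:

#include <windows.h>

// Returns TRUE when both paths resolve to the same directory on disk.
// FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
BOOL IsSameFolder(LPCSTR szFolderA, LPCSTR szFolderB)
{
    BOOL bSame = FALSE;
    HANDLE hA = CreateFileA(szFolderA, 0, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                            NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    HANDLE hB = CreateFileA(szFolderB, 0, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                            NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (hA != INVALID_HANDLE_VALUE && hB != INVALID_HANDLE_VALUE)
    {
        BY_HANDLE_FILE_INFORMATION infoA, infoB;
        if (GetFileInformationByHandle(hA, &infoA) && GetFileInformationByHandle(hB, &infoB))
        {
            bSame = infoA.dwVolumeSerialNumber == infoB.dwVolumeSerialNumber &&
                    infoA.nFileIndexHigh == infoB.nFileIndexHigh &&
                    infoA.nFileIndexLow  == infoB.nFileIndexLow;
        }
    }
    if (hA != INVALID_HANDLE_VALUE) CloseHandle(hA);
    if (hB != INVALID_HANDLE_VALUE) CloseHandle(hB);
    return bSame;
}

This only answers the EQUAL_FOLDER case; to detect a subfolder relationship you would still walk one path up, parent by parent, and test each ancestor against the other folder with the same check.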

Segmentation Fault error when trying to compare two videos with pHash library and its ruby bindings

I have set up my system with the latest ffmpeg and pHash libraries (ffmpeg-2.2.1 and pHash-0.9.6) as well as the pHash ruby gem (https://github.com/toy/pHash).
I am using ruby and attempting to compare two video files like this:
require 'phash/video'
video1 = Phash::Video.new('video1.mp4')
video2 = Phash::Video.new('video2.mp4')
video1 % video2
Executing this script results in a Segmentation fault:
..../gems/pHash-1.1.4/lib/phash/video.rb:20: [BUG] Segmentation fault
ruby 1.9.3p545 (2014-02-24 revision 45159) [x86_64-darwin13.1.0]
-- Control frame information -----------------------------------------------
c:0008 p:---- s:0029 b:0029 l:000028 d:000028 CFUNC :ph_dct_videohash
c:0007 p:0042 s:0024 b:0024 l:000023 d:000023 METHOD .../gems/pHash-1.1.4/lib/phash/video.rb:20
c:0006 p:0038 s:0017 b:0017 l:000016 d:000016 METHOD .../gems/pHash-1.1.4/lib/phash.rb:43
c:0005 p:0025 s:0014 b:0014 l:000013 d:000013 METHOD .../gems/pHash-1.1.4/lib/phash.rb:39
c:0004 p:0011 s:0011 b:0011 l:000010 d:000010 METHOD .../gems/pHash-1.1.4/lib/phash.rb:48
c:0003 p:0050 s:0006 b:0006 l:000128 d:0011b8 EVAL video_test_phash.rb:3
c:0002 p:---- s:0004 b:0004 l:000003 d:000003 FINISH
c:0001 p:0000 s:0002 b:0002 l:000128 d:000128 TOP
-- Ruby level backtrace information ----------------------------------------
video_test_phash.rb:3:in `<main>'
.../gems/pHash-1.1.4/lib/phash.rb:48:in `similarity'
.../gems/pHash-1.1.4/lib/phash.rb:39:in `phash'
.../gems/pHash-1.1.4/lib/phash.rb:43:in `compute_phash'
.../gems/pHash-1.1.4/lib/phash/video.rb:20:in `video_hash'
.../gems/pHash-1.1.4/lib/phash/video.rb:20:in `ph_dct_videohash'
...
Abort trap: 6
It appears that the crash happens in the ph_dct_videohash function, which is part of the pHash library. The function is in the file pHash.cpp. I am copying it here in case it makes sense to someone:
ulong64* ph_dct_videohash(const char *filename, int &Length){
    CImgList<uint8_t> *keyframes = ph_getKeyFramesFromVideo(filename);
    if (keyframes == NULL)
        return NULL;
    Length = keyframes->size();
    ulong64 *hash = (ulong64*)malloc(sizeof(ulong64)*Length);
    CImg<float> *C = ph_dct_matrix(32);
    CImg<float> Ctransp = C->get_transpose();
    CImg<float> dctImage;
    CImg<float> subsec;
    CImg<uint8_t> currentframe;
    for (unsigned int i=0; i < keyframes->size(); i++){
        currentframe = keyframes->at(i);
        currentframe.blur(1.0);
        dctImage = (*C)*(currentframe)*Ctransp;
        subsec = dctImage.crop(1,1,8,8).unroll('x');
        float med = subsec.median();
        hash[i] = 0x0000000000000000;
        ulong64 one = 0x0000000000000001;
        for (int j=0; j<64; j++){
            if (subsec(j) > med)
                hash[i] |= one;
            one = one << 1;
        }
    }
    keyframes->clear();
    delete keyframes;
    keyframes = NULL;
    delete C;
    C = NULL;
    return hash;
}
Any help is very much appreciated!
In the latest versions of ffmpeg, some functions (like avformat_open_input in this case) segfault when given an uninitialized pointer. Someone on the pHash support mailing list has shown how to modify the pHash source in order to initialize the pointers and prevent the segfaults.
To fix the segmentation faults, lines 365 and 411 in pHash-0.9.6/src/cimgffmpeg.cpp must be changed from AVFormatContext *pFormatCtx; to AVFormatContext *pFormatCtx = NULL;, and then the source code must be recompiled and installed.
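In other words, at both of those lines the change is a one-token fix (comments mine):

AVFormatContext *pFormatCtx;        // before: uninitialized; avformat_open_input reads garbage
AVFormatContext *pFormatCtx = NULL; // after: a NULL context tells avformat_open_input to allocate one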
Note that there still seem to be some problems with video hashes: for example, many (non-.mp4) video formats are unsupported, and cause segmentation faults.