I have a string in a C++ Qt application (on Ubuntu) which contains valid Graphviz/DOT graph syntax. I want to generate an image file/object from this text, similar to the images that various online tools (like this one: http://www.webgraphviz.com/) produce. Maybe I'm using the wrong search terms, but I can't seem to find relevant help with this.
What I basically want is something like this:
generate_dot_graph_image(std::string dot_text, std::string image_file_path)
Additional details: I have a Dijkstra solver whose solution (basically the original graph after removing non-used edges) I want to visualize inside my application. The solver already includes an option to convert the solution to a string that can be parsed as a dot graph using a utility such as the one I linked above. But what I want is to be able to do this from inside C++.
So I was able to do exactly what I wanted using the Graphviz libraries. You can install them on Ubuntu with sudo apt-get install graphviz libgraphviz-dev. Once that's done:
#include <graphviz/gvc.h>

bool DotGraphGenerator::saveImage()
{
    std::string o_arg = std::string("-o") + "/path/to/image_file.png";
    char* args[] = {
        const_cast<char*>("dot"),
        const_cast<char*>("-Tpng"),
        const_cast<char*>("-Gsize=8,4!"),
        const_cast<char*>("-Gdpi=100"),
        const_cast<char*>(DOT_TEXT_FILE.c_str()), // DOT_TEXT_FILE is the path of the file holding the graph in valid DOT syntax
        const_cast<char*>(o_arg.c_str())
    };
    const int argc = sizeof(args) / sizeof(args[0]);

    Agraph_t *g, *prev = NULL;
    GVC_t *gvc = gvContext();
    gvParseArgs(gvc, argc, args);

    while ((g = gvNextInputGraph(gvc)))
    {
        if (prev)
        {
            gvFreeLayout(gvc, prev);
            agclose(prev);
        }
        gvLayoutJobs(gvc, g);
        gvRenderJobs(gvc, g);
        prev = g;
    }
    // gvFreeContext returns the number of errors, so 0 means success
    return !gvFreeContext(gvc);
}
Graphviz (gvc) is a C library, and its functions take non-const C strings as arguments, hence the const_casts at the beginning. You can also change the image size by altering the -Gsize=8,4! and -Gdpi=100 args: with the current configuration you'll get an 8*100 x 4*100 = 800x400 image file. These arguments are the same ones you would pass when running the dot command from bash.
Other than that, this code is basically copied from one of the examples in the "Graphviz as a library" manual: http://www.graphviz.org/pdf/libguide.pdf
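Since the question starts from a DOT string already in memory, the intermediate .dot file can be skipped entirely: cgraph's agmemread parses a graph from a buffer, and gvRenderFilename writes straight to an output file. A minimal, untested sketch (the wrapper name mirrors the signature asked for in the question; error handling is kept to a minimum):

```cpp
#include <graphviz/gvc.h>
#include <string>

// Render a DOT string held in memory straight to a PNG file.
bool generate_dot_graph_image(const std::string& dot_text,
                              const std::string& image_file_path)
{
    GVC_t *gvc = gvContext();
    // agmemread parses a graph from a char buffer instead of a FILE*
    Agraph_t *g = agmemread(dot_text.c_str());
    if (!g) {
        gvFreeContext(gvc);
        return false;
    }
    gvLayout(gvc, g, "dot");
    gvRenderFilename(gvc, g, "png", image_file_path.c_str());
    gvFreeLayout(gvc, g);
    agclose(g);
    // gvFreeContext returns the number of errors, so 0 means success
    return gvFreeContext(gvc) == 0;
}

int main() {
    return generate_dot_graph_image("digraph { a -> b; }", "graph.png") ? 0 : 1;
}
```

The same -Gsize/-Gdpi tuning from above can be applied by setting graph attributes in the DOT text itself (e.g. graph [size="8,4!", dpi=100]).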
I found a way: I used the following function and it works:
bool saveImageGV(const std::string& file_path){
    GVC_t *gvc = gvContext();
    FILE *in = fopen((file_path + ".dot").c_str(), "r");
    if (!in) {
        gvFreeContext(gvc);
        return false;
    }
    Agraph_t *g = agread(in, 0);
    fclose(in);
    gvLayout(gvc, g, "dot");
    FILE *out = fopen((file_path + ".png").c_str(), "w");
    if (out) {
        gvRender(gvc, g, "png", out);
        fclose(out);
    }
    gvFreeLayout(gvc, g);
    agclose(g);
    // gvFreeContext returns the number of errors, so 0 means success
    return gvFreeContext(gvc) == 0;
}
I'm trying to get the name of the current printer using the libcups library on Linux, but I can't find such a method. I only found how to get the complete list of printers; how to find out which one will be used for printing is not clear.
#include <cups/cups.h>

QStringList getPrinters()
{
    QStringList printerNames;
    cups_dest_t *dests;
    int num_dests = cupsGetDests(&dests);
    for (int pr = 0; pr < num_dests; ++pr) {
        QString printerName = QString::fromUtf8(dests[pr].name);
        printerNames.append(printerName);
    }
    cupsFreeDests(num_dests, dests);
    return printerNames;
}
Once you have a valid destination (cups_dest_t), you can retrieve information via cupsGetOption.
Example (from https://openprinting.github.io/cups/doc/cupspm.html#basic-destination-information):
const char *model = cupsGetOption("printer-make-and-model",
dest->num_options,
dest->options);
To find the default printer one can use:
cupsGetDest (param name: NULL for the default destination)
cupsGetDests2 (param http: Connection to server or CUPS_HTTP_DEFAULT)
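A minimal, untested sketch of the default-printer lookup: passing NULL as the name to cupsGetDest selects the user's default destination from the list returned by cupsGetDests.

```cpp
#include <cups/cups.h>
#include <cstdio>

int main()
{
    cups_dest_t *dests = NULL;
    int num_dests = cupsGetDests(&dests);

    // NULL name + NULL instance asks for the default destination
    cups_dest_t *def = cupsGetDest(NULL, NULL, num_dests, dests);
    if (def)
        std::printf("Default printer: %s\n", def->name);
    else
        std::printf("No default printer configured\n");

    cupsFreeDests(num_dests, dests);
    return 0;
}
```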
Another suggestion would be:
https://openprinting.github.io/cups/doc/cupspm.html#finding-available-destinations
Last but not least:
CUPS Programming Manual
Sidenote:
Since you're using Qt, doesn't Qt have printer support?
E.g.
QPrinter::QPrinter(const QPrinterInfo &printer, QPrinter::PrinterMode mode = ScreenResolution);
(see https://doc.qt.io/qt-6/qprinter.html#QPrinter-1)
and
bool QPrinterInfo::isDefault() const;
(see https://doc.qt.io/qt-6/qprinterinfo.html#isDefault)
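Qt can also hand you the default printer directly through the static QPrinterInfo::defaultPrinter(); a small sketch (assumes the Qt PrintSupport module is linked, e.g. QT += printsupport in the .pro file):

```cpp
#include <QPrinterInfo>
#include <QDebug>

int main()
{
    // defaultPrinter() returns a null QPrinterInfo when no
    // default printer is configured on the system
    const QPrinterInfo def = QPrinterInfo::defaultPrinter();
    if (def.isNull())
        qDebug() << "No default printer configured";
    else
        qDebug() << "Default printer:" << def.printerName();
    return 0;
}
```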
I am trying to fix this issue:
https://github.com/gitahead/gitahead/issues/380
The problem is that the tree used in the model does not contain any untracked files, and therefore the view has nothing to show. When I stage one file, it is shown.
Is there a way to include the untracked files in the tree as well?
I created a small test application to reproduce the problem. When one file is staged, count is nonzero; otherwise it is always zero.
Test setup
new git repository (TestRepository) with the following untracked files:
testfile.txt
testfolder/testfile2.txt
#include <git2.h>
#include <stdio.h>
#include <stdlib.h>

int main() {
    git_libgit2_init();
    git_repository *repo = NULL;
    int error = git_repository_open(&repo, "/TestRepository");
    if (error < 0) {
        const git_error *e = git_error_last();
        printf("Error %d/%d: %s\n", error, e->klass, e->message);
        exit(error);
    }

    git_index *idx = NULL;
    git_repository_index(&idx, repo);

    git_oid id;
    error = git_index_write_tree(&id, idx);
    if (error < 0) {
        const git_error *e = git_error_last();
        printf("Error %d/%d: %s\n", error, e->klass, e->message);
        exit(error);
    }

    git_tree *tree = NULL;
    git_tree_lookup(&tree, repo, &id);
    int count = git_tree_entrycount(tree);
    printf("%d\n", count);

    git_repository_free(repo);
    printf("SUCCESS");
    return 0;
}
If I understood correctly, what you're seeing is normal: as the file is untracked/new, the index has no knowledge of it, so if you ask the index, it has no "staged" changes to compare with, hence no diff.
If you want a diff for a yet-to-be tracked file, you'll have to provide it another way, usually by asking git_diff to do the work of comparing the worktree version with /dev/null, the empty blob, etc.
Since you're after a libgit2 solution, the way I'm trying to do that in GitX is via the git_status_list_new API, which gives a somewhat filesystem-independent way of generating both viewable diffs (staged & unstaged) on-the-fly, using git_patch_from_blobs/git_patch_from_blobs_and_buffer. In retrospect, maybe that should live in the library as git_status_entry_generate_patch or something…
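The status-based approach can be sketched with plain libgit2 calls: git_status_list_new with GIT_STATUS_OPT_INCLUDE_UNTRACKED reports the untracked files that the index-derived tree in the question will never contain. An untested sketch ("/TestRepository" reuses the path from the question):

```cpp
#include <git2.h>
#include <stdio.h>

int main() {
    git_libgit2_init();
    git_repository *repo = NULL;
    if (git_repository_open(&repo, "/TestRepository") < 0)
        return 1;

    git_status_options opts = GIT_STATUS_OPTIONS_INIT;
    opts.show  = GIT_STATUS_SHOW_INDEX_AND_WORKDIR;
    // ask for untracked files, descending into untracked directories
    opts.flags = GIT_STATUS_OPT_INCLUDE_UNTRACKED |
                 GIT_STATUS_OPT_RECURSE_UNTRACKED_DIRS;

    git_status_list *statuses = NULL;
    if (git_status_list_new(&statuses, repo, &opts) == 0) {
        size_t n = git_status_list_entrycount(statuses);
        for (size_t i = 0; i < n; ++i) {
            const git_status_entry *e = git_status_byindex(statuses, i);
            // GIT_STATUS_WT_NEW marks worktree-only (untracked) files
            if (e->status & GIT_STATUS_WT_NEW)
                printf("untracked: %s\n", e->index_to_workdir->new_file.path);
        }
        git_status_list_free(statuses);
    }
    git_repository_free(repo);
    git_libgit2_shutdown();
    return 0;
}
```

Run against the test setup above, this should list testfile.txt and testfolder/testfile2.txt even though the written tree is empty.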
Stanford SNAP is a well-known package for graph mining, and has both Python implementation and C++ implementation.
I have some code in python to do graph mining using SNAP. I also have a C++ function process the snap graph. Now I need to write a wrapper so that this C++ function can be invoked from Python.
The problem is that I don't know how to parse/dereference the snap graph object from Python to C++.
The python code looks like: (More explanations come after the code examples)
import my_module;
import snap;
G = snap.GenRndGnm(snap.PUNGraph, 100, 1000);
print(type(G));
A = my_module.CppFunction(G); # customized function
print(A);
The C++ wrapper my_module_in_cpp.cpp looks like:
// module name: my_module, defined in the setup file
// function to be called from python: CppFunction
#include <Python.h>
//#include "my_file.h" // can be ignored in this minimal working example.
#include "Snap.h"
#include <iostream>
static PyObject *My_moduleError;

// module_name_function, THE CORE FUNCTION OF THIS QUESTION
static PyObject *my_module_CppFunction(PyObject *self, PyObject *args) {
    PUNGraph G_py;
    int parseOK = PyArg_ParseTuple(args, "O", &G_py);
    if (!parseOK) return NULL;
    std::cout << "N: " << G_py->GetNodes() << ", E: " << G_py->GetEdges() << std::endl;
    fflush(stdout);
    if ((G_py->GetNodes() != 100) || (G_py->GetEdges() != 1000)) {
        PyErr_SetString(My_moduleError, "Graph reference incorrect.");
        return NULL;
    }
    PyObject *PList = PyList_New(0);
    PyList_Append(PList, Py_BuildValue("i", G_py->GetNodes()));
    PyList_Append(PList, Py_BuildValue("i", G_py->GetEdges()));
    return PList;
}

// To register the core function to python
static PyMethodDef CppFunctionMethod[] = {
    {"CppFunction", my_module_CppFunction, METH_VARARGS, "To call CppFunction in C++"},
    {NULL, NULL, 0, NULL}
};

extern "C" PyMODINIT_FUNC initmy_module(void) {
    PyObject *m = Py_InitModule("my_module", CppFunctionMethod);
    if (m == NULL) return;
    My_moduleError = PyErr_NewException("my_module.error", NULL, NULL);
    Py_INCREF(My_moduleError);
    PyModule_AddObject(m, "error", My_moduleError);
}
I'm using Ubuntu, python-2.7. In case someone may want to re-produce the problem, the setup.py file is also provided.
from distutils.core import setup, Extension
module1 = Extension('my_module',\
include_dirs = ['/usr/include/python2.7/','/users/<my_local>/Snap-3.0/','/users/<my_local>/Snap-3.0/snap-core','/users/<my_local>/Snap-3.0/glib-core'],
library_dirs = ['/users/<my_local>/Snap-3.0/snap-core/'],
extra_objects = ['/users/<my_local>/Snap-3.0/snap-core/Snap.o'],
extra_compile_args=['-fopenmp','-std=c++11'],
extra_link_args=['-lgomp'],
sources = ['my_module_in_cpp.cpp'])
setup (name = 'NoPackageName', version = '0.1',\
description = 'No description.', ext_modules = [module1])
Every time I run the Python code above, the error message "Graph reference incorrect." is displayed.
Apparently G_py->GetNodes() and G_py->GetEdges() cause the problem. This must result from G_py not pointing to the right address: with the "O" format, PyArg_ParseTuple stores a raw PyObject* (the SWIG proxy object) into G_py, not the underlying graph pointer. I tried using TUNGraph in the cpp code as well; it still does not point to the correct address. Is there any way to make the pointer in C++ point to the address of the original C++ object?
Although in general it is hard to dereference a Python object from C++, in this case I think it is doable, since the Python version of SNAP is also implemented in C++ and the SNAP authors provide the SWIG files; we just need to unwrap its Python wrapper.
Of course we could write the graph to a file on disk and read it back from C++, but that means extra I/O and extra time. And the snap-user-group does not have as much traffic as Stack Overflow.
BTW, there are networkx and stanford-nlp tags, but no stanford-snap or similar tag referring to that tool. Can someone create such a tag?
I want to use the DCMTK 3.6.1 library in an existing project that creates DICOM images, because I want to compress those images. In a new solution (Visual Studio 2013/C++), following the example in the official DCMTK documentation, I have this code, which works properly:
using namespace std;

int main()
{
    DJEncoderRegistration::registerCodecs();
    DcmFileFormat fileformat;

    /**** MONO FILE ******/
    if (fileformat.loadFile("Files/test.dcm").good())
    {
        DcmDataset *dataset = fileformat.getDataset();
        DcmItem *metaInfo = fileformat.getMetaInfo();
        DJ_RPLossless params; // codec parameters, we use the defaults

        // this causes the lossless JPEG version of the dataset
        // to be created (EXS_JPEGProcess14SV1)
        dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);

        // check if everything went well
        if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        {
            // force the meta-header UIDs to be re-generated when storing the file
            // since the UIDs in the data set may have changed
            delete metaInfo->remove(DCM_MediaStorageSOPClassUID);
            delete metaInfo->remove(DCM_MediaStorageSOPInstanceUID);
            metaInfo->putAndInsertString(DCM_ImplementationVersionName, "New Implementation Version Name");
            //delete metaInfo->remove(DCM_ImplementationVersionName);
            //dataset->remove(DCM_ImplementationVersionName);

            // store in lossless JPEG format
            fileformat.saveFile("Files/carrellata_esami_compresso.dcm", EXS_JPEGProcess14SV1);
        }
    }
    DJEncoderRegistration::cleanup();
    return 0;
}
Now I want to use the same code in an existing C++ application where
if (infoDicom.arrayImgDicom.GetSize() != 0) //Things of existing previous code
{
    //I have added here the registration
    DJEncoderRegistration::registerCodecs(); // register JPEG codecs
    DcmFileFormat fileformat;
    DcmDataset *dataset = fileformat.getDataset();
    DJ_RPLossless params;

    dataset->putAndInsertUint16(DCM_Rows, infoDicom.rows);
    dataset->putAndInsertUint16(DCM_Columns, infoDicom.columns);
    dataset->putAndInsertUint16(DCM_BitsStored, infoDicom.m_bitstor);
    dataset->putAndInsertUint16(DCM_HighBit, infoDicom.highbit);
    dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
    dataset->putAndInsertUint16(DCM_RescaleIntercept, infoDicom.rescaleintercept);
    dataset->putAndInsertString(DCM_PhotometricInterpretation, "MONOCHROME2");
    dataset->putAndInsertString(DCM_PixelSpacing, "0.086\\0.086");
    dataset->putAndInsertString(DCM_ImagerPixelSpacing, "0.096\\0.096");

    BYTE* pData = new BYTE[sizeBuffer];
    LPBYTE pSorg;
    for (int nf = 0; nf < iNumberFrames; nf++)
    {
        //this contains all the PixelData and I put it into the dataset
        pSorg = (BYTE*)infoDicom.arrayImgDicom.GetAt(nf);
        dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
        dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
        //but this IF returns false, so canWriteXfer fails...
        if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        {
            dataset->remove(DCM_MediaStorageSOPClassUID);
            dataset->remove(DCM_MediaStorageSOPInstanceUID);
        }
        //the saveFile fails too, and the error is "Pixel
        //representation not found", but I have set the pixel representation with
        //dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
        OFCondition status = fileformat.saveFile("test1.dcm", EXS_JPEGProcess14SV1);
        DJEncoderRegistration::cleanup();
        if (status.bad())
        {
            int error = 0; //only for test
        }
        thefile.Write(pSorg, sizeBuffer); //previous code
    }
}
Actually I tested with an image that has only one frame, so the for cycle runs only once. I don't understand why dataset->chooseRepresentation(EXS_LittleEndianImplicit, &params); or dataset->chooseRepresentation(EXS_LittleEndianExplicit, &params); work perfectly, but dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params); does not.
If I use the same image in the first application, I can compress the image without problems...
EDIT: I think the main problem to solve is that status = dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &rp_lossless) returns "Tag not found". How can I find out which tag is missing?
EDIT2: As suggested on the DCMTK forum I have added the Bits Allocated tag, and now it works for a few images, but not for all. For some images I get "Tag not found" again: how can I find out which tag is missing? As a rule, is it better to insert all the tags?
I solved the problem by adding the tags DCM_BitsAllocated and DCM_PlanarConfiguration; these were the missing tags. I hope this is useful for someone.
In any case, you should call chooseRepresentation only after you have inserted the pixel data:
dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
I have set up my system with the latest ffmpeg and pHash libraries (ffmpeg-2.2.1 and pHash-0.9.6) as well as the pHash ruby gem (https://github.com/toy/pHash).
I am using ruby and attempting to compare two video files like this:
require 'phash/video'
video1 = Phash::Video.new('video1.mp4')
video2 = Phash::Video.new('video2.mp4')
video1 % video2
Executing this script results in a Segmentation fault:
..../gems/pHash-1.1.4/lib/phash/video.rb:20: [BUG] Segmentation fault
ruby 1.9.3p545 (2014-02-24 revision 45159) [x86_64-darwin13.1.0]
-- Control frame information -----------------------------------------------
c:0008 p:---- s:0029 b:0029 l:000028 d:000028 CFUNC :ph_dct_videohash
c:0007 p:0042 s:0024 b:0024 l:000023 d:000023 METHOD .../gems/pHash-1.1.4/lib/phash/video.rb:20
c:0006 p:0038 s:0017 b:0017 l:000016 d:000016 METHOD .../gems/pHash-1.1.4/lib/phash.rb:43
c:0005 p:0025 s:0014 b:0014 l:000013 d:000013 METHOD .../gems/pHash-1.1.4/lib/phash.rb:39
c:0004 p:0011 s:0011 b:0011 l:000010 d:000010 METHOD .../gems/pHash-1.1.4/lib/phash.rb:48
c:0003 p:0050 s:0006 b:0006 l:000128 d:0011b8 EVAL video_test_phash.rb:3
c:0002 p:---- s:0004 b:0004 l:000003 d:000003 FINISH
c:0001 p:0000 s:0002 b:0002 l:000128 d:000128 TOP
-- Ruby level backtrace information ----------------------------------------
video_test_phash.rb:3:in `<main>'
.../gems/pHash-1.1.4/lib/phash.rb:48:in `similarity'
.../gems/pHash-1.1.4/lib/phash.rb:39:in `phash'
.../gems/pHash-1.1.4/lib/phash.rb:43:in `compute_phash'
.../gems/pHash-1.1.4/lib/phash/video.rb:20:in `video_hash'
.../gems/pHash-1.1.4/lib/phash/video.rb:20:in `ph_dct_videohash'
...
Abort trap: 6
It appears that the crash happens in the ph_dct_videohash function which is part of the pHash library. The function is in file pHash.cpp. I am copying it here in case it would make sense to someone:
ulong64* ph_dct_videohash(const char *filename, int &Length){
    CImgList<uint8_t> *keyframes = ph_getKeyFramesFromVideo(filename);
    if (keyframes == NULL)
        return NULL;

    Length = keyframes->size();

    ulong64 *hash = (ulong64*)malloc(sizeof(ulong64)*Length);
    CImg<float> *C = ph_dct_matrix(32);
    CImg<float> Ctransp = C->get_transpose();
    CImg<float> dctImage;
    CImg<float> subsec;
    CImg<uint8_t> currentframe;

    for (unsigned int i=0; i < keyframes->size(); i++){
        currentframe = keyframes->at(i);
        currentframe.blur(1.0);
        dctImage = (*C)*(currentframe)*Ctransp;
        subsec = dctImage.crop(1,1,8,8).unroll('x');
        float med = subsec.median();
        hash[i] = 0x0000000000000000;
        ulong64 one = 0x0000000000000001;
        for (int j=0; j<64; j++){
            if (subsec(j) > med)
                hash[i] |= one;
            one = one << 1;
        }
    }

    keyframes->clear();
    delete keyframes;
    keyframes = NULL;
    delete C;
    C = NULL;
    return hash;
}
Any help is very much appreciated!
In the latest versions of ffmpeg, some functions (like "avformat_open_input" in this case) segfault when given an uninitialized pointer. Someone on the pHash support mailing list has shown how to modify the pHash source in order to initialize the pointers, and prevent the segfaults.
To fix the segmentation faults, lines 365 and 411 in pHash-0.9.6/src/cimgffmpeg.cpp must be changed from AVFormatContext *pFormatCtx; to AVFormatContext *pFormatCtx = NULL;, and then the source code must be recompiled and installed.
Note that there still seem to be some problems with video hashes: for example, many (non-.mp4) video formats are unsupported, and cause segmentation faults.