Import DXF blocks from one file to another using GDAL - c++

I'm working on a CAD program with the GDAL library version 1.11.4.
I have two DXF files, a.dxf and b.dxf. a.dxf is a template file; it has a blocks layer containing some features (symbol information). b.dxf contains some point coordinates. I want to display the points from b.dxf using the symbols from a.dxf.
My idea: export the blocks from a.dxf and import them into b.dxf. But the resulting b.dxf can't be opened in the CAD program. Here is my code:
#include "stdafx.h"
#include "gdal_priv.h"
#include "ogrsf_frmts.h"
#include "gdal.h"
#include "stdio.h"
int main()
{
const char *pszDriverName = "DXF";
OGRSFDriver *poDriver = nullptr;
RegisterOGRDXF();
CPLSetConfigOption("GDAL_DATA", "./debug/data");
CPLSetConfigOption("DXF_INLINE_BLOCKS", "false");
poDriver = OGRSFDriverRegistrar::GetRegistrar()->GetDriverByName("DXF");
if (poDriver == NULL)
{
printf("%s driver not available.\n", pszDriverName);
exit(1);
}
OGRDataSource* poDS = OGRSFDriverRegistrar::Open("a.dxf", true, &poDriver);
//the block layer
OGRLayer* blockLayer = poDS->GetLayer(0);
OGRFeature* copy = blockLayer->GetFeature(0);
OGRDataSource* poDS1 = poDriver->CreateDataSource("b.dxf");
OGRLayer* blockLayer1 = poDS1->CreateLayer("blocks");
OGRLayer* entityLayer1 = poDS1->CreateLayer("entites");
auto err1 = blockLayer1->CreateFeature(copy);
OGRFeature::DestroyFeature(copy);
OGRDataSource::DestroyDataSource(poDS);
OGRDataSource::DestroyDataSource(poDS1);
}
Does anybody know what the problem is?

I solved this problem: add copy->SetFID(1) before writing the feature. The FID defaults to zero; I don't know why that breaks the output.
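For anyone who wants the fix in context, a minimal sketch using the variables from the code above; this only restates the SetFID workaround, I still don't know the underlying reason:

OGRFeature* copy = blockLayer->GetFeature(0);
copy->SetFID(1); // the FID defaults to 0; with FID 0 the written b.dxf would not open
auto err1 = blockLayer1->CreateFeature(copy);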

Quicktime 7 API for Windows - couldNotResolveDataRef

I'm trying to use the Quicktime 7 API for Windows (I know) because eventually I'm going to try to change the audio channel layout flags in Quicktime files, but right now I'm simply trying to create a "Movie" object.
I'm very familiar with languages like Python and JavaScript, but very new to C++. Despite that, I'm able to get the following code to all link up and compile nicely:
#include <iostream>
#include <Movies.h>
#include <QTML.h>

int main()
{
    std::string mystring = "D:/CodingProjects/testfiles/mytestfile.mov";
    OSErr initerr = InitializeQTML(0L);
    OSErr entererr = EnterMovies();

    Movie myMovie;
    short myResID;
    Size mySize = (Size)strlen(mystring.c_str()) + 1;
    Handle myHandle = NewHandleClear(mySize);
    BlockMove(mystring.c_str(), *myHandle, mySize);
    OSErr newmovieerr = NewMovieFromDataRef(&myMovie, 0, &myResID, myHandle, URLDataHandlerSubType);
}
It all seems to run well, with initerr and entererr returning 0, and the entire program also exiting with 0. The problem is in the NewMovieFromDataRef function. newmovieerr seems to be returning code -2000 and not assigning anything (0x00000000) to myMovie. After looking this up, it turns out that this error code is a Quicktime error that means "couldNotResolveDataRef".
I've also tried creating a Movie using the function NewMovieFromHandle and got the same error code.
Can anyone help me figure out what I'm doing wrong?
For anyone interested, I finally got it to work with this code.
#include <iostream>
#include <Movies.h>
#include <QTML.h>

int main()
{
    std::string mystring = "D:\\CodingProjects\\_ffmpeg\\test.mov";
    OSErr initerr = InitializeQTML(0L);
    OSErr entererr = EnterMovies();

    CFStringRef inPath = CFStringCreateWithCString(CFAllocatorGetDefault(), mystring.c_str(), CFStringGetSystemEncoding());

    Movie myMovie;
    short myResID;
    // QTNewDataReferenceFromFullPathCFString allocates the data-ref handle
    // itself, so no NewHandle call is needed beforehand.
    Handle myHandle = NULL;
    OSType myDataRefType = 0;
    OSErr datareferr = QTNewDataReferenceFromFullPathCFString(inPath, kQTWindowsPathStyle, 0, &myHandle, &myDataRefType);
    OSErr newmovieerr = NewMovieFromDataRef(&myMovie, 0, &myResID, myHandle, myDataRefType);
}
A big part of it was using \\ in the path instead of /, and using CFStringCreateWithCString to get a CFStringRef to pass into QTNewDataReferenceFromFullPathCFString, which produces the proper Handle and OSType to then pass into NewMovieFromDataRef.
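As a follow-up, a small sketch of the error checking and cleanup I would add around the working code above; noErr, DisposeMovie, DisposeHandle, CFRelease, ExitMovies, and TerminateQTML are the standard QuickTime/Core Foundation calls, but the structure is just a suggestion:

if (datareferr != noErr || newmovieerr != noErr) {
    std::cerr << "QuickTime error: dataRef=" << datareferr
              << " newMovie=" << newmovieerr << std::endl;
} else {
    // ... work with myMovie here ...
    DisposeMovie(myMovie);
}
if (myHandle) DisposeHandle(myHandle);
CFRelease(inPath);
ExitMovies();
TerminateQTML();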

Parse the .dbc file and generate C++ code to represent a class/struct for each message (for a target ECU)

I am trying to generate C++ code from a .dbc file.
For example, a message is defined like the following in the .dbc file:
BO_ 500 IO_DEBUG: 5 IO
 SG_ IO_DEBUG_test_unsigned : 0|8@1+ (1,0) [0|0] "" DBG
 SG_ IO_DEBUG_test_signed : 8|8@1- (1,-128) [0|0] "" DBG
 SG_ IO_DEBUG_test_float1 : 16|8@1+ (0.1,0) [0|0] "" DBG
 SG_ IO_DEBUG_test_float2 : 24|12@1+ (0.01,-20.48) [-20.48|20.47] "" DBG
 SG_ IO_DEBUG_test_enum : 38|2@1+ (1,0) [0|0] "" DBG
BA_ "FieldType" SG_ 500 IO_DEBUG_test_enum "IO_DEBUG_test_enum";
VAL_ 500 IO_DEBUG_test_enum 2 "IO_DEBUG_test2_enum_two" 1 "IO_DEBUG_test2_enum_one" ;
I am trying to generate C++ code something like the following: the message name becomes the class name, and the signals become members of the class with appropriate data types.
//IoDebug.h -- ProcessMessageInterface is an interface.
class IoDebug : public ProcessMessageInterface {
public:
    // ProcessMessageInterface implementation
    void processMessage();
private:
    uint8_t testUnSigned;
    int8_t testSigned;
    float testFloat1;
    float testFloat2;
    IO_DEBUG_test_enum testEnum;
};

//IoDebug.cpp
#include "IoDebug.h"

void IoDebug::processMessage()
{
}
Does any DBC parser and code-generation tool exist that can generate code like the above?
This is the closest thing I have found:
https://github.com/astand/c-coderdbc
It seems to generate code in roughly the same format as you desire.
There is also a website associated with it:
https://coderdbc.com/ccoder/uploaddbc
There is also this other similar project:
https://github.com/xR3b0rn/dbcppp
But I personally did not like the generated code, since it doesn't create structs for each CAN message, and instead parses each signal individually. This approach probably works fine, but isn't quite what you are looking for.
Here is a Python script which generates C++ code. You need to install the cantools package to run the following script.
import cantools
import math

def build_name(name):
    nodes = name.split("_")
    nodes[0] = nodes[0].title()
    return "".join(nodes)

def signal_variable_name(signal_name):
    return "m_" + build_name(signal_name)

def isFloat(signal):
    return isinstance(signal.scale, float)

def signal_data_type(signal):
    if not signal.choices:
        if isFloat(signal):
            return "float"
        # round the bit length up to the next 8/16/32/64-bit integer type
        width = (math.floor((signal.length - 1) / 8) + 1) * 8
        return ("int" if signal.is_signed else "uint") + str(width) + "_t"
    else:
        return signal.name

def initial_signal_value(signal):
    initial = 0
    if signal.initial:
        initial = signal.initial
    print("initial: " + str(initial))
    print(signal.choices)
    if signal.choices:
        return signal.name + "_" + signal.choices[initial]
    else:
        return initial

cpp_template = """
#include <string>
#include "{messagename}.h"

using namespace std;

{messagename}::{messagename}()
{{
}}
"""

header_template = """
#ifndef {message_h}
#define {message_h}

#include <stdint.h>
#include <iostream>

class {messagename} : public {messageparent} {{
public:
    {messagename}();
    bool processMessage();
private:
"""

# dbc file
db = cantools.database.load_file("path_to_dummy.dbc")

# We can grow the following list: add those messages for which we want to generate code.
messages_list = ["IO_DEBUG"]

for message_name in messages_list:
    # massaging message_name here.
    good_message_name = build_name(message_name)
    message = db.get_message_by_name(message_name)
    message_cpp_file = good_message_name + ".cpp"
    context = {"messagename": good_message_name, "dbc_message_name": message_name}

    # writing code for the C++ file.
    f = open(message_cpp_file, "w")
    f.write(cpp_template.format(**context))
    f.write("bool {}::processMessage() {{\n    return true;\n}}\n".format(good_message_name))
    # we can add more code here to auto-generate code inside the above function to process the signals.
    f.close()

    # writing code for the header file.
    message_header_file = good_message_name + ".h"
    f = open(message_header_file, "w")
    context["message_h"] = message_name.upper() + "_H"
    context["messageparent"] = "ProcessMessageInterface"
    f.write(header_template.format(**context))
    for signal in message.signals:
        f.write("    {} {};\n".format(signal_data_type(signal), signal_variable_name(signal.name)))
    f.write("\n};\n\n#endif // " + context["message_h"])
    f.write("\n")
    f.close()
Run it as:

python3 script.py

The script above will generate the following header and .cpp files.
IoDEBUG.h

#ifndef IO_DEBUG_H
#define IO_DEBUG_H

#include <stdint.h>
#include <iostream>

class IoDEBUG : public ProcessMessageInterface {
public:
    IoDEBUG();
    bool processMessage();
private:
    uint8_t m_IoDEBUGtestunsigned;
    int8_t m_IoDEBUGtestsigned;
    float m_IoDEBUGtestfloat1;
    float m_IoDEBUGtestfloat2;
    IO_DEBUG_test_enum m_IoDEBUGtestenum;
};

#endif // IO_DEBUG_H
IoDEBUG.cpp

#include <string>
#include "IoDEBUG.h"

using namespace std;

IoDEBUG::IoDEBUG()
{
}

bool IoDEBUG::processMessage() {
    return true;
}
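To illustrate what auto-generated signal processing could eventually look like, here is a hand-written sketch that decodes the IO_DEBUG layout from the .dbc above (little-endian signals). The const uint8_t* data parameter is an assumption, since the generated signature takes no arguments, and the offset handling for the signed signal is left as a comment:

bool IoDEBUG::processMessage(const uint8_t* data) // hypothetical: 5-byte raw CAN payload
{
    m_IoDEBUGtestunsigned = data[0];                          // 0|8@1+  (1,0)
    m_IoDEBUGtestsigned   = static_cast<int8_t>(data[1]);     // 8|8@1-  raw value; offset -128 still to apply
    m_IoDEBUGtestfloat1   = data[2] * 0.1f;                   // 16|8@1+ (0.1,0)
    uint16_t raw = data[3] | ((data[4] & 0x0F) << 8);         // 24|12@1+ spans bytes 3 and 4
    m_IoDEBUGtestfloat2   = raw * 0.01f - 20.48f;             // (0.01,-20.48)
    m_IoDEBUGtestenum     = static_cast<IO_DEBUG_test_enum>((data[4] >> 6) & 0x03); // 38|2@1+
    return true;
}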
Please have a look at SourceForge's comFramework, too. It's likely close to what you need. Here, the concept is to make the output controlled by templates: the powerful template engine StringTemplate V4 is fed with the parsed DBC file. You can bring virtually everything from the DBC into C/C++ (messages, signals, attributes, enumerations, node names, etc.). Unfortunately, all samples still produce C; migration to C++ is, however, trivial.
See https://sourceforge.net/projects/comframe/
Convert .dbc files to header files and C files for microcontrollers using this Windows software:
https://github.com/HamidBakhtiary/DBC_to_header
The description contains a link to download the software; due to the 25 MB limit it was not possible to put it on GitHub itself.

More than one input is Const Op

I am trying to serve the following Git repo in OpenCV: https://github.com/una-dinosauria/3d-pose-baseline. The checkpoint data can be found at the following link: https://drive.google.com/file/d/0BxWzojlLp259MF9qSFpiVjl0cU0/view
I have already constructed a frozen graph which I can serve in python and was generated using the following script:
meta_path = 'checkpoint-4874200.meta'  # Your .meta file
output_node_names = ['linear_model/add_1']  # Output nodes
export_dir = os.path.join('export_dir')

graph = tf.Graph()
with tf.Session(graph=graph) as sess:
    # Restore the graph
    loader = tf.train.import_meta_graph(meta_path)
    loader.restore(sess, 'checkpoint-4874200')
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.SERVING],
                                         strip_default_attrs=True)
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)
    # Save the frozen graph
    with open('C:\\Users\\FrozenGraph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
Then I optimized the graph by running:
optimized_graph_def = optimize_for_inference_lib.optimize_for_inference(
    frozen_graph_def,
    ['inputs/enc_in'],
    ['linear_model/add_1'],
    tf.float32.as_datatype_enum)
g = tf.gfile.FastGFile('optimized_inference_graph.pb', 'wb')
g.write(optimized_graph_def.SerializeToString())
and the optimized frozen graph can be found at: https://github.com/alecda573/frozen_graph/blob/master/optimized_inference_graph.pb
When I try to run the following in OpenCV, I get this runtime error:
OpenCV(4.3.0) Error: Unspecified error (More than one input is Const op) in cv::dnn::dnn4_v20200310::`anonymous-namespace'::TFImporter::getConstBlob, file C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\tensorflow\tf_importer.cpp, line 570
Steps to reproduce
To reproduce the problem, download the frozen graph from the link above (or create it yourself from the checkpoint data) and then call the following in OpenCV with the headers below:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/dnn.hpp"

std::string pbFilePath = "C:/Users/optimized_inferene_graph.pb";
//Create 3d-pose-baseline model
cv::dnn::Net inputNet;
inputNet = cv::dnn::readNetFromTensorflow(pbFilePath);
Would love to know if anyone has any thoughts on how to address this error.
You can see the frozen graph and the optimized graph I generated with TensorBoard in the attached photos.
I have a feeling the error is arising from the training flag inputs but I am not certain, and I do not want to go trying to edit the graph if that is not the problem.
I am attaching the function in OpenCV that is raising the error:
const tensorflow::TensorProto& TFImporter::getConstBlob(const tensorflow::NodeDef &layer, std::map<String, int> const_layers,
                                                        int input_blob_index, int* actual_inp_blob_idx) {
    if (input_blob_index == -1) {
        for (int i = 0; i < layer.input_size(); i++) {
            Pin input = parsePin(layer.input(i));
            if (const_layers.find(input.name) != const_layers.end()) {
                if (input_blob_index != -1)
                    CV_Error(Error::StsError, "More than one input is Const op");
                input_blob_index = i;
            }
        }
    }
    if (input_blob_index == -1)
        CV_Error(Error::StsError, "Const input blob for weights not found");
    Pin kernel_inp = parsePin(layer.input(input_blob_index));
    if (const_layers.find(kernel_inp.name) == const_layers.end())
        CV_Error(Error::StsError, "Input [" + layer.input(input_blob_index) +
                                  "] for node [" + layer.name() + "] not found");
    if (kernel_inp.blobIndex != 0)
        CV_Error(Error::StsError, "Unsupported kernel input");
    if (actual_inp_blob_idx) {
        *actual_inp_blob_idx = input_blob_index;
    }
    int nodeIdx = const_layers.at(kernel_inp.name);
    if (nodeIdx < netBin.node_size() && netBin.node(nodeIdx).name() == kernel_inp.name)
    {
        return netBin.node(nodeIdx).attr().at("value").tensor();
    }
    else
    {
        CV_Assert_N(nodeIdx < netTxt.node_size(),
                    netTxt.node(nodeIdx).name() == kernel_inp.name);
        return netTxt.node(nodeIdx).attr().at("value").tensor();
    }
}
As you pointed out, the error originates in getConstBlob (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L570). getConstBlob is called several times in populateNet (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L706), which is called in all overloaded definitions of readNetFromTensorflow (https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp#L2278). Those may be starting points for where to place breakpoints if you want to step through with a debugger.
The other thing I noticed is that the overload of readNetFromTensorflow which I believe you're using (supplying a std::string: https://docs.opencv.org/master/d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d) takes two arguments: the model path (model) and a configuration (config), which is optional and defaults to an empty string. In the unit tests, it looks like there are both cases, with and without a configuration provided (https://github.com/opencv/opencv/blob/master/modules/dnn/test/test_tf_importer.cpp). I'm not sure if that would have an impact.
Lastly, in the script you provided to replicate the results, I believe the model file name is misspelled - it says optimized_inferene_graph.pb, but the file you point to in the github repo is spelled optimized_inference_graph.pb.
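For reference, a minimal sketch of the two-argument call (the .pbtxt path is a placeholder; a matching text graph definition would have to be generated for this model):

// model: the binary frozen graph; config: an optional text graph definition
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
    "C:/Users/optimized_inference_graph.pb",
    "C:/Users/optimized_inference_graph.pbtxt"); // hypothetical config path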
Just a few suggestions, I hope this may help!

CredWriteW storing credentials in Mandarin on Windows

I used the answer here to add credentials programmatically to the Windows Credential Manager; the code below is based on the code in that answer. When I run it, however, the credentials show up in Credential Manager in Mandarin. I am not sure what I am doing wrong and would appreciate any pointers. TIA.
For references this is the code I have
#include <iostream>
#include "windows.h"
#include "wincred.h"
#pragma hdrstop

using namespace std;

int main()
{
    const char* password = "testpass";
    CREDENTIALW creds = { 0 };
    creds.Type = CRED_TYPE_GENERIC;
    creds.TargetName = (LPWSTR)("testaccount");
    creds.CredentialBlobSize = strlen(password) + 1;
    creds.CredentialBlob = (LPBYTE)password;
    creds.Persist = CRED_PERSIST_LOCAL_MACHINE;
    creds.UserName = (LPWSTR)("testuser");
    BOOL result = CredWriteW(&creds, 0);
    if (result != TRUE)
    {
        cout << "Some error occurred" << endl;
    }
    else
    {
        cout << "Stored the password successfully" << endl;
    }
    return 0;
}
To ensure there is no default-language problem, I manually created a credential from within Credential Manager for test.com and had no problems with it.
Apparently, TargetName needs to refer to a mutable array, i.e. not a string literal. It also needs to be a wide string, or else the characters will be interpreted wrongly, in this case resulting in Chinese characters.
The solution is to define a mutable array that is initialized with a wide string, and have TargetName point to it:
WCHAR targetName [] = L"testuser";
creds.TargetName = targetName;
This way, no suspicious cast is needed to make it compile. When you want to use non-hardcoded strings (e.g. from user input or a file), you need to make sure they are correctly encoded and converted appropriately.
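Note that UserName in CREDENTIALW is also typed as LPWSTR, so the narrow-string cast in the question has the same problem there. A short sketch of the corrected assignments, reusing the names from the question's code:

WCHAR targetName[] = L"testaccount";
WCHAR userName[] = L"testuser";
creds.TargetName = targetName; // wide and mutable: no cast needed
creds.UserName = userName;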

How to transfer and parse snap graph from python to c++

Stanford SNAP is a well-known package for graph mining, with both a Python implementation and a C++ implementation.
I have some Python code that does graph mining using SNAP. I also have a C++ function that processes the SNAP graph. Now I need to write a wrapper so that this C++ function can be invoked from Python.
The problem is that I don't know how to parse/dereference the SNAP graph object passed from Python to C++.
The Python code looks like this (more explanation comes after the code examples):
import my_module
import snap

G = snap.GenRndGnm(snap.PUNGraph, 100, 1000)
print(type(G))
A = my_module.CppFunction(G)  # customized function
print(A)
The C++ wrapper my_module_in_cpp.cpp looks like this:
// module name: my_module, defined in the setup file
// function to be called from python: CppFunction
#include <Python.h>
//#include "my_file.h" // can be ignored in this minimal working example.
#include "Snap.h"
#include <iostream>

static PyObject *My_moduleError;

// module_name_function, THE CORE FUNCTION OF THIS QUESTION
static PyObject *my_module_CppFunction(PyObject *self, PyObject *args) {
    PUNGraph G_py;
    int parseOK = PyArg_ParseTuple(args, "O", &G_py);
    if (!parseOK) return NULL;
    std::cout << "N: " << G_py->GetNodes() << ", E: " << G_py->GetEdges() << std::endl;
    fflush(stdout);
    if ((G_py->GetNodes() != 100) || (G_py->GetEdges() != 1000)) {
        PyErr_SetString(My_moduleError, "Graph reference incorrect.");
        return NULL;
    }
    PyObject *PList = PyList_New(0);
    PyList_Append(PList, Py_BuildValue("i", G_py->GetNodes()));
    PyList_Append(PList, Py_BuildValue("i", G_py->GetEdges()));
    return PList;
}

// To register the core function to python
static PyMethodDef CppFunctionMethod[] = {
    {"CppFunction", my_module_CppFunction, METH_VARARGS, "To call CppFunction in C++"},
    {NULL, NULL, 0, NULL}
};

extern "C" PyMODINIT_FUNC initmy_module(void) {
    PyObject *m = Py_InitModule("my_module", CppFunctionMethod);
    if (m == NULL) return;
    My_moduleError = PyErr_NewException("my_module.error", NULL, NULL);
    Py_INCREF(My_moduleError);
    PyModule_AddObject(m, "error", My_moduleError);
}
I'm using Ubuntu with Python 2.7. In case someone wants to reproduce the problem, the setup.py file is also provided:
from distutils.core import setup, Extension

module1 = Extension('my_module',
                    include_dirs = ['/usr/include/python2.7/',
                                    '/users/<my_local>/Snap-3.0/',
                                    '/users/<my_local>/Snap-3.0/snap-core',
                                    '/users/<my_local>/Snap-3.0/glib-core'],
                    library_dirs = ['/users/<my_local>/Snap-3.0/snap-core/'],
                    extra_objects = ['/users/<my_local>/Snap-3.0/snap-core/Snap.o'],
                    extra_compile_args = ['-fopenmp', '-std=c++11'],
                    extra_link_args = ['-lgomp'],
                    sources = ['my_module_in_cpp.cpp'])

setup(name = 'NoPackageName', version = '0.1',
      description = 'No description.', ext_modules = [module1])
Every time I run the python code above, the error message "Graph reference incorrect." is displayed.
Apparently G_py->GetNodes() and G_py->GetEdges() cause the problem. This must result from G_py not pointing to the right address or not being in the right format. I tried using TUNGraph in the C++ code as well; it still does not point to the correct address. Is there any way to make the pointer on the C++ side point to the address of the original C++ object?
Although in general it is hard to dereference a PyObject from C++, in this case I think it is doable, since Snap-Python is itself implemented in C++; we just need to unwrap its Python wrapper. The SNAP authors also provide the SWIG files.
Of course we could write the graph to disk and read it back, but that adds I/O and extra runtime. And the snap-user-group does not get as much traffic as Stack Overflow.
BTW, there are networkx and stanford-nlp tags, but no stanford-snap or similar tag referring to that tool. Can someone create such a tag?
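For what it's worth, here is an untested sketch of the direction I am considering, assuming the SNAP Python bindings are SWIG-generated. The SWIG runtime header can be produced with swig -python -external-runtime swigpyrun.h; SWIG_TypeQuery and SWIG_ConvertPtr are standard SWIG runtime calls, but the exact type string registered for PUNGraph is a guess:

// my_module_CppFunction reworked to unwrap the SWIG proxy instead of
// treating the PyObject itself as a PUNGraph (untested sketch):
static PyObject *my_module_CppFunction(PyObject *self, PyObject *args) {
    PyObject *obj;
    if (!PyArg_ParseTuple(args, "O", &obj)) return NULL;
    void *argp = NULL;
    // The type string must match what the SNAP SWIG interface registers;
    // "PUNGraph *" may need adjusting.
    swig_type_info *ty = SWIG_TypeQuery("PUNGraph *");
    if (ty == NULL || !SWIG_IsOK(SWIG_ConvertPtr(obj, &argp, ty, 0))) {
        PyErr_SetString(PyExc_TypeError, "expected a SWIG-wrapped PUNGraph");
        return NULL;
    }
    PUNGraph G = *reinterpret_cast<PUNGraph *>(argp);
    PyObject *PList = PyList_New(0);
    PyList_Append(PList, Py_BuildValue("i", G->GetNodes()));
    PyList_Append(PList, Py_BuildValue("i", G->GetEdges()));
    return PList;
}

This would replace the body of my_module_CppFunction in my_module_in_cpp.cpp, with #include "swigpyrun.h" added next to the other includes.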