How to call tensorflow::NewSession in C++

I am using C++ to load and run a TensorFlow graph. The TensorFlow version is 2.1 (CPU). I have included the necessary header files and the library _pywrap_tensorflow_internal.lib.
I use
unique_ptr<tensorflow::Session> session_inception(tensorflow::NewSession(SessionOptions()));
to create a new session. Compiling produces no error.
But when I build the executable, there is a link error saying that NewSession is an unresolved external symbol. I guess the function "NewSession" is not in the library file "_pywrap_tensorflow_internal.lib".
How do I call NewSession in a C++ environment? Or is there a newer API I should use to run the graph (instead of a session)? Thank you very much.

I assume you're using this code from 2016: https://gist.github.com/kyrs/9adf86366e9e4f04addb which I found by googling for "session_inception". This code predates TensorFlow 2.1 and 2.0 (which was released in September 2019).
The TensorFlow 2.1 documentation does not list any function or type named Session or NewSession - I'm not a TensorFlow user but I think ClientSession is the current type you want.
Here's the sample code from the TensorFlow 2.1 documentation:
Scope root = Scope::NewRootScope();
auto a = Placeholder(root, DT_INT32);
auto c = Add(root, a, {41});
ClientSession session(root);
std::vector<Tensor> outputs;
Status s = session.Run({ {a, {1}} }, {c}, &outputs);
if (!s.ok()) { ... }
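For completeness, here is the same sample as a self-contained program (a sketch, assuming a TensorFlow source build; the ClientSession/Scope API lives in the tensorflow_cc library, which is a different artifact from the _pywrap_tensorflow_internal.lib that ships with the Python package):
// example.cc - a minimal sketch, not taken from the question's code
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();
  auto a = Placeholder(root, DT_INT32);
  auto c = Add(root, a, {41});  // c = a + 41

  ClientSession session(root);
  std::vector<Tensor> outputs;
  Status s = session.Run({{a, {1}}}, {c}, &outputs);
  if (!s.ok()) {
    std::cerr << s.ToString() << std::endl;
    return 1;
  }
  std::cout << outputs[0].DebugString() << std::endl;  // a tensor containing 42
  return 0;
}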
That said, if you want to run this code from March 2016, you should download the closest version of TensorFlow released around that date, which is v0.7.1, available here: https://github.com/tensorflow/tensorflow/releases/tag/v0.7.1 - though this is probably inadvisable, as TensorFlow's documentation website only has documentation for versions released after v1.0.
As an aside, I assume you're relatively new to working in C++, because I found this information in a matter of seconds - had you done the same you wouldn't have posted this question to SO. If you are inexperienced with C++ or TensorFlow, I advise you that it would be a bad idea to go wading around obsolete and unsupported C++ libraries, as you'll be SOL if anything goes wrong - and things will go wrong, especially with prerelease versions of TensorFlow.

Related

Unexpected error using grpc arena allocation

I am writing a C++ gRPC-based service. As recommended here, I installed gRPC in my local directory. I am using Linux Ubuntu 22.04.
Now, at the beginning of a service call implementation, I have these lines of code:
google::protobuf::Arena arena;
ResponseStatus* response_status =
    google::protobuf::Arena::CreateMessage<ResponseStatus>(&arena);
response->set_allocated_status(response_status);
When I invoke the service from a Ruby client, the last line causes an exception with the following error message:
[libprotobuf FATAL /home/lrleon/grpc/third_party/protobuf/src/google/protobuf/generated_message_util.cc:764] CHECK failed: (submessage_arena) == (nullptr):
I'll be honest: I do not have a good idea of why this problem could be happening. My best hypothesis is that somewhere a different version of the required gRPC/protobuf libraries is intervening. I find that hard to believe, because I am almost sure I am not using any system library related to gRPC, but another app could eventually have installed some library.
I spent a full day searching in forums without finding a related post that could help me solve mine.
That said, I kindly ask if some expert in gRPC could help me.
Thanks in advance.
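For what it's worth, a minimal sketch of two patterns that avoid this check, assuming the usual cause: set_allocated_status() transfers ownership of the pointer to the parent message, and protobuf cannot take ownership of a submessage that lives on an arena when the parent message does not (that is the submessage_arena == nullptr condition being checked). ResponseStatus comes from the question; set_code() is a hypothetical field setter.
// Inside the service handler, where `response` is the reply message gRPC passes in.

// Option 1: heap-allocate the submessage so the response can own it.
ResponseStatus* status = new ResponseStatus();
status->set_code(0);                              // hypothetical field
response->set_allocated_status(status);           // response takes ownership

// Option 2: let the response create the submessage in place and fill it.
ResponseStatus* in_place = response->mutable_status();
in_place->set_code(0);                            // hypothetical field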

Calling a C++ function from RStudio on a Mac and getting (clang: error: unsupported option '-fopenmp')

I know how to code but I really do not know my way around a computer.
I have a program that I have to run for my master's thesis. It is code written with multiple collaborators and it runs perfectly on Linux. However, it is a very complex simulation code, and therefore it takes time to run for multiple parameters. I've been using the Linux machine at the university to run it, but I would like to run some of it on my personal computer (macOS). It works by using the R language to call C++ functions as follows (filename being a C++ source file).
In an RStudio script:
Sys.setenv("PKG_CPPFLAGS" = "-fopenmp -DPARALLEL")
system("rm filename.so")
system("rm filename.o")
system ("R CMD SHLIB filename.cpp")
dyn.load("filename.so")
After system ("R CMD SHLIB filename.cpp") I get error:
clang: error: unsupported option '-fopenmp'
make: *** [filename.o] Error 1
I've researched the subject and found this:
Enable OpenMP support in clang in Mac OS X (sierra & Mojave)
I've installed LLVM, yet I do not know how to use it in this case.
How do I use it here?
Thank you in advance.
"Don't do it that way." Read up on R and Rcpp and use the proper tools (especially for packaging and/or compiling) which should pick up OpenMP where possible. In particular,
scan at least the Rcpp Introduction vignette
also look at the Rcpp Attributes vignette
"Just say no" to building the compilation commands by hand unless you know what you are doing with R and have read Writing R Extensions carefully a few times. It can be done, I used to show how in tutorials and workshops (see old slides from 12-15 years ago on my website) but we first moved to package inline which helps here, and later relied on the much better Rcpp Attributes.
Now, macOS has some extra hurdles regarding which tools work and which ones don't. The rcpp-devel mailing list may be of help; the default step is otherwise to consult the tutorial by James.
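For illustration, a minimal Rcpp Attributes sketch (file and function names are made up; with the openmp plugin, the OpenMP flags come from R's own Makeconf, which should be empty on a toolchain without OpenMP support, so the same file builds either way):
// double_vec.cpp -- from R: Rcpp::sourceCpp("double_vec.cpp"); double_vec(1:5)
#include <Rcpp.h>
#ifdef _OPENMP
#include <omp.h>
#endif

// [[Rcpp::plugins(openmp)]]
// [[Rcpp::export]]
Rcpp::NumericVector double_vec(Rcpp::NumericVector x) {
  int n = x.size();
  Rcpp::NumericVector out(n);
#ifdef _OPENMP
#pragma omp parallel for
#endif
  for (int i = 0; i < n; ++i) {
    out[i] = 2.0 * x[i];
  }
  return out;
}
Rcpp::sourceCpp() then replaces the manual PKG_CPPFLAGS / R CMD SHLIB / dyn.load steps shown above.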
Edit: And of course, if you "just want the above to work", try the obvious step of removing the part causing the error, i.e. use
Sys.setenv("PKG_CPPFLAGS" = "")
as your macOS box appears to have a compiler but not OpenMP (which, as I understand it, is the default thanks to some "surprising" default choices at Apple -- see the aforementioned tutorial for installation help).

F tensorflow/core/common_runtime/device_factory.cc:77] Duplicate registration of device factory for type GPU with the same priority 210

When executing my C++ binary, this error happened.
I compile my C++ file with tensorflow_cc.so by using make all, and the TensorFlow version is 1.8.
Has anyone met this problem?
We recently encountered this error. It happened when we inadvertently linked against both libtensorflow.so (-ltensorflow) and libtensorflow_cc.so (-ltensorflow_cc). It went away when we picked one.
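For illustration, a sketch of a link command for this case (paths and file names are hypothetical):
# Link against exactly one of the TensorFlow shared libraries; passing both
# -ltensorflow and -ltensorflow_cc loads two copies of the GPU device
# factory and triggers the duplicate-registration error above.
g++ -o my_app my_app.o -L/path/to/tensorflow/lib -ltensorflow_cc \
    -Wl,-rpath,/path/to/tensorflow/lib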

Debugging XSLT with Intellij and Saxon - Unsupported Transformer

I'm currently trying to convert XML files to a completely different format, using IntelliJ Community Edition + Saxon to write and debug the stylesheet.
I already have Saxon-HE 9.7.0-5 as the top-most module dependency.
Running the stylesheets with the XSLT-Runner works just fine, but when I try to debug it, I get some errors.
When I specify no VM arguments I get:
java.lang.UnsupportedOperationException: Unsupported Transformer: net.sf.saxon.jaxp.TransformerImpl
at org.intellij.plugins.xsltDebugger.rt.engine.local.LocalDebugger.prepareTransformer(LocalDebugger.java:98)
at org.intellij.plugins.xsltDebugger.rt.engine.local.LocalDebugger.<init>(LocalDebugger.java:51)
at org.intellij.plugins.xsltDebugger.rt.engine.remote.DebuggerServer$1.<init>(DebuggerServer.java:55)
at org.intellij.plugins.xsltDebugger.rt.engine.remote.DebuggerServer.<init>(DebuggerServer.java:55)
at org.intellij.plugins.xsltDebugger.rt.engine.remote.DebuggerServer.create(DebuggerServer.java:71)
at org.intellij.plugins.xsltDebugger.rt.XSLTDebuggerMain.start(XSLTDebuggerMain.java:53)
at org.intellij.plugins.xslt.run.rt.XSLTRunner.main(XSLTRunner.java:143)
When I specify the VM-Arguments
-Dxslt.transformer.type=saxon
as recommended here, I get the following error:
javax.xml.transform.TransformerException: The URI http://www.w3.org/2005/xpath-functions does not identify an external Java class
Has anyone else experienced this?
IntelliJ IDEA supports Saxon 9 debugging up to Saxon 9.3.0.11.
As of Saxon 9.4.0.0, the net.sf.saxon.lib.TraceListener interface introduced breaking changes (net.sf.saxon.lib.TraceListener#open() => net.sf.saxon.lib.TraceListener#open(Controller)) that JetBrains never adapted to.
Since Maven artifacts for 9.3 are hard to come by, you might want to get it manually from SourceForge:
https://sourceforge.net/projects/saxon/files/Saxon-HE/9.3/
Regarding your stack trace: it tells a different story, but it ultimately comes back to using 9.3.
As of today, 9.3 seems to be the latest Saxon version that works with the current IntelliJ 2017.1.2.

Sitecore 6.6 Lucene.Net upgrade issue

I recently upgraded my Sitecore installation from 6.5 to 6.6. Part of this upgrade also upgrades the Lucene.Net library from 2.3.1.3 to 2.9.4.1, which introduces some breaking changes. The code base used a lot of custom code around the Lucene.Net search engine, which had to be removed for the installation to work. Now that I've done that, I'm trying to re-implement the search functions, but I can't get the simplest search to compile. For example, this code:
using (var sc = SearchManager.GetIndex("system").CreateSearchContext())
{
    var query = new FullTextQuery("health");
    SearchHits hits = sc.Search(query);
}
produces this error:
Error 104 The type 'Lucene.Net.Search.Query' is defined in an assembly
that is not referenced. You must add a reference to assembly
'Lucene.Net, Version=2.3.1.3, Culture=neutral,
PublicKeyToken=null'.
I've confirmed that I only have the 2.9.4.1 version of Lucene.Net referenced in my project. Why is this code looking for the 2.3.1.3 version?
@MarkCassidy nailed it - I did the upgrade on the server, but I was developing locally, so my local copy of the Sitecore.Kernel DLL was still at 6.5. Copying the 6.6 version down locally cleared up the compile error and let me know that my code example is obsolete, which is more along the lines of what I was expecting.