I have been looking at the ODB ORM for some time now and have had some practice with it. My problem is switching between different DBMSs without recompiling the code. Coming from a Java background, I can simply change a config file and the ORM (e.g. Hibernate) works. So far I can compile the 'hello' example from 'odb-examples-2.2.0.tar.gz' and connect to MySQL and PostgreSQL successfully.
Please share your ways of resolving this. Code samples would also be very helpful. I would like to switch databases by, say, changing a config file. So far, referring to the manual has not helped. My system needs to be cross-platform.
Thanks.
If dynamic multi-database support is sufficient for you, then the following example will do the trick.
The following command line is needed before compiling the other files:
odb --std c++11 --multi-database dynamic -d common -d mysql -d sqlite \
--generate-query --generate-schema person.hxx
In my example I'm using the command line as given in the manual (section 2.10). From what I've read, as long as you stick to the database-independent odb::query and odb::transaction interfaces, you don't need to do anything else special to work with multiple databases.
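For instance, the backend could be chosen from a config value at run time along these lines (a minimal sketch: the person class and its first() accessor are the ones from the hello example, while open_db, db_kind and the connection details are hypothetical placeholders):

#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

#include <odb/database.hxx>
#include <odb/transaction.hxx>
#include <odb/mysql/database.hxx>
#include <odb/sqlite/database.hxx>

#include "person.hxx"
#include "person-odb.hxx" // generated by the odb compiler

// db_kind would come from your config file, e.g. "mysql" or "sqlite".
std::unique_ptr<odb::database> open_db (const std::string& db_kind)
{
  if (db_kind == "mysql")
    return std::unique_ptr<odb::database> (
      new odb::mysql::database ("user", "passwd", "hello_db"));
  else if (db_kind == "sqlite")
    return std::unique_ptr<odb::database> (
      new odb::sqlite::database ("hello.db"));
  throw std::runtime_error ("unknown database kind: " + db_kind);
}

int main ()
{
  // Everything below works through the common odb::database interface.
  std::unique_ptr<odb::database> db (open_db ("sqlite"));
  odb::transaction t (db->begin ());
  for (person& p: db->query<person> ())
    std::cout << p.first () << std::endl;
  t.commit ();
}

You link against libodb plus the runtime libraries for whichever backends you include (here libodb-mysql and libodb-sqlite); the common code itself never mentions a concrete database.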
Is it possible to
db2 connect somedb user myuser using mypwd
db2 precompile myapp.sqx OUTPUT myapp.cxx
when I only have read permission to the remote DB2 database? I'm only trying to SELECT, not write to the database, yet the precompile command complains that I don't have permission to "create in" ... What can I do differently so that I can query the database using C++? (I already have a ton of inherited code that uses embedded SQL precompiling, but the person who wrote it had write permission to the tables and I don't, so I'm hoping to adapt the existing code somehow.)
You need to use the BINDFILE option for the PRECOMPILE command if you do not have the ability to create packages in the database:
db2 "precompile myapp.sqx BINDFILE USING myapp.bnd OUTPUT myapp.cxx"
This will generate a file, myapp.bnd, that you can use (or provide to your DBA) to create the package at a later date (along with the myapp.cxx file).
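Creating the package later then boils down to something like this (the connection details are placeholders; whoever runs it needs the package-creation authority you lack):

db2 connect to somedb user dbauser using dbapwd
db2 bind myapp.bnd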
Please make sure that you track your bind files carefully with your precompiled code and binaries. The bind files and generated source code are paired, so if you supply the wrong bind file with your binary you'll end up with version mismatch errors.
Does anyone know how to chain two MapReduce jobs with the Pipes API?
I already chained two MapReduce jobs in a previous project in Java, but today I need to use C++. Unfortunately, I haven't seen any examples in C++.
Has someone already done it? Is it impossible?
Use an Oozie workflow. It allows you to use Pipes jobs along with usual MapReduce jobs.
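Roughly, each Pipes job becomes a map-reduce action with a pipes element, and chaining is just the ok transition from one action to the next. A sketch of the first action (names, paths and the schema version are placeholders; check the Oozie workflow specification for the exact schema):

<workflow-app name="pipes-chain" xmlns="uri:oozie:workflow:0.2">
  <start to="first-job"/>
  <action name="first-job">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <pipes>
        <program>${nameNode}/apps/first-binary</program>
      </pipes>
      <configuration>
        <property>
          <name>mapred.input.dir</name>
          <value>/data/input</value>
        </property>
        <property>
          <name>mapred.output.dir</name>
          <value>/data/between</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="second-job"/>
    <error to="fail"/>
  </action>
  <!-- "second-job" is an analogous action that reads /data/between -->
  ...
</workflow-app>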
I finally managed to make Hadoop Pipes work. Here are some steps to get the wordcount examples available in src/examples/pipes/impl/ working.
I have a working Hadoop 1.0.4 cluster, configured following the steps described in the documentation.
To write a Pipes job I had to include the Pipes library that is already compiled in the initial package. It can be found in the c++ folder for both 32-bit and 64-bit architectures. However, I had to recompile it, which can be done with the following steps:
# cd /src/c++/utils
# ./configure
# make install
# cd /src/c++/pipes
# ./configure
# make install
Those commands compile the library for our architecture and create an 'install' directory in /src/c++ containing the compiled files.
Moreover, I had to add the -lssl and -lcrypto link flags to compile my program. Without them I encountered an authentication exception at run time.
Thanks to those steps I was able to run the wordcount-simple example that can be found in the src/examples/pipes/impl/ directory.
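For reference, the heart of that example looks roughly like this (paraphrased from the Hadoop 1.0.4 example sources; link against the libraries above, e.g. -lhadooppipes -lhadooputils -lpthread -lssl -lcrypto):

#include <string>
#include <vector>

#include "hadoop/Pipes.hh"
#include "hadoop/TemplateFactory.hh"
#include "hadoop/StringUtils.hh"

// Mapper: emit <word, "1"> for every word in the input value.
class WordCountMap : public HadoopPipes::Mapper {
public:
  WordCountMap(HadoopPipes::TaskContext& context) {}
  void map(HadoopPipes::MapContext& context) {
    std::vector<std::string> words =
      HadoopUtils::splitString(context.getInputValue(), " ");
    for (unsigned i = 0; i < words.size(); ++i)
      context.emit(words[i], "1");
  }
};

// Reducer: sum the counts emitted for each word.
class WordCountReduce : public HadoopPipes::Reducer {
public:
  WordCountReduce(HadoopPipes::TaskContext& context) {}
  void reduce(HadoopPipes::ReduceContext& context) {
    int sum = 0;
    while (context.nextValue())
      sum += HadoopUtils::toInt(context.getInputValue());
    context.emit(context.getInputKey(), HadoopUtils::toString(sum));
  }
};

int main(int argc, char* argv[]) {
  return HadoopPipes::runTask(
    HadoopPipes::TemplateFactory<WordCountMap, WordCountReduce>());
}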
However, to run the more complex wordcount-nopipe example, I had to take some additional steps. Due to the implementation of its record reader and record writer, it reads and writes directly on the local file system. That's why the input and output paths have to be specified with file://. Moreover, a dedicated InputFormat component has to be used. Thus, to launch this job I had to use the following command:
# bin/hadoop pipes -D hadoop.pipes.java.recordreader=false -D hadoop.pipes.java.recordwriter=false -libjars hadoop-1.0.4/build/hadoop-test-1.0.4.jar -inputformat org.apache.hadoop.mapred.pipes.WordCountInputFormat -input file:///input/file -output file:///tmp/output -program wordcount-nopipe
Furthermore, if we look at org.apache.hadoop.mapred.pipes.Submitter.java in version 1.0.4, the current implementation disables the ability to specify a non-Java record reader if you use the -inputformat option.
Thus you have to comment out the line setIsJavaRecordReader(job, true); to make this possible, and recompile the core sources to take the change into account (http://web.archiveorange.com/archive/v/RNVYmvP08OiqufSh0cjR):
if (results.hasOption("-inputformat")) {
    // setIsJavaRecordReader(job, true); // commented out so a C++ record reader can be used
    job.setInputFormat(getClass(results, "-inputformat", job, InputFormat.class));
}
There have been a couple of threads on this topic in the past claiming that Sphinx doesn't support this at all. I had my doubts, but either it has been updated since, or the documentation for it was quite well hidden, because here is a link on the website stating otherwise:
http://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#cpp-domain
Anyway, I'm new to Sphinx but am trying to use it to (eventually) automate documentation using text from some C++ source code. So far I haven't been able to get anywhere using the sphinx-apidoc -o ... command; an almost blank document is created. I'm probably not using the right directives, since I don't know how; the supporting documentation hasn't been able to help me.
Can anyone provide some assistance with the basic steps needed to get this working? If it is not possible to auto-generate documentation from C++, what are the C++ domains for, and how do I use them?
On auto-generating C++ documentation:
After reading up on how to use Sphinx in general, you should have a look at breathe:
Breathe provides a bridge between the Sphinx and Doxygen documentation
systems.
It is an easy way to include Doxygen information in a set of
documentation generated by Sphinx. The aim is to produce an autodoc
like support for people who enjoy using Sphinx but work with languages
other than Python. The system relies on Doxygen's xml output.
So additionally, you'll need to follow the Doxygen commenting style and even set up a Doxygen project. But I tried it, and it works really well once the initial setup is in place. Here is an excerpt of our CMakeLists.txt which might give you an idea of how Sphinx and Doxygen work together:
macro(add_sphinx_target TARGET_NAME BUILDER COMMENT_STR)
    add_custom_target(${TARGET_NAME}
        COMMAND sphinx-build -b ${BUILDER} . sphinx/build/${BUILDER}
        WORKING_DIRECTORY docs
        DEPENDS doxygen
        COMMENT ${COMMENT_STR}
    )
endmacro(add_sphinx_target)

add_custom_target(doxygen
    COMMAND doxygen docs/doxygen.conf
    COMMENT "Build doxygen xml files used by sphinx/breathe."
)

add_sphinx_target(docs-html
    html
    "Build html documentation"
)
So after the initial setup, it essentially boils down to:
build doxygen documentation with doxygen path/to/config
cd into the directory where the sphinx configuration is.
build sphinx documentation with sphinx-build . path/to/output
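On the breathe side, the Sphinx conf.py needs to know where the Doxygen xml output lives; a minimal excerpt might look like this (the project name and xml path are placeholders for whatever your doxygen.conf produces):

extensions = ['breathe']
breathe_projects = {'myproject': 'doxygen/xml'}
breathe_default_project = 'myproject'

After that, you can pull Doxygen content into your reST files with breathe directives such as:

.. doxygenclass:: namespaced::theclass
   :members: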
On the C++ domain:
Sphinx is a "little bit" more than a system to auto-generate documentation. I would suggest you have a look at the examples (and consider that the Sphinx website itself is written in Sphinx reST code); in particular, click the Show Source link on many Sphinx-generated pages.
So if you cannot generate documentation automatically for a project, you have to do it yourself. Basically, Sphinx is a reST-to-whatever (LaTeX, HTML, ...) compiler. So you can write arbitrary text, but the advantage is that it has a lot of directives for documenting source code in different languages. Each language gets its own domain (a prefix, or namespace) to keep the different languages apart. So, for example, I can document a Python function using:
.. py:function:: Timer.repeat([repeat=3[, number=1000000]])

   Does something nasty with timers in repetition
I can do the same using the cpp domain:
.. cpp:function:: bool namespaced::theclass::method(int arg1, std::string arg2)

   Describes a method with parameters and types.
So if you want to document your C++ project with Sphinx but without doxygen+breathe, you'll have to write the reStructuredText files yourself. This also means that you split the documentation from your source code, which can be undesirable.
I hope that clears things up a bit. For further reading I strongly suggest having a good read of the Sphinx tutorial and documentation until you understand what it actually does.
I need to have some of my C++ classes, functions and namespaces renamed as part of my build script, which is run by my CI system.
Unfortunately a simple sed/awk/gsar/... technique is not enough; I need smart rename refactoring that carefully analyses my code.
I actually found out that the CDT C/C++ rename refactoring does what I need, but it does it from the Eclipse IDE. So I need to find a way to start it from the command line and make it part of my CI build script.
I know that Eclipse has an eclipsec executable that allows running some Eclipse functions from the command line (see e.g. here).
But I can't find any suitable documentation for the functions CDT exports to the command line. The only thing I found is this, but it doesn't solve my problem.
So, I need help running the CDT rename refactoring from the command line (or some way like that). If that is not possible, maybe someone can advise another tool that can do rename refactoring for C++ from the command line?
Pragmatic Approach
"I need to have renamed as a part of my build script"
This sounds a bit like a design problem. However, I remember having been guilty of the same sin once writing a C++ application on AIX/Win32: most notably, I wanted to be able to link 'conflicting' versions of shared objects. I solved it using a simple preprocessor hack like this:
# makefile
ifdef ALTERNATIVE
CPPFLAGS += -DLIBNAMESPACE=MYLIB_ALTERNATIVE
else
CPPFLAGS += -DLIBNAMESPACE=MYLIB
endif

# (the recipe line must start with a tab)
./obj64/%.o: %.cpp
	xlC++ $(CPPFLAGS) -c $^ -o $@
Sample source/header file:
namespace LIBNAMESPACE
{
    class LibService
    {
    };
}
As you can see, adopting this required only a single one-time rename across the sources:
find . \( -iname '*.[hc]pp' -o -iname '*.[hc]' \) -print0 |
    xargs -0 sed -i 's/OldNamespace/LIBNAMESPACE/g'
Eclipse Automation
You could have a look at eclim, which does most, if not all, of what you describe; however, it targets the vim editor.
What eclim boasts is full Eclipse integration (completion, refactoring, usage search, etc.) from an external program. I'm not fully up to speed on the backend of eclim, but I do know that it works with an eclimd server process that exposes the service interface used by the vim plugin.
I suspect you should be able to reuse the code from eclimd, if not simply use eclim itself for your purposes.
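To give an idea of the interaction model (the command and project names here are illustrative; check the eclim documentation for what is actually exposed), the eclim client talks to eclimd with invocations along the lines of:

eclim -command ping
eclim -command project_list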
We are completing a command-line rename tool for C++ that uses compiler-accurate parsing and name resolution, including handling of shadowed names. Contact me (see bio) for further details or if you might be interested in a beta.
I have a C shell script that calls two C programs, one after the other, with some file handling before, in between, and afterwards.
Now, as such, I have three different files: one C shell script and two .c files.
I need to give this script to other users. The problem is that I have to distribute three files, which the users must keep in the same folder, and then execute the script.
Is there some better way to do this?
[I know I can make one C code file out of those two... but I will still be left with a shell script and a C file. Actually, the two C programs do entirely different things... so I want them to be separate.]
It sounds like you're worried that your users aren't savvy enough to figure out how to resolve issues like command-not-found errors and the like. If you absolutely MUST hide the "complexity" of a collection of files, you could have your script create the other files. In most other circumstances I would suggest that this approach is only going to increase your support workload, since semi-experienced users are less likely to know how to troubleshoot the process.
If you choose to rely on the presence of a compiler on the target system, you can store the C code as a series of echo "$STRING" >> file.c commands (or here documents) that recreate your two C files, which you then compile and use.
If you want to ship pre-compiled programs instead, the same basic process can be used, except you use xxd both to generate the strings embedded in your script and to reverse the conversion, giving you working binaries. Note: remember to chmod the binaries so that they are executable.
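A rough sketch of the xxd round trip (file names are placeholders):

# at packaging time: dump the compiled binary as hex text to embed in the script
xxd -p myprog > myprog.hex

# inside the distributed script: reverse the dump and mark the result executable
xxd -p -r myprog.hex > myprog
chmod +x myprog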
Use the shar command to create a self-extracting archive.
Or, better yet, use unzipsfx with the AUTORUN option.
This provides users with ONE file and only ONE command to execute (as opposed to one for untarring and one for execution).
NOTE: The unzip command to run should use the "-n" option; that way only the first run extracts the files and subsequent runs skip the extraction.
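For the shar route, the packaging step is a one-liner (file names are placeholders; recipients unpack the bundle with sh):

shar runme.csh prog1.c prog2.c > bundle.shar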
Use a zip or tar file? And you do realize that .c files aren't executable; you need to compile and link them first?
You can include the C code inside the shell script as a here document:
#!/bin/bash
# recreate the C source; quoting 'EOF' stops the shell from expanding $... in the C code
cat > code.c << 'EOF'
#include <stdio.h>
int main(void)
{
    printf("hello\n");
    return 0;
}
EOF
# compile
cc code.c -o code
# execute
./code
If you want to get fancy, you can test for the existence of the executable and skip compiling if it already exists.
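For example (assuming the file names from the sketch above):

# only compile on the first run
if [ ! -x ./code ]; then
    cc code.c -o code
fi
./code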
If you are doing much shell programming, the rest of the Advanced Bash-Scripting Guide is worth looking at as well.