Error in reading value from SV in C++ function using DPI

I am trying to pass a string from SystemVerilog to a C++ function through DPI, but the value is not getting passed properly to the C++ side.
Code on the SV side:
import "DPI" function string mainsha(string str);
class scoreboard ;
string text_i_cplus;
string text_o_cplus;
text_i_cplus="abc";
text_o_cplus=mainsha(text_i_cplus);
That's how I am sending the value to C++. On the C++ side I am receiving the value as:
extern "C" string mainsha(string input)
{
string output1 = sha256(input);
cout << "sha256('"<< input << "'):" << output1 << endl;
return output1;
}
I get the correct output when I run the C++ program on its own, but on the simulator console I get the following output:
sha256(''):e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Can someone please suggest where I am going wrong, or what I am missing?

The DPI in the current SystemVerilog standard only supports the C language, not C++; you should be using import "DPI-C". In C++, string is a class, whereas C only has arrays of char:
extern "C" const char * mainsha(const char * input)
{
string output1 = sha256(input); // not sure if you need a cast here
cout << "sha256('"<< input << "'):" << output1 << endl;
return output1.c_str();
}
If you are using ModelSim/Questa, there is a -dpiheader switch that automatically generates the DPI prototypes for you; include that file when you compile your C++ code. That way you get a compiler error if there is a mismatch, instead of having to debug run-time behavior.
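As a rough sketch of that workflow (the header name dpiheader.h is only an example of what you might ask the tool to generate, and sha256() is assumed to be the helper from the question), the C++ side could then look like this:

#include <iostream>
#include <string>
#include "dpiheader.h"   // example name for the tool-generated DPI prototypes

std::string sha256(const std::string& input);   // assumed helper from the question

// If this signature drifts from the SV import declaration, the compiler
// complains at build time instead of the value silently arriving empty.
extern "C" const char* mainsha(const char* input)
{
    static std::string output1;   // keeps the returned pointer valid
    output1 = sha256(input);
    std::cout << "sha256('" << input << "'):" << output1 << std::endl;
    return output1.c_str();
}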

Related

C++ Gets Identifier Not Found

In my old C++ project I used to use the gets() function. I've done my research and noticed that it is no longer considered reliable, and my project won't run when using it.
I use this bit of code right here:
Load(gets(new char[50]));
How would I get this line of code to work properly now? An explanation would also be appreciated.
Here's a simple solution:
std::string text;
std::cout << "Enter some text to load: ";
std::getline(std::cin, text);
Load(text.c_str());
If you must use character arrays, here's a code fragment:
const size_t ARRAY_CAPACITY = 64U;
char text[ARRAY_CAPACITY];
std::cout << "Enter some text to load: ";
std::cin.getline(&text[0], ARRAY_CAPACITY);
Load(text);
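Putting the string-based version together as a complete, minimal program (the Load function here is only a hypothetical stand-in for the asker's own routine):

#include <iostream>
#include <string>

// Hypothetical stand-in for the real Load(); it only needs a C-style string.
void Load(const char* data)
{
    std::cout << "Loading: " << data << '\n';
}

int main()
{
    std::string text;
    std::cout << "Enter some text to load: ";
    std::getline(std::cin, text);
    Load(text.c_str());   // c_str() yields a null-terminated char array
    return 0;
}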

Converting string into the unsigned integer

I use the following code to convert a string into integers:
#include <iostream>

int main() {
    const char* mystring = "abcdefghijklmnop";
    unsigned int* key = (unsigned int*)mystring;
    for (int i = 0; i < 4; i++) {
        std::cout << i << ": " << key[i] << std::endl;
    }
    std::cout << std::endl << "Result for:" << mystring << std::endl;
}
Result:
0: 1684234849
1: 1751606885
2: 1818978921
3: 1886350957
Result for:abcdefghijklmnop
As you can see, it works fine, but only until the encoding differs, e.g. for a string like ®_ďÚ.J.®—Mf3Lý!® (ASCII) (see the Result for: line below).
It returns:
0: 3294604994
1: 781894543
2: 2931961418
3: 1301577954
Result for:®_ÄŹĂš.J.®—Mf3LĂ˝!® // <-- notice this, it's totally different from the input (`®_ďÚ.J.®—Mf3Lý!®`)
I tried setting the encoding in my IDE (NetBeans) but without any positive results. I also tried compiling the source on ideone.com and setting the browser encoding, unfortunately with the same results. Is there any way to generate the real result based on the input string's encoding without messing it up? Or is there some other way to achieve what I want?
Your IDE is entering the characters encoded as UTF-8. I verified this by working backwards from your numeric output; it produced ®_ďÚ.J.®—M. By the way, calling that ASCII is not accurate at all.
Your output window is using a different encoding. By the looks of it, it's code page 1250 (Central/Eastern European).
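To see what is really going on, it helps to look at the raw bytes the program actually received, independent of how the console displays them. The sketch below is not from the original answer: it prints each byte's numeric value and then assembles 32-bit words with memcpy, which also avoids the alignment and strict-aliasing problems of casting char* to unsigned int*:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    const char* mystring = "abcdefghijklmnop";   // substitute the problematic text here
    const std::size_t len = std::strlen(mystring);

    // Dump every raw byte; for non-ASCII input this exposes the actual
    // encoding (e.g. two or more bytes per character for UTF-8).
    for (std::size_t i = 0; i < len; ++i)
        std::cout << i << ": "
                  << static_cast<unsigned>(static_cast<unsigned char>(mystring[i]))
                  << '\n';

    // Pack the bytes into 32-bit values with memcpy instead of a pointer cast.
    std::uint32_t word;
    for (std::size_t i = 0; i + sizeof word <= len; i += sizeof word) {
        std::memcpy(&word, mystring + i, sizeof word);
        std::cout << "word " << i / sizeof word << ": " << word << '\n';
    }
}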

Python embedded in C: is calling PyRun_SimpleString synchronous?

The application's purpose is to translate the lemmas of the words in a sentence from Russian to English. I'm doing it with the help of an sdict-formatted dictionary, which is queried by a Python script that is called from a C++ program.
My goal is to get the following output:
Выставка/exhibition::1 конгресс/congress::2 организаторами/organizer::3 которой/ which::4 являются/appear::5 РАО/NONE::6 ЕЭС/NONE::7 России/NONE::8 EESR/NONE::9 нефтяная/oil::10 компания/company::11 ЮКОС/NONE::12 YUKOS/NONE::13 и/and::14 администрация/administration::15 Томской/NONE::16 области/region::17 продлится/last::18 четыре/four::19 дня/day::20
The following code succeeds for the first sentence; however, for the second sentence and onwards I get wrong output:
Егор/NONE::1 Гайдар/NONE::2 возглавлял/NONE::3 первое/head::4 российское/first::5 правительство/NONE::6 которое/government::7 называли/which::8 правительством/call::9 камикадзе/government::10
Note: NONE is used for words lacking translation.
I'm running the following C++ code excerpt which actually calls PyRun_SimpleString:
for (unsigned int i = 0; i < theSentenceRows->size(); i++) {
    stringstream ss;
    ss << (i + 1);
    parsedFormattedOutput << theSentenceRows->at(i)[FORMINDEX] << "/";
    getline(lemmaOutFileForTranslation, lemma);

    PyObject *main_module, *main_dict;
    PyObject *toTranslate_obj, *translation, *emptyString;

    /* Setup the __main__ module for us to use */
    main_module = PyImport_ImportModule("__main__");
    main_dict = PyModule_GetDict(main_module);

    /* Inject a variable into __main__, in this case toTranslate */
    toTranslate_obj = PyString_FromString(lemma.c_str());
    PyDict_SetItemString(main_dict, "start_word", toTranslate_obj);

    /* Run the code snippet above in the current environment */
    PyRun_SimpleString(pycode);
    usleep(2);
    translation = PyDict_GetItemString(main_dict, "translation");
    Py_XDECREF(toTranslate_obj);

    /* writing results */
    parsedFormattedOutput << PyString_AsString(translation) << "::" << ss.str() << " ";
Where pycode is defined as:
const char *pycode =
"import sys\n"
"import re\n"
"import sdictviewer.formats.dct.sdict as sdict\n"
"import sdictviewer.dictutil\n"
"dictionary = sdict.SDictionary( 'rus_eng_full2.dct' )\n"
"dictionary.load()\n"
"translation = \"*NONE*\"\n"
"p = re.compile('( )([a-z]+)(.*?)( )')\n"
"for item in dictionary.get_word_list_iter(start_word):\n"
" try:\n"
" if start_word == str(item):\n"
" instance, definition = item.read_articles()[0]\n"
" translation = p.findall(definition)[0][1]\n"
" except:\n"
" continue\n";
I noticed some delay in the second sentence's output, so I added the usleep(2); call on the C++ side, thinking that it happens because calling PyRun_SimpleString is not synchronous. It didn't help, however, and I'm not sure that this is the reason. The delay also happens for the sentences that follow, and it increases.
So, is the call to PyRun_SimpleString synchronous? Or maybe the way I share variable values between C++ and Python is not right?
Thank you in advance.
According to the docs, it is synchronous.
I would advise you to test the Python code separately from the C++ code; that would make debugging it much easier. One way of doing that is pasting the code into the interactive interpreter and executing it line by line. And when debugging, I would second Winston Ewert's comment not to discard exceptions.
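On the C++ side, one way to avoid silently discarding Python errors is to check the return values of the embedding calls. This is only a minimal sketch, assuming it sits inside the question's loop with pycode, main_dict, lemma, ss and parsedFormattedOutput as shown above:

/* PyRun_SimpleString returns 0 on success and -1 if an exception was raised
   (it prints the traceback itself), so a failure can be detected here. */
if (PyRun_SimpleString(pycode) != 0) {
    cerr << "Python snippet failed for lemma '" << lemma << "'" << endl;
    continue;
}

/* PyDict_GetItemString returns NULL when the key is missing; passing NULL
   on to PyString_AsString would misbehave, so guard against it. */
translation = PyDict_GetItemString(main_dict, "translation");
if (translation == NULL) {
    cerr << "no 'translation' set for lemma '" << lemma << "'" << endl;
    continue;
}
parsedFormattedOutput << PyString_AsString(translation) << "::" << ss.str() << " ";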

passing char arrays from c++ to fortran

I am having trouble passing char arrays from c++ to fortran (f90).
Here is my c++ file, 'cmain.cxx':
#include <iostream>
#include <cstring>   // for strcpy
using namespace std;
extern "C" int ftest_( char (*string)[4] );
int main() {
    char string[2][4];
    strcpy(string[0], "abc");
    strcpy(string[1], "xyz");
    cout << "c++: string[0] = '" << string[0] << "'" << endl;
    cout << "c++: string[1] = '" << string[1] << "'" << endl;
    ftest_(string);
    return 0;
}
Here is my fortran file, 'ftest.f90':
SUBROUTINE FTEST(string)
  CHARACTER*3 string(2)
  CHARACTER*3 expected(2)
  data expected(1)/'abc'/
  data expected(2)/'xyz'/
  DO i=1,2
    WRITE(6,10) i, string(i)
10  FORMAT("fortran: string(",i1,") = '", a, "'" )
    IF (string(i).eq.expected(i)) THEN
      WRITE(6,20) string(i), expected(i)
20    FORMAT("'",a,"' equals '",a,"'")
    ELSE
      WRITE(6,30) string(i), expected(i)
30    FORMAT("'",a,"' does not equal '",a,"'")
    END IF
  ENDDO
  RETURN
END
The build process is:
gfortran -c -m64 ftest.f90
g++ -c cmain.cxx
gfortran -m64 -lstdc++ -gnofor_main -o test ftest.o cmain.o
Edit: note that the executable can also be built via:
g++ -lgfortran -o test ftest.o cmain.o
Also, the -m64 flag is required as I am running OSX 10.6.
The output from executing 'test' is:
c++: string[0] = 'abc'
c++: string[1] = 'xyz'
fortran: string(1) = 'abc'
'abc' equals 'abc'
fortran: string(2) = 'xy'
'xy' does not equal 'xyz'
Declaring the 'string' and 'expected' character arrays in ftest.f90 with size 4, i.e.:
CHARACTER*4 string(2)
CHARACTER*4 expected(2)
and recompiling gives the following output:
c++: string[0] = 'abc'
c++: string[1] = 'xyz'
fortran: string(1) = 'abc'
'abc' does not equal 'abc '
fortran: string(2) = 'xyz'
'xyz' does not equal 'xyz '
Declaring the character arrays in 'cmain.cxx' with size 3, i.e.:
extern "C" int ftest_( char (*string)[3] );
int main() {
    char string[2][3];
and reverting to the original size in the fortran file (3), i.e.:
CHARACTER*3 string(2)
CHARACTER*3 expected(2)
and recompiling gives the following output:
c++: string[0] = 'abcxyz'
c++: string[1] = 'xyz'
fortran: string(1) = 'abc'
'abc' equals 'abc'
fortran: string(2) = 'xyz'
'xyz' equals 'xyz'
So the last case is the only one that works, but here I have assigned 3 characters to a char array of size 3, which means the terminating '\0' is missing and leads to the 'abcxyz' output; this is not acceptable for my intended application.
Any help would be greatly appreciated, this is driving me nuts!
C strings are zero-terminated, whereas Fortran strings are, by convention, space-padded and of fixed size. You shouldn't expect to be able to pass C strings to Fortran without some conversion.
For example:
#include <algorithm>
#include <cstring>   // for std::strlen

void ConvertToFortran(char* fstring, std::size_t fstring_len,
                      const char* cstring)
{
    std::size_t inlen = std::strlen(cstring);
    std::size_t cpylen = std::min(inlen, fstring_len);
    if (inlen > fstring_len)
    {
        // TODO: truncation error or warning
    }
    std::copy(cstring, cstring + cpylen, fstring);
    std::fill(fstring + cpylen, fstring + fstring_len, ' ');
}
Which you can then use with either the 3 or 4 length version of ftest:
#include <cstddef>
#include <iostream>
#include <ostream>
extern "C" int ftest_( char string[][4] );
void ConvertToFortran(char* fstring, std::size_t fstring_len,
                      const char* cstring);
int main()
{
    char cstring[2][4] = { "abc", "xyz" };
    char string[2][4];
    ConvertToFortran(string[0], sizeof string[0], cstring[0]);
    ConvertToFortran(string[1], sizeof string[1], cstring[1]);
    std::cout << "c++: string[0] = '" << cstring[0] << "'" << std::endl;
    std::cout << "c++: string[1] = '" << cstring[1] << "'" << std::endl;
    ftest_(string);
    return 0;
}
I recommend using the ISO C Binding on the Fortran side, as suggested by "High Performance Mark". You are already using extern "C". The ISO C Binding of Fortran 2003 (currently implemented in most Fortran 95 / partial Fortran 2003 compilers) makes this a compiler- and platform-independent approach. Charles Bailey described the differences between strings in the two languages. This Stack Overflow question has a code example: Calling a FORTRAN subroutine from C
If you don't want to modify existing Fortran code you could write a "glue" routine in between your C++ code and the existing Fortran code. Writing the glue routine in Fortran using the ISO C Binding would be more reliable and stable since this would be based on the features of a language standard.
The examples given are far too heavyweight; as long as you don't want to pass more than one string, you can make use of the "hidden" length parameter:
extern "C" void function_( const char* s, size_t len ) {
std::string some_string( s, 0, len );
/// do your stuff here ...
std::cout << "using string " << some_string << std::endl;
/// ...
}
which you can call from fortran like this:
call function( "some string or other" )
You don't need to faff about with individual copy operations, since the std::string constructor can do all that for you.

Get optarg as a C++ string object

I am using getopt_long to process command line arguments in a C++ application. The examples all show something like printf("Username: %s\n", optarg) when processing the options. That is fine for a demonstration, but I want to actually store the values for later use. Much of the rest of the code uses string objects instead of char*, so I need to cast/copy/whatever the contents of optarg into a string.
string bar;
while (1) {
    c = getopt_long (argc, argv, "s:U:", long_options, &option_index);
    if (c == -1) break;
    switch (c)
    {
    case 'U':
        // What do I need to do here to get
        // the value of optarg into the string
        // object bar?
        bar.assign(optarg);
        break;
    }
}
The above code compiles, but when it executes I get an Illegal instruction error if I try to print out the value of bar using printf (it seems to work just fine for cout).
// Runs just fine, although I'm not certain it is actually safe!
cout << " bar: " << bar << "\n";
// 'Illegal instruction'
printf(" bar: %s\n", bar);
I do not know enough about command-line debugging to dig further into what the illegal instruction might be. I have been running valgrind, but the sheer volume of memory errors that result have made it difficult to pinpoint exactly what is causing this.
You told printf that you were supplying a C-style string (a null-terminated array of chars) by specifying %s, but you passed a string object instead. Assuming you are using std::string, try:
printf("bar : %s\n", bar.c_str());
printf() can't handle C++ strings. Use bar.c_str() instead.
cout << " bar: " << bar << "\n";
is perfectly safe. What makes you think it might not be?
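For reference, here is a minimal, self-contained sketch of the corrected loop; the long option name "user" and the declarations around the loop are filled in for illustration and were not part of the original post:

#include <cstdio>
#include <iostream>
#include <string>
#include <getopt.h>

int main(int argc, char* argv[])
{
    static const struct option long_options[] = {
        { "user", required_argument, 0, 'U' },   // hypothetical long option
        { 0, 0, 0, 0 }
    };
    std::string bar;
    int option_index = 0;
    int c;
    while ((c = getopt_long(argc, argv, "U:", long_options, &option_index)) != -1) {
        switch (c) {
        case 'U':
            bar.assign(optarg);                  // copy the C string into the std::string
            break;
        }
    }
    std::cout << " bar: " << bar << "\n";        // safe: operator<< understands std::string
    std::printf(" bar: %s\n", bar.c_str());      // safe: %s now receives a C string
    return 0;
}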