OpenCL result changes with arbitrary code alterations that are not related - c++

This is a very strange issue. I'm working on a GPU-based crypto miner and I have an issue with a SHA hash function.
1 - The initial function calls a SHA256 routine and then prints the results. I'm comparing those results to a CPU based SHA256 to make sure I get the same thing.
2 - Later on in the function, there are other operations that occur, such as addition, XOR and additional SHA rounds.
As part of the miner kernel, I wrote an auxiliary function to decompose an array of 8 uints into an array of 32 unsigned chars, using an AND mask and bit shifts.
I'm calling the kernel with a global/local work size of 1.
So, here's where things get really strange. The part I am comparing is the very first SHA. I take a buffer of 80 bytes in, SHA it and then print the result. It matches under certain conditions. However, if I make changes to the code that executes AFTER that SHA, then it doesn't match. This is what I've been able to narrow down:
1 - If I put a printf debug in the decomposition auxiliary function, the results match. Just removing that printf causes it to mismatch.
2 - There are 4 operations I use to decompose the uint into char. I tried lots of different ways to do this with the same result. However, if I remove any 1 of the 4 "for" loops in the routine, it matches. Simply removing a for loop in code that gets executed -after- the initial code, changes the result of the initial SHA.
3 - If I change my while loop to never execute then it matches. Again, this is all -after- the initial SHA comparison.
4 - If I remove all the calls to the auxiliary function, then it matches. Simply calling the function after the initial SHA causes a mismatch.
I've tried adding memory guards everywhere, but since it's 1 global and 1 local work-item, I don't see how that could apply.
I'd love to debug this, but apparently OpenCL cannot be debugged in VS 2019 (really?).
Any thoughts, guesses, insight would be appreciated.
Thanks!
inline void loadUintHash ( __global unsigned char* dest, __global uint* src) {
//**********if I remove this it doesn't work
printf ("src1 %08x%08x%08x%08x%08x%08x%08x%08x",
src[0],
src[1],
src[2],
src[3],
src[4],
src[5],
src[6],
src[7]
);
//**********if I take away any one of these 4 for loops, then it works
for ( int i = 0; i < 8; i++)
dest[i*4+3] = (src[i] & 0xFF000000) >> 24;
for ( int i = 0; i < 8; i++)
dest[i*4+2] = (src[i] & 0x00FF0000) >> 16;
for ( int i = 0; i < 8; i++)
dest[i*4+1] = (src[i] & 0x0000FF00) >> 8;
for ( int i = 0; i < 8; i++)
dest[i*4] = (src[i] & 0x000000FF);
//**********if I remove this it doesn't work
printf ("src2 %08x%08x%08x%08x%08x%08x%08x%08x",
src[0],
src[1],
src[2],
src[3],
src[4],
src[5],
src[6],
src[7]
);
}
#define HASHOP_ADD 0
#define HASHOP_XOR 1
#define HASHOP_SHA_SINGLE 2
#define HASHOP_SHA_LOOP 3
#define HASHOP_MEMGEN 4
#define HASHOP_MEMADD 5
#define HASHOP_MEMXOR 6
#define HASHOP_MEM_SELECT 7
#define HASHOP_END 8
__kernel void dyn_hash (__global uint* byteCode, __global uint* memGenBuffer, int memGenSize, __global uint* hashResult, __global char* foundFlag, __global unsigned char* header, __global unsigned char* shaScratch) {
int computeUnitID = get_global_id(0);
__global uint* myMemGen = &memGenBuffer[computeUnitID * memGenSize * 8]; //each memGen unit is 256 bits, or 8 uints
__global uint* myHashResult = &hashResult[computeUnitID * 8];
__global char* myFoundFlag = foundFlag + computeUnitID;
__global unsigned char* myHeader = header + (computeUnitID * 80);
__global unsigned char* myScratch = shaScratch + (computeUnitID * 32);
sha256 ( computeUnitID, 80, myHeader, myHashResult );
//**********this is the result I am comparing
if (computeUnitID == 0) {
printf ("gpu first sha uint %08x%08x%08x%08x%08x%08x%08x%08x",
myHashResult[0],
myHashResult[1],
myHashResult[2],
myHashResult[3],
myHashResult[4],
myHashResult[5],
myHashResult[6],
myHashResult[7]
);
}
uint linePtr = 0;
uint done = 0;
uint currentMemSize = 0;
uint instruction = 0;
//**********if I change this to done == 1, then it works
while (done == 0) {
if (byteCode[linePtr] == HASHOP_ADD) {
linePtr++;
uint arg1[8];
for ( int i = 0; i < 8; i++)
arg1[i] = byteCode[linePtr+i];
linePtr += 8;
}
else if (byteCode[linePtr] == HASHOP_XOR) {
linePtr++;
uint arg1[8];
for ( int i = 0; i < 8; i++)
arg1[i] = byteCode[linePtr+i];
linePtr += 8;
}
else if (byteCode[linePtr] == HASHOP_SHA_SINGLE) {
linePtr++;
}
else if (byteCode[linePtr] == HASHOP_SHA_LOOP) {
printf ("HASHOP_SHA_LOOP");
linePtr++;
uint loopCount = byteCode[linePtr];
for ( int i = 0; i < loopCount; i++) {
loadUintHash(myScratch, myHashResult);
sha256 ( computeUnitID, 32, myScratch, myHashResult );
if (computeUnitID == 1) {
loadUintHash(myScratch, myHashResult);
... more irrelevant code...
This is how the kernel is being called:
size_t globalWorkSize = 1;// computeUnits;
size_t localWorkSize = 1;
returnVal = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &globalWorkSize, &localWorkSize, 0, NULL, NULL);

The issue ended up being multiple things. 1 - The CPU SHA had a bug in it that was causing an incorrect result in some cases. 2 - There was a very strange syntax error which seems to have broken the compiler in a weird way:
void otherproc () {
...do stuff...
}
if (something) {/
...other code
}
That forward slash after the opening curly brace was messing up "otherproc" in a weird way, and the compiler did not throw an error. After staring at the code line by line I found that slash, removed it, and everything started working.
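For anyone hitting something similar, it can help to dump the kernel build log explicitly, since some drivers accept questionable kernel code with only a warning. A minimal sketch, assuming the program and device objects already exist (clBuildProgram and clGetProgramBuildInfo are standard OpenCL calls):
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>
/* Sketch: print the build log even when clBuildProgram reports success,
   so silent compiler problems like the stray slash above show up. */
void dumpBuildLog(cl_program program, cl_device_id device) {
    size_t logSize = 0;
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &logSize);
    char* log = (char*)malloc(logSize + 1);
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, logSize, log, NULL);
    log[logSize] = '\0';
    printf("OpenCL build log:\n%s\n", log);
    free(log);
}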
If anyone is interested, the working implementation of a GPU miner can be found here:
https://github.com/dynamofoundation/dyn_miner

Related

What's the fastest way to get all the hot bits in uint64_t variable? [duplicate]

I need a fast way to get the position of all one bits in a 64-bit integer. For example, given x = 123703, I'd like to fill an array idx[] = {0, 1, 2, 4, 5, 8, 9, 13, 14, 15, 16}. We can assume we know the number of bits a priori. This will be called 10^12 - 10^15 times, so speed is of the essence. The fastest answer I've come up with so far is the following monstrosity, which uses each byte of the 64-bit integer as an index into tables that give the number of bits set in that byte and the positions of the ones:
int64_t x; // this is the input
unsigned char idx[K]; // this is the array of K bits that are set
unsigned char *dst=idx, *src;
unsigned char zero, one, two, three, four, five; // these hold the 0th-5th bytes
zero = x & 0x0000000000FFUL;
one = (x & 0x00000000FF00UL) >> 8;
two = (x & 0x000000FF0000UL) >> 16;
three = (x & 0x0000FF000000UL) >> 24;
four = (x & 0x00FF00000000UL) >> 32;
five = (x & 0xFF0000000000UL) >> 40;
src=tab0+tabofs[zero ]; COPY(dst, src, n[zero ]);
src=tab1+tabofs[one ]; COPY(dst, src, n[one ]);
src=tab2+tabofs[two ]; COPY(dst, src, n[two ]);
src=tab3+tabofs[three]; COPY(dst, src, n[three]);
src=tab4+tabofs[four ]; COPY(dst, src, n[four ]);
src=tab5+tabofs[five ]; COPY(dst, src, n[five ]);
where COPY is a switch statement to copy up to 8 bytes, n is array of the number of bits set in a byte and tabofs gives the offset into tabX, which holds the positions of the set bits in the X-th byte. This is about 3x faster than unrolled loop-based methods with __builtin_ctz() on my Xeon E5-2609. (See below.) I am currently iterating x in lexicographical order for a given number of bits set.
Is there a better way?
EDIT: Added an example (that I have subsequently fixed). Full code is available here: http://pastebin.com/79X8XL2P . Note: GCC with -O2 seems to optimize it away, but Intel's compiler (which I used to compose it) doesn't...
Also, let me give some additional background to address some of the comments below. The goal is to perform a statistical test on every possible subset of K variables out of a universe of N possible explanatory variables; the specific target right now is N=41, but I can see some projects needing N up to 45-50. The test basically involves factorizing the corresponding data submatrix. In pseudocode, something like this:
double doTest(double *data, int64_t model) {
int nidx, idx[];
double submatrix[][];
nidx = getIndices(model, idx); // get the locations of ones in model
// copy data into submatrix
for(int i=0; i<nidx; i++) {
for(int j=0; j<nidx; j++) {
submatrix[i][j] = data[idx[i]][idx[j]];
}
}
factorize(submatrix, nidx);
return the_answer;
}
I coded up a version of this for an Intel Phi board that should complete the N=41 case in about 15 days, of which ~5-10% of the time is spent in a naive getIndices(), so right off the bat a faster version could save a day or more. I'm working on an implementation for NVIDIA Kepler too, but unfortunately the problem I have (ludicrous numbers of small matrix operations) is not ideally suited to the hardware (ludicrously large matrix operations). That said, this paper presents a solution that seems to achieve hundreds of GFLOPS on matrices of my size by aggressively unrolling loops and performing the entire factorization in registers, with the caveat that the dimensions of the matrix be defined at compile time. (This loop unrolling should help reduce overhead and improve vectorization in the Phi version too, so getIndices() will become more important!) So now I'm thinking my kernel should look more like:
double *data; // move data to GPU/Phi once into shared memory
template<unsigned int K> double doTestUnrolled(int *idx) {
double submatrix[K][K];
// copy data into submatrix
#pragma unroll
for(int i=0; i<K; i++) {
#pragma unroll
for(int j=0; j<K; j++) {
submatrix[i][j] = data[idx[i]][idx[j]];
}
}
factorizeUnrolled<K>(submatrix);
return the_answer;
}
The Phi version solves each model in a cilk_for loop from model=0 to 2^N (or, rather, a subset for testing), but now in order to batch work for the GPU and amortize the kernel launch overhead I have to iterate model numbers in lexicographical order for each of K=1 to 41 bits set (as doynax noted).
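For reference, the lexicographic stepping itself can be done with the well-known "next bit permutation" bit-twiddling trick; this is only a sketch of that step (not benchmarked here) using __builtin_ctzll:
#include <stdint.h>
/* Next 64-bit word with the same number of set bits, in lexicographic order.
   Standard bit-twiddling recipe; precondition: v != 0. */
uint64_t next_same_popcount(uint64_t v) {
    uint64_t t = v | (v - 1);          /* fill the low-order zeros below the lowest set bit */
    return (t + 1) | (((~t & -(~t)) - 1) >> (__builtin_ctzll(v) + 1));
}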
EDIT 2: Now that vacation is over, here are some results on my Xeon E5-2602 using icc version 15. The code that I used to benchmark is here: http://pastebin.com/XvrGQUat. I perform the bit extraction on integers that have exactly K bits set, so there is some overhead for the lexicographic iteration measured in the "Base" column in the table below. These are performed 2^30 times with N=48 (repeating as necessary).
"CTZ" is a loop that uses the the gcc intrinsic __builtin_ctzll to get the lowest order bit set:
for(int i=0; i<K; i++) {
idx[i] = __builtin_ctzll(tmp);
lb = tmp & -tmp; // get lowest bit
tmp ^= lb; // remove lowest bit from tmp
}
Mark is Mark's branchless for loop:
for(int i=0; i<K; i++) {
*dst = i;
dst += x & 1;
x >>= 1;
}
Tab1 is my original table-based code with the following copy macro:
#define COPY(d, s, n) \
switch(n) { \
case 8: *(d++) = *(s++); \
case 7: *(d++) = *(s++); \
case 6: *(d++) = *(s++); \
case 5: *(d++) = *(s++); \
case 4: *(d++) = *(s++); \
case 3: *(d++) = *(s++); \
case 2: *(d++) = *(s++); \
case 1: *(d++) = *(s++); \
case 0: break; \
}
Tab2 is the same code as Tab1, but the copy macro just moves 8 bytes as a single copy (taking ideas from doynax and Lưu Vĩnh Phúc... but note this does not ensure alignment):
#define COPY2(d, s, n) { *((uint64_t *)d) = *((uint64_t *)s); d+=n; }
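If the type-punned store is a concern, a memcpy-based form of the same macro should compile to the same single 8-byte move while staying well-defined; this is my untested variant, not one of the benchmarked versions:
#include <string.h>
/* Same idea as COPY2, but memcpy sidesteps the aliasing/alignment issue. */
#define COPY3(d, s, n) { memcpy((d), (s), 8); (d) += (n); }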
Here are the results. I guess my initial claim that Tab1 is 3x faster than CTZ only holds for large K (where I was testing). Mark's loop is faster than my original code, but getting rid of the branch in the COPY2 macro takes the cake for K > 8.
K Base CTZ Mark Tab1 Tab2
001 4.97s 6.42s 6.66s 18.23s 12.77s
002 4.95s 8.49s 7.28s 19.50s 12.33s
004 4.95s 9.83s 8.68s 19.74s 11.92s
006 4.95s 16.86s 9.53s 20.48s 11.66s
008 4.95s 19.21s 13.87s 20.77s 11.92s
010 4.95s 21.53s 13.09s 21.02s 11.28s
015 4.95s 32.64s 17.75s 23.30s 10.98s
020 4.99s 42.00s 21.75s 27.15s 10.96s
030 5.00s 100.64s 35.48s 35.84s 11.07s
040 5.01s 131.96s 44.55s 44.51s 11.58s
I believe the key to performance here is to focus on the larger problem rather than on micro-optimizing the extraction of bit positions out of a random integer.
Judging by your sample code and previous SO question you are enumerating all words with K bits set in order, and extracting the bit indices out of these. This greatly simplifies matters.
If so then instead of rebuilding the bit position each iteration try directly incrementing the positions in the bit array. Half of the time this will involve a single loop iteration and increment.
Something along these lines:
// Walk through all len-bit words with num-bits set in order
void enumerate(size_t num, size_t len) {
size_t i;
unsigned int bitpos[64 + 1];
// Seed with the lowest word plus a sentinel
for(i = 0; i < num; ++i)
bitpos[i] = i;
bitpos[i] = 0;
// Here goes the main loop
do {
// Do something with the resulting data
process(bitpos, num);
// Increment the least-significant series of consecutive bits
for(i = 0; bitpos[i + 1] == bitpos[i] + 1; ++i)
bitpos[i] = i;
// Stop on reaching the top
} while(++bitpos[i] != len);
}
// Test function
void process(const unsigned int *bits, size_t num) {
do
printf("%d ", bits[--num]);
while(num);
putchar('\n');
}
Not particularly optimized but you get the general idea.
Here's something very simple which might be faster - no way to know without testing. Much will depend on the number of bits set vs. the number unset. You could unroll this to remove branching altogether but with today's processors I don't know if it would speed up at all.
unsigned char idx[K+1]; // need one extra for overwrite protection
unsigned char *dst=idx;
for (unsigned char i = 0; i < 50; i++)
{
*dst = i;
dst += x & 1;
x >>= 1;
}
P.S. your sample output in the question is wrong, see http://ideone.com/2o032E
As a minimal modification:
int64_t x;
char idx[K+1] = ""; // start empty so the strcat below appends to a valid string
char *dst=idx;
const int BITS = 8;
for (int i = 0 ; i < 64+BITS; i += BITS) {
int y = (x & ((1<<BITS)-1));
char* end = dst + strlen(strcat(dst, tab[y])); // tab[y] is a _string_; strcat returns dst, not the end
for (; dst != end; ++dst)
{
*dst += (i - 1); // tab[] is null-terminated so bit positions are 1 to BITS.
}
x >>= BITS;
}
The choice of BITS determines the size of the table. 8, 13 and 16 are logical choices. Each entry is a string, zero-terminated and containing bit positions with 1 offset. I.e. tab[5] is "\x03\x01". The inner loop fixes this offset.
Slightly more efficient: replace the strcat and inner loop by
char const* ptr = tab[y];
while (*ptr)
{
*dst++ = *ptr++ + (i-1);
}
Loop unrolling can be a bit of a pain if the loop contains branches, because copying those branch statements doesn't help the branch predictor. I'll happily leave that decision to the compiler.
One thing I'm considering is that tab[y] is an array of pointers to strings. These are highly similar: "\x1" is a suffix of "\x3\x1". In fact, each string which doesn't start with "\x8" is a suffix of a string which does. I'm wondering how many unique strings you need, and to what degree tab[y] is in fact needed. E.g. by the logic above, tab[128+x] == tab[x]-1.
[edit]
Nevermind, you definitely need 128 tab entries starting with "\x8" since they're never the suffix of another string. Still, the tab[128+x] == tab[x]-1 rule means that you can save half the entries, but at the cost of two extra instructions: char const* ptr = tab[x & 0x7F] - ((x>>7) & 1). (Set up tab[] to point after the \x8)
Using char wouldn't increase speed; in fact it often needs more ANDing and sign/zero extension during calculation. Smaller integer types are only worth it for very large arrays that need to fit in cache.
Another thing you can improve is the COPY macro. Instead of copying byte by byte, copy a whole word where possible:
inline void COPY(unsigned char *dst, unsigned char *src, int n)
{
switch(n) { // remember to align dst and src when declaring
case 8:
*((int64_t*)dst) = *((int64_t*)src);
break;
case 7:
*((int32_t*)dst) = *((int32_t*)src);
*((int16_t*)(dst + 4)) = *((int16_t*)(src + 4));
dst[6] = src[6];
break;
case 6:
*((int32_t*)dst) = *((int32_t*)src);
*((int16_t*)(dst + 4)) = *((int16_t*)(src + 4));
break;
case 5:
*((int32_t*)dst) = *((int32_t*)src);
dst[4] = src[4];
break;
case 4:
*((int32_t*)dst) = *((int32_t*)src);
break;
case 3:
*((int16_t*)dst) = *((int16_t*)src);
dst[2] = src[2];
break;
case 2:
*((int16_t*)dst) = *((int16_t*)src);
break;
case 1:
dst[0] = src[0];
break;
case 0:
break;
}
}
Also, since tabofs[x] and n[x] are often accessed close together, try putting them close in memory to make sure they are always in cache at the same time:
struct TAB_N
{
int16_t n, tabofs;
} tab_n[256];
src=tab0+tab_n[b0].tabofs; COPY(dst, src, tab_n[b0].n);
src=tab0+tab_n[b1].tabofs; COPY(dst, src, tab_n[b1].n);
src=tab0+tab_n[b2].tabofs; COPY(dst, src, tab_n[b2].n);
src=tab0+tab_n[b3].tabofs; COPY(dst, src, tab_n[b3].n);
src=tab0+tab_n[b4].tabofs; COPY(dst, src, tab_n[b4].n);
src=tab0+tab_n[b5].tabofs; COPY(dst, src, tab_n[b5].n);
Last but not least, gettimeofday is not for performance counting. Use QueryPerformanceCounter instead; it's much more precise.
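A minimal sketch of that timing on Windows (on other platforms something like clock_gettime or std::chrono::steady_clock plays the same role):
#include <windows.h>
LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&t0);
/* ... run the code being measured ... */
QueryPerformanceCounter(&t1);
double seconds = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;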
Your code is using a 1-byte (256-entry) index table. You can speed it up by a factor of 2 if you use a 2-byte (65536-entry) index table.
Unfortunately, you probably cannot extend that further - for a 3-byte table the size would be 16 MB, not likely to fit into the CPU's local cache, and it would only make things slower.
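A rough sketch of the 2-byte-table variant mentioned above (names and layout are mine, and the fixed 16-entry rows make the table about 1 MB, so measure whether it actually helps):
#include <stdint.h>
static unsigned char pos16[65536][16]; /* set-bit positions for every 16-bit value */
static unsigned char cnt16[65536];     /* number of set bits in that value */
static void build_tables(void) {
    for (unsigned v = 0; v < 65536; ++v) {
        unsigned char c = 0;
        for (unsigned b = 0; b < 16; ++b)
            if (v & (1u << b))
                pos16[v][c++] = (unsigned char)b;
        cnt16[v] = c;
    }
}
/* Extract the bit positions of x into dst, 16 bits at a time; returns one past the last index. */
static unsigned char* extract16(uint64_t x, unsigned char* dst) {
    for (unsigned chunk = 0; chunk < 4; ++chunk) {
        unsigned v = (unsigned)((x >> (16 * chunk)) & 0xFFFFu);
        for (unsigned char i = 0; i < cnt16[v]; ++i)
            *dst++ = (unsigned char)(pos16[v][i] + 16 * chunk);
    }
    return dst;
}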
Assuming sparsity in number of set bits,
int count = 0;
uint64_t tmp_bitmap = x;
while (tmp_bitmap > 0) {
int next_psn = __builtin_ffsll(tmp_bitmap) - 1;
tmp_bitmap &= (tmp_bitmap-1);
id[count++] = next_psn;
}
The question is what are you going to do with the collection of positions?
If you have to iterate over it many times, then yes, it might be worth gathering the positions once as you are doing now, and iterating over them many times.
But if it's only for iterating once or a few times, then you might consider not creating an intermediate array of positions, and just invoking a processing block/closure/function at each encountered 1 bit while iterating over the bits.
Here is a naive example of bit iterator I wrote in Smalltalk:
LargePositiveInteger>>bitsDo: aBlock
| mask offset |
1 to: self digitLength do: [:iByte |
offset := (iByte - 1) << 3.
mask := (self digitAt: iByte).
[mask = 0]
whileFalse:
[aBlock value: mask lowBit + offset.
mask := mask bitAnd: mask - 1]]
A LargePositiveInteger is an Integer of arbitrary length composed of byte digits.
lowBit answers the rank of the lowest set bit and is implemented as a lookup table with 256 entries.
In C++ 2011 you can easily pass a closure, so it should be easy to translate.
uint64_t x;
unsigned int mask;
void (*process_bit_position)(unsigned int);
unsigned char offset = 0;
unsigned char lowBitTable[16] = {0,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0}; // 0-based, first entry is unused
while( x )
{
mask = x & 0xFUL;
while (mask)
{
process_bit_position( lowBitTable[mask]+offset );
mask &= mask - 1;
}
offset += 4;
x >>= 4;
}
The example is demonstrated with a 4 bit table, but you can easily extend it to 13 bits or more if it fits in cache.
For branch prediction, the inner loop could be rewritten as a for(i=0; i<nbit; i++) with an additional table nbit = numBitTable[mask], then unrolled with a switch (the compiler could do it?), but I'll let you measure how it performs first...
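A rough C++11 translation of the same idea, passing the per-bit action as a callable (sketch only, same 4-bit table as above):
#include <cstdint>
template <typename F>
void for_each_set_bit(std::uint64_t x, F&& f) {
    static const unsigned char lowBitTable[16] =
        {0,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0};
    unsigned offset = 0;
    while (x) {
        unsigned mask = static_cast<unsigned>(x & 0xF);
        while (mask) {
            f(lowBitTable[mask] + offset); // call the closure with the bit position
            mask &= mask - 1;              // clear the lowest set bit
        }
        offset += 4;
        x >>= 4;
    }
}
// usage: for_each_set_bit(x, [&](unsigned pos) { idx[n++] = (unsigned char)pos; });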
Has this been found to be too slow?
Small and crude, but it's all in the cache and CPU registers;
void mybits(uint64_t x, unsigned char *idx)
{
unsigned char n = 0;
do {
if (x & 1) *(idx++) = n;
n++;
} while (x >>= 1); // If x is signed this will never end
*idx = (unsigned char) 255; // List Terminator
}
It's still 3 times faster to unroll the loop and produce an array of 64 true/false values (which isn't quite what's wanted)
void mybits_3_2(uint64_t x, idx_type idx[])
{
#define SET(i) (idx[i] = (x & (1ULL<<i)))
SET( 0);
SET( 1);
SET( 2);
SET( 3);
...
SET(63);
}
Here's some tight code, written for 1 byte (8 bits), but it should obviously expand easily to 64 bits.
int main(void)
{
int x = 187;
int ans[8] = {-1,-1,-1,-1,-1,-1,-1,-1};
int idx = 0;
while (x)
{
switch (x & ~(x-1))
{
case 0x01: ans[idx++] = 0; break;
case 0x02: ans[idx++] = 1; break;
case 0x04: ans[idx++] = 2; break;
case 0x08: ans[idx++] = 3; break;
case 0x10: ans[idx++] = 4; break;
case 0x20: ans[idx++] = 5; break;
case 0x40: ans[idx++] = 6; break;
case 0x80: ans[idx++] = 7; break;
}
x &= x-1;
}
getchar();
return 0;
}
Output array should be:
ans = {0,1,3,4,5,7,-1,-1};
If I take "I need a fast way to get the position of all one bits in a 64-bit integer" literally...
I realise this is a few weeks old, but out of curiosity: I remember way back in my assembly days with the CBM64 and Amiga using an arithmetic shift and then examining the carry flag - if it's set then the shifted bit was 1, if clear then it's zero.
e.g. for an arithmetic shift left (examining from bit 64 to bit 0)....
pseudo code (ignore instruction mix etc errors and oversimplification...been a while):
move #64+1, counter
loop. ASL 64bitinteger
BCS carryset
decctr. dec counter
bne loop
exit
carryset.
//store #counter-1 (i.e. bit position) in datastruct indexed by counter
jmp decctr
...I hope you get the idea.
I've not used assembly since then but I'm wondering if we could use some C++ in-line assembly similar to the above to do something similar here. We could do the whole conversion in assembly (very few lines of code), building up an appropriate data structure. C++ could simply examine the answer.
If this is possible then I'd imagine it to be pretty fast.
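For what it's worth, a rough sketch of that inline-assembly idea on x86-64 with GCC syntax (untested here; in practice __builtin_ctzll emits essentially the same instruction):
#include <stdint.h>
/* Sketch: BSF finds the lowest set bit directly, so the shift-and-test-carry
   loop collapses to one instruction per set bit. Returns the count found. */
static inline int bit_positions(uint64_t x, unsigned char* idx) {
    int n = 0;
    while (x) {
        uint64_t pos;
        __asm__("bsfq %1, %0" : "=r"(pos) : "r"(x)); /* index of the lowest set bit */
        idx[n++] = (unsigned char)pos;
        x &= x - 1;                                  /* clear that bit */
    }
    return n;
}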
A simple solution, but perhaps not the fastest, depending on the speed of the log and pow functions:
#include<math.h>
void getSetBits(unsigned long num){
int bit;
while(num){
bit = log2(num);
num -= pow(2, bit);
printf("%i\n", bit); // use bit number
}
}
Complexity: O(D), where D is the number of set bits.

C++ PBKDF2 Issue

I have the following function:
void PBKDF2_HMAC_SHA_512_string(const char* pass, const char* salt, int32_t iterations, uint32_t HashLength, char* out) {
unsigned int i;
HashLength = HashLength / 2;
unsigned char* digest = new unsigned char[HashLength];
PKCS5_PBKDF2_HMAC(pass, strlen(pass), (const unsigned char*)salt, strlen(salt), iterations, EVP_sha512(), HashLength, digest);
for (i = 0; i < sizeof(digest); i++) {
sprintf(out + (i * 2), "%02x", 255 & digest[i]);
}
}
When I call the function like below, I expect to get a hash back of 2400 in length, however it returns me 16:
char PBKDF2Hash[1025]; //\0 terminating space?
memset(PBKDF2Hash, 0, sizeof(PBKDF2Hash));
PBKDF2_HMAC_SHA_512_string("Password", "0123456789123456", 3500, 1024, PBKDF2Hash);
//PBKDF2Hash is now always 16 long -> strlen(PBKDF2Hash),
//while I expect it to be 2400 long?
//How is this possible and why is this happening?
//I can't figure it out
Since digest is a pointer, sizeof(digest) will not give the length of the array. Depending on different platforms, sizeof(digest) may give you 4 or 8, which is not what you want. Maybe you should use for (i = 0; i < HashLength; i++).
Another, unrelated issue with your code is that digest is never deleted in PBKDF2_HMAC_SHA_512_string, which causes a memory leak.
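Putting both fixes together, a sketch of the corrected function might look like this (same OpenSSL call as above, with the loop bounded by HashLength and the buffer freed):
#include <openssl/evp.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
void PBKDF2_HMAC_SHA_512_string(const char* pass, const char* salt,
                                int32_t iterations, uint32_t HashLength, char* out) {
    HashLength = HashLength / 2;               // hex encoding doubles the length again
    unsigned char* digest = new unsigned char[HashLength];
    PKCS5_PBKDF2_HMAC(pass, (int)strlen(pass),
                      (const unsigned char*)salt, (int)strlen(salt),
                      iterations, EVP_sha512(), (int)HashLength, digest);
    for (uint32_t i = 0; i < HashLength; i++)  // was: i < sizeof(digest)
        sprintf(out + (i * 2), "%02x", digest[i]);
    delete[] digest;                           // was never freed before
}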

buserror given by quickfix

I was trying to get a simple demo quickfix program to run on Solaris, namely http://www.codeproject.com/Articles/429147/The-FIX-client-and-server-implementation-using-Qui , prior to getting it to do what I want it to.
Unfortunately, in main the application gives a bus error when
FIX::SocketInitiator initiator( application, storeFactory, settings, logFactory);
is called.
Examining the core dump with gdb, I see:
(gdb) where
#0 FIX::SessionFactory::create (this=0xffbfee90, sessionID=#0x101fe8, settings=#0x100e34)
at FieldConvertors.h:113
#1 0xff2594ac in FIX::Initiator::initialize (this=0xffbff108) at stl_tree.h:246
#2 0xff25b270 in Initiator (this=0xffbff108, application=#0xffbff424,
messageStoreFactory=#0xffbff1c4, settings=#0xffbff420, logFactory=#0xffbff338)
at Initiator.cpp:61
#3 0xff25f8a8 in SocketInitiator (this=0xffbff108, application=#0xffbff3c8,
factory=#0xffbff388, settings=#0xffbff408, logFactory=#0xffbff338) at SocketInitiator.cpp:52
#4 0x0004a900 in main (argc=2, argv=0xffbff4c4) at BondsProClient.cpp:42
So I look in FieldConvertors.h and we have the code:
inline char* integer_to_string( char* buf, const size_t len, signed_int t )
{
const bool isNegative = t < 0;
char* p = buf + len;
*--p = '\0';
unsigned_int number = UNSIGNED_VALUE_OF( t );
while( number > 99 )
{
unsigned_int pos = number % 100;
number /= 100;
p -= 2;
*(short*)(p) = *(short*)(digit_pairs + 2 * pos);
}
if( number > 9 )
{
p -= 2;
*(short*)(p) = *(short*)(digit_pairs + 2 * number); //LINE 113 bus error line
}
else
{
*--p = '0' + char(number);
}
if( isNegative )
*--p = '-';
return p;
}
Looking at this I'm actually not surprised it crashes. It's dereferencing a char* pointer passed to the function as a short, without checking the alignment, which can't be known. This is illegal under any C/C++ standard, and since the SPARC processor can't perform an unaligned memory access, the thing obviously crashes. Am I being really thick here, or is this a stone-cold bug of massive proportions in the quickfix headers? quickfix IS (according to their website) supposed to compile and be usable on Solaris SPARC. Does anyone know of any workaround for this? The option of editing the header to use sprintf springs to mind, as does aligning some things. Or is this a red herring, with something different causing an unaligned buffer?
If it's crashing due to misaligned loads/stores then you could replace lines such as:
*(short*)(p) = *(short*)(digit_pairs + 2 * number);
with a safer equivalent using memcpy:
memcpy((void *)p, (const void *)(digit_pairs + 2 * number), sizeof(short));

char to int conversion in host device with CUDA

I have been having trouble converting from a single character to an integer while in the host function of my CUDA program. After the line -
token[j] = token[j] * 10 + (buf[i] - '0' );
I use cuda-gdb to check the value of token[j], and I always get different numbers that do not seem to have a pattern. I have also tried simple casting, not multiplying by ten (which I saw in another thread), and not subtracting '0', and I always seem to get a different result. Any help would be appreciated. This is my first time posting on Stack Overflow, so give me a break if my formatting is awful.
-A fellow struggling coder
__global__ void rread(unsigned int *table, char *buf, int *threadbytes, unsigned int *token) {
int i = 0;
int j = 0;
*token = NULL;
int tid = threadIdx.x;
unsigned int key;
char delim = ' ';
for(i = tid * *threadbytes; i <(tid * *threadbytes) + *threadbytes ; i++)
{
if (buf[i] != delim) { //check if its not a delim
token[j] = token[j] * 10 + (buf[i] - '0' );
There's a race condition on writing to token.
If you want to have a local array per block you can use shared memory. If you want a local array per thread, you will need to use local per-thread memory and declare the array on the stack. In the first case you will have to deal with concurrency inside the block as well. In the latter you don't have to, although you might potentially waste a lot more memory (and reduce collaboration).
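A sketch of the two options described above, with invented names (it assumes blockDim.x <= 512, as in the usual 500-thread launch, and that each thread parses its own run of digits as in the original loop bounds):
__global__ void rread_sketch(const char* buf, const int* threadbytes, unsigned int* out)
{
    __shared__ unsigned int token_block[512]; // option 1: one array per block, shared by its threads
    unsigned int token_local = 0;             // option 2: one value per thread (register/stack)
    int tid = threadIdx.x;
    int start = tid * (*threadbytes);
    for (int i = start; i < start + *threadbytes; ++i)
        if (buf[i] != ' ')
            token_local = token_local * 10 + (buf[i] - '0');
    token_block[tid] = token_local;           // each thread writes its own slot: no race
    __syncthreads();
    out[blockIdx.x * blockDim.x + tid] = token_block[tid]; // keep a result in global memory
}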

While loop fails in CUDA kernel

I am using the GPU to do some calculations for processing words.
Initially, I used one block (with 500 threads) to process one word.
To process 100 words, I have to loop the kernel function 100 times in my main function.
for (int i=0; i<100; i++)
kernel <<< 1, 500 >>> (length_of_word);
My kernel function looks like this:
__global__ void kernel (int *dev_length)
{
int length = *dev_length;
while (length > 4)
{ //do something;
length -=4;
}
}
Now I want to process all 100 words at the same time.
Each block will still have 500 threads, and processes one word (per block).
dev_totalwordarray: stores all the characters of the words (one after another)
dev_length_array: stores the length of each word.
dev_accu_length: stores the cumulative length of the words (total chars of all previous words)
dev_salt_: an array of size 500, storing unsigned integers.
Hence, in my main function I have
kernel2 <<< 100, 500 >>> (dev_totalwordarray, dev_length_array, dev_accu_length, dev_salt_);
To populate the CPU array:
for (int i=0; i<wordnumber; i++)
{
int length=0;
while (word_list_ptr_array[i][length]!=0)
{
length++;
}
actualwordlength2[i] = length;
}
To copy from CPU -> GPU:
int* dev_array_of_word_length;
HANDLE_ERROR( cudaMalloc( (void**)&dev_array_of_word_length, 100 * sizeof(int) ) );
HANDLE_ERROR( cudaMemcpy( dev_array_of_word_length, actualwordlength2, 100 * sizeof(int), cudaMemcpyHostToDevice ) );
My function kernel now looks like this:
__global__ void kernel2 (char* dev_totalwordarray, int *dev_length_array, int* dev_accu_length, unsigned int* dev_salt_)
{
int tid = threadIdx.x + blockIdx.x * blockDim.x;
unsigned int hash[N];
int length = dev_length_array[blockIdx.x];
while (tid < 50000)
{
const char* itr = &(dev_totalwordarray[dev_accu_length[blockIdx.x]]);
hash[tid] = dev_salt_[threadIdx.x];
unsigned int loop = 0;
while (length > 4)
{ const unsigned int& i1 = *(reinterpret_cast<const unsigned int*>(itr)); itr += sizeof(unsigned int);
const unsigned int& i2 = *(reinterpret_cast<const unsigned int*>(itr)); itr += sizeof(unsigned int);
hash[tid] ^= (hash[tid] << 7) ^ i1 * (hash[tid] >> 3) ^ (~((hash[tid] << 11) + (i2 ^ (hash[tid] >> 5))));
length -=4;
}
tid += blockDim.x * gridDim.x;
}
}
However, kernel2 doesn't seem to work at all.
It seems while (length > 4) causes this.
Does anyone know why? Thanks.
I am not sure if the while is the culprit, but I see a few things in your code that worry me (a sketch addressing the first two points follows below):
Your kernel produces no output. The optimizer will most likely detect this and convert it to an empty kernel.
In almost no situation do you want arrays allocated per thread. That will consume a lot of memory. Your hash[N] table will be allocated per thread and discarded at the end of the kernel. If N is big (and then multiplied by the total number of threads) you may run out of GPU memory. Not to mention that accessing the hash will be almost as slow as accessing global memory.
All threads in a block will have the same itr value. Is that intended?
Every thread initializes only a single field within its own copy of the hash table.
I see hash[tid] where tid is a global index. Be aware that even if hash were made global, you may hit concurrency problems. Not all blocks within a grid will run at the same time. While one block is initializing a portion of hash, another block might not even have started!
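Here is a sketch addressing the first two points only, with invented parameter names: the per-thread state becomes a single register instead of hash[N], and the result is written to global memory so the compiler cannot discard the whole computation. The odd length > 4 / length -= 4 stepping is kept exactly as in your code.
__global__ void kernel2_sketch(const char* words, const int* lengths, const int* offsets,
                               const unsigned int* salt, unsigned int* hash_out)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    const char* itr = &words[offsets[blockIdx.x]];
    int length = lengths[blockIdx.x];
    unsigned int h = salt[threadIdx.x];      // one register per thread, not a per-thread array
    while (length > 4)
    {
        unsigned int i1, i2;
        memcpy(&i1, itr, sizeof(i1)); itr += sizeof(i1);  // avoids misaligned reinterpret_cast loads
        memcpy(&i2, itr, sizeof(i2)); itr += sizeof(i2);
        h ^= (h << 7) ^ i1 * (h >> 3) ^ (~((h << 11) + (i2 ^ (h >> 5))));
        length -= 4;
    }
    hash_out[tid] = h;                       // visible side effect keeps the kernel from being optimized away
}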