Is there a way to loop through text box values in C++? - visual-studio-2017

I have 50 text boxes that I want to get text from. I want to loop through these texts instead of getting each text individually.
What I currently have
Getting values:
array<System::String^>^ s = gcnew array<System::String^>(50);
s[0]=this->TextBox0->Text;
s[1]=this->TextBox1->Text;
...
s[49]=this->TextBox49->Text;
//do stuff with s
What I'm looking for
array<System::String^>^ s = gcnew array<System::String^>(50);
for (int i = 0; i <= 49; i++)
{
s[i]=this->TextBox[i]->Text;
}
// do stuff with s
I'm not sure how to iterate through the text boxes.

Based on the code provided, it is reasonable to assume that TextBox1 and so forth reference some large number of identically typed objects within the user-defined subclass of Form. A better solution would involve keeping the handles in an array member of the form, filled once in the constructor:
private: array<System::Windows::Forms::TextBox^>^ textBoxes; // assign gcnew array<TextBox^>(50) { TextBox0, ..., TextBox49 } in the constructor
Otherwise, if the objects happen to be a regular distance apart in memory, the following work-around could also function.
The following code SHOULD NOT be used in a professional code-base.
In addition to relying upon a consistent arrangement of the boxes in memory, this solution also risks reading from and writing to invalid memory locations, as there are no guaranteed boundaries, as there would be with a standard array.
size_t stepSize = reinterpret_cast<char*>(&(this->TextBox1)) - reinterpret_cast<char*>(&(this->TextBox0));
s[i] = (*reinterpret_cast<TextBox^*>(reinterpret_cast<char*>(&(this->TextBox0)) + stepSize * i))->Text;
This simply takes the distance between the first two text boxes in the form's memory (in bytes, to avoid alignment issues), and uses that distance to extrapolate the addresses of the other text boxes.
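Alternatively, since this is a Windows Forms project, the controls can be looked up by name at run time. A minimal sketch, assuming the boxes really are named TextBox0 through TextBox49 and using the standard Controls->Find API:
array<System::String^>^ s = gcnew array<System::String^>(50);
for (int i = 0; i < 50; i++)
{
    // Find searches child controls by their Name property;
    // 'true' also searches nested containers such as panels.
    array<System::Windows::Forms::Control^>^ found =
        this->Controls->Find(System::String::Format("TextBox{0}", i), true);
    if (found->Length > 0)
        s[i] = found[0]->Text;
}
This avoids any assumption about how the handles are laid out in memory.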

Related

Is there a way to manipulate function arguments by their position number?

I wish to be able to manipulate function arguments by the order in which they are given. So,
void sum(int a, int b, int c)
{
    std::cout << arguments[0] + arguments[1]; // 'arguments' does not exist in C++
}
sum(1, 1, 4);
should print 2. JavaScript has this feature (the arguments object).
I need it to implement a numerical scheme. I'd create a function that takes 4 corner values and a tangential direction as input. Then, using the tangential direction, it decides which corners to use. I wish to avoid an 'if' condition, as this function would be called many times.
EDIT - The reason why I do not wish to use an array as input is potential optimization and readability. Let me explain my situation a bit more: solution is a 2D array, and we run this double for loop several times:
for (int i = 0; i < N_x; i++)
    for (int j = 0; j < N_y; j++)
        update_solution(solution[i][j], solution[i+1][j], solution[i-1][j], ...);
Optimization: N_x and N_y are large enough for me to be concerned about whether adding a step like variables_used = {solution[i][j], solution[i+1][j], ...} in every single iteration will increase the cost.
Readability: The arguments of update_solution indicate which indices were used to update the solution. Putting that in a previous line is slightly non-standard, judging by the codes I have read.
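One way to get positional access without a runtime 'if', sketched here as an illustration rather than taken from the thread, is to pack the arguments into a tuple and select them by compile-time index (sum_two and the index parameters I/J are hypothetical names):
#include <iostream>
#include <tuple>

// Sketch: positional access to the arguments, resolved at compile time.
template <std::size_t I, std::size_t J, typename... Args>
auto sum_two(Args... args) {
    auto arguments = std::make_tuple(args...); // like JavaScript's 'arguments'
    return std::get<I>(arguments) + std::get<J>(arguments);
}

int main() {
    std::cout << sum_two<0, 1>(1, 1, 4) << '\n'; // prints 2
}
Because I and J are template parameters, the selection costs nothing at run time; the trade-off is that the positions must be compile-time constants.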

Converting arrays of one type to another

Basically I have an array of doubles. I want to pass this array to a function (ProcessData) which will treat the elements as short ints. Is creating a short pointer, pointing it at the array, and then passing this pointer to the function OK (code 1)?
Is this in effect the same as creating a short array, iterating through the double array, converting each element to a short, and then passing the short array pointer (code 2)? Thanks
//code 1
//.....
short* shortPtr = (short*)doubleArr;
ProcessData(shortPtr);
..
//code 2
//...
short shortArr [ARRSIZE];
int i;
for (i = 0; i < ARRSIZE; i++)
{
shortArr[i] = (short)doubleArr[i];
}
ProcessData(shortArr);
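(As an aside, code 2 can be written more compactly with the standard library; this is just a restatement of the loop, reusing doubleArr, ARRSIZE, and ProcessData from above:)
#include <algorithm>
#include <iterator>

// Sketch: convert each double to a short, then hand off the result.
short shortArr[ARRSIZE];
std::transform(std::begin(doubleArr), std::end(doubleArr), shortArr,
               [](double d) { return static_cast<short>(d); });
ProcessData(shortArr);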
You can't just cast, as the various comments have said. But if you use iterators you can get more or less the same effect:
void do_something_with_short(short i) {
/* whatever */
}
template <class Iter>
void do_something(Iter first, Iter last) {
while (first != last)
do_something_with_short(*first++);
}
You can call that template function with iterators into an array of any arithmetic type (in fact, any type that's implicitly convertible to short; if you add a cast at the point of the call to do_something_with_short, even a type that requires an explicit conversion):
double data[10]; // needs to be initialized, of course
do_something(std::begin(data), std::end(data));
No you can't do that. Here's at least one reason why:
An array is a contiguous sequence of memory cells accessed by way of an index, like so
[----][----][----]
Note the four dashes inside each pair of square brackets. They indicate that in most situations in C/C++, an int is four bytes long. Array cells can be accessed by their index because, if we know the memory address of the first cell (m) and we know how big each cell is meant to be (c), in this case four bytes, we can easily find the memory location of any index by computing m + index * c
[----][----][----]
^ array[0]
[----][----][----]
---- ---- ^ array[2]
Fundamentally, this is why pointers can be treated like arrays in C/C++, because when you are accessing arrays, you are basically doing pointer arithmetic anyway.
In most cases in C/C++, a short is 2 bytes long, so to represent it in the same way
[--][--][--]
If you create a short pointer and try to use it as an array, it is expected to point to something arranged like the above. If you try to index it, there are going to be problems: if you were dealing with an array of shorts, the location of array[2] would be m + 2 * c, as shown below
[--][--][--]
-- -- ^ array[2] (note moving along four bytes)
But since we are in reality dealing with an array of integers, the following will happen
[----][----][----]
---- ^ array[2] (again moving along four bytes)
Which is clearly wrong.
No, because ++ptr actually does something like ptr = (char*)ptr + sizeof *ptr (with sizeof (char) being 1 by definition). So incrementing a double pointer moves it by (usually) 8 bytes, while incrementing a short pointer moves it by only 2 bytes.
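To see the stride difference concretely, here is a small illustrative snippet (mine, not from the answers above):
#include <cstdio>

int main() {
    double d[3] = { 1.0, 2.0, 3.0 };
    double* dp = d;
    short*  sp = reinterpret_cast<short*>(d);
    // Pointer arithmetic steps by sizeof(*ptr) bytes, not by one byte.
    std::printf("double step: %td bytes\n",
                reinterpret_cast<char*>(dp + 1) - reinterpret_cast<char*>(dp)); // usually 8
    std::printf("short  step: %td bytes\n",
                reinterpret_cast<char*>(sp + 1) - reinterpret_cast<char*>(sp)); // 2
}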
Suppose your kids study piano and occasionally ask you to scan a stack of sheet music given to them by their teacher, who was born in the 20th century (just like yourself). You take those sheets to your office and feed them to the photocopier. It creates decent digital scans that your kids can use on their piano equipped with a touch screen. All goes well until one day a child brings you an old rare set of vinyl records. She's desperate to find those melodies in sheet-music form but asks you to at least copy the records. Inexperienced in musical matters, you take those disks to your office, load them into the automatic document feeder of the scanner, and realize that you are deep in... um... crap only as you hear the sounds of the vinyl disks breaking inside the stupid machine. Even if the photocopier were not equipped with an ADF, and you had to place all the originals on its glass flatbed manually, you would hardly receive your fair share of praise when you sent the scans to your daughter.
The scanner doesn't care what you put into it, as long as it fits inside. It does its best, but the result is not up to expectations. However, had you first taken the vinyl records to an experienced musician who would write them down as a musical score, scanning those sheets would result in real delight for your child.
In C++, different types may differ to the extent that a printed sheet of paper differs from a vinyl record. A C++ function expecting to receive an array of shorts will process any sequence of bytes/bits as an array of shorts. It doesn't care that the memory area is actually filled with values of a different type with a completely different representation, just like the scanner didn't care about the contents of the stack in the ADF. Assuming that a function will internally convert each element of the array from double to short is like believing that a photocopier includes a gramophone and a musician who will automatically transcribe vinyl recordings to sheet form. Note that the latter is a possible design for a real-world photocopier, and some other programming languages do work like that. But not existing implementations of C++.[1]
[1] In theory, a standard-compliant implementation of C/C++ is possible that would interpret all instances of UB in the language in favor of the opposite answer to your question, rather than in favor of best performance. But that would make little sense for a language like C/C++.

efficient check for value change in array of floats in c++

I want to optimize an OpenGL application, and one hotspot is expensive handling (uploading to the graphics card) of relatively small arrays (8-64 values) where the values sometimes change but most of the time stay constant. The most efficient solution would be to upload the array only when it has changed.
Of course, the simplest way would be to set flags whenever the data is changed, but this would need many code changes, and for a quick test I would like to know the possible performance gains before too much work has to be done.
So I thought of a quick check (like a murmur hash etc.) in memory to tell whether the data has changed from frame to frame, and to decide about uploading after this check. So the question is: how could I e.g. XOR an array of values like
float vptr[] = { box.x1,box.y1, box.x1,box.y2, box.x2,box.y2, box.x2,box.y1 };
together to detect value changes reliably?
Best & thanks,
Heiner
If you're using Intel, you could look into Intel intrinsics.
http://software.intel.com/en-us/articles/intel-intrinsics-guide gives you an interactive reference where you can explore. There are a bunch of instructions for comparing multiple integers or doubles in one instruction, which is a nice speed-up.
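To give a flavor of what the guide offers, here is a minimal SSE2 sketch that bitwise-compares four floats against a cached copy in one instruction; the function and the shadow-copy idea are an illustration, not part of the original answer:
#include <emmintrin.h> // SSE2

// Sketch: returns true when both 16-byte blocks are bit-identical.
bool equal4(const float* a, const float* b)
{
    __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a));
    __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b));
    __m128i eq = _mm_cmpeq_epi32(va, vb);   // lane-wise 32-bit equality
    return _mm_movemask_epi8(eq) == 0xFFFF; // all 16 result bytes set
}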
@Ming, thank you for the intrinsics speed-up, I will have a look into this.
float vptr[] = { box.x1,box.y1, box.x1,box.y2, box.x2,box.y2, box.x2,box.y1 };
unsigned h = 0;
for (int i = 0; i < sizeof(vptr) / sizeof(*vptr); i++)
{
    h ^= (unsigned&) vptr[i]; // reinterpret the float's bits as an unsigned int
}
Dead simple, and it worked for the really tiny arrays. The compiler should be able to auto-vectorize, since the size of the array is known. I still have to test larger arrays.
origin: Hash function for floats
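For arrays this small, an exact alternative (a sketch, not from the thread) is to keep a shadow copy and memcmp against it; unlike a hash, it can never report "unchanged" for data that did change:
#include <cstring>

// Sketch: 'prev' is a caller-owned copy of last frame's values.
bool changed(const float* vptr, float* prev, size_t n)
{
    if (std::memcmp(vptr, prev, n * sizeof(float)) == 0)
        return false;                            // identical bits: skip the upload
    std::memcpy(prev, vptr, n * sizeof(float));  // remember the new values
    return true;                                 // upload needed
}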

Fast code for searching bit-array for contiguous set/clear bits?

Is there some reasonably fast code out there which can help me quickly search a large bitmap (a few megabytes) for runs of contiguous zero or one bits?
By "reasonably fast" I mean something that can take advantage of the machine word size and compare entire words at once, instead of doing bit-by-bit analysis which is horrifically slow (such as one does with vector<bool>).
It's very useful for e.g. searching the bitmap of a volume for free space (for defragmentation, etc.).
Windows has an RTL_BITMAP data structure one can use along with its APIs.
But I needed the code for this some time ago, so I wrote it; here it is (warning, it's a little ugly):
https://gist.github.com/3206128
I have only partially tested it, so it might still have bugs (especially on reverse). But a recent version (only slightly different from this one) seemed to be usable for me, so it's worth a try.
The fundamental operation for the entire thing is being able to -- quickly -- find the length of a run of bits:
long long GetRunLength(
const void *const pBitmap, unsigned long long nBitmapBits,
long long startInclusive, long long endExclusive,
const bool reverse, /*out*/ bool *pBit);
Everything else should be easy to build upon this, given its versatility.
I tried to include some SSE code, but it didn't noticeably improve the performance. However, in general, the code is many times faster than doing bit-by-bit analysis, so I think it might be useful.
It should be easy to test if you can get a hold of vector<bool>'s buffer somehow -- and if you're on Visual C++, then there's a function I included which does that for you. If you find bugs, feel free to let me know.
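To give a flavor of the word-at-a-time idea, here is a small sketch using GCC/Clang builtins; it illustrates the technique and is not the gist's actual code:
#include <cstdint>

// Sketch: length of the run of 1-bits starting at bit 'start' of a word.
int runOfOnes(uint64_t word, int start)
{
    uint64_t rest = ~word >> start;      // the run's bits become trailing zeros
    if (rest == 0) return 64 - start;    // the run extends to the word's end
    return __builtin_ctzll(rest);        // bits until the first 0 of 'word'
}
Scanning a multi-megabyte bitmap then reduces to one such call per 64-bit word instead of testing bits one at a time.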
I can't figure out how to work well directly on memory words, so I've made up a quick solution that works on bytes; for convenience, let's sketch the algorithm for counting contiguous ones:
Construct two tables of size 256 where, for each number between 0 and 255, you record the number of consecutive 1s at the beginning and at the end of the byte. For example, for the number 167 (10100111 in binary), put 1 in the first table and 3 in the second. Let's call the first table BBeg and the second table BEnd. Then, for each byte b, there are two cases: if b is 255, add 8 to your current sum of contiguous ones, and you stay inside a region of ones. Otherwise, you end a region with BBeg[b] bits and begin a new one with BEnd[b] bits.
Depending on what information you want, you can adapt this algorithm (this is one reason why I don't post any code here; I don't know what output you want).
A flaw is that it does not count (small) contiguous sets of ones lying wholly inside one byte...
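A sketch of the tables and the byte scan described above (my code, reusing the BBeg/BEnd names; it inherits the single-byte flaw just mentioned):
#include <cstdint>
#include <cstddef>

static uint8_t BBeg[256], BEnd[256];

void buildTables()
{
    for (int b = 0; b < 256; b++) {
        int beg = 0, end = 0;
        while (beg < 8 && (b & (0x80 >> beg))) beg++; // ones at the byte's beginning (MSB side)
        while (end < 8 && (b & (1 << end)))   end++;  // ones at the byte's end (LSB side)
        BBeg[b] = (uint8_t)beg;
        BEnd[b] = (uint8_t)end;
    }
}

// Longest run of 1s that spans byte boundaries.
size_t longestRun(const uint8_t* p, size_t n)
{
    size_t best = 0, cur = 0;
    for (size_t i = 0; i < n; i++) {
        if (p[i] == 0xFF) {
            cur += 8;                   // still inside a region of ones
        } else {
            cur += BBeg[p[i]];          // the region ends with BBeg[b] more bits
            if (cur > best) best = cur;
            cur = BEnd[p[i]];           // a new region begins with BEnd[b] bits
        }
    }
    return cur > best ? cur : best;
}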
Besides this algorithm, a friend tells me that if it is for disk compression, you can just look for bytes different from 0 (empty disk area) and 255 (full disk area). It is a quick heuristic to build a map of which blocks you have to compress. Maybe it is beyond the scope of this topic...
Sounds like this might be useful:
http://www.aggregate.org/MAGIC/#Population%20Count%20%28Ones%20Count%29
and
http://www.aggregate.org/MAGIC/#Leading%20Zero%20Count
You don't say whether you want to do some sort of RLE or to simply count the zero and one bits inside each byte (like 0b1001 should return 1x1 2x0 1x1).
A look-up table plus a SWAR algorithm for a fast check might give you that information easily.
A bit like this:
byte lut[0x10000] = { /* see below */ };

for (uint* word = words; word < words + bitmapSize; word++) {
    if (*word == 0 || *word == (uint)-1) // Fast bailout for all-0 / all-1 words
    {
        // Do what you want if all 0 or all 1
        continue;
    }
    byte hiVal = lut[*word >> 16], loVal = lut[*word & 0xFFFF];
    // Do what you want with hiVal and loVal
}
The LUT will have to be constructed depending on your intended algorithm. If you want to count the number of contiguous 0s and 1s in each 16-bit half, you'd build it like this:
for (int i = 0; i < sizeof(lut); i++)
    lut[i] = countContiguousZero(i); // Or countContiguousOne(i)
// The implementation of countContiguousZero can be slow; you don't care.
// It should return the largest number of contiguous zeros (0 to 15, using
// the 4 low bits of the byte), and might return the position of the run
// in the 4 high bits of the byte.
// Since you've already dismissed *word == 0, you don't need the case of 16 contiguous zeros.
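For illustration, one possible countContiguousZero (a sketch, ignoring the optional position encoding in the high nibble):
// Sketch: largest run of clear bits within a 16-bit LUT index.
byte countContiguousZero(uint i)
{
    int best = 0, cur = 0;
    for (int bit = 0; bit < 16; bit++) {
        if (i & (1u << bit)) cur = 0;      // a 1 bit breaks the run
        else if (++cur > best) best = cur; // extend and record the run
    }
    return (byte)(best & 0xF); // 0..15; the all-zero word was dismissed earlier
}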

C++, using one byte to store two variables

I am working on a representation of the chess board, and I am planning to store it in a 32-byte array, where each byte will be used to store two pieces. (That way only 4 bits are needed per piece.)
Doing it that way results in some overhead for accessing a particular index of the board.
Do you think this code can be optimised, or that a completely different method of accessing indexes could be used?
#include <iostream>
#include <cstdlib>
using namespace std;
char getPosition(unsigned char* c, int index){
//moving pointer
c+=(index>>1);
//odd number
if (index & 1){
//taking right part
return *c & 0xF;
}else
{
//taking left part
return *c>>4;
}
}
void setValue(unsigned char* board, char value, int index){
//moving pointer
board+=(index>>1);
//odd number
if (index & 1){
//replace right part
//save left value only 4 bits
*board = (*board & 0xF0) + value;
}else
{
//replacing left part
*board = (*board & 0xF) + (value<<4);
}
}
int main() {
char* c = (char*)malloc(32);
for (int i = 0; i < 64 ; i++){
setValue((unsigned char*)c, i % 8,i);
}
for (int i = 0; i < 64 ; i++){
cout<<(int)getPosition((unsigned char*)c, i)<<" ";
if (((i+1) % 8 == 0) && (i > 0)){
cout<<endl;
}
}
free(c);
return 0;
}
I am equally interested in your opinions regarding chess representations, and in optimisation of the method above as a stand-alone problem.
Thanks a lot
EDIT
Thanks for your replies. A while ago I created a checkers game, where I used a 64-byte board representation. This time I am trying some different methods, just to see what I like. Memory is not such a big problem. Bitboards are definitely on my list to try. Thanks
That's the problem with premature optimization. Where your chess board would have taken 64 bytes to store, it now takes 32. What has this really bought you? Did you actually analyze the situation to see whether you needed to save that memory?
Assuming you use one of the least optimal search methods, a straight alpha-beta search to depth D with no heuristics, and you generate all possible moves in a position before searching, the absolute maximum memory required for your boards is going to be sizeof(board) * W * D, where W is the branching width. If we assume a rather large W = 100 and a large D = 30, you're going to have 3000 boards in memory at depth D. Roughly 192 KB vs 96 KB... is it really worth it?
On the other hand, you've increased the number of operations necessary to access board[location], and this will be called many millions of times per search.
When building chess AI's the main thing you'll end up looking for is cpu cycles, not memory. This may vary a little bit if you're targeting a cell phone or something, but even at that you're going to worry more about speed before you'll ever reach enough depth to cause any memory issues.
As to which representation I prefer...I like bitboards. Haven't done a lot of serious measurements but I did compare two engines I made, one bitboard and one array, and the bitboard one was faster and could reach much greater depths than the other.
Let me be the first to point out a potential bug (depending on compilers and compiler settings). And bugs are why premature optimization is evil:
//taking left part
return *c>>4;
If *c is negative, then >> may replicate the negative high bit, i.e. in binary:
0b10100000 >> 4 == 0b11111010
on some compilers (the C++ standard leaves it to the compiler to decide both whether to carry the high bit and whether a char is signed or unsigned).
If you do want to go forward with your packed bits (and let me say that you probably shouldn't bother, but it is up to you), I would suggest wrapping the packed bits in a class and overloading operator[] such that
board[x][y]
gives you the unpacked value. Then you can turn the packing on and off easily, keeping the same syntax in either case. If you inline the operator overloads, it should be as efficient as the code you have now.
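A sketch of what such a wrapper could look like (class and member names hypothetical; the nibble layout matches getPosition/setValue above):
#include <cstdint>

class Board {
    uint8_t data[32] = {};

    class Cell {                 // proxy for one 4-bit square
        uint8_t& byte;
        bool high;               // even squares live in the high nibble
    public:
        Cell(uint8_t& b, bool h) : byte(b), high(h) {}
        operator int() const { return high ? (byte >> 4) : (byte & 0xF); }
        Cell& operator=(int v) {
            if (high) byte = uint8_t((byte & 0x0F) | (v << 4));
            else      byte = uint8_t((byte & 0xF0) | (v & 0x0F));
            return *this;
        }
    };

    class Row {                  // result of the outer []
        uint8_t* data;
        int x;
    public:
        Row(uint8_t* d, int x) : data(d), x(x) {}
        Cell operator[](int y) {
            int index = x * 8 + y;
            return Cell(data[index >> 1], (index & 1) == 0);
        }
    };

public:
    Row operator[](int x) { return Row(data, x); }
};
Swapping the packed storage for a plain char[8][8] then only requires changing Board's internals, not the call sites.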
Well, 64 bytes is a very small amount of RAM. You're better off just using a char[8][8]. That is, unless you plan on storing a ton of chess boards. Using char[8][8] makes it easier (and faster) to access the board and to do more complex operations on it.
If you're still interested in storing the board in a packed representation (either for practice or to store a lot of boards), I'd say you're "doing it right" regarding the bit operations. You may want to consider marking your accessors with the inline keyword if you're going for speed.
Is space so tight a consideration that you can't just use a full byte to represent a square? That would make accesses easier to follow in the program and would additionally most likely be faster, as the bit manipulations are not required.
Otherwise, to make sure everything goes smoothly, I would make all your types unsigned: have getPosition return unsigned char, and qualify all your numeric literals with "U" (0xF0U, for example) to make sure they're always interpreted as unsigned. Most likely you won't have any problems with signedness, but why take chances on some architecture that behaves unexpectedly?
Nice code, but if you are really that deep into performance optimization, you should probably learn more about your particular CPU architecture.
AFAIK, you may find that storing a chess piece in as many as 8 bytes is more efficient. Even if you recurse 15 moves deep, L2 cache size would hardly be a constraint, but RAM misalignment may be. I would guess that proper handling of a chess board would include Expand() and Reduce() functions to translate between board representations during different parts of the algorithm: some may be faster on the compact representation, and some vice versa. For example, caching, and algorithms involving hashing by composition of two adjacent cells, might be good for the compact structure; everything else, not.
I would also consider developing some helper hardware, like an FPGA board, or some GPU code, if performance is that important.
As a chess player, I can tell you: there's more to a position than the mere placement of each piece. You have to take into consideration some other things:
Which side has to move next?
Can white and/or black castle king and/or queenside?
Can a pawn be taken en passant?
How many moves have passed since the last pawn move and/or capturing move?
If the data structure you use to represent a position doesn't reflect this information, then you're in big trouble.
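For illustration, a hedged sketch of a position record carrying that extra state alongside the packed board (field names are mine, not from the answer):
#include <cstdint>

struct Position {
    uint8_t board[32];      // 64 squares at 4 bits each, as in the question
    bool    whiteToMove;    // which side moves next
    bool    castleWK, castleWQ, castleBK, castleBQ; // castling rights
    int8_t  enPassantFile;  // file of a pawn capturable en passant, -1 if none
    uint8_t halfmoveClock;  // moves since the last pawn move or capture (fifty-move rule)
};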