I have two shared global variables
int a = 0;
int b = 0;
and two threads
// thread 1
for (int i = 0; i < 10; ++i) {
    EnterCriticalSection(&cs);
    a++;
    b++;
    std::cout << a << " " << b << std::endl;
    LeaveCriticalSection(&cs);
}
// thread 2
for (int i = 0; i < 10; ++i) {
    EnterCriticalSection(&cs);
    a--;
    b--;
    std::cout << a << " " << b << std::endl;
    LeaveCriticalSection(&cs);
}
The code always prints the following output
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
9 9
8 8
7 7
6 6
5 5
4 4
3 3
2 2
1 1
0 0
That is quite strange; it looks like the threads are running sequentially. What is causing this?
Thanks.
Each thread has a specific time slice during which it executes before being preempted. In your example, the time slice seems to be longer than the time required to complete the loop.
However, you can actively yield control by calling Sleep(0) after leaving the critical section inside the loop.
IMO the critical-section leave/enter in your example happens so fast that the other thread never gets a chance to enter the section in that brief window.
Try adding some (perhaps random) sleeps to slow the code down and see the interleaving you expect.
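For instance, a minimal sketch of thread 1 with both suggestions applied (Sleep(0) to yield, plus a small random delay; the critical section object cs is assumed to be declared and initialized elsewhere):
// thread 1, modified so the other thread gets a chance to take the lock
for (int i = 0; i < 10; ++i) {
    EnterCriticalSection(&cs);
    a++;
    b++;
    std::cout << a << " " << b << std::endl;
    LeaveCriticalSection(&cs);
    Sleep(0);             // give up the rest of this time slice
    Sleep(rand() % 10);   // optional: extra 0-9 ms random delay
}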
Note:
The default timeout for EnterCriticalSection is about 30 days (effectively infinity), so you cannot expect the function to time out. And the documentation says:
There is no guarantee about the order in which threads will obtain ownership of the critical section, however, the system will be fair to all threads.
This looks like the topic discussed in http://social.msdn.microsoft.com/forums/en-US/windowssdk/thread/980e5018-3ade-4823-a6dc-5ddbcc3091d5/
Please look at the example from June 28, 2006.
(Unfortunately I cannot find the original Microsoft article describing the change to critical sections.)
Could you try your code on Windows XP? What does it show?
I guess that I/O operations (cout) affect the scheduling similarly to a Sleep() call, so starting with Windows Vista a thread could cause starvation of other threads when doing I/O inside a CS.
I wrote a very trivial program to try to examine the undefined behavior attached to buffer overflows. Specifically, regarding what happens when you perform a read on data outside the allocated space.
#include <iostream>
#include <iomanip>
#include <cstdlib>   // for system()

int main() {
    int values[10];
    for (int i = 0; i < 10; i++) {
        values[i] = i;
    }
    std::cout << values << " ";
    std::cout << std::endl;
    for (int i = 0; i < 11; i++) {
        // UB occurs here when values[i] is executed with i == 10
        std::cout << std::setw(2) << i << "(" << (values + i) << "): " << values[i] << std::endl;
    }
    system("pause");
    return 0;
}
When I run this program on Visual Studio, the results aren't terribly surprising: reading index 10 produces garbage:
000000000025FD70
0(000000000025FD70): 0
1(000000000025FD74): 1
2(000000000025FD78): 2
3(000000000025FD7C): 3
4(000000000025FD80): 4
5(000000000025FD84): 5
6(000000000025FD88): 6
7(000000000025FD8C): 7
8(000000000025FD90): 8
9(000000000025FD94): 9
10(000000000025FD98): -1966502944
Press any key to continue . . .
But when I fed this program into Ideone.com's online compiler, I got extremely bizarre behavior:
0xff8cac48
0(0xff8cac48): 0
1(0xff8cac4c): 1
2(0xff8cac50): 2
3(0xff8cac54): 3
4(0xff8cac58): 4
5(0xff8cac5c): 5
6(0xff8cac60): 6
7(0xff8cac64): 7
8(0xff8cac68): 8
9(0xff8cac6c): 9
10(0xff8cac70): 1
11(0xff8cac74): -7557836
12(0xff8cac78): -7557984
13(0xff8cac7c): 1435443200
14(0xff8cac80): 0
15(0xff8cac84): 0
16(0xff8cac88): 0
17(0xff8cac8c): 1434052387
18(0xff8cac90): 134515248
19(0xff8cac94): 0
20(0xff8cac98): 0
21(0xff8cac9c): 1434052387
22(0xff8caca0): 1
23(0xff8caca4): -7557836
24(0xff8caca8): -7557828
25(0xff8cacac): 1432254426
26(0xff8cacb0): 1
27(0xff8cacb4): -7557836
28(0xff8cacb8): -7557932
29(0xff8cacbc): 134520132
30(0xff8cacc0): 134513420
31(0xff8cacc4): 1435443200
32(0xff8cacc8): 0
33(0xff8caccc): 0
34(0xff8cacd0): 0
35(0xff8cacd4): 346972086
36(0xff8cacd8): -29697309
37(0xff8cacdc): 0
38(0xff8cace0): 0
39(0xff8cace4): 0
40(0xff8cace8): 1
41(0xff8cacec): 134514984
42(0xff8cacf0): 0
43(0xff8cacf4): 1432277024
44(0xff8cacf8): 1434052153
45(0xff8cacfc): 1432326144
46(0xff8cad00): 1
47(0xff8cad04): 134514984
...
//The heck?! This just ends with a Runtime Error after like 200 lines.
So apparently, with their compiler, overrunning the buffer by a single index causes the program to enter an infinite loop!
Now, to reiterate: I realize that I'm dealing with undefined behavior here. But despite that, I'd like to know what on earth is happening behind the scenes to cause this. The code that physically performs the buffer overrun is still performing a read of 4 bytes and writing whatever it reads to a (presumably better protected) buffer. What is the compiler/CPU doing that causes these issues?
There are two execution paths leading to the condition i < 11 being evaluated.
The first is before the initial loop iteration. Since i had been initialised to 0 just before the check, this is trivially true.
The second is after a successful loop iteration. Since the loop iteration caused values[i] to be accessed, and values only has 10 elements, this can only be valid if i < 10. And if i < 10, after i++, i < 11 must also be true.
This is what Ideone's compiler (GCC) is detecting. There is no way the condition i < 11 can ever be false unless you have an invalid program, therefore it can be optimised away. At the same time, your compiler doesn't go out of its way to check whether you might have an invalid program unless you provide additional options to tell it to do so (such as -fsanitize=undefined in GCC/clang).
This is a trade-off implementations must make. They can favour understandable behaviour for invalid programs, or they can favour raw speed for valid programs. Or a mix of both. GCC definitely focuses greatly on the latter, at least by default.
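To make that concrete, here is a hedged sketch of what the loop effectively becomes once the always-true test is removed; this is an illustration of the reasoning above, not actual compiler output:
// With `i < 11` proven always-true (on pain of UB), no exit path remains,
// so the loop walks off the end of the array until the reads finally
// fault -- matching the ~200 extra lines and the runtime error on Ideone.
for (int i = 0; ; ++i) {
    std::cout << std::setw(2) << i << "(" << (values + i) << "): "
              << values[i] << std::endl;
}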
This question already has answers here: Accessing an array out of bounds gives no error, why? (18 answers)
Closed 6 years ago.
#include <iostream>
using namespace std;

int main() {
    int a[3], no;
    cout << "Index Value\n";
    for (int i = 0; i < 100; i++) {
        cin >> no;
        a[i] = no;   // out-of-bounds write once i reaches 3
        cout << i << "\t" << a[i] << endl;
    }
    return 0;
}
Here I declared a[3]. In the for loop, I feed input into a[] 100 times, exceeding the bounds of the 3-element array.
Why doesn't it give a segmentation error right after i equals 4?
Input
1 2 3 4 5 6 7
Output
Index Value
0 1
1 2
2 3
4 0
5 5
6 6
7 7
The output is wrong when Index equals 4: it printed 0, but I expected 4.
Unfortunately for the debugging programmer, C and C++ programs don't usually segfault when you write past the end of an array. Instead they usually silently write over whatever the pointer arithmetic happens to point at -- if the OS allows it. This often overwrites other variables or even program code, causing confusing and unpredictable errors.
I have used the word "usually" here because according to the standards this is "undefined behaviour" -- that is, the compiler and runtime can do anything they like.
When developing and testing, it can be very useful to use a library such as electricfence, which puts extra checks into memory operations and would make your program fail in the way you expect.
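For example, assuming you can use GCC or Clang, AddressSanitizer does this kind of instrumentation out of the box and aborts at the first out-of-bounds write (the file name prog.cpp is just a placeholder):
g++ -g -fsanitize=address prog.cpp -o prog
./prog    # aborts with a stack-buffer-overflow report once i reaches 3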
I have a C++ program which basically performs some matrix calculations. For these I use LAPACK/BLAS and usually link to the MKL or ACML depending on the platform. A lot of these matrix calculations operate on different independent matrices, and hence I use std::thread to let these operations run in parallel. However, I noticed that I have no speed-up when using more threads. I traced the problem down to the daxpy BLAS routine. It seems that if two threads are using this routine in parallel, each thread takes twice the time, even though the two threads operate on different arrays.
The next thing I tried was writing a new simple method to perform vector additions to replace the daxpy routine. With one thread this new method is as fast as the BLAS routine, but, when compiling with gcc, it suffers from the same problem as the BLAS routine: doubling the number of threads running in parallel also doubles the amount of time each thread needs, so no speed-up is gained. However, using the Intel C++ Compiler this problem vanishes: with an increasing number of threads, the time a single thread needs is constant.
However, I need to compile as well on systems where no Intel compiler is available. So my questions are: why is there no speed-up with the gcc and is there any possibility of improving the gcc performance?
I wrote a small program to demonstrate the effect:
// $(CC) -std=c++11 -O2 threadmatrixsum.cpp -o threadmatrixsum -pthread
#include <iostream>
#include <thread>
#include <vector>
#include "boost/date_time/posix_time/posix_time.hpp"
#include "boost/timer.hpp"

void simplesum(double* a, double* b, std::size_t dim);

int main() {
    for (std::size_t num_threads {1}; num_threads <= 4; num_threads++) {
        const std::size_t N { 936 };
        std::vector<std::size_t> times(num_threads, 0);
        auto threadfunction = [&](std::size_t tid)
        {
            const std::size_t dim { N * N };
            double* pA = new double[dim];
            double* pB = new double[dim];
            // initialize all dim elements (the original loop only filled the first N)
            for (std::size_t i {0}; i < dim; ++i){
                pA[i] = i;
                pB[i] = 2*i;
            }
            boost::posix_time::ptime now1 =
                boost::posix_time::microsec_clock::universal_time();
            for (std::size_t n{0}; n < 1000; ++n){
                simplesum(pA, pB, dim);
            }
            boost::posix_time::ptime now2 =
                boost::posix_time::microsec_clock::universal_time();
            boost::posix_time::time_duration dur = now2 - now1;
            times[tid] += dur.total_milliseconds();
            delete[] pA;
            delete[] pB;
        };
        std::vector<std::thread> mythreads;
        // start threads
        for (std::size_t n {0}; n < num_threads; ++n)
        {
            mythreads.emplace_back(threadfunction, n);
        }
        // wait for threads to finish
        for (std::size_t n {0}; n < num_threads; ++n)
        {
            mythreads[n].join();
            std::cout << " Thread " << n+1 << " of " << num_threads
                      << " took " << times[n] << "msec" << std::endl;
        }
    }
}

void simplesum(double* a, double* b, std::size_t dim){
    // post-increment: the original *(++a) += *(++b); skipped element 0
    // and read one element past the end of both arrays
    for (std::size_t i{0}; i < dim; ++i)
    { *(a++) += *(b++); }
}
The output with gcc:
Thread 1 of 1 took 532msec
Thread 1 of 2 took 1104msec
Thread 2 of 2 took 1103msec
Thread 1 of 3 took 1680msec
Thread 2 of 3 took 1821msec
Thread 3 of 3 took 1808msec
Thread 1 of 4 took 2542msec
Thread 2 of 4 took 2536msec
Thread 3 of 4 took 2509msec
Thread 4 of 4 took 2515msec
The output with icc:
Thread 1 of 1 took 663msec
Thread 1 of 2 took 674msec
Thread 2 of 2 took 674msec
Thread 1 of 3 took 681msec
Thread 2 of 3 took 681msec
Thread 3 of 3 took 681msec
Thread 1 of 4 took 688msec
Thread 2 of 4 took 689msec
Thread 3 of 4 took 687msec
Thread 4 of 4 took 688msec
So, with icc the time needed for one thread to perform the computations is constant (as I would have expected; my CPU has 4 physical cores), and with gcc the time for one thread increases. Replacing the simplesum routine with BLAS::daxpy yields the same results for icc and gcc (no surprise, as most time is spent in the library), which are almost the same as the gcc results stated above.
The answer is fairly simple: Your threads are fighting for memory bandwidth!
Consider that you perform one floating point addition per 2 stores (one initialization, one after the addition) and 2 reads (in the addition). Most modern systems providing multiple CPUs actually have to share the memory controller among several cores.
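To put rough numbers on it (illustrative assumptions, not measurements): in the steady state each simplesum addition reads two 8-byte doubles and writes one back, i.e. about 24 bytes of traffic per floating point addition. If, say, 20 GB/s of memory bandwidth is shared by all cores, the machine as a whole is capped near 0.8 billion additions per second; adding threads only divides that same budget among more workers, which is exactly the doubling of per-thread time observed above.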
The following was run on a system with 2 physical CPU sockets and 12 cores (24 with HT). Your original code exhibits exactly your problem:
Thread 1 of 1 took 657msec
Thread 1 of 2 took 1447msec
Thread 2 of 2 took 1463msec
[...]
Thread 1 of 8 took 5516msec
Thread 2 of 8 took 5587msec
Thread 3 of 8 took 5205msec
Thread 4 of 8 took 5311msec
Thread 5 of 8 took 2731msec
Thread 6 of 8 took 5545msec
Thread 7 of 8 took 5551msec
Thread 8 of 8 took 4903msec
However, by simply increasing the arithmetic density, we can see a significant increase in scalability. To demonstrate, I changed your addition routine to also perform an exponentiation: *(a++) += std::exp(*(b++));. The result shows almost perfect scaling:
Thread 1 of 1 took 7671msec
Thread 1 of 2 took 7759msec
Thread 2 of 2 took 7759msec
[...]
Thread 1 of 8 took 9997msec
Thread 2 of 8 took 8135msec
Thread 3 of 8 took 10625msec
Thread 4 of 8 took 8169msec
Thread 5 of 8 took 10054msec
Thread 6 of 8 took 8242msec
Thread 7 of 8 took 9876msec
Thread 8 of 8 took 8819msec
But what about ICC?
First, ICC inlines simplesum. Proving that inlining happens is simple: using icc, I disabled multi-file interprocedural optimization and moved simplesum into its own translation unit. The difference is astonishing. The performance went from
Thread 1 of 1 took 687msec
Thread 1 of 2 took 688msec
Thread 2 of 2 took 689msec
[...]
Thread 1 of 8 took 690msec
Thread 2 of 8 took 697msec
Thread 3 of 8 took 700msec
Thread 4 of 8 took 874msec
Thread 5 of 8 took 878msec
Thread 6 of 8 took 874msec
Thread 7 of 8 took 742msec
Thread 8 of 8 took 868msec
To
Thread 1 of 1 took 1278msec
Thread 1 of 2 took 2457msec
Thread 2 of 2 took 2445msec
[...]
Thread 1 of 8 took 8868msec
Thread 2 of 8 took 8434msec
Thread 3 of 8 took 7964msec
Thread 4 of 8 took 7951msec
Thread 5 of 8 took 8872msec
Thread 6 of 8 took 8286msec
Thread 7 of 8 took 5714msec
Thread 8 of 8 took 8241msec
This already explains why the library performs badly: ICC cannot inline it, and therefore whatever causes ICC to perform better than g++ here cannot take effect.
It also gives a hint as to what ICC might be doing right here... What if instead of executing simplesum 1000 times, it interchanges the loops so that it
Loads two doubles
Adds them 1000 times (or even performs a += 1000 * b)
Stores two doubles
This would increase arithmetic density without adding any exponentials to the function... How to prove this? Well, to begin let us simply implement this optimization and see what happens! To analyse, we will look at the g++ performance. Recall our benchmark results:
Thread 1 of 1 took 640msec
Thread 1 of 2 took 1308msec
Thread 2 of 2 took 1304msec
[...]
Thread 1 of 8 took 5294msec
Thread 2 of 8 took 5370msec
Thread 3 of 8 took 5451msec
Thread 4 of 8 took 5527msec
Thread 5 of 8 took 5174msec
Thread 6 of 8 took 5464msec
Thread 7 of 8 took 4640msec
Thread 8 of 8 took 4055msec
And now, let us exchange
for (std::size_t n{0}; n < 1000; ++n){
simplesum(pA, pB, dim);
}
with the version in which the inner loop was made the outer loop:
double* a = pA; double* b = pB;
for (std::size_t i{0}; i < dim; ++i, ++a, ++b)
{
    double x = *a, y = *b;
    for (std::size_t n{0}; n < 1000; ++n)
    {
        x += y;
    }
    *a = x;
}
The results show that we are on the right track:
Thread 1 of 1 took 693msec
Thread 1 of 2 took 703msec
Thread 2 of 2 took 700msec
[...]
Thread 1 of 8 took 920msec
Thread 2 of 8 took 804msec
Thread 3 of 8 took 750msec
Thread 4 of 8 took 943msec
Thread 5 of 8 took 909msec
Thread 6 of 8 took 744msec
Thread 7 of 8 took 759msec
Thread 8 of 8 took 904msec
This proves that the loop interchange optimization is indeed the main source of the excellent performance ICC exhibits here.
Note that none of the tested compilers (MSVC, ICC, g++ and clang) will replace the loop with a multiplication, which improves performance by 200x in the single threaded and 15x in the 8-threaded cases. This is due to the fact that the numerical instability of the repeated additions may cause wildly differing results when replaced with a single multiplication. When testing with integer data types instead of floating point data types, this optimization happens.
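A hedged illustration of why the integer case is different: repeated integer addition is exact, so folding the loop into a multiplication is legal there, while for doubles rounding makes the two forms differ.
// Sketch: for integer data, a compiler may legally collapse the loop.
int x = *a, y = *b;
for (int n = 0; n < 1000; ++n) {
    x += y;   // for int, this whole loop may become: x += 1000 * y;
}
*a = x;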
How can we force g++ to perform this optimization?
Interestingly enough, the true killer for g++ is not an inability to perform loop interchange. When called with -floop-interchange, g++ can perform this optimization as well, but only when the odds are significantly stacked in its favor:
Instead of std::size_t, all bounds were expressed as ints. Not long, not unsigned int, but int. I still find it hard to believe, but it seems to be a hard requirement.
Instead of incrementing pointers, index them: a[i] += b[i];.
G++ needs to be told -floop-interchange. A simple -O3 is not enough.
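Putting the three criteria together, a hedged sketch of the qualifying routine (the exact flag set beyond -O3 -floop-interchange is an assumption on my part):
// simplesum rewritten to meet all three criteria; build with e.g.
//   g++ -std=c++11 -O3 -floop-interchange threadmatrixsum.cpp -pthread
void simplesum(double* a, double* b, int dim) {
    for (int i = 0; i < dim; ++i) {
        a[i] += b[i];   // indexed access, int bounds
    }
}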
When all three criteria are met, the g++ performance is similar to what ICC delivers:
Thread 1 of 1 took 714msec
Thread 1 of 2 took 724msec
Thread 2 of 2 took 721msec
[...]
Thread 1 of 8 took 782msec
Thread 2 of 8 took 1221msec
Thread 3 of 8 took 1225msec
Thread 4 of 8 took 781msec
Thread 5 of 8 took 788msec
Thread 6 of 8 took 1262msec
Thread 7 of 8 took 1226msec
Thread 8 of 8 took 820msec
Note: the version of g++ used in this experiment was 4.9.0 on x64 Arch Linux.
OK, I came to the conclusion that the main problem is that the processor acts on different parts of memory in parallel, and hence I assume one has to deal with lots of cache misses, which slows the process down further. Putting the actual sum function in a critical section
summutex.lock();
simplesum(pA, pB, dim);
summutex.unlock();
solves the problem of the cache misses but of course does not yield optimal speed-up. Anyway, because the other threads are now blocked, the simplesum method might as well use all available threads for the sum:
#include <omp.h>   // omp_set_num_threads; compile with -fopenmp

void simplesum(double* a, double* b, std::size_t dim, std::size_t numberofthreads){
    omp_set_num_threads(numberofthreads);
    #pragma omp parallel
    {
        #pragma omp for
        for (std::size_t i = 0; i < dim; ++i)
        {
            a[i] += b[i];
        }
    }
}
In this case all the threads work on the same chunk of memory: it should be in the processor cache, and if the processor needs to load some other part of memory into its cache, the other threads benefit as well (depending on whether this is the L1 or L2 cache, but I reckon the details do not really matter for this discussion).
I don't claim that this solution is perfect or anywhere near optimal, but it seems to work much better than the original code. And it does not rely on loop-switching tricks that I cannot apply in my actual code.
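For completeness, a hedged sketch of the call site under this scheme (summutex and num_threads are assumed to be declared as in the snippets above; <mutex> must be included):
{
    std::lock_guard<std::mutex> guard(summutex);  // one sum runs at a time...
    simplesum(pA, pB, dim, num_threads);          // ...using all threads inside
}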
I have solved the first of three phases of a Panhellenic competition (it is over now), but I am interested in knowing whether there is a lower-complexity algorithm.
Sample input (domes.in):
7 9
5 7
4 2
3 6
2 3
1 7
6 2
4 6
1 5
3 4
#include <fstream>
using namespace std;

int main()
{
    ifstream in("domes.in");
    ofstream out("domes.out");
    int orio, z;
    in >> orio;   // number of places
    in >> z;      // number of connections
    // note: int domes[orio] is a variable-length array, a compiler
    // extension; std::vector<int> domes(orio, 0) would be standard C++
    int domes[orio];
    for (int i = 0; i < orio; i++) { domes[i] = 0; }
    int k;
    for (int i = 0; i < 2*z; i++)
    {
        in >> k;
        domes[k-1]++;   // count how often each place appears in a connection
    }
    int c = 0;
    for (int i = 0; i < orio; i++)
    {
        if (domes[i] < 2)
            c++;
    }
    out << c;
    return 0;
}
It is about some places (represented by numbers). The first two numbers are the number of places (orio) and the number of connections (z). The places are "connected" somehow (the details are meaningless here). You should find how many places have fewer than 2 connections, and the output (c in this case) is the number of such places. k is a temporary variable used to read each number and increment the count of times it is seen; if a place is seen, it means it is connected to another place. I don't think there is a simpler solution, but some of my peers' programs needed less time to run, and that troubled me.
I want a table of four values between 1 and 6.
I'm using: rand() % 6 + 1;
This should give values between 1 and 6, except if rand() generates the value 0.
I keep getting 7s. I don't want any 7s.
What is the range of rand()? How do I prevent it from generating any 0 values?
Alternative solutions are quite welcome.
My teacher gave us the clue of using "random".
We use Borland C++ Builder 5 at school. I am using Dev-C++ 5.3.0.3 at home. I find there are a few differences in how they work, which I find strange.
I can't use random(); it gives me "not declared in scope".
#include <iostream>
#include <cstdlib>   // rand, srand
#include <ctime>     // time

int main() {
    int I;
    int Fasit[3];
    srand(time(NULL));
    for (I = 0; I < 4; I++) {
        Fasit[I] = rand() % 6 + 1;
    }
    std::cout << Fasit[0] << " " << Fasit[1] << " " << Fasit[2] << " " << Fasit[3] << " ";
    return 0;
}
Some values I get:
2 6 1 7
5 2 1 4
5 2 1 4
5 2 1 4
1 3 1 6
5 3 3 7
5 3 3 7
5 3 3 7
7 shouldn't be possible, should it?
PS: I know my printing is ham-fisted; I will make it a bit more elegant once the number generation works.
Consider these lines:
int Fasit[3];
for(I=0; I<4; I++) {
Fasit[I]
You declare an array of three entries, which you write to four times.
Try your program again, but with:
int Fasit[4];
You only have 3 elements in Fasit; when you write to Fasit[3], you are in the realm of undefined behavior, which in this case manifests itself as an apparent contradiction.
The declaration Fasit[3] allows you to access only Fasit[0], Fasit[1], and Fasit[2].
Accessing Fasit[3], whether for reading or writing, is undefined behavior. Your code both writes to and reads from Fasit[3] :-). The program is accessing the array out of bounds. Fix it!
As to why 7 is printed, that is just coincidence. Note that Fasit[0] through Fasit[2] are always printed in the range 1-6, as you expected.
See also:
Array Index out of bound in C
Bounds checking
int Fasit[3];
You are creating an array of size 3, which can be accessed with indexes 0, 1 or 2 only.
You are writing and reading Fasit[3], which is undefined behaviour. When behaviour is undefined, you are bound to obtain weird results. This is one of them.
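For reference, a minimal corrected sketch: the only substantive change is sizing the array to 4 so that indexes 0 through 3 are valid.
#include <iostream>
#include <cstdlib>   // rand, srand
#include <ctime>     // time

int main() {
    int Fasit[4];   // four elements: indexes 0..3
    srand(time(NULL));
    for (int I = 0; I < 4; I++) {
        Fasit[I] = rand() % 6 + 1;   // always in 1..6
    }
    std::cout << Fasit[0] << " " << Fasit[1] << " "
              << Fasit[2] << " " << Fasit[3] << "\n";
    return 0;
}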