MPI_Rank value is disturbed by MPI_RECV subroutine [duplicate] - fortran
This question already has an answer here:
MPI_Recv overwrites parts of memory it should not access
(1 answer)
Closed 7 years ago.
Despite having written long, heavily parallelized codes with complicated send/receives over three-dimensional arrays, this simple code with a two-dimensional array of integers has me at my wits' end. I combed Stack Overflow for possible solutions and found one that slightly resembles the issue I am having:
Boost.MPI: What's received isn't what was sent!
However, the solutions there point to the looping segment of the code as the culprit for overwriting sections of memory. This case seems to behave even more strangely; maybe it is a careless oversight of some simple detail on my part. The problem is with the code below:
program main
    implicit none
    include 'mpif.h'

    integer :: i, j
    integer :: counter, offset
    integer :: rank, ierr, stVal
    integer, dimension(10, 10) :: passMat, prntMat   !! passMat CONTAINS VALUES TO BE PASSED TO prntMat

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    counter = 0
    offset = (rank + 1)*300

    do j = 1, 10
        do i = 1, 10
            prntMat(i, j) = 10                !! prntMat OF BOTH RANKS CONTAIN 10
            passMat(i, j) = offset + counter  !! passMat OF rank=0 CONTAINS 300..399 AND rank=1 CONTAINS 600..699
            counter = counter + 1
        end do
    end do

    if (rank == 1) then
        call MPI_SEND(passMat(1:10, 1:10), 100, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)  !! SEND passMat OF rank=1 to rank=0
    else
        call MPI_RECV(prntMat(1:10, 1:10), 100, MPI_INTEGER, 1, 1, MPI_COMM_WORLD, stVal, ierr)
        do i = 1, 10
            print *, prntMat(:, i)
        end do
    end if

    call MPI_FINALIZE(ierr)
end program main
When I compile the code with mpif90 with no flags and run it on my machine with mpirun -np 2, I get the following output with wrong values in the first four indices of the array:
0 0 400 0 604 605 606 607 608 609
610 611 612 613 614 615 616 617 618 619
620 621 622 623 624 625 626 627 628 629
630 631 632 633 634 635 636 637 638 639
640 641 642 643 644 645 646 647 648 649
650 651 652 653 654 655 656 657 658 659
660 661 662 663 664 665 666 667 668 669
670 671 672 673 674 675 676 677 678 679
680 681 682 683 684 685 686 687 688 689
690 691 692 693 694 695 696 697 698 699
However, when I compile it with the same compiler but with the -O3 flag on, I get the correct output:
600 601 602 603 604 605 606 607 608 609
610 611 612 613 614 615 616 617 618 619
620 621 622 623 624 625 626 627 628 629
630 631 632 633 634 635 636 637 638 639
640 641 642 643 644 645 646 647 648 649
650 651 652 653 654 655 656 657 658 659
660 661 662 663 664 665 666 667 668 669
670 671 672 673 674 675 676 677 678 679
680 681 682 683 684 685 686 687 688 689
690 691 692 693 694 695 696 697 698 699
This error is machine dependent: the issue turns up only on my system running Ubuntu 14.04.2 with OpenMPI 1.6.5.
I tried this on other systems running RedHat and CentOS, and the code ran fine both with and without the -O3 flag. Curiously, those machines use an older version of OpenMPI, 1.4.
I am guessing that the -O3 flag is performing some odd optimization that is modifying the manner in which arrays are being passed between the processes.
I also tried other ways of declaring the arrays. The above code uses explicit-shape arrays. With assumed-shape and allocatable arrays I get equally bizarre, if not stranger, results, with some runs seg-faulting. I tried using Valgrind to trace the origin of these seg-faults, but I still haven't gotten the hang of keeping Valgrind from reporting false positives when running MPI programs.
I believe that resolving the difference in behaviour of the above code will help me understand the tantrums of my other codes as well.
Any help would be greatly appreciated! This code has really gotten me questioning whether all the other MPI codes I have written are sound at all.
Using the Fortran 90 interface to MPI reveals a mismatch in your call to MPI_RECV
call MPI_RECV(prntMat(1:10, 1:10), 100, MPI_INTEGER, 1, 1, MPI_COMM_WORLD, stVal, ierr)
1
Error: There is no specific subroutine for the generic ‘mpi_recv’ at (1)
This is because the status variable stVal is an integer scalar, rather than an integer array of size MPI_STATUS_SIZE. The F77 interface (include 'mpif.h') to MPI_RECV is:
INCLUDE 'mpif.h'
MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
INTEGER STATUS(MPI_STATUS_SIZE), IERROR
Changing
integer :: rank, ierr, stVal
to
integer :: rank, ierr, stVal(mpi_status_size)
produces a program that works as expected, tested with gfortran 5.1 and OpenMPI 1.8.5.
Using the F90 interface (use mpi vs include "mpif.h") lets the compiler detect the mismatched arguments at compile time rather than producing confusing runtime problems.
Related
Clojure: Sets, order and purity
Is (vec #{1 2 3}) guaranteed to always return [1 3 2], or could the order be different? I am not so much interested in the implementation details behind this, but in how to go from unordered to ordered in general, in order to keep my functions pure and easily testable.
As mentioned, standard #{} sets (both PersistentArrayMap and PersistentHashMap; depending on the size) are considered unordered. Regarding purity with respect to calling seq on a set though, the current implementation does seem to return a well-defined, consistent order; just not an easily predictable one: (let [r (range 1000) seqs (repeatedly 1000 #(seq (add-randomly #{} r)))] ; See how many different orders were produced (println (count (set seqs))) (println (first seqs))) 1 (0 893 920 558 453 584 487 637 972 519 357 716 950 275 530 929 789 389 586 410 433 765 521 451 291 443 798 779 970 249 638 299 121 734 287 65 702 70 949 218 648 812 62 74 774 475 497 580 891 164 282 769 799 273 186 430 641 529 898 370 834 233 298 188 240 110 130 982 620 311 931 882 128 399 989 377 468 259 210 229 153 621 213 670 977 343 958 887 472 7 894 59 934 473 86 756 830 613 491 154 20 224 355 592 610 806 571 466 72 454 888 463 851 770 814 859 58 964 980 205 555 552 60 835 459 175 322 510 662 27 352 493 899 416 777 694 1 631 854 69 101 24 901 547 102 788 713 385 988 135 397 773 490 752 354 884 360 998 961 55 568 797 688 763 269 676 448 527 206 966 165 715 387 652 683 85 721 862 615 681 225 865 297 39 805 274 88 217 46 682 508 149 415 239 478 878 157 345 300 743 921 4 550 204 470 646 77 106 197 405 897 726 776 940 755 902 518 232 260 823 267 119 319 534 222 603 293 95 450 329 144 504 819 818 505 723 992 176 863 471 349 512 710 192 54 92 221 141 502 871 464 801 307 935 758 290 627 517 361 264 137 356 728 976 678 327 234 856 817 104 353 15 48 945 759 242 832 969 50 956 917 557 251 394 116 585 583 75 437 516 994 930 967 687 159 848 995 709 99 540 645 749 479 890 630 916 815 281 402 669 781 740 975 429 309 458 21 388 495 952 626 875 31 113 32 811 827 407 398 136 691 847 825 139 506 396 460 483 589 581 932 174 578 855 331 363 284 208 305 955 796 708 182 256 657 514 731 619 985 485 214 193 685 804 869 836 785 635 442 561 954 656 607 241 314 782 226 235 672 420 418 262 263 304 401 673 40 129 600 729 467 445 317 294 91 810 364 987 880 515 412 553 974 341 117 665 523 172 601 108 156 358 308 908 649 531 923 223 419 365 944 181 417 979 278 56 942 33 13 867 22 618 380 257 338 500 909 993 168 833 496 947 347 501 596 872 792 90 237 826 292 109 216 191 498 829 761 375 525 367 143 742 178 640 247 328 391 990 167 707 36 41 474 187 551 996 528 971 599 376 195 889 316 668 428 303 671 794 905 368 560 565 310 366 118 522 150 886 313 384 567 238 846 962 845 196 162 393 184 219 999 461 89 100 426 604 477 844 541 351 243 131 790 963 629 873 122 933 43 231 61 654 883 598 413 29 784 800 151 369 348 575 693 44 739 258 250 674 539 301 838 424 93 6 684 951 573 408 563 850 616 866 111 997 689 28 456 374 608 737 548 538 895 411 957 134 943 64 623 465 816 334 323 189 280 198 155 295 808 248 587 285 507 227 724 476 941 911 853 494 220 842 103 697 611 170 51 25 261 768 822 201 904 590 489 778 166 447 34 252 978 775 325 594 436 828 535 813 146 741 876 228 907 306 125 276 340 148 482 622 588 17 312 606 3 520 760 720 286 279 879 536 663 12 440 332 330 382 152 544 803 642 435 342 703 783 695 973 2 948 66 484 439 236 556 373 142 359 727 371 772 444 570 757 107 532 984 23 745 719 230 625 47 526 180 786 870 537 659 158 991 350 35 849 644 881 127 927 675 383 533 910 302 564 701 566 821 787 82 76 735 492 718 771 215 97 704 277 926 751 19 335 597 938 57 609 202 68 452 200 868 11 115 946 983 339 431 462 337 698 255 503 546 9 953 857 706 632 457 427 145 5 733 624 831 244 918 824 289 112 925 730 699 712 414 839 802 860 179 344 481 732 661 245 378 913 906 658 
266 324 793 680 446 524 254 404 617 283 513 572 705 959 83 634 138 346 14 455 265 449 333 650 639 569 326 746 647 45 53 559 78 924 562 542 912 664 315 914 480 132 753 900 26 766 123 203 667 392 577 807 140 321 795 441 700 268 840 16 320 133 288 381 605 163 81 120 643 79 211 38 173 126 981 421 593 636 98 422 423 614 762 582 666 554 409 574 595 124 747 171 87 169 653 679 843 160 30 400 767 896 928 696 738 809 509 736 207 874 434 690 194 511 73 486 336 96 837 937 10 660 272 499 488 903 386 270 576 717 543 271 18 395 403 469 105 185 52 545 633 114 968 253 612 628 748 209 147 655 750 852 425 864 67 296 602 318 161 651 725 372 406 438 780 711 71 939 579 877 722 42 919 80 885 986 714 677 199 841 754 791 861 591 744 960 37 183 965 892 432 379 63 212 94 362 8 686 692 764 246 190 549 922 177 915 936 820 49 858 390 84) So yes, it seems that within a single run of a program, the order of (seq #{1 2 3}) can be relied upon, and can be considered pure. The language gives no guarantees though, and this property may not always exist, so really, I wouldn't rely on it. It's an implementation detail. If you require a consistent ordering, it may be beneficial to have a vector along with the set to define the order. You could do something like: (def pair [#{} []]) (defn add [p n] (-> p (update 0 conj n) (update 1 conj n))) (-> pair (add 1) (add 2)) => [#{1 2} [1 2]] Reference the set when you want to do a membership test, and the vector when you need order. Of course, this requires twice as much memory as it otherwise would though, so this may not always be practical. Additions to both sets and vectors are essentially constant however, so additions will still be quick.
double free or corruption (fasttop): 0x000000000063d070 *** c++ sieve program
I am writing a sieve program in C++. But for every legitimate input, the program always produces output with 4 primes found and "2 3 5", no matter how the input varies. As I try to run the program via the console, it gives an error message saying double free or corruption (fasttop): 0x000000000063d070 ***. Btw, I am new to C++. And also, I am trying to format the output correctly, but they are just flying around. This is the desired format: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997
Aside from your double free being caused by calling the destructor explicitly, as @PaulMcKenzie said in the comments, your problem with only outputting the first few primes is because of this line: int n = sizeof(is_prime_); is_prime_ is a pointer, so its size is fixed at compile time (probably 4 or 8 bytes depending on your system). You already have limit_ as a value; you should use that to work out your n.
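To make the sizing point concrete, here is a tiny, hedged sketch; is_prime_ and limit_ are just the names mentioned above, and since the asker's actual class is not shown, the surrounding code is purely illustrative.

#include <iostream>

int main() {
    const int limit_ = 100;                    // hypothetical sieve limit
    bool* is_prime_ = new bool[limit_ + 1];    // hypothetical heap-allocated flag array

    // WRONG: sizeof a pointer is the size of the pointer itself
    // (typically 4 or 8 bytes), not the number of elements it points to.
    int n_wrong = static_cast<int>(sizeof(is_prime_));

    // BETTER: the logical size is already known from limit_.
    int n = limit_ + 1;

    std::cout << "sizeof(pointer) = " << n_wrong << ", elements = " << n << '\n';
    delete[] is_prime_;
    return 0;
}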
Verifying output to "Find the numbers between 1 and 1000 whose prime factors' sum is itself a prime" from Allain's Jumping into C++ (ch7 #3)
The question: Design a program that finds all numbers from 1 to 1000 whose prime factors, when added together, sum up to a prime number (for example, 12 has prime factors of 2, 2, and 3, which sum to 7, which is prime). Implement the code for that algorithm. I modified the problem to only sum unique factors, because I don't see why you'd count a factor twice, as in his example using 12. My solution. Is there any good (read: automated) way to verify the output of my program? Sample output for 1 to 1000: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 17 19 20 22 23 24 25 26 28 29 30 31 34 37 40 41 43 44 46 47 48 49 52 53 58 59 60 61 63 67 68 70 71 73 76 79 80 82 83 88 89 92 94 96 97 99 101 103 107 109 113 116 117 118 120 121 124 127 131 136 137 139 140 142 147 148 149 151 153 157 160 163 164 167 169 171 172 173 176 179 181 184 188 189 191 192 193 197 198 199 202 207 210 211 212 214 223 227 229 232 233 239 240 241 244 251 252 257 261 263 268 269 271 272 273 274 275 277 279 280 281 283 286 289 292 293 294 297 298 306 307 311 313 317 320 325 331 332 333 334 337 347 349 351 352 353 358 359 361 367 368 369 373 376 379 382 383 384 388 389 394 396 397 399 401 404 409 412 414 419 421 423 424 425 428 431 433 439 443 449 454 457 459 461 462 463 464 467 468 472 475 478 479 480 487 491 495 499 503 509 513 521 522 523 524 529 531 538 539 541 544 546 547 548 549 550 557 560 561 562 563 567 569 571 572 575 577 587 588 593 594 599 601 603 604 605 607 612 613 617 619 621 622 628 631 639 640 641 643 646 647 651 652 653 659 661 664 668 673 677 683 684 691 692 694 701 704 709 712 714 718 719 725 726 727 733 736 738 739 741 743 751 752 756 757 759 761 764 765 768 769 772 773 775 777 783 787 792 797 798 801 809 811 821 823 825 827 828 829 833 837 838 839 841 846 847 848 850 853 856 857 859 862 863 873 877 881 883 887 891 892 903 904 907 908 909 911 918 919 922 925 928 929 932 937 941 944 947 953 954 957 960 961 966 967 971 975 977 981 983 991 997 999 Update: I have solved my problem and verified the output of my program using an OEIS given series, as suggested by #MVW (shown in the source given by my new github solution). In the future, I will aim to test my programs by doing zero or more of the following (depending on the scope/importance of the problem): google keywords for an existing solution to the problem, comparing it against my solution if I find it unit test components for correctness as they're built and integrated, comparing these tests with known correct outputs
Some suggestions: You need to check the properties of your calculated numbers. Here that means calculating the prime factors, calculating their sum, and testing whether that sum is a prime number. Which is what your program should do in the first place, by the way.

So one nice option for checking is comparing your output with a known solution or the output of another program which is known to work. The tricky bit is to have such a solution or program available. And I neglect that your comparison could be plagued by errors as well :-) If you just compare it with other implementations, e.g. programs from other folks here, it would amount more to a vote than a proof. It would just give increased probability that your program is correct, if several independent implementations come up with the same result. Of course all implementations could err :-) The more that agree, the better. And the more diverse the implementations are, the better. E.g. you could use different programming languages, algebraic systems, or a friend with time, paper and pencil, and Wikipedia. :-)

Another means is to add checks to your intermediate steps, to get more confidence in your result. Kind of building a chain of trust. You could output the prime factors you determined and compare them with the output of a prime factorization program which is known to work. Then you check whether your summing works. Finally you could check whether the primality test you apply to the candidate sums is working correctly by feeding it known prime numbers and non-prime numbers, and so on. That is kind of what folks do with unit testing, for example: trying to cover most parts of the code as working, hoping that if the parts work, the whole will work.

Or you could formally prove your program step by step, using Hoare calculus for example or another formal method. But that is tricky, and you might end up shifting program errors into errors in the proof.

And today, in the era of the internet, of course, you could search for the solution: try searching for sum of prime factors is prime in the online encyclopedia of integer sequences, which should give you series A100118. :-) It is the problem with multiplicity, but it shows you what the number theory pros do, with Mathematica and program fragments to calculate the series, the argument for the case of 1, and literature. Quite impressive.
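As a concrete illustration of the "compare against an independent implementation" suggestion, here is a minimal C++ sketch (not from the original answers): it sums each distinct prime factor once, matching the modified problem statement above, and starts at 2 because the treatment of 1 is exactly the judgment call discussed.

#include <iostream>

static bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Sum of distinct prime factors of n (e.g. 12 = 2*2*3 -> 2 + 3 = 5).
static long long sumDistinctPrimeFactors(long long n) {
    long long sum = 0;
    for (long long p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            sum += p;
            while (n % p == 0) n /= p;   // strip the factor so it is counted once
        }
    }
    if (n > 1) sum += n;                 // whatever remains is itself a prime factor
    return sum;
}

int main() {
    for (int n = 2; n <= 1000; ++n)
        if (isPrime(sumDistinctPrimeFactors(n)))
            std::cout << n << ' ';
    std::cout << '\n';
    return 0;
}

Diffing its output against your own program's list (and against the Haskell list below) is a quick way to spot where the implementations disagree.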
Here's the answer I get. I exclude 1 as it has no prime divisors so their sum is 0, not a prime. Haskell> filter (isPrime . sum . map fst . primePowers) [2..1000] [2,3,4,5,6,7,8,9,10,11,12,13,16,17,18,19,20,22,23,24,25,27,29,31,32,34,36,37,40, 41,43,44,47,48,49,50,53,54,58,59,61,64,67,68,71,72,73,79,80,81,82,83,88,89,96,97 ,100,101,103,107,108,109,113,116,118,121,125,127,128,131,136,137,139,142,144,149 ,151,157,160,162,163,164,165,167,169,173,176,179,181,191,192,193,197,199,200,202 ,210,211,214,216,223,227,229,232,233,236,239,241,242,243,250,251,256,257,263,269 ,271,272,273,274,277,281,283,284,288,289,293,298,307,311,313,317,320,324,328,331 ,337,343,345,347,349,352,353,358,359,361,367,373,379,382,383,384,385,389,390,394 ,397,399,400,401,404,409,419,420,421,428,431,432,433,435,439,443,449,454,457,461 ,462,463,464,467,472,478,479,484,486,487,491,495,499,500,503,509,512,521,523,529 ,538,541,544,547,548,557,561,562,563,568,569,570,571,576,577,578,587,593,595,596 ,599,601,607,613,617,619,622,625,630,631,640,641,643,647,648,651,653,656,659,661 ,665,673,677,683,691,694,701,704,709,714,715,716,719,727,729,733,739,743,751,757 ,759,761,764,768,769,773,777,780,787,788,795,797,798,800,808,809,811,819,821,823 ,825,827,829,838,839,840,841,853,856,857,858,859,862,863,864,877,881,883,885,887 ,903,907,908,911,919,922,924,928,929,930,937,941,944,947,953,956,957,961,967,968 ,971,972,977,983,991,997,1000] Haskell> primePowers 12 [(2,2),(3,1)] Haskell> primePowers 14 [(2,1),(7,1)] You could hard-code this list in and test against it. I'm pretty confident these results are without error. (read . is "of").
Qt application killed because Out Of Memory (OOM)
I am running a Qt application on embedded Linux platform. The system has 128 MB RAM, 512MB NAND, no swap. The application uses a custom library for the peripherals, the rest are all Qt and c/c++ libs. The application uses SQLITE3 as well. After 2-3 hours, the machine starts running very slow, shell commands take 10 or so seconds to respond. Eventually the machine hangs, and finally OOM killer kills the application, and the system starts behaving at normal speed. After some system memory observations using top command reveals that while application is running, the system free memory is decreasing, while slab keeps on increasing. These are the snaps of top given below. The application is named xyz. At Application start : Mem total:126164 anon:3308 map:8436 free:32456 slab:60936 buf:0 cache:27528 dirty:0 write:0 Swap total:0 free:0 PID VSZ VSZRW^ RSS (SHR) DIRTY (SHR) STACK COMMAND 776 29080 9228 8036 528 968 0 84 ./xyz -qws 781 3960 736 1976 1456 520 0 84 sshd: root#notty 786 3676 680 1208 764 416 0 88 /usr/libexec/sftp-server 770 3792 568 1948 1472 464 0 84 {sshd} sshd: root#pts/0 766 3792 568 956 688 252 0 84 /usr/sbin/sshd 388 1864 284 552 332 188 0 84 udevd --daemon 789 2832 272 688 584 84 0 84 top 774 2828 268 668 560 84 0 84 -sh 709 2896 268 556 464 80 0 84 /usr/sbin/inetd 747 2828 268 596 516 68 0 84 /sbin/getty -L ttymxc0 115200 vt100 777 2824 264 444 368 68 0 84 tee out.log 785 2824 264 484 416 68 0 84 sh -c /usr/libexec/sftp-server 1 2824 264 556 488 64 0 84 init After some time : Mem total:126164 anon:3312 map:8440 free:9244 slab:83976 buf:0 cache:27584 dirty:0 write:0 Swap total:0 free:0 PID VSZ VSZRW^ RSS (SHR) DIRTY (SHR) STACK COMMAND 776 29080 9228 8044 528 972 0 84 ./xyz -qws 781 3960 736 1976 1456 520 0 84 sshd: root#notty 786 3676 680 1208 764 416 0 88 /usr/libexec/sftp-server 770 3792 568 1948 1472 464 0 84 {sshd} sshd: root#pts/0 766 3792 568 956 688 252 0 84 /usr/sbin/sshd 388 1864 284 552 332 188 0 84 udevd --daemon 789 2832 272 688 584 84 0 84 top 774 2828 268 668 560 84 0 84 -sh 709 2896 268 556 464 80 0 84 /usr/sbin/inetd 747 2828 268 596 516 68 0 84 /sbin/getty -L ttymxc0 115200 vt100 777 2824 264 444 368 68 0 84 tee out.log 785 2824 264 484 416 68 0 84 sh -c /usr/libexec/sftp-server 1 2824 264 556 488 64 0 84 init Funnily though, I can not see any major changes in the output of top involving the application itself. 
Eventually the application is killed, top output after that : Mem total:126164 anon:2356 map:916 free:2368 slab:117944 buf:0 cache:1580 dirty:0 write:0 Swap total:0 free:0 PID VSZ VSZRW^ RSS (SHR) DIRTY (SHR) STACK COMMAND 781 3960 736 708 184 520 0 84 sshd: root#notty 786 3724 728 736 172 484 0 88 /usr/libexec/sftp-server 770 3792 568 648 188 460 0 84 {sshd} sshd: root#pts/0 766 3792 568 252 0 252 0 84 /usr/sbin/sshd 388 1864 284 188 0 188 0 84 udevd --daemon 819 2832 272 676 348 84 0 84 top 774 2828 268 512 324 96 0 84 -sh 709 2896 268 80 0 80 0 84 /usr/sbin/inetd 747 2828 268 68 0 68 0 84 /sbin/getty -L ttymxc0 115200 vt100 785 2824 264 68 0 68 0 84 sh -c /usr/libexec/sftp-server 1 2824 264 64 0 64 0 84 init The dmesg shows : sh invoked oom-killer: gfp_mask=0xd0, order=2, oomkilladj=0 [<c002d4c4>] (unwind_backtrace+0x0/0xd4) from [<c0073ac0>] (oom_kill_process+0x54/0x1b8) [<c0073ac0>] (oom_kill_process+0x54/0x1b8) from [<c0073f14>] (__out_of_memory+0x154/0x178) [<c0073f14>] (__out_of_memory+0x154/0x178) from [<c0073fa0>] (out_of_memory+0x68/0x9c) [<c0073fa0>] (out_of_memory+0x68/0x9c) from [<c007649c>] (__alloc_pages_nodemask+0x3e0/0x4c8) [<c007649c>] (__alloc_pages_nodemask+0x3e0/0x4c8) from [<c0076598>] (__get_free_pages+0x14/0x4c) [<c0076598>] (__get_free_pages+0x14/0x4c) from [<c002f528>] (get_pgd_slow+0x14/0xdc) [<c002f528>] (get_pgd_slow+0x14/0xdc) from [<c0043890>] (mm_init+0x84/0xc4) [<c0043890>] (mm_init+0x84/0xc4) from [<c0097b94>] (bprm_mm_init+0x10/0x138) [<c0097b94>] (bprm_mm_init+0x10/0x138) from [<c00980a8>] (do_execve+0xf4/0x2a8) [<c00980a8>] (do_execve+0xf4/0x2a8) from [<c002afc4>] (sys_execve+0x38/0x5c) [<c002afc4>] (sys_execve+0x38/0x5c) from [<c0027d20>] (ret_fast_syscall+0x0/0x2c) Mem-info: DMA per-cpu: CPU 0: hi: 0, btch: 1 usd: 0 Normal per-cpu: CPU 0: hi: 42, btch: 7 usd: 0 Active_anon:424 active_file:11 inactive_anon:428 inactive_file:3 unevictable:0 dirty:0 writeback:0 unstable:0 free:608 slab:29498 mapped:14 pagetables:59 bounce:0 DMA free:692kB min:268kB low:332kB high:400kB active_anon:0kB inactive_anon:0kB active_file:4kB inactive_file:0kB unevictable:0kB present:24384kB pages_scanned:0 all_unreclaimable? no lowmem_reserve[]: 0 103 103 Normal free:1740kB min:1168kB low:1460kB high:1752kB active_anon:1696kB inactive_anon:1712kB active_file:40kB inactive_file:12kB unevictable:0kB present:105664kB pages_scanned:0 all_unreclaimable? no lowmem_reserve[]: 0 0 0 DMA: 3*4kB 3*8kB 5*16kB 2*32kB 4*64kB 2*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 692kB Normal: 377*4kB 1*8kB 4*16kB 1*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1740kB 30 total pagecache pages 0 pages in swap cache Swap cache stats: add 0, delete 0, find 0/0 Free swap = 0kB Total swap = 0kB 32768 pages of RAM 687 free pages 1306 reserved pages 29498 slab pages 59 pages shared 0 pages swap cached Out of memory: kill process 774 (sh) score 339 or a child Killed process 776 (xyz) So it's obvious that there is a memory leak, it must be my app since my app is killed. But I am not doing any malloc s from the program. I have taken care as to limit the scope of variables so that they are deallocated after they are used. So I am at a complete loss as to why is slab increasing in the top output. I have tried http://valgrind.org/docs/manual/faq.html#faq.reports but didn't work. Currently trying to use Valgrind on desktop (since I have read it only works for arm-cortex) to check my business logic. 
Additional info:
root@freescale ~/Application/app$ uname -a
Linux freescale 2.6.31-207-g7286c01 #2053 Fri Jun 22 10:29:11 IST 2012 armv5tejl GNU/Linux
Compiler: arm-none-linux-gnueabi-4.1.2 glibc2.5
cpp libs: libstdc++.so.6.0.8
Qt: 4.7.3 libs
Any pointers would be greatly appreciated...
I don't think the problem is directly in your code. The reason is obvious: your application's footprint does not increase (neither RSS nor VSZ grows). However, you do see the number of slabs increasing. You cannot use or increase the number of slabs from your application - it's a kernel-only thingie. Some obvious causes of slab growth, off the top of my head:
you never really close network sockets
you read many files, but never close them
you use many ioctls
I would run strace and look at its output for a while. strace intercepts interactions with the kernel. If you have memory issues, I'd expect repeated calls to brk(). If you have other issues, you'll see repeated calls to open without close.
If you allocate any data structures, check that adding children etc. is done correctly; I had a similar bug in my code. Also, if you make big queries to the database, it may use more RAM. Try to find a memory-leak detector to check whether there is any leak.
SQL Server 2008, numeric library, c++, LAPACK, memory question
I am trying to send a table of numbers in SQL Server 2008 like: 1att 2att 3att 4att 5att 6att 7att ... attn -------------------------------------------- 565 526 472 527 483 529 476 470 502 497 491 483 488 488 483 496 515 491 467 516 480 477 494 497 478 519 471 488 466 547 498 477 466 475 480 516 543 491 449 485 495 468 452 479 516 473 475 431 474 460 342 471 386 549 489 477 462 428 489 491 481 483 475 485 474 472 452 525 508 459 561 529 473 457 476 498 485 465 540 475 525 455 477 415 434 475 499 476 482 551 463 476 476 471 488 526 394 439 475 479 473 491 519 483 474 476 474 478 455 518 465 445 496 500 518 470 536 557 498 492 449 478 491 492 476 460 484 509 538 473 548 497 551 477 498 471 430 482 437 516 483 487 453 456 505 476 489 495 472 476 487 516 466 466 495 488 475 550 565 510 473 515 470 490 480 475 479 544 468 486 496 484 495 524 435 469 612 493 467 477 .... .... (several more rows) .... 511 471 529 553 539 501 477 474 494 via visual studio 2008 (in a c++ project) to a mathematical library LAPACK. Is it possible to pass the table in SQL Server to LAPACK (via c++ in visual studio 2008) like a memory pointer, or store all the table in RAM, and LAPACK read memory or pointer to memory, but without writing to a file and reading it Could you please suggest how to pass a table like this (maybe the location of table in memory, or something similar) to LAPACK? (so I am able to do some computing with LAPACK of the table stored in SQL Server via visual studio 2008 c++ project) ----EDIT--- #MarkD, As you said in your anwer could you please give an example of computing SVD with the idea in the example, using std::vector class ?
LAPACK requires the data sent to it to be in a Fortran-style (column-major) array. You won't be able to pass the data directly from SQL to LAPACK; you will need to read the data into a column-ordered, contiguous memory array and pass a pointer to the first element of the array to the LAPACK routine of interest. There are many LAPACK wrappers for C/C++ out there that make this much easier.

Edit: just saw you are looking specifically for how to pass such an array. As I mentioned, there are many wrappers out there for doing this (just do a search for C/C++ LAPACK). An easy way to create your array is to use the std::vector class. You would then read the data in, column by column, adding the elements to your vector. So if you wanted to column-order the array you show in your example, your vector would end up looking something like:

//Column 1      Column 2      Column 3    ... last element
[565 497 467 488 ... 526 491 516 466 ... 472 483 480 547 ... ... 494]

You would then pass the LAPACK routine of interest the memory location of the first element, e.g. &myVector[0]. This is possible using std::vector, as the standard ensures that a vector uses contiguous memory storage. The LAPACK routines also all require the size/dimensions of the matrix/vectors you are passing (so you'll need to calculate/specify these values for the function call). If you can post the specific LAPACK routine you want to use, I can give a more thorough example.
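Since the follow-up asks about SVD specifically, here is a rough sketch of the column-major std::vector idea using the LAPACKE C interface (LAPACKE_dgesvd). The tiny 3x2 matrix, the variable names, and the assumption that you link against LAPACKE are mine, not part of the original answer, so treat it as an illustration rather than a drop-in solution.

#include <lapacke.h>   // LAPACKE C interface to LAPACK
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const lapack_int m = 3, n = 2;                 // 3 rows, 2 columns (toy example)

    // Column-major storage: all of column 1 first, then column 2
    // (values here are just the first few entries from the table above).
    std::vector<double> a = { 565, 497, 467,       // column 1
                              526, 491, 516 };     // column 2

    std::vector<double> s(std::min(m, n));         // singular values
    std::vector<double> u(m * m), vt(n * n);       // U (m x m) and V^T (n x n)
    std::vector<double> superb(std::min(m, n) - 1);// scratch space dgesvd needs

    // Note: dgesvd overwrites the contents of a.
    lapack_int info = LAPACKE_dgesvd(LAPACK_COL_MAJOR, 'A', 'A',
                                     m, n, a.data(), m,    // lda = m for column-major
                                     s.data(), u.data(), m,
                                     vt.data(), n, superb.data());
    if (info != 0) {
        std::cerr << "dgesvd failed, info = " << info << '\n';
        return 1;
    }

    std::cout << "singular values:";
    for (double sv : s) std::cout << ' ' << sv;
    std::cout << '\n';
    return 0;
}

Compile with something like g++ svd.cpp -llapacke -llapack -lblas; the exact link line depends on your LAPACK installation. Reading the SQL table column by column into a (via whatever database API you use) keeps the data in exactly the layout LAPACK expects, with no intermediate file.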