SourceKitService & swift consume 100% CPU and hundreds of gigabytes of memory - SwiftUI

1.
Text(String(format: "%.1fs", manager.currentTime))
    .font(.caption).foregroundColor(.secondary)
2.
Text("\(manager.currentTime)")
    .font(.caption).foregroundColor(.secondary)
Code #1 consumes 100% CPU and hundreds of gigabytes of memory, but code #2 works fine. How strange is that?
Xcode 11.3

How to diagnose a Visual Studio project slowing down as time goes on?

Computer:
Processor: Intel Xeon Silver 4114 CPU @ 2.19 GHz (2 processors)
RAM: 96 GB 2666 MHz: 12 × 8 GB sticks
OS: Windows 10
GPU: None
Hard drive: Samsung MZVLB512HAJQ-000H2 - 512GB M.2 PCIe NVMe
IDE:
Visual Studio 2019
I am including what I am doing in case it is relevant. I am running Visual Studio code where I read data off a GSC PCI SIO4B Sync Card 256K. Using the API for this card (documentation: http://www.generalstandards.com/downloads/GscApi.1.6.10.1.pdf) I read 150 bytes of data at a rate of 100 Hz using the code below. That data is then split into the message structure of my device. I can't give info on the message structure, but the data is combined into the various words using a union and added to an integer array int Data[100];
Union Example:
union data_set {
    unsigned int integer;
    unsigned char input[2];
} word;
Example of how the data is read:
PLX_PHYSICAL_MEM cpRxBuffer;
#define TEST_BUFFER_SIZE 0x400

// allocate memory for the buffer
cpRxBuffer.Size = TEST_BUFFER_SIZE;
status = GscAllocPhysicalMemory(BoardNum, &cpRxBuffer);
status = GscMapPhysicalMemory(BoardNum, &cpRxBuffer);
memset((unsigned char*)cpRxBuffer.UserAddr, 0xa5, sizeof(cpRxBuffer));

// start data reception:
status = GscSio4ChannelReceivePlxPhysData(BoardNum, iRxChannel, &cpRxBuffer, SetMaxBytes, &messageID);

// wait for the Rx operation to complete
status = GscSio4ChannelWaitForTransfer(BoardNum, iRxChannel, 7000, messageID, &amount);
if (status)
{
    // If we have an error, "amount" will contain the number of bytes that were
    // actually transferred.
    DisplayErrorMessage(status);
    printf("\n\t%04X bytes out of %04X transferred", amount, SetMaxBytes);
}
My issue is that this code works fine and keeps up for around 5 minutes, then it randomly stops being able to keep up, and the FIFO (first in, first out) register on the PCI card begins to fill up faster than the code can process the data. To me this seems like a memory leak issue, since the code works fine for a long time and then starts to slow down when nothing has changed: all the code is doing is reading the data off the card. We used to save the data in a really large array, but even after removing that we had the same issue.
I am unsure how to figure out exactly what is happening, and I'm hoping for a way to determine whether there is a memory leak and how to fix it if there is.
A memory leak is only a guess, though; it could very well be something else, so any out-of-the-box suggestions for diagnosing the problem are also appreciated.
Similar to Paul's answer, but I like to strategically place two (or more) _CrtMemCheckpoint calls followed by _CrtMemDifference, to cut down the noise.
Memory leaks can be detected and reported on (in Debug builds) by calling the _CrtDumpMemoryLeaks function. When running under the debugger, this will tell you (in the output tab) how many allocations you have at the time that it is called and the file and line number that each was allocated from.
Call this right at the end of your program, after you (think you) have freed all the resources you use. Anything left over is a candidate for being a leak.
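A minimal sketch of how those two suggestions might be combined (Debug build with MSVC only; the checkpoint placement and the deliberately leaked buffer are purely illustrative):
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

int main()
{
    _CrtMemState before, after, diff;
    _CrtMemCheckpoint(&before);          // snapshot before the suspect code

    char *leaked = new char[150];        // stand-in for the real acquisition loop
    (void)leaked;                        // deliberately never freed

    _CrtMemCheckpoint(&after);           // snapshot after the suspect code
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);    // report only what changed between the snapshots

    _CrtDumpMemoryLeaks();               // whole-program report of outstanding allocations
    return 0;
}
These calls are only active in Debug builds (when _DEBUG is defined); the report goes to the debugger's output window.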

How to improve SSD I/O throughput concurrency in Linux

The program below reads in a bunch of lines from a file and parses them. It could be faster. On the other hand, if I have several cores and several files to process, that shouldn't matter much; I can just run jobs in parallel.
Unfortunately, this doesn't seem to work on my Arch machine. Running two copies of the program is only slightly (if at all) faster than running one copy (see below), and less than 20% of what my drive is capable of. On an Ubuntu machine with identical hardware, the situation is a bit better. I get linear scaling for 3-4 cores, but I still top out at about 50% of my SSD drive's capacity.
What obstacles prevent linear scaling of I/O throughput as the number of cores increases, and what can be done to improve I/O concurrency on the software/OS side?
P.S. - For the hardware alluded to below, a single core is fast enough that reading would be I/O bound if I moved parsing to a separate thread. There are also other optimizations for improving single-core performance. For this question, however, I'd like to focus on the concurrency and how my coding and OS choices affect it.
Details:
Here are a few lines of iostat -x 1 output:
Copying a file to /dev/null with dd:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 883.00 0.00 113024.00 0.00 256.00 1.80 2.04 2.04 0.00 1.13 100.00
Running my program:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 1.00 1.00 141.00 2.00 18176.00 12.00 254.38 0.17 1.08 0.71 27.00 0.96 13.70
Running two instances of my program at once, reading different files:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 11.00 0.00 139.00 0.00 19200.00 0.00 276.26 1.16 8.16 8.16 0.00 6.96 96.70
It's barely better! Adding more cores doesn't increase throughput; in fact, it starts to degrade and become less consistent.
Here's one instance of my program and one instance of dd:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 9.00 0.00 468.00 0.00 61056.00 0.00 260.92 2.07 4.37 4.37 0.00 2.14 100.00
Here is my code:
#include <string>
#include <vector>
#include <boost/filesystem/path.hpp>
#include <boost/algorithm/string.hpp>
#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/fstream.hpp>

typedef boost::filesystem::path path;
typedef boost::filesystem::ifstream ifstream;

int main(int argc, char ** argv) {
    path p{std::string(argv[1])};
    ifstream f(p);
    std::string line;
    std::vector<boost::iterator_range<std::string::iterator>> fields;
    for (getline(f, line); !f.eof(); getline(f, line)) {
        boost::split(fields, line, boost::is_any_of(","));
    }
    f.close();
    return 0;
}
Here's how I compiled it:
g++ -std=c++14 -lboost_filesystem -o gah.o -c gah.cxx
g++ -std=c++14 -lboost_filesystem -lboost_system -lboost_iostreams -o gah gah.o
Edit: Even more details
I clear the memory cache (free page cache, dentries, and inodes) before running the above benchmarks, to keep Linux from serving reads out of the page cache.
My process appears to be CPU-bound; switching to mmap or changing the buffer size via pubsetbuf has no noticeable effect on recorded throughput.
On the other hand, scaling is IO-bound. If I bring all files into memory cache before running my program, throughput (now measured via execution time since iostat can't see it) scales linearly with the number of cores.
What I'm really trying to understand is: when I read from disk using multiple sequential-read processes, why doesn't throughput scale linearly with the number of processes, up to something close to the drive's maximum read speed? Why would I hit an I/O bound without saturating throughput, and how does the point at which I do so depend on the OS/software stack I am running on?
You're not comparing similar things.
You're comparing
Copying a file to /dev/dull with dd:
(I'll assume you meant /dev/null...)
with
int main(int argc, char ** argv) {
    path p{std::string(argv[1])};
    ifstream f(p);
    std::string line;
    std::vector<boost::iterator_range<std::string::iterator>> fields;
    for (getline(f, line); !f.eof(); getline(f, line)) {
        boost::split(fields, line, boost::is_any_of(","));
    }
    f.close();
    return 0;
}
The first just reads raw bytes without a care as to what they are and dumps them into the bit bucket. Your code reads by lines, which need to be identified, and then splits them into a vector.
The way you're reading the data, you read a line, then spend time processing it. The dd command you compare your code to never spends time doing things other than reading data - it doesn't have to read then process then read then process...
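To get a feel for how much of the gap is parsing rather than I/O, one rough experiment (not part of the original answer; the 1 MiB block size is an arbitrary choice) is to read the same file in large raw chunks, closer to what dd does, and compare the iostat numbers:
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    std::FILE *f = std::fopen(argv[1], "rb");
    if (!f) return 1;
    std::vector<char> block(1 << 20);   // 1 MiB raw reads, no line handling at all
    std::size_t total = 0, n = 0;
    while ((n = std::fread(block.data(), 1, block.size(), f)) > 0)
        total += n;                     // just count bytes, like dd piping to /dev/null
    std::fclose(f);
    std::printf("read %zu bytes\n", total);
    return 0;
}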
I believe there were at least three issues at play here:
1) My reads were occurring too regularly.
The file I was reading from had predictable-length lines with predictably placed delimiters. By randomly introducing a 1 microsecond delay one time in a thousand, I was able to push throughput among multiple cores up to about 45 MB/s.
2) My implementation of pubsetbuf did not actually set the buffer size.
The standard only specifies that pubsetbuf turns off buffering when a buffer size of zero is specified, as described in this link (thanks, @Andrew Henle); all other behavior is implementation-defined. Apparently my implementation used a buffer size of 8191 (verified by strace), regardless of what value I set.
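For reference, a standalone sketch of how such a pubsetbuf request looks (illustrative, not the original code, and using a plain std::ifstream rather than the boost one); the call has to be made before the file is opened to stand any chance of taking effect, and even then everything except setbuf(0, 0) is implementation-defined:
#include <fstream>
#include <string>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    std::vector<char> buf(1 << 20);                  // requested 1 MiB stream buffer
    std::ifstream f;
    f.rdbuf()->pubsetbuf(buf.data(), buf.size());    // the request may be silently ignored
    f.open(argv[1]);
    std::string line;
    while (std::getline(f, line)) {
        // parse as before
    }
    return 0;
}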
Being too lazy to implement my own stream buffering for testing purposes, I rewrote the code to read 1000 lines into a vector, then attempt to parse them in a second loop, then repeat the whole procedure until end of file (there were no random delays). This allowed me to scale up to about 50MB/s.
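A minimal sketch of that batched read-then-parse loop (the 1000-line batch size follows the description above; it parses into whole strings rather than iterator ranges, which is a simplification):
#include <fstream>
#include <string>
#include <vector>
#include <boost/algorithm/string.hpp>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    std::ifstream f(argv[1]);
    std::vector<std::string> batch;
    std::vector<std::string> fields;
    std::string line;
    bool more = true;
    while (more) {
        batch.clear();
        // Phase 1: read a batch of up to 1000 lines.
        while (batch.size() < 1000 && (more = static_cast<bool>(std::getline(f, line))))
            batch.push_back(line);
        // Phase 2: parse the whole batch before touching the disk again.
        for (const std::string &l : batch)
            boost::split(fields, l, boost::is_any_of(","));
    }
    return 0;
}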
3) My I/O scheduler and settings were not appropriate for my drive and application.
Apparently Arch Linux defaults to the CFQ I/O scheduler for my SSD, with parameters appropriate for HDDs. With slice_sync set to 0, as described here (see Mikko Rantalainen's answer and the linked article), or with the noop scheduler, as described here, the original code gets around 60 MB/s maximum throughput running on four cores. This link was also helpful.
With noop scheduling, scaling appears to be almost linear, up to my machine's four physical cores (I have eight with hyperthreading).

Not enough space to cache rdd in memory warning

I am running a Spark job, and I get a "Not enough space to cache rdd_128_17000 in memory" warning. However, in the attached file, it clearly says only 90.8 G out of 719.3 G is used. Why is that? Thanks!
15/10/16 02:19:41 WARN storage.MemoryStore: Not enough space to cache rdd_128_17000 in memory! (computed 21.4 GB so far)
15/10/16 02:19:41 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 21.2 GB (scratch space shared across 1 thread(s)) = 25.2 GB. Storage limit = 36.0 GB.
15/10/16 02:19:44 WARN storage.MemoryStore: Not enough space to cache rdd_129_17000 in memory! (computed 9.4 GB so far)
15/10/16 02:19:44 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 30.6 GB (scratch space shared across 1 thread(s)) = 34.6 GB. Storage limit = 36.0 GB.
15/10/16 02:25:37 INFO metrics.MetricsSaver: 1001 MetricsLockFreeSaver 339 comitted 11 matured S3WriteBytes values
15/10/16 02:29:00 INFO s3n.MultipartUploadOutputStream: uploadPart /mnt1/var/lib/hadoop/s3/959a772f-d03a-41fd-bc9d-6d5c5b9812a1-0000 134217728 bytes md5: qkQ8nlvC8COVftXkknPE3A== md5hex: aa443c9e5bc2f023957ed5e49273c4dc
15/10/16 02:38:15 INFO s3n.MultipartUploadOutputStream: uploadPart /mnt/var/lib/hadoop/s3/959a772f-d03a-41fd-bc9d-6d5c5b9812a1-0001 134217728 bytes md5: RgoGg/yJpqzjIvD5DqjCig== md5hex: 460a0683fc89a6ace322f0f90ea8c28a
15/10/16 02:42:20 INFO metrics.MetricsSaver: 2001 MetricsLockFreeSaver 339 comitted 10 matured S3WriteBytes values
This is likely to be caused by the configuration of spark.storage.memoryFraction being too low. Spark will only use this fraction of the allocated memory to cache RDDs.
Try one of the following:
increasing the storage fraction
rdd.persist(StorageLevel.MEMORY_ONLY_SER) to reduce memory usage by serializing the RDD data
rdd.persist(StorageLevel.MEMORY_AND_DISK) to partially persist onto disk if memory limits are reached.
This could be due to the following issue if you're loading lots of avro files:
https://mail-archives.apache.org/mod_mbox/spark-user/201510.mbox/%3CCANx3uAiJqO4qcTXePrUofKhO3N9UbQDJgNQXPYGZ14PWgfG5Aw#mail.gmail.com%3E
With a PR in progress at:
https://github.com/databricks/spark-avro/pull/95
I have a Spark-based batch application (a JAR with a main() method, not written by me; I'm not a Spark expert) that I run in local mode without spark-submit, spark-shell, or spark-defaults.conf. When I tried to use the IBM JRE (like one of my customers does) instead of the Oracle JRE (same machine and same data), I started getting those warnings.
Since the memory store is a fraction of the heap (see the page that Jacob suggested in his comment), I checked the heap size: the IBM JRE uses a different strategy to decide the default heap size, and it was too small, so I simply added appropriate -Xms and -Xmx params and the problem disappeared: now the batch works fine with both the IBM and the Oracle JRE.
My usage scenario is not typical, I know, but I hope this can help someone.

Hyper-threading Performance Comparison

I have written a project which uses some basic functions in OpenSSL, such as RAND_bytes and des_ecb_encrypt.
My computer has an i7-2600 (4 cores and 8 logical CPUs). When I run my project with 4 threads, it costs 10 seconds. When I run it with 8 threads, it also costs 10 seconds.
What I mean is that hyper-threading doesn't give me any performance improvement. In Linux, the experimental result is the same.
What I found here tells me that hyper-threading doesn't give an improvement in some situations. Also, what I found here gives me some intuitive results.
However, I have tried to write some simple tests and find simple examples that would show hyper-threading giving no apparent improvement. Sadly, I haven't found any.
So, my question is whether there are some simple tests that show hyper-threading giving no performance improvement.
You may find that hyperthreading helps more on code that is using large amounts of memory, so that the processor is regularly blocked on fetching from memory.
In my experience, it's quite hard to find "simple code" that shows benefits from hyperthreading. It tends to be more complex examples that show the benefit. Still, the benefit will most likely not be 2x that of "no hyperthreading". Count on getting perhaps 20-30% improvement.
Hyper-threading takes advantage of the fact that the CPU has many components; when one is in use and there is no hyper-threading, the others just sit there idle. You can try writing two types of threads, one doing integer calculations (which will hopefully use the ALU) and one doing floating-point arithmetic (which will hopefully use the FPU).
I did not try this myself, but it seems that in such a scenario hyper-threading should improve performance.
To show the opposite, you can use only one type of thread (either threads doing only integer operations or threads doing only floating-point operations).
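A rough sketch of that experiment (not from the original answer; the thread counts and loop lengths are arbitrary and would need tuning for a real measurement):
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// ALU-bound worker: integer arithmetic only.
static void int_work(unsigned long long *out) {
    unsigned long long x = 1;
    for (unsigned long long i = 0; i < 2000000000ULL; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;   // integer LCG step
    *out = x;   // store the result so the loop is not optimized away
}

// FPU-bound worker: floating-point arithmetic only.
static void fp_work(double *out) {
    double x = 1.0;
    for (unsigned long long i = 0; i < 2000000000ULL; ++i)
        x = x * 1.0000001 + 0.0000001;
    *out = x;
}

int main() {
    const int pairs = 4;   // e.g. one integer thread plus one FP thread per physical core
    std::vector<std::thread> threads;
    std::vector<unsigned long long> iout(pairs);
    std::vector<double> fout(pairs);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < pairs; ++i) {
        threads.emplace_back(int_work, &iout[i]);
        threads.emplace_back(fp_work, &fout[i]);
    }
    for (auto &t : threads) t.join();
    auto t1 = std::chrono::steady_clock::now();

    std::printf("mixed int/fp threads took %.2f s\n",
                std::chrono::duration<double>(t1 - t0).count());
    return 0;
}
Running it once with the mixed workload and once with all threads of a single type (or with hyper-threading disabled) and comparing wall times gives a simple A/B test.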
It may also be that your test is flawed, but in order to know if that is the case we'll need more information about that test.
I have written a project which uses some basic functions in OpenSSL, such as RAND_bytes and des_ecb_encrypt... My computer has an i7-2600 (4 cores and 8 logical CPUs). When I run my project with 4 threads, it costs 10 seconds. When I run it with 8 threads, it also costs 10 seconds.
When using RDRAND (which RAND_bytes will do in this case), the bus is the limiting factor. You should peak at around 800 MB/sec. It does not matter how many threads you have - the bus cannot transfer data fast enough. See Intel rdrand instruction revisited.
If you used AES, then you might see a better speedup than with the DES/3DES observations. Your Sandy Bridge has AES-NI, and it can achieve almost 1.3 cycles/byte, which should be about two to three times faster than AES in software. To ensure you are using the AES-NI instructions, you have to use the EVP_* interfaces.
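A minimal sketch of AES through the EVP interface (not from the original answer; error handling is omitted and the all-zero key/IV are placeholders only):
#include <openssl/evp.h>
#include <cstdio>

int main() {
    unsigned char key[16] = {0};            // placeholder key, never use in real code
    unsigned char iv[16]  = {0};            // placeholder IV
    unsigned char in[64]  = "example plaintext for the EVP sketch";
    unsigned char out[80];                  // room for the input plus one block of padding
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    // EVP_aes_128_cbc() is routed through AES-NI when the CPU supports it.
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &len, in, sizeof(in));
    total = len;
    EVP_EncryptFinal_ex(ctx, out + len, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    std::printf("ciphertext bytes: %d\n", total);
    return 0;
}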
What I found here tells me that hyper-threading doesn't give an improvement in some situations. Also, what I found here gives me some intuitive results.
I think @selalerer and @Mats Petersson answered your question. The problem does not scale linearly, and there's a maximum speedup you will encounter. Intel states it's about 30%.
Intel's newest architecture favors out-of-order execution over hyper-threading because it's supposed to be more efficient. Read about the Silvermont processor cores.
But if you want a formal deep dive, then see a book on computer engineering. Here's the book we used when I studied it in college: Computer Organization and Design (it's probably a bit dated now).
However, I have tried to write some simple tests and find simple examples that would show hyper-threading giving no apparent improvement.
OpenSSL also has a benchmarking app. See the source code in <openssl source>/apps/speed.c.
Also, benchmarking apps have their own personalities. An encryption stress test may not reveal the differences as predominantly as you hope to see them. See, for example, Benchmarking Tools.
Following are details and results of my MP benchmarks for Linux and Windows, which can behave differently. There is not much HT coverage, but the Linux tests include an Atom (1 core, 2 threads) and the Windows results include a Core i7 (4 cores + 4 hyper-threads).
http://www.roylongbottom.org.uk/linux%20multithreading%20benchmarks.htm
http://www.roylongbottom.org.uk/quad%20core%208%20thread.htm
Take your pick, depending on whether you want to show HT providing better or worse performance. Following are RandMem results on the i7 (Linux seems better on this test). For a CPU such as the i7, you also need to consider Turbo Boost, which might be lower with multiple threads.
CPUs MBytes Per Second Using Threads Gain At Threads
/HTs 1 2 4 6 8 2 4 6 8
Serial RD
Core i7 4/8 L1 11458 22661 37039 43717 46374 2.0 3.2 3.8 4.0
930 L2 10380 20832 32853 41711 42839 2.0 3.2 4.0 4.1
#### MHz L3 8828 17743 29610 38414 40330 2.0 3.4 4.4 4.6
Win 764 RAM 4266 8712 17347 24946 25589 2.0 4.1 5.8 6.0
Serial RW
Core i7 4/8 L1 15282 13724 16240 16209 18379 0.9 1.1 1.1 1.2
930 L2 12223 18216 25326 28104 27047 1.5 2.1 2.3 2.2
#### MHz L3 10234 19266 21931 24450 26351 1.9 2.1 2.4 2.6
Win 764 RAM 4533 7656 13876 14543 13390 1.7 3.1 3.2 3.0
Random RD
Core i7 4/8 L1 11266 22548 38174 45592 47141 2.0 3.4 4.0 4.2
930 L2 6233 12463 20059 24986 25667 2.0 3.2 4.0 4.1
#### MHz L3 3499 6915 9211 10002 9531 2.0 2.6 2.9 2.7
Win 764 RAM 459 909 1241 1398 1364 2.0 2.7 3.0 3.0
Random RW
Core i7 4/8 L1 14375 3027 2780 2901 3297 0.2 0.2 0.2 0.2
930 L2 5887 4555 6117 6693 7281 0.8 1.0 1.1 1.2
#### MHz L3 3104 4604 4721 5047 4933 1.5 1.5 1.6 1.6
Win 764 RAM 428 860 899 948 1026 2.0 2.1 2.2 2.4
#### 2.8 GHz running at up to 3.06 GHz via Turbo Boost, dual channel 1066 MHz DDR3 RAM
Then there is the MP Whetstone benchmark, which shows real gains:
MWIPS MFLOP MFLOP MFLOP COS EXP FIXPT IF EQUAL
CPU MHz 1 2 3 MOPS MOPS MOPS MOPS MOPS
Core i7 1 Thrd #### 3115 1065 886 738 79.3 39.7 2447 2936 1154
Core i7 Win7 #### 21690 8676 7621 5844 531 291 16643 12027 5034
Quad Core Thread 1 1091 1027 728 66.4 36.5 2050 1501 629
Plus HT Thread 2 1089 1037 742 66.0 36.5 2090 1507 630
Thread 3 1090 946 742 66.8 36.5 2069 1534 631
Thread 4 1092 1037 727 66.6 36.6 2031 1501 630
Thread 5 1042 959 736 66.4 36.5 1912 1483 630
Thread 6 1091 874 723 66.6 36.1 2049 1507 629
Thread 7 1090 867 725 65.6 36.3 2094 1516 631
Thread 8 1091 874 722 66.3 36.3 2350 1476 624
Gain % 696 815 860 792 670 733 680 410 436

Why would an immediately destroyed shared pointer leak memory?

Is there a memory leak here?
class myclass : public boost::enable_shared_from_this<myclass> {
    //...
    void broadcast(const char *buf) {
        broadcast(new std::string(buf));
    }
    void broadcast(std::string *buf) {
        boost::shared_ptr<std::string> msg(buf);
    }
    //...
};
(This is the stripped down version that still shows the problem - normally I do real work in that second broadcast call!)
My assumption was that the first call allocates some memory, and then, because I do nothing with the smart pointer, the second call would immediately delete it. Simple? But when I run my program, the memory increases over time, in jumps. Yet when I comment out the only call in the program to broadcast(), it does not!
The ps output for the version without broadcast():
%CPU %MEM VSZ RSS TIME
3.2 0.0 158068 1988 0:00
3.3 0.0 158068 1988 0:25 (12 mins later)
With the call to broadcast() (on Ubuntu 10.04, g++ 4.4, boost 1.40)
%CPU %MEM VSZ RSS TIME
1.0 0.0 158068 1980 0:00
3.3 0.0 158068 1988 0:04 (2 mins)
3.4 0.0 223604 1996 0:06 (3.5 mins)
3.3 0.0 223604 2000 0:09
3.1 0.0 223604 2000 2:21 (82 mins)
3.1 0.0 223604 2000 3:50 (120 mins)
(Seeing that jump at around 3 minutes is reproducible in the few times I've tried so far.)
With the call to broadcast() (on Centos 5.6, g++ 4.1, boost 1.41)
%CPU %MEM VSZ RSS TIME
0.0 0.0 51224 1744 0:00
0.0 0.0 51224 1744 0:00 (30s)
1.1 0.0 182296 1776 0:02 (3.5 mins)
0.7 0.0 182296 1776 0:03
0.7 0.0 182296 1776 0:09 (20 mins)
0.7 0.0 247832 1788 0:14 (34 mins)
0.7 0.0 247832 1788 0:17
0.7 0.0 247832 1788 0:24 (55 mins)
0.7 0.0 247832 1788 0:36 (71 mins)
Here is how broadcast() is being called (from a boost::asio timer) and now I'm wondering if it could matter:
void callback() {
    //...
    timer.expires_from_now(boost::posix_time::milliseconds(20));
    //...
    char buf[512];
    sprintf(buf, "...");
    broadcast(buf);
    timer.async_wait(boost::bind(&myclass::callback, shared_from_this()));
    //...
}
(callback is in the same class as the broadcast function)
I have 4 of these timers going, and my io_service.run() is being called by a pool of 3 threads. My 20ms time-out means each timer calls broadcast() 50 times/second. I set the expiry at the start of my function, and run the timer near the end. The elided code is not doing that much; outputting debug info to std::cout is perhaps the most CPU-intensive job. I suppose it may be possible the timer triggers immediately sometimes; but, still, I cannot see how that would be a problem, let alone cause a memory leak.
(The program runs fine, by the way, even when doing its full tasks; I only got suspicious when I noticed the memory usage reported by ps had jumped up.)
UPDATE: Thanks for the answers and comments. I can add that I left the program running on each system for another couple of hours and memory usage did not increase any further. (I was also ready to dismiss this as a one-off heap restructuring or something, when the Centos version jumped for a second time.) Anyway, it is good to know that my understanding of smart pointers is still sound, and that there is no weird corner case with threading that I need to be concerned about.
If there is a leak, you allocate a std::string (20 bytes, more or less) 50 times per second.
In 1 hour you should have allocated ... 3600*50*20 = 3.4 MB.
That has nothing to do with the 64 MB jump in VSZ you see; it's probably due to the way the system allocates memory to the process, which new then sub-allocates to your variables.
The system, when something is deleted, needs to "garbage collect" it, placing it back into the available memory chain for further allocations.
But since this takes time, most systems don't do this until the released memory goes over a certain amount, so that a "repack" can be done.
Well, what happens here is probably not that your program is leaking, but that for some reason the system memory allocator decided to keep another chunk of address space mapped for your application. If there were a constant memory leak at that point, at a 50 Hz rate, it would have a much more dramatic effect!
Exactly why that is done after 3 minutes I don't know (I am not an expert in that area), but I would guess that there are some heuristics and statistics involved. Or, it could simply be that the heap has gotten fragmented.
Another thing that may have happened is that the messages you are holding in the buffer become longer over time :)
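As the answers say, the shared_ptr itself is doing its job; the assumption can also be checked in isolation with a small standalone program (a sketch, not the original code; the counting deleter is only there to make the destructions visible):
#include <boost/shared_ptr.hpp>
#include <iostream>
#include <string>

static long live_strings = 0;

// Counting deleter so every destruction is observable.
struct counting_deleter {
    void operator()(std::string *p) const { --live_strings; delete p; }
};

int main() {
    for (int i = 0; i < 1000000; ++i) {
        ++live_strings;
        boost::shared_ptr<std::string> msg(new std::string("payload"),
                                           counting_deleter());
        // msg goes out of scope here, so the string is deleted every iteration.
    }
    std::cout << "strings still alive: " << live_strings << "\n";   // prints 0
    return 0;
}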