Chrome Dev Tools: Divergence/error between flame chart and memory graph in "Timeline" profiling

While trying to debug long-running, memory-leaking code, I observed a discrepancy between the memory graph and the flame chart. I suspected that this was a "natural" reading error.
I tried to reproduce this behaviour with very simplified code and was successful...
The above chart was recorded while profiling this code:
window.onload = function() {
    var count = 0;
    function addDelayed() {
        count++;
        if (count > 50) {
            return;
        }
        var x = document.createElement("div");
        x.addEventListener("click", function() {
        });
        setTimeout(function() {
            addDelayed();
        }, 1000);
    }
    setTimeout(function() {
        addDelayed();
    }, 10000);
};
I've zoomed in to an arbitrary listener increase to know when it occurred:
I expected the node and listener increase to appear at about the middle of the Function Call, not after it.
Can I assume that this is a measuring error or am I forgetting to take something else into account?
This was recorded with Chrome 43.0.2357.125 (64-bit) (but the behaviour can be observed with older versions too)

Timeline captures the number of event listeners/DOM nodes right after the timer-fired event has finished. We do the same for many other events, so the step will appear at the end of the corresponding event. Showing it somewhere in the middle of the event would be unfair and imprecise, as we don't know the exact moment when the number changed. On the other hand, tracking each individual node/listener creation/deletion would result in much heavier instrumentation overhead, which we want to avoid.
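Conceptually, the sampling behaves like this minimal sketch (illustrative TypeScript only, not Chrome's actual instrumentation; currentListenerCount() is a hypothetical stand-in for the engine's internal bookkeeping):
interface CounterSample { time: number; nodes: number; listeners: number; }
const samples: CounterSample[] = [];

// Hypothetical stand-in for the engine's internal listener bookkeeping.
declare function currentListenerCount(): number;

function dispatchInstrumented(eventCallback: () => void): void {
    eventCallback(); // e.g. the "Timer Fired" handler that creates nodes/listeners
    // The counters are read only here, after the callback returns,
    // so changes made mid-event only become visible at the event's end.
    samples.push({
        time: performance.now(),
        nodes: document.getElementsByTagName("*").length,
        listeners: currentListenerCount(),
    });
}
This is why the node/listener step sits at the end of the Function Call bar rather than in its middle.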

Related

Watch falls asleep during active HKWorkoutSession

I read data from the accelerometer (CMMotionManager) and from a workout (HKWorkoutSession) and transfer it to the phone in real time, but at a random moment the watch goes to sleep.
In Info.plist I use WKBackgroundModes: workout-processing. The strap is fastened tightly; at first I thought the watch was losing skin contact and that was the cause. When I implemented the same functionality earlier using WatchKit, there was no such problem, but now with SwiftUI the problem appears.
do {
    let workoutConfiguration = HKWorkoutConfiguration()
    workoutConfiguration.activityType = .mindAndBody
    workoutConfiguration.locationType = .unknown
    self.session = try HKWorkoutSession(healthStore: self.healthStore, configuration: workoutConfiguration)
    self.builder = self.session?.associatedWorkoutBuilder()
    self.builder?.dataSource = HKLiveWorkoutDataSource(healthStore: self.healthStore, workoutConfiguration: workoutConfiguration)
    self.session?.delegate = self
    self.builder?.delegate = self
    // Timer to update state
    self.timerHealth = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.getHealth), userInfo: nil, repeats: true)
    self.session?.startActivity(with: self.startDate)
    self.builder?.beginCollection(withStart: self.startDate) { (success, error) in
        guard success else {
            print(error?.localizedDescription)
            return
        }
    }
} catch {
    print(error.localizedDescription)
    return
}
The timer prints the current time; at a random moment the output stops and resumes only after the screen is turned on.
Apple's documentation says that if the workout-processing background mode is enabled, the application will continue running in the background, but it does not. How do I set up background execution? What did I miss?
You could get suspended when running in the background if your app uses a lot of CPU. See https://developer.apple.com/documentation/healthkit/workouts_and_activity_rings/running_workout_sessions
To maintain high performance on Apple Watch, you must limit the amount of work your app performs in the background. If your app uses an excessive amount of CPU while in the background, watchOS may suspend it. Use Xcode's CPU report tool or the time profiler in Instruments to test your app's CPU usage. The system also generates a log with a backtrace whenever it terminates your app.
It would be good to check whether your SwiftUI app is doing more work than your WatchKit-based one, causing the suspension. You should also see a log file saved on the watch that could indicate this. It'll look like a crash log but should note that CPU time was exceeded.

Lambda keeps consuming more and more memory until it kills itself

Hello, I'm having a problem with Lambda.
Our Lambdas generate images on demand. This is done with konva.js and node-canvas (node-canvas is in a layer).
Whenever our Lambda is under sustained load (calling the endpoint in a loop with await; the problem occurs regardless of concurrency), the memory usage keeps rising with each invocation until it eventually hits 100% and the Lambda runtime is killed. It's as if the runtime never garbage-collects anything. We have tried increasing the memory all the way to 5 GB, but the issue still occurs (although we can call it more times before it runs out).
Our setup consists of an API Gateway v2 HTTP endpoint in front of the Lambda. The Lambda is placed in our VPC, in a private subnet with a NAT gateway. Everything is deployed with CDK.
The function roughly does this (sketched in code after the list):
Parse the URL.
If the image already exists in S3, we return it.
Else:
Get the necessary data from our DB (MySQL Aurora). The connection is a variable outside the handler.
Download the necessary fonts from S3.
Generate the image. The image contains another image that we pass in the URL, which we download.
Upload the image to S3.
Return it as Buffer.toString('base64').
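For context, here is the same flow sketched as code; every helper name below is a hypothetical placeholder, not our real implementation:
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

// All helpers below are hypothetical placeholders, not our real code.
declare function parseImageSpec(path: string): { key: string };
declare function s3GetIfExists(key: string): Promise<Buffer | undefined>;
declare function queryImageData(spec: { key: string }): Promise<unknown>;
declare function ensureFonts(dir: string): Promise<void>;
declare function renderImage(spec: { key: string }, data: unknown): Promise<Buffer>;
declare function s3Put(key: string, image: Buffer): Promise<void>;

const handler: APIGatewayProxyHandlerV2 = async event => {
    const spec = parseImageSpec(event.rawPath);                   // 1. parse the URL
    const cached = await s3GetIfExists(spec.key);                 // 2. return cached image if present
    if (cached) return { statusCode: 200, body: cached.toString('base64') };
    const data = await queryImageData(spec);                      // 3. fetch data from Aurora
    await ensureFonts('/tmp/fonts');                              // 4. fonts, downloaded once
    const image = await renderImage(spec, data);                  // 5. Konva + node-canvas
    await s3Put(spec.key, image);                                 // 6. upload to S3
    return { statusCode: 200, body: image.toString('base64') };  // 7. base64 body
};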
Based on this (we don't use sharp): https://github.com/lovell/sharp/issues/1710#issuecomment-494110353, we tried, as mentioned above, increasing the memory (from 1 GB -> 2 -> 3 -> 5 GB). But the same memory growth continues until it dies, and then it starts all over.
Edit:
The function is written in TypeScript. The memory usage is measured in the Lambda Insights console, where we can see that it gradually increases in percent after each invocation.
We only store the fonts in /tmp/fonts, and only if they do not exist (i.e. the disk usage doesn't increase; we tested with the same fonts, so they are only downloaded on the first invocation).
The more memory the function has, the longer it takes before it hits 100% and crashes (with 5 GB we can do ~170 invocations in a row before it crashes, and then another ~170).
There are no Lambda/S3 triggers. So it is not an infinite loop.
So I managed to narrow the issue down to the Konva library.
I found this issue: https://github.com/Automattic/node-canvas/issues/1974, and somebody commented with a link about Node buffers. So I decided to test these parts.
First, by purely allocating a buffer of the same size as the image:
const handler: APIGatewayProxyHandlerV2 = async event => {
    const before = process.memoryUsage();
    const buffer = Buffer.alloc(4631);
    buffer.toString('base64');
    const after = process.memoryUsage();
    return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            before: before.rss / 1024 / 1024 + 'MB',
            after: after.rss / 1024 / 1024 + 'MB',
        }),
    };
};
This resulted in no issues.
Next I tried with pure Konva (we use a modified version), because I found this: https://github.com/konvajs/konva/issues/1247
const handler: APIGatewayProxyHandlerV2 = async event => {
    const stage = new Konva.Stage({ width: 1080, height: 1080 });
    const buffer = (stage.toCanvas() as any as NodeCanvas).toBuffer(mimeType, { compressionLevel: 5 });
    buffer.toString('base64');
    // ... rest omitted
};
Here the issue was back. Then I wanted to try using only node-canvas, since the above issue mentioned leaks in it.
const handler: APIGatewayProxyHandlerV2 = async event => {
    const canvas = new NodeCanvas(1080, 1080, 'image');
    const buffer = canvas.toBuffer(mimeType, { compressionLevel: 5 });
    buffer.toString('base64');
    // ... rest omitted
};
This didn't have the same issue, so Konva had to be the culprit. I noticed in the above-mentioned GitHub issue that they used stage.destroy(), which I didn't do before. I simply added that, and the issue seems to have gone away.
const handler: APIGatewayProxyHandlerV2 = async event => {
    const stage = new Konva.Stage({ width: 1080, height: 1080 });
    const buffer = (stage.toCanvas() as any as NodeCanvas).toBuffer(mimeType, { compressionLevel: 5 });
    stage.destroy(); // This was the magic line
    buffer.toString('base64');
    // ... rest omitted
};
I hope others in a similar situation find this helpful. Thanks for all the suggestions!
"The connection is a variable outside the handler." This sentence points to one of the problems. If you create a variable outside the handler, it becomes global and is reused by subsequent executions. If you have the same pattern for other variables, you can move them into the handler.
You must also shut down your DB connection when you are done with the DB.
Next, you can examine your high-memory operations and determine whether the usage is heap memory or stack memory.
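For illustration, a minimal sketch of scoping the connection per invocation so it is always closed, assuming the mysql2/promise client (the environment-variable config is a placeholder):
import mysql from "mysql2/promise";
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

const handler: APIGatewayProxyHandlerV2 = async () => {
    // Open the connection inside the handler instead of at module scope...
    const connection = await mysql.createConnection({
        host: process.env.DB_HOST,         // placeholder config; adapt to your setup
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME,
    });
    try {
        const [rows] = await connection.query('SELECT 1');
        return { statusCode: 200, body: JSON.stringify(rows) };
    } finally {
        await connection.end();            // ...and always close it, even on errors
    }
};
Opening per invocation trades away warm-start connection reuse, so it costs a little latency, but it guarantees nothing accumulates at module scope between invocations.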

Concurrent Ethereum account creation takes a lot of time and freezes the Ethereum network

I am a bit new to the Ethereum blockchain, and I wanted to create several Ethereum accounts concurrently, something like shown below. I am using Geth to spin up my Ethereum network. Up to a count of 15 there is not much delay, but as I start increasing the count, it takes several minutes before it actually starts creating the accounts.
I know it has something to do with the asynchronous calls I am making. If I do it synchronously, there is no issue.
const Web3 = require('web3');
const web3 = new Web3(new Web3.providers.WebsocketProvider("ws://127.0.0.1:8545"));

const generateAccount = () => {
    const count = 30;
    let promises = [];
    for (let i = 0; i < count; i++) {
        promises.push(web3.eth.personal.newAccount("Hello#12345"));
    }
    Promise.allSettled(promises).then(
        status => {
            console.log(status);
            process.exit(0);
        }
    ).catch(
        err => {
            console.log(err);
            process.exit(1);
        }
    );
};

generateAccount();
But I want to understand what exactly is happening behind the scenes and why it takes so much time as the count gets bigger. My actual requirement might push the count to several thousand concurrent account creations. With a count that high, the network freezes and block generation stops as well. Even if I terminate the script, it doesn't recover; I have to restart the network to bring everything back on track. So I also want to know the best approach to achieving this.
Please do let me know if you need more information.
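For illustration, a hedged sketch of one way to throttle the requests: create the accounts in fixed-size batches so the node is never flooded. It reuses the web3 instance from the snippet above; the batch size of 10 is an arbitrary assumption to tune against your node.
const BATCH_SIZE = 10; // arbitrary assumption; tune for your node

async function generateAccountsBatched(total: number): Promise<string[]> {
    const addresses: string[] = [];
    for (let i = 0; i < total; i += BATCH_SIZE) {
        const batch: Promise<string>[] = [];
        // Issue at most BATCH_SIZE personal_newAccount requests at once,
        // and wait for the whole batch before starting the next one.
        for (let j = i; j < Math.min(i + BATCH_SIZE, total); j++) {
            batch.push(web3.eth.personal.newAccount("Hello#12345"));
        }
        addresses.push(...(await Promise.all(batch)));
    }
    return addresses;
}
Each batch completes fully before the next starts. Geth encrypts each new keystore entry with scrypt, which is deliberately CPU- and memory-intensive, so thousands of unbounded concurrent requests can starve the node and stall block production.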

Make JProfiler ignore `Thread.sleep()` in CPU views

In JProfiler, in the Call Tree and Hot Spots views of the CPU profiler, I have found that JProfiler is showing some methods as hot spots which aren't really hot spots. These methods are skewing my profiling work, as they are dominating these CPU views and making all other CPU consumers appear insignificant.
For example, one thread performs a Thread.sleep(300_000L) (sleeping for 5 minutes) and then does some relatively minor work, in a while(true) loop. JProfiler is configured to update the view every 5 seconds, and I have set the thread status selector to "Runnable". Every 5 seconds, when JProfiler updates the view, I would expect the total self time for the method to remain relatively small, since the thread is sleeping and not in a runnable state; instead I see the self time increment by about 5 seconds, which would indicate (incorrectly) that the thread was in the runnable state for the entire 5-second interval. My concern is that the tool will be useless for my CPU profiling purposes if I cannot filter out the sleeping (Waiting) state.
With some testing, I have found that when the Thread.sleep() call eventually terminates, the self time drops back to near zero and begins climbing again with the next invocation of Thread.sleep(). So it seems to me that JProfiler counts the method stats for the current invocation of Thread.sleep() as Runnable, until the method actually terminates and these stats are backed out.
Is this a bug in JProfiler? Is there a way to get JProfiler to not count Thread.sleep() towards the Runnable state, even for long-running invocations of Thread.sleep()?
I am using a licensed version of JProfiler 8.1.4. I have also tried an evaluation version of JProfiler 10.1.
Update:
Here is a simple test case which exhibits this problem for me. I discovered that if I move the Thread.sleep() call to a separate method the problem goes away (see the in-line comments). This is not a great workaround because I'm profiling a large application and don't want to update all of the places where it calls Thread.sleep().
public class TestProfileSleep {
    public static void main(String... args) {
        new Thread(new Runnable() {
            private void sleep(long millis) throws InterruptedException {
                Thread.sleep(millis);
            }

            public void run() {
                try {
                    while (true) {
                        Thread.sleep(60_000L); // profiling this is broken
                        //sleep(60_000L); // profiling this works
                    }
                } catch (InterruptedException ie) {
                }
            }
        }).start();
    }
}

Unit-testing a simple usage of RACSignal with RACSubject

(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the most ReactiveCocoa-based components. It has a signal that sends a value once per second that accumulates, iff an accumulation flag is set to YES. I added an initializer to take a custom signal for its incrementing signal, rather than being only timer-based.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted).
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here are two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];
    [controlledSignal sendNext:@(2)];
}
- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }
    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?
This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used two approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];
[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];
[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe and signalling it to stop waiting at the end of the subscribeNext and error blocks so execution continues (and other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"
See this question: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? At a glance, they appear to be using Expecta.