Running Flutter/Dart tests in AOT/release or profile mode?

If I understand it correctly, flutter test runs by default with a JIT on the Dart VM, while release mode uses an AOT compiler to produce native code.
I think I have a crash which only happens in AOT mode (release and profile), and it would be much easier to debug and reproduce if I could isolate it by running the code in AOT/profile mode.
So how can I run tests (or at least code snippets) in AOT mode? (I don't really care whether it runs on an actual iOS or Android device, on the dev machine, or in a simulator.)
(I have found an article which seems to describe the compilation process, but it is pretty involved. Is there some easier way for tests?)

I have found a rather simple solution, which is documented on the Dart website. Starting with Dart 2.3, dart2aot and dartaotruntime are bundled with the SDK. (In Dart 2.10 and later, dart2aot was replaced by dart compile aot-snapshot; the resulting snapshot still runs with dartaotruntime.) This allows simple execution on the dev machine, and I was able to reproduce my crash:
Herbys-MacBook-Pro-2017:migrate$ dart2aot migrate_aot_test.dart migrate_aot_test.dart.aot
Herbys-MacBook-Pro-2017:migrate$ dartaotruntime migrate_aot_test.dart.aot
[...]
===== CRASH =====
si_signo=Segmentation fault: 11(11), si_code=1, si_addr=0x1061000410f
Abort trap: 6
Herbys-MacBook-Pro-2017:migrate$

Related

Need ideas for automating on-chip testing on Cortex-M4 using gdb and semihosting

I'm working on safety-critical software that requires extensive testing. The target processor, a Cortex-M4, has ample resources for the application, but the unit and integration tests, if aggregated, would be much larger than the on-board FLASH/RAM. The tests are designed to be run from gdb, using semihosting to off-load the test results. What's needed is a way to automate the testing so it can run without per-test human intervention.
The test programs run fine from Eclipse using both OpenOCD and Segger debugger front-ends. These require per-test configurations and then manual starting of the tests. There will be 30-50 test programs so this isn't really viable for continuous integration or simple batch runs.
I've been looking around for possible ways to do this, and there are a few tricky bits to consider. The first is supporting the semihosted output: semihosting uses the breakpoint mechanism to route I/O through the host, and in this setup a couple of files get opened on the host computer for logs. Another issue is ending the program under test and loading the next one: the programs can take a long time to run, and they drop into an infinite loop when main() exits. Finally, the development platform is Windows 10.
The two basic ideas I've had are to use the gdb client library from cygwin to create a custom program or to use OpenOCD. Running through multiple tests could be done inside the application or from a Makefile.
Question: is the semihosting done in the gdb client or server?
So: I'm looking for suggestions or experiences in creating what I picture as a custom gdb client on Windows 10.
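One piece of this that can be sketched concretely is clean test termination. Rather than letting main() fall into the infinite exit loop, each test binary can announce completion through the semihosting SYS_EXIT call, which the debug agent detects, so the next test can then be loaded. A minimal sketch for a Cortex-M4 with a GCC-style toolchain (run_all_tests is a hypothetical stand-in for the test entry point; the operation and reason codes come from the ARM semihosting specification):

void run_all_tests(void);   /* hypothetical test entry point */

/* Report "program exited" to the debug agent via ARM semihosting.
   SYS_EXIT is operation 0x18; ADP_Stopped_ApplicationExit is 0x20026.
   On Cortex-M, a semihosting call is BKPT 0xAB with r0/r1 set up. */
static void semihost_exit(void)
{
    register unsigned int r0 __asm__("r0") = 0x18;     /* SYS_EXIT */
    register unsigned int r1 __asm__("r1") = 0x20026;  /* ApplicationExit */
    __asm__ volatile ("bkpt 0xAB" : : "r"(r0), "r"(r1) : "memory");
}

int main(void)
{
    run_all_tests();
    semihost_exit();   /* debugger sees a clean exit instead of a hang */
    for (;;) { }       /* only reached if no debug agent is attached */
}

With that in place, the outer automation can reduce to invoking gdb in batch mode once per test ELF from a Makefile or script, with OpenOCD or SEGGER's GDB server left running. As for the client/server question: with OpenOCD and J-Link, the semihosting is handled by the GDB server/debug agent, which traps the BKPT and performs the file I/O on the host, so the gdb client itself only needs to start and stop the runs.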

Release build debugging issue

I have a fairly standard C++/Qt app that works fine in both debug and release on my development PC. When trying out the release version on a clean PC, it runs, but part of the functionality (showing video via a USB connection) just doesn't work. I've seen this before, and my standard technique is to add debug information to the release build, set up remote debugging, and have a look. Much to my surprise, it runs fine as a release build with debug info (.pdb).
I have never seen this before.
Dependency Walker shows no problems with any dependencies, and under the profiler that is part of Dependency Walker the app also runs perfectly.
I have run out of debugging techniques, and the only thing I can think of is adding message boxes at various places, which does not seem like a good idea in a multithreaded application.
Is there a debugging technique that could help me find this problem? We're using VS2008 and Qt 4.7.1.
Refine your message boxes: use a log file instead.
From your description, it seems there may be some sort of race condition or timing issue that disappears when a thread is slowed down by being observed, or by adding debug info to the binary.
Using a log file with timestamps, you should be able to keep track of when things happen.
I think logging is your friend. If you have multiple threads, you may want to log each thread to a different log file.
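As a sketch of the idea (the class and names are illustrative, not from the original answers), a minimal thread-safe, timestamped logger using Qt 4-era classes might look like this:

#include <QFile>
#include <QTextStream>
#include <QMutex>
#include <QMutexLocker>
#include <QDateTime>
#include <QThread>

// Minimal thread-safe file logger: timestamp + thread id + message per line.
class FileLogger
{
public:
    explicit FileLogger(const QString &path) : m_file(path)
    {
        m_file.open(QIODevice::Append | QIODevice::Text);
    }

    void log(const QString &msg)
    {
        QMutexLocker lock(&m_mutex);    // serialize writes across threads
        QTextStream out(&m_file);
        out << QDateTime::currentDateTime().toString("hh:mm:ss.zzz")
            << " [" << quintptr(QThread::currentThreadId()) << "] "
            << msg << '\n';
        out.flush();                    // flush each line so a crash loses nothing
    }

private:
    QFile  m_file;
    QMutex m_mutex;
};

Each thread can then call log() freely; the timestamps and thread IDs let you reconstruct the interleaving afterwards, and flushing every line means the log survives a crash.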

How to debug FreeRTOS using the RTSM simulator in Eclipse (for DS-5)?

I am new to FreeRTOS and the RTSM simulator. I loaded the FreeRTOS code and am trying to use the RTSM simulator (which simulates an ARM Cortex-A9). When I switch to the DS-5 debug perspective and press Debug in Debug Configurations, the simulator seems to be running. The problem is that I am not able to step through my source code. I put a breakpoint on the first statement in main, and control never seems to reach it. (I am able to step through the assembly code produced by the compiler, but that is not what I need.)
Any idea how to do this?

Same C++ code compiled on the same machine behaves differently

I have written C++ code that uses some Qt static libraries.
I compile the code using MSVS2010 (on Windows 7) and then run the resulting .exe file on a second machine.
I have compiled the exact same code on the same machine at two different times. One generated .exe crashes on the second machine (Windows XP) when that machine's screen saver starts; the other, compiled with nothing changed in the code, the compiler, or its settings, works fine.
Does anybody have an idea what could cause this?
Is there a way I can debug this issue?
Could the fact that different other programs were open at the two compile times affect the compilation?
The problem is not with the compilation process (given unchanged sources and settings it produces an equivalent binary every time, apart from embedded timestamps) but with the execution environment.
There seems to be something on your second machine that makes your program crash intermittently (or it could well be that it has nothing to do with that second machine, and that your program crashes intermittently everywhere). To debug that, you may end up having to install a debugging environment on the second machine and hope the problem arises again, or you could also try to reproduce the crash on your development machine.
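To illustrate how the very same source can work in one environment and crash in another (a hypothetical example, not code from the question), the classic culprit is a read of uninitialized memory, whose value depends on whatever the environment happens to leave behind:

#include <iostream>

struct VideoSettings
{
    bool enabled;          // bug: never initialized
    VideoSettings() {}     // constructor forgets "enabled = false;"
};

int main()
{
    VideoSettings s;
    // Undefined behavior: "enabled" holds whatever garbage was in memory.
    // On one machine (or with one build's memory layout) it happens to be
    // zero; on another it is nonzero -- same source, different behavior.
    if (s.enabled)
        std::cout << "starting video pipeline\n";
    return 0;
}

Bugs of this kind can also make two builds of identical source behave differently, since the garbage value that gets read depends on details of memory layout and prior execution.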

OCUnit: How to run tests without launching iPhone simulator?

I'm following iOS Development Guide: Unit Testing Applications. However, when I attempt to build (Command+B) the LogicTests target (step 8 of "Setting Up Logic Testing"), I get the error: "The selected run destination is not valid for this action."
Since I added my application target to LogicTests's target dependencies, I'm able to run the unit tests with Command+U, but this also launches the iPhone Simulator.
To save time & resources, is it possible to run the OCUnit tests (both logic & application tests) without launching the iPhone Simulator?
I understand the annoyance of the simulator popping up during unit tests. The best remedy I've found is to press Command + U, followed by Command + H as the tests launch. (Command + H hides the simulator after it appears.) Since it appears nearly instantaneously, this can be an effective way of getting it out of your field of vision.
I've managed to run the unit tests for my model classes without the simulator being launched, as follows:
I didn't set any Bundle Loader or Test Host build settings; instead I just added the .m files under test to the target's Compile Sources build phase.
I then ran the unit tests from the command line using:
xcodebuild -verbose -target TheElementsUnitTests -configuration Debug -sdk iphonesimulator5.0 clean build
Not really sure why this didn't launch the simulator, but it definitely didn't!
Here's a small AppleScript that I set to run for the "Generates output" event in Xcode's Testing behavior preferences:
#!/usr/bin/osascript
activate application "Xcode"
It brings Xcode back to the front immediately after you press Command + U.
P.S. I also opened a bug and Apple marked it as a duplicate. So, they're aware.
How much time and resources are we talking about? Rather than focusing on reducing those, I'd focus on expanding your tests to go far beyond Apple's original "Logic Test" guidelines. Those guidelines were limiting, and written before Xcode 4. Now you can write tests without asking, "Is this a logic test or an application test?" Just test everything.