I have a C++ library that I want to include in my iOS application. It has unit tests. Put simply, it's something like this:
#include <cstdio>
int main()
{
    printf("Test result\n");
}
Is it possible to run such an application, which uses only stdin/stdout, on an arm64-based iOS device to make sure that everything compiles and works correctly?
I can do this on a real Android device with adb push/adb shell, so I wonder whether it's possible to do the same on iOS-based devices.
It's not completely clear what you are trying to achieve, but let me guess:
There is a third-party library you don't have control over.
There are tests in this library, and the tests are a separate console app.
And you're looking for a way to run this app in some kind of shell on iOS that can run arbitrary executables, but you can't because of the platform's security restrictions.
But if you have the source code of the unit tests, then with some small changes you can compile them as a library rather than an executable, and call them from a "shell" you write yourself: a small host app whose sole purpose is to call these tests (and probably send their results over whatever kind of connection suits you).
You can read more on this here.
Yes, this host app will have a GUI, but I don't really understand why you're against that... if you really are.
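For example, the tests' main() can be renamed to an ordinary entry point that the test library exports and the host app calls. A minimal sketch; run_all_tests, host_run_tests and the log path are illustrative names, not from any real framework:

// Test library side: expose the tests as a function instead of main().
#include <cstdio>

extern "C" int run_all_tests()
{
    std::printf("Test result\n");
    return 0;                      // could return the number of failures instead
}

// Host app side: a tiny wrapper the GUI app calls, redirecting stdout to a log
// file it can display or upload later (log_path is illustrative, e.g. a file in
// the app's Documents directory).
extern "C" void host_run_tests(const char *log_path)
{
    std::freopen(log_path, "w", stdout);   // capture the tests' printf output
    int status = run_all_tests();
    std::fflush(stdout);
    std::fprintf(stderr, "tests finished with status %d\n", status);
}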
These may also be helpful:
debug bridge for iPhone / shell command prompt
ADB equivalent for iOS device
As we know from other languages and platforms, we want to use unit testing during the build process, BEFORE the code is flashed to the hardware. This should be possible for simple function tests that have no need for the ESP32 hardware.
But as we understand it, the C++ code is compiled (and linked) for the ESP32 chip and will not run on the development system or in a CI/CD pipeline.
Is there any way to emulate the ESP32 (for C++), or to run unit tests in some other way on another system?
Note: We are using PlatformIO for the build.
It is possible to do this with the qemu_esp32 emulator. You can compile your test runner and run it directly in the emulator instead of flashing it to a real ESP32 chip.
Here is an example of how to do this (adapted from esp32_qemu_unity_test_action):
Compile the test runner as you normally would. Before you proceed, you might want to actually flash this to a device just to confirm that your test runner and tests are working.
idf.py build
cd build
Normally when you flash an ESP32, you're writing several binary files to specific flash offsets. This command merges them all into a single full binary image.
esptool.py --chip esp32 merge_bin --fill-flash-size 4MB -o flash_image.bin @flash_args
Run the ESP32 emulator and provide the full binary flash image.
/opt/qemu/bin/qemu-system-xtensa -nographic -no-reboot -machine esp32 -drive file=flash_image.bin,if=mtd,format=raw -serial file:output.log
By specifying -no-reboot, the emulator will simply exit instead of rebooting.
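For reference, the test runner here is just an ordinary ESP-IDF application whose Unity output goes to the serial console, which the -serial file:output.log option above captures. A minimal sketch using the plain Unity API bundled with ESP-IDF; app_main is the usual entry point, and the add() function under test is illustrative:

#include <cstdio>
#include "unity.h"

// Function under test (illustrative).
static int add(int a, int b) { return a + b; }

static void test_add(void)
{
    TEST_ASSERT_EQUAL_INT(5, add(2, 3));
}

extern "C" void app_main(void)
{
    UNITY_BEGIN();          // start the Unity test run
    RUN_TEST(test_add);     // runs the test and reports PASS/FAIL on the serial console
    UNITY_END();            // prints the summary, which ends up in output.log
}

After the emulator exits, output.log can be inspected (or grepped for the Unity summary) to decide whether the CI step passes or fails.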
I'm working on safety-critical software that requires extensive testing. The target processor, a Cortex-M4, has ample resources for the application, but the unit and integration tests, if aggregated, would be much larger than the on-board flash/RAM. They are designed to be run from gdb while using semihosting to off-load the test results. What's needed is a way to automate the testing so it can be run without per-test human intervention.
The test programs run fine from Eclipse using both OpenOCD and Segger debugger front-ends. These require per-test configurations and then manual starting of the tests. There will be 30-50 test programs so this isn't really viable for continuous integration or simple batch runs.
I've been looking around for possible ways to do this. There are a few tricky bits to consider. The first is supporting the semihosting of the output, which uses the breakpoint mechanism to route the I/O through the host. In this setup, a couple of files get opened on the host computer for logs. Another issue is ending the program being tested and loading the next one. The programs can take a long time to run and drop into an infinite loop when main() exits. And the development platform is Windows 10.
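What I'm considering for the "ending the program" part is having each test image report completion through a semihosting exit call, which the debug server can see, instead of dropping into the post-main() loop. A minimal sketch, assuming the GNU Arm toolchain on a Cortex-M target; run_all_tests() is an illustrative stand-in for the real test harness:

extern "C" int run_all_tests(void);   // stand-in for the real test harness

// Semihosting SYS_EXIT (operation 0x18) with ADP_Stopped_ApplicationExit, so the
// debug server observes a clean exit instead of the program spinning after main().
static void semihosting_exit(void)
{
    __asm__ volatile (
        "mov r0, %0\n\t"
        "mov r1, %1\n\t"
        "bkpt 0xAB"
        :
        : "r"(0x18), "r"(0x20026)
        : "r0", "r1", "memory");
}

int main(void)
{
    run_all_tests();        // semihosted printf/file I/O goes to the host as usual
    semihosting_exit();     // signals "done" so a scripted debug session can load the next image
    for (;;) { }            // not reached
}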
The two basic ideas I've had are to use the gdb client library from Cygwin to create a custom program, or to use OpenOCD. Running through multiple tests could be done inside the application or from a Makefile.
Question: is the semihosting done in the gdb client or server?
So... I'm looking for some suggestions or experiences in creating what I picture as a custom gdb client on Windows 10.
I have to create and configure an Eclipse (Mars 2) setup for a C project. The project is in an SVN repository and can only be compiled on a specific Red Hat Linux server that has the appropriate toolchain.
What I need is an IDE that would allow me to commit my changes to the repository and that would automagically synchronize them on the Linux server. I tried a few things, but none of them worked. I must (to my great regret) avoid needing a terminal while using the IDE, though of course not while configuring it.
Firstly, I used the Remote System Explorer feature in Eclipse. I connected successfully to the server and created a "Remote Project" that I could open in the C/C++ perspective. However, the whole thing is impossible to use: it has no indexing, I had to create "User Actions" in order to compile (which is, from my point of view, pretty anti-ergonomic), and the SVN plugin does not detect the project as an SVN working copy. Furthermore, in the C/C++ perspective there is a 2-second gap between the moment I type something and the moment it appears on my screen.
I also tried mounting a network filesystem on my local machine with sshfs, and while it works far better, I still experience lag. Also, I had to write a Makefile that calls my compiler via "ssh $(USER)@$(HOST) build.ksh" (one of the points of the project is to write a real Makefile...). But SVN works.
I also tried running Eclipse on the host machine with X forwarding, and while it works perfectly, there is still lag...
Finally, I tried SFTP synchronization, but it seems I can't use my SVN plugin features and SFTP together.
I am out of solutions, and pretty frustrated, as I feel this kind of thing should be easy. I mean, all I want is for Eclipse to automatically copy my files to my remote home directory... Thanks for your help...
To me this sounds like a perfect use case for a continuous integration (CI) system. Generally speaking, a CI system pulls the code from your repository (for example at regular intervals) and then executes the build chain, collects artifacts, informs you about the state of your build, etc.
Although it originated in the Java world, I have successfully used Jenkins for continuous integration of C projects on a Linux server, but there are others, like TeamCity or GitLab CI (the latter would require you to switch to Git, but it's a really neat system with a YAML configuration for CI).
Of course CI systems have a learning curve - you don't get anything like this for free - but it may really be worth the effort.
I'm developing a program for a specific environment. That means it needs to run on that OS and compile using its compiler. I have a different environment at home (Windows 8). Is there a way NetBeans can be used to connect to the target environment and use its compiler? The target is enabled for remote login.
So basically, right now I write code on my home computer, connect to the target computer using PuTTY, copy the source code over, compile it and run it. I'm trying to simplify this process so I only have to use NetBeans.
Why don't I just get the same compiler and do everything locally? The target computer is running Linux and the program has a lot of system calls.
I know Aptana has a similar feature, but Aptana is so crappy in general that I don't want to use it.
Let me know if my question doesn't make sense and I'll try and reword it.
Yes, you can do remote development in NetBeans. It's described in the NetBeans Help subsystem.
Does anybody know how I can use MS-MPI in my VC++ MFC project?
I already have a big MFC project, and I only want to use parallel processing in part of it with MPI.
(I know how to use MPI in a separate program, but I don't know how to integrate it with my VC++ MFC project.)
Not sure about MS-MPI, but you'll want to look at the MPICH2 Windows developer's guide at the URL at the bottom.
9.3 MPI apps with GUI
Many users on Windows machines want to build GUI apps that are also MPI applications. This is completely acceptable as long as the application follows the rules of MPI. MPI_Init must be called before any other MPI function and it needs to be called soon after each process starts. The processes must be started with mpiexec but they are not required to be console applications. The one catch is that MPI applications are hidden from view, so any windows that a user application brings up will not be able to be seen. mpiexec has an option to allow the MPI processes on the local machine to bring up GUIs. Add -localroot to the mpiexec command to enable this capability. But even with this option, all GUIs from processes on remote machines will be hidden.
So the only GUI application that MPICH2 cannot handle by default would be a video-wall type application. But this can be done by running smpd.exe by hand on each machine instead of installing it as a service. Log on to each machine and run "smpd.exe -stop" to stop the service and then run "smpd.exe -d 0" to start up the smpd again. As long as this process is running, you will be able to run applications where every process is allowed to bring up GUIs.
http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.2.1-windevguide.pdf
It is possible. You use it the same way as any other MPI project.
In general, you can link against any C++ library from an MFC project. MFC is just a set of libraries, and doesn't restrict you from using other C++ libraries.
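As a rough illustration, the MPI initialization can go into the application class so it runs before any other MPI call. A minimal sketch, assuming the MS-MPI SDK's mpi.h and msmpi.lib have been added to the VC++ project settings; CMpiMfcApp is an illustrative class name, not from the original project:

#include <afxwin.h>
#include <mpi.h>

class CMpiMfcApp : public CWinApp
{
public:
    BOOL InitInstance() override
    {
        // MPI_Init must come before any other MPI call; MPI-2 allows null arguments.
        MPI_Init(nullptr, nullptr);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // e.g. let only rank 0 create the main window

        return CWinApp::InitInstance();
    }

    int ExitInstance() override
    {
        MPI_Finalize();                         // pair with MPI_Init on shutdown
        return CWinApp::ExitInstance();
    }
};

CMpiMfcApp theApp;   // the single global application object MFC expects

The app still has to be launched with mpiexec so the processes join the same MPI job, as described in the quoted guide above.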