Integrating OpenTelemetry into my .NET Core (with Akka.NET) framework: ActivitySource or Tracer?

I'm completely new to OTel and tracing and have been learning a bit over the past few days.
I've figured out that OTel and .NET have different names for the same things: an ActivitySource in .NET is a Tracer in OTel, and a Span in OTel is an Activity in .NET. Is one of the two better practice to use?
What I'd like: I have a system where different kinds of calls come in on one side and go out the other. I want to trace the calls through all the different files, classes, and Akka.NET actors, and have one continuous trace with multiple child spans all the way through. Essentially, when I export my data to Jaeger, I need one trace with a big span tree per call.
What I've tried: I have attempted this by creating an ActivitySource class and using static members to trace in multiple locations in my system, like so:
public class MyActivitySource
{
    public static string Name = nameof(MyActivitySource);
    public static ActivitySource Instance = new ActivitySource(Name);
}
When wanting to trace somewhere in my project, I then do:
using var activity = MyActivitySource.Instance.StartActivity("MyActivity");
activity?.AddTag("message status", message.ToString());
// do tasks
activity?.Stop(); // optional here: the using declaration already stops the activity on dispose
Now this way I need to create the "activity" variable in every function I need it in, which in turn creates multiple separate traces.
Question: is there a way of having only one trace per call as it goes through the entire system? Or is that not the way tracing is meant to be used?
Automatic tracing - as far as I can see - is not an option, because pure .NET Core and Akka.NET are not supported. There is Phobos, but that isn't an option.
The approach I am using is from here.
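For reference, a minimal sketch of how nested activities behave (assuming the OpenTelemetry SDK has been registered for this source elsewhere, e.g. via Sdk.CreateTracerProviderBuilder().AddSource(MyActivitySource.Name); the method names below are made up for illustration). ActivitySource.StartActivity parents a new activity to Activity.Current by default, so an activity started while a caller's activity is still open becomes a child span in the same trace:

using System.Diagnostics;
using System.Threading.Tasks;

public class CallHandler
{
    public async Task HandleCallAsync(string message)
    {
        // Root span for this call; it becomes Activity.Current for everything below.
        using var root = MyActivitySource.Instance.StartActivity("HandleCall");
        root?.AddTag("message status", message);
        await ProcessAsync(message);
    }

    private async Task ProcessAsync(string message)
    {
        // Activity.Current is still "HandleCall" here, so this span starts as its
        // child, and Jaeger shows one trace with a span tree rather than many traces.
        using var child = MyActivitySource.Instance.StartActivity("Process");
        // do tasks
    }
}

Note that Activity.Current flows across async/await, but not automatically across actor message sends; to continue a trace across actors you would have to carry the parent's ActivityContext inside the message yourself and pass it to StartActivity.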

Related

VirtualTimeScheduler for testing Akka.NET?

When we are testing Rx.NET code we can use TestScheduler, which implements VirtualTimeScheduler and allows us to create a virtual timeline of events (CreateColdObservable) that occur in the system, and then test the state of the system at given points in time using methods like AdvanceTo.
I'm wondering if there is an equivalent in Akka.NET for testing a system using some kind of virtual timeline, like in Rx.NET?
While testing your actors using Akka.TestKit, your actor system is configured to make use of TestScheduler, which has Advance/AdvanceTo methods allowing you to move forward in time. It's directly inspired by the VirtualTimeScheduler known from Rx.NET.
Keep in mind that TestScheduler has limited usage: AFAIK it only applies to scheduling future events via the Context.System.Scheduler.ScheduleTell* methods. It doesn't affect other time-based actions happening inside the actor system.
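A minimal sketch of what that looks like (assuming xUnit and Akka.TestKit.Xunit2; the HOCON line swaps in the TestScheduler explicitly, in case your TestKit version doesn't use it by default):

using System;
using Akka.Actor;
using Akka.TestKit;
using Xunit;

public class VirtualTimeSpec : Akka.TestKit.Xunit2.TestKit
{
    // Configure the actor system to use the virtual-time TestScheduler.
    public VirtualTimeSpec()
        : base(@"akka.scheduler.implementation = ""Akka.TestKit.TestScheduler, Akka.TestKit""") { }

    [Fact]
    public void Scheduled_tell_fires_when_virtual_time_advances()
    {
        var scheduler = (TestScheduler)Sys.Scheduler;
        Sys.Scheduler.ScheduleTellOnce(TimeSpan.FromSeconds(10), TestActor, "tick", ActorRefs.NoSender);

        ExpectNoMsg(TimeSpan.FromMilliseconds(100)); // nothing fires in real time
        scheduler.Advance(TimeSpan.FromSeconds(10)); // jump forward in virtual time
        ExpectMsg("tick");
    }
}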

How to register multiple Audio Units at runtime (similar to VST plugin shell)?

I just started coding VST plugins. But since I'm on a Mac I would also like to build Audio Units. I managed to compile some sample code, and these components showed up inside my Logic DAW.
In VST there's the possibility to create a plugin shell: a single 'dll'/'vst' file which has multiple effects in it. During startup the host calls a function named getNextShellPlugin, and the plugin dynamically registers its contents at runtime. The effects then show up perfectly in the plugin list.
Is there a similar way I can achieve this with Audio Units?
I managed to get a plugin shell by adding another component description to the Info.plist. But I have to hardcode every effect in there, and that's not what I want.
I also tried to use AudioComponentRegister, but this didn't work properly for me: the component has to be instantiated before I can call this function from its constructor, yet for the components to be listed in Logic they need to be found during the plugin scan, and during the scan the component does not get instantiated by default.
So the goal is to register multiple effects inside 1 component at runtime.
Does someone maybe have a tip or a solution? Thanks a lot!

C++ Output: GUI or Console?

I am making an application in C++ that runs a simulation for a health club. The user simply enters the simulation data at the beginning (3 integer values) and presses run. After that there is no user input - so very simple.
After starting the simulation a lot of the logic is deep down in lower classes, but many of them need to print simple output messages to the UI. Returning a message is not possible, as objects need to print to the UI and keep working.
I was going to pass a reference to the UI object to all classes that need it, but I'd end up passing it around quite a lot - there must be a better way.
What I really need is something that makes calling the UI's printOutput(string) function as easy as (or not much harder than) cout << string;
The UI also has a displayConnectionStatus(bool[] connections) method.
Bear in mind the UI inherits from an abstract 'UserInterface' class, so simple console UIs and GUIs can be swapped in and out easily.
How do you suggest I implement this link to the UI?
If I were to use a global function, how can I redirect it to call methods of the UserInterface implementation that I selected to use?
Don't be afraid of globals.
Global objects hurt encapsulation, but for a targeted solution with no concern for immediate reusability, globals are just fine.
Expose a global object that processes events from your simulation. You can then choose to print the events, send them by e-mail, render them with OpenGL, or whatever you fancy. Make a uniform interface that captures what happens inside the simulation via report callbacks, and then subclass this global object to suit your needs.
If the object weren't global, you'd be passing a pointer around the whole codebase.
I would suggest going for a logging framework, i.e. your own class LogMessages, which has functions that take data and log it - to a UI, a file, over the network, or anywhere else.
Each class which needs logging can then use your logging class.
This way you avoid globals and get a generic solution. Also have a look at http://www.pantheios.org/, an open-source C/C++ diagnostic logging API library; you may use that as well.
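Both answers boil down to the same shape: the simulation reports to an abstract sink, and one well-known access point decides which implementation is live. A minimal sketch of that shape (written in C# for brevity; the structure maps directly onto the asker's abstract 'UserInterface' base class in C++, and all names here are illustrative):

using System;

// Abstract sink, mirroring the asker's abstract 'UserInterface' class.
public interface IUserInterface
{
    void PrintOutput(string message);
}

public sealed class ConsoleUi : IUserInterface
{
    public void PrintOutput(string message) => Console.WriteLine(message);
}

// The single well-known access point (the "global"); pick the implementation at startup.
public static class Ui
{
    public static IUserInterface Current { get; set; } = new ConsoleUi();
    public static void Print(string message) => Current.PrintOutput(message);
}

// Deep inside the simulation, printing is now about as cheap as cout <<:
//   Ui.Print("member entered the gym");

Swapping in a GUI implementation then means assigning Ui.Current once at startup, and none of the lower classes need a reference passed down to them.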

Does any web site test framework facilitate fine-grained concurrent testing

I have a legacy .NET application that is implemented using Application variables and makes heavy use of Session data as well. There are some anecdotal reports of bugs that seem to point toward concurrency errors, i.e. multiple sessions clobbering shared application-level data.
I want to develop some automated tests that let me control concurrent access in a fine-grained fashion, i.e.
Create two HTTP clients with fresh sessions
Request /my/page/1 with client 1
Request /my/page/2 with client 2
POST data with client 2
POST data with client 1
Issue parallel request for /my/page/results from both clients
etc.
Are there any libraries that make this sort of testing easier or will I have to roll my own to some extent?
I'm aware of Selenium and WatiN, but have not personally used either project. From reading the docs, neither appears to be a good match.
Perhaps the best option is just plain NUnit and making good use of the .NET WebClient class?
Clarification: what you want is a well-defined series of steps to be executed synchronously. This means your test code does not need to be multithreaded; more precisely, it must not be multithreaded, or you lose control over the order in which the steps are executed.
The only thing you need is two browser instances and the ability to dispatch the test steps to the two browsers.
To do so, I presume you can use either Selenium or WatiN along the following lines (not verified; this is only a sketch in my mind).
Selenium (with WebDriver):
using (var firefox1 = new FirefoxDriver())
using (var firefox2 = new FirefoxDriver())
{
    requestPageOne(firefox1);
    requestPageTwo(firefox2);
    postPageTwo(firefox2);
    postPageOne(firefox1);
    // ...
}
WatiN
using (var firefox1 = new FireFox(url1))
using (var firefox2 = new FireFox(url2))
{
    // ...
}
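And if driving real browsers is more than you need, the plain-NUnit idea from the question works the same way: give each simulated user its own cookie container so it carries its own session, and interleave the requests synchronously. A minimal sketch (the base address and paths are placeholders taken from the question; HttpClient is used here, but HttpWebRequest plus a CookieContainer per user would do the same job):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

public class ConcurrentSessionTests
{
    [Test]
    public void Interleaved_sessions_do_not_clobber_shared_state()
    {
        // One handler per simulated user, so each carries its own session cookie.
        using var handler1 = new HttpClientHandler { CookieContainer = new CookieContainer() };
        using var handler2 = new HttpClientHandler { CookieContainer = new CookieContainer() };
        using var client1 = new HttpClient(handler1) { BaseAddress = new Uri("http://localhost/") };
        using var client2 = new HttpClient(handler2) { BaseAddress = new Uri("http://localhost/") };

        // Interleave the steps synchronously so the order stays deterministic.
        client1.GetAsync("my/page/1").Wait();
        client2.GetAsync("my/page/2").Wait();
        client2.PostAsync("my/page/2", new StringContent("data2")).Wait();
        client1.PostAsync("my/page/1", new StringContent("data1")).Wait();

        // Only the final step runs in parallel, from both sessions at once.
        var results = Task.WhenAll(
            client1.GetAsync("my/page/results"),
            client2.GetAsync("my/page/results")).Result;

        foreach (var response in results)
            Assert.IsTrue(response.IsSuccessStatusCode);
    }
}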

Keeping the GUI separate

I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this:
GUI -> MainWindow -> CommandLineInterface -> EntryField
Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this:
public:
    sigc::signal<void, string> msgEntered;
And in the c'tor:
entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp));
The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop:
gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
Now this seems like a real bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop:
gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible.
It feels like I'm missing something essential here. Is there a clean way to do this?
Note: I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question.
The root problem is that you're treating the GUI like it's part of a monolithic application, with the GUI merely connected to the rest of the logic by a bigger wire than usual.
You need to re-think the way the GUI interacts with the back-end server. Generally this means your GUI becomes a stand-alone application that does almost nothing and talks to the server without any direct coupling between the internals of the GUI (i.e. your signals and events) and the server's processing logic. That is, when you click a button you may want it to perform some action, in which case you need to call the server; but nearly all the other events should only change state inside the GUI and do nothing to the server - not until you're ready, or the user wants some response, or you have enough idle time to make the calls in the background.
The trick is to define an interface for the server totally independently of the GUI. You should be able to change GUIs later without modifying the server at all.
This means you will not be able to have the events sent automatically; you'll need to wire them up manually.
Try the Observer design pattern. Link includes sample code as of now.
The essential thing you are missing is that you can pass a reference without violating encapsulation if that reference is cast as an interface (abstract class) which your object implements.
Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow.
With a pub/sub hub you add another layer of indirection, but there's still duplication: the entryField still says 'publish message-ready event' and the listener/controller/network interface says 'listen for message-ready event', so there's a common event ID that both sides need to know about. If you're not going to hard-code that ID in two places, it needs to be passed into both files (though as a global it's not passed as an argument, which in itself isn't any great advantage).
I've used all four approaches - direct coupling, controller, listener and pub/sub - and each successive one loosens the coupling a bit, but you never entirely get away from duplication, even if it's only the ID of the published event.
It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking.
But if you don't need to vary the parts of the system independently it's probably not worth worrying about.
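To make that shared-event-ID duplication concrete, a minimal pub/sub hub might look like this (a C# sketch; the "messageReady" ID and the SendMsg call are illustrative, and the point is that both sides must agree on that string):

using System;
using System.Collections.Generic;

// Minimal pub/sub hub: one layer of indirection between publisher and subscriber.
public static class EventHub
{
    private static readonly Dictionary<string, Action<string>> Subscribers =
        new Dictionary<string, Action<string>>();

    public static void Subscribe(string eventId, Action<string> handler)
    {
        Subscribers.TryGetValue(eventId, out var existing);
        Subscribers[eventId] = existing + handler; // null + handler == handler
    }

    public static void Publish(string eventId, string payload)
    {
        if (Subscribers.TryGetValue(eventId, out var handlers))
            handlers(payload);
    }
}

// GUI side "publishes message ready":  EventHub.Publish("messageReady", text);
// Network side "listens for it":       EventHub.Subscribe("messageReady", msg => networkInterface.SendMsg(msg));
// The duplication: both sides must know the "messageReady" ID.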
As this is a general question I’ll try to answer it even though I’m “only” a Java programmer. :)
I prefer to use interfaces (abstract classes, or whatever the corresponding mechanism is in C++) on both sides of my programs. On one side there is the program core that contains the business logic. It can generate events that e.g. GUI classes can receive, e.g. (for your example) “stringReceived”. The core, on the other hand, implements a “UI listener” interface which contains methods like “stringEntered”.
This way the UI is completely decoupled from the business logic. By implementing the appropriate interfaces you can even introduce a network layer between your core and your UI.
[Edit] In the starter class for my applications there is almost always this kind of code:
Core core = new Core(); /* Core implements GUIListener */
GUI gui = new GUI(); /* GUI implements CoreListener */
core.addCoreListener(gui);
gui.addGUIListener(core);
[/Edit]
You can decouple ANY GUI and communicate easily with messages using templatious virtual packs. Check out this project also.
In my opinion, the CLI should be independent of the GUI. In an MVC architecture, it should play the role of the model.
I would put a controller which manages both EntryField and CLI: each time EntryField changes, the CLI gets invoked, and all of this is managed by the controller.
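A minimal sketch of that controller wiring (in C#; all type names here are made up to mirror the answer):

using System;

// View: raises an event when the user enters text; knows nothing about the model.
public class EntryField
{
    public event Action<string> TextEntered;
    public void SimulateEnter(string text) => TextEntered?.Invoke(text);
}

// Model: the CLI knows nothing about the view.
public class Cli
{
    public void Execute(string command) => Console.WriteLine($"executing: {command}");
}

// Controller: the only place that knows about both sides.
public class CliController
{
    public CliController(EntryField view, Cli model)
        => view.TextEntered += model.Execute;
}

// Wiring at startup:
//   var controller = new CliController(entryField, cli);
//   entryField.SimulateEnter("hello");  // prints "executing: hello" via the model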