How to display computer vision and AR in UML use case diagrams

My system uses both augmented reality and computer vision.
The first feature: the user actor can scan a specific object, and the computer vision should recognize it.
The second feature: the user actor can view a specific place using augmented reality.
Each feature is a use case connected to the user, but do I also connect them to some sort of AI actors? If so, what is the suitable way to do it?
Do I just say "Computer vision system" and "Augmented reality system"?

Feature or use-case?
This is a good start. However, there is a key misconception here:
Features are characteristics or capabilities offered by the software that are valued by users because they help them achieve some purpose. Features are often identified with user stories.
Use-cases represent goals of the actors using the system. Each corresponds to a set of behaviors and interactions with the user, without reference to the internal structure of the system.
These are two different concepts. There is of course some overlap: some higher-level capabilities can be described in terms of goals. For example, an ERP can be expected to have accounting, warehouse management and sales administration features.
But features are more general: they can also describe technical capabilities that are not directly observable by the user (e.g. backup), capabilities that are not directly related to a specific set of behaviors (e.g. a multilingual user interface), or capabilities that are much more detailed (e.g. a date-picking feature).
If you are really after features, you may consider non-UML techniques, such as a feature tree, or user-story mapping (which is a kind of feature tree constructed with user stories).
The big picture with use-cases
In your diagram, the bubbles seem to show what the system offers, not what the user wants to do. If you want to show the big picture with use-cases, you need to relate the bubbles to user goals:
Does the user just want to scan objects? Or is scanning only one step towards a larger goal, such as making an inventory, recognizing and ordering spare parts, or populating a virtual world?
Does the user just want to view a place in AR? Or are the expectations more ambitious, like purchasing products that would look good in a given place?
This might look like an unnecessary philosophical debate. But it is not, because the main benefit of use-cases is a goal-oriented approach. Framing the problem or the expectations correctly may allow you to think more creatively about alternatives instead of locking you early into a pre-conceived solution.
The right boundaries
The actors raise another question: are these actors autonomous and independent systems that matter to the user? Or are they just implementation details?
Formally, actors are external to the system, and moreover, the use-case should not depend on the internal structure of the system. So if the computer vision and the augmented reality system are in fact libraries, components or sub-systems of your system, they should not appear in the diagram.
Secondly, use-cases should offer an observable result of value for actors. If the external system depends on your system and has no value on its own, then the use-case results cannot be of value to this system. For example, a DBMS is often viewed as a candidate actor, but does not pass this test: without the main system, the DBMS would be useless. If the system is not independent and autonomous, just remove it from the diagram to keep things simple.
Lastly, does the system actor matter to the other actors? If it makes no difference to your human users whether an external system-actor intervenes, keep it simple and do not show the system-actor, although you could. Then again, relying on an external system is more an implementation choice than a requirement.

The way you denoted it is common practice. The so-called primary actor (the one who receives the added value from the system under consideration) is placed on the left, and the so-called secondary actors (which only take part in and/or support the use case) are placed on the right. Whether their appearance makes sense depends on who will read the UC diagram. If you present it to a customer, they are likely not interested in IT blurb. But for system designers it would be important information.

Related

Modeling frontend and backend in a use case diagram

I am trying to make a use case diagram for my project. The backend is going to be made using the Django REST framework and the frontend using React. My question is how I can model this situation the right way: should I model the frontend and represent the backend as an actor, or the opposite, since I am thinking of making a mobile application as a second frontend?
The right answer here is the Business Analyst's Standard Answer no. 1: it depends.
The question is - what do you want to model and why. Then - what is the correct tool (diagram) to do it.
The goal of the Use Case diagram is to show what functionalities a system is going to offer. Now the system can be treated as a whole, in which case you show the functionalities without depicting how the system is internally organised. This is the most common scenario and most probably the best way to use a Use Case diagram in your case, but it does not show the fact of having FE and BE (note that this type of diagram isn't really well suited to do so, so keep reading).
You may also treat e.g. the BE as the system itself. This can make sense especially when you're preparing a headless API and really separate BE from FE, even more so when your BE and FE teams are totally separate. In such a case the FE becomes an actor (just like e.g. another system that can interact with your BE). Obviously the FE can be treated in the same way (i.e. be considered the system with the BE being an actor), however there is usually less reason to do so.
Now having said that, if you want to depict the distinction between BE and FE, you should consider other types of diagrams. Keep in mind that the Use Case diagram is a dynamic diagram, while the internal structure of the system is static, so it should be one of the static diagrams instead. The one dedicated to showing the internal structure of the system is the Component diagram, and it would most likely serve best the purpose of indicating the existence of FE and BE (potentially with a further level of detail, e.g. existing microservices).
If on the other hand you would like to show the specific technology in use, the Deployment diagram might be your best shot. It allows you to show the actual runtime environments, artifacts and their technologies.
Keep in mind: trying to use one type of diagram, or even worse one diagram, to show everything is usually a bad idea and a mistake often made by newbies. Be smarter than that.
Use-cases are about a set of behaviors with an observable result that is of value for the actors. They should not care about the internals of a system:
UseCases define the offered Behaviors of the subject without reference to its internal structure.
Therefore, you should in principle not care about the distinction between front-end and back-end, but focus on actor goals with the system.
The only situation where you'd care about the back-end in a use-case diagram is when the front-end is an independent application that is of value on its own, but interacts with external independent systems (such as the back-end) that are then represented as actors.

Graphics in reliable systems (like airline instruments)

I wonder what technology is used to visualise flight instruments on the little LCDs in airplane cockpits.
I am a Windows application C++ software developer, and I'm interested in what libraries are used for highly reliable systems like aircraft onboard systems.
Here is an example of one of these LCDs, probably from a Boeing aircraft.
OpenGL has a Safety Critical subset that's worth reading up on: https://www.khronos.org/openglsc/
MFDs (multi-function displays) are completely separate computers themselves. They communicate with other components (to obtain the data to be displayed) conforming to the ARINC 661 standard, which defines a binary communication format for exchanging data between the display and user applications (sensors, etc.). Avionics systems also use an RTOS (INTEGRITY was used in my project); each component has its own partition and is allocated processing time by the OS. Also, as Andreas stated, OpenGL has a safety-critical subset for this purpose. Avionics code goes through elaborate reviews and certification, and is written over-safe (e.g. we were not allowed to use the "new" keyword in C++; only static memory allocation was allowed).
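To make that last constraint concrete, here is a minimal, hypothetical C++ sketch of the static-allocation style described above. The pool size and message type are invented for illustration; real avionics code would follow far stricter coding standards:

    // Hypothetical sketch: all storage is reserved at compile time and
    // objects are only taken from and returned to a fixed-size pool,
    // so the "new" keyword is never needed at runtime.
    #include <array>

    struct DisplayMessage {      // invented message type for illustration
        int    parameterId = 0;
        double value       = 0.0;
        bool   inUse       = false;
    };

    class MessagePool {
    public:
        // Returns a free slot, or nullptr if the pool is exhausted.
        // The caller must handle exhaustion explicitly; nothing is
        // ever allocated on the heap.
        DisplayMessage* acquire() {
            for (auto& msg : pool_) {
                if (!msg.inUse) {
                    msg.inUse = true;
                    return &msg;
                }
            }
            return nullptr;
        }

        void release(DisplayMessage* msg) {
            if (msg != nullptr) {
                msg->inUse = false;
            }
        }

    private:
        std::array<DisplayMessage, 64> pool_{};  // fixed capacity, no heap
    };

The point of the style is that memory exhaustion becomes a compile-time sizing decision rather than a runtime failure mode.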
I'm in the aerospace industry. Glad you asked.
My experience is that the hardware setup is unique for each display unit. Commercial or custom-made GPUs are used, but drivers and libraries are always made by the display unit vendor more or less from scratch, since the combination of CPU, GPU, OS and the connectors in between is often unique and always a company secret of the display unit vendor. The OpenGL Safety Critical profile does appear in some products, but in the end the vendor only develops what the customer really needs and is willing to pay for. And quite often companies buy the basics and then pay for additional functionality such as another blend operation or larger textures, similar to add-ons for cars.
In general, aerospace is 10-20 years behind in graphics capabilities. For displays like the one in the picture there is no need to get up to date either. More complex capabilities introduce a gruesome verification cost that no customer is actually ready to pay for. You can't have the wrong altitude presented to the pilot, so testing and documentation are immense.
Entertainment systems are in general more capable, as the information displayed cannot crash the aircraft. I think they are similar to systems found in casino slot machines: as long as the hardware does not ignite itself, it is safe enough.
Most of what I do is either company or military confidential, so I can't say much more than what is publicly available or common industry knowledge. I hope this sheds some light on the environment you were interested in.

Service-based architecture should not necessarily imply distribution?

In my workplace (and in a lot of other places), there is a lot of emphasis on building the architecture around services. (I am working in an e-commerce startup.) However, services are implicitly considered to be distributed. I am a believer in the first law of distribution: "don't distribute". So I believe we should not unnecessarily complicate the architecture; it should be an architecture that can evolve.
One way to approach the problem would be to create well-defined namespaces and build the code around them, but keep the communication via a Java API (this keeps the monitoring requirements low, and the reliability/availability problems low). This can easily be evolved into a distributed architecture by wrapping modules into web services as and when the scale requirements kick in.
So, the question is: what are the cons of writing the code as a single application and evolving it into distributed services, rather than jumping straight into a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
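As a rough sketch of that contrast (the type names and the pricing rule are invented for illustration), the fine-grained and coarse-grained interfaces might look like this:

    #include <string>
    #include <vector>

    struct Item {
        std::string sku;
        int         quantity = 0;
    };

    // Fine-grained: perfectly fine in-process, but a torrent of small
    // remote calls once the calculator lives behind a network boundary.
    class ItemPriceCalculator {
    public:
        double priceItem(const Item& item) const {
            return 9.99 * item.quantity;  // placeholder pricing rule
        }
    };

    // Coarse-grained: one call prices the whole order, so a single
    // round trip suffices when the calculator becomes a remote service.
    class OrderPriceCalculator {
    public:
        double priceOrder(const std::vector<Item>& order) const {
            double total = 0.0;
            for (const auto& item : order) {
                total += 9.99 * item.quantity;  // placeholder pricing rule
            }
            return total;
        }
    };

Note how the coarse-grained version leaks a little of the cart's vocabulary (a whole order) into the calculator, which is exactly the separation-of-concerns cost described above.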
Related to granularity, there's also the problem of modules interacting via interfaces versus implementations. Within a single process, you can define a set of interfaces through which modules interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (e.g. a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse grain means that any objects passed must be passed by value (because passing by reference would entail fine-grained calls back to the passing module - unless you were using mobile code, which you aren't, because nobody is), which means that both modules must know about and agree on the implementations of the objects.
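A hedged C++ rendering of that distinction (the types are invented; the Job interface mirrors the Java snippet above):

    #include <string>

    // In-process: the scheduler depends only on this interface and never
    // needs to know which implementation it is given (the C++ analogue of
    // the Java snippet "interface Job { void run(); }").
    struct Job {
        virtual ~Job() = default;
        virtual void run() = 0;
    };

    // Across a network, a Job cannot usefully be passed by reference, so
    // both modules must instead agree on a concrete, serializable value
    // type like this one, coupling them to a shared implementation.
    struct JobDescription {
        std::string jobType;     // both sides must interpret this identically
        std::string parameters;  // e.g. serialized as JSON or XML
    };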
So, while building a single-process system in a modular way makes it easier to implement SOA later, it doesn't make it as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful software engineering practices.

Common information model for SOA systems

We are looking at the possibility of implementing a Common Information Model for data across several systems in an SOA.
Many of these services will be consumed by a composite UI; we therefore see a benefit in having common data types.
What we are wondering is if this is a feasible approach, or if we should just map to common types in the client?
This question is framed pretty broadly, so my answer is going to remain pretty broad as well.
The key consideration here would seem to be location independence - though you're working with several applications, they're all going to share certain sorts of data (though not, as far as I can see from your question, actual data). An obvious use case for this is authentication and authorization data.
If you have determined that the common data is truly mature enough to isolate in the fashion you're describing, then I think it makes perfect sense to layer it off into a service. A good example of this is Windows Identity Foundation: it takes something that we as architects have always treated as data and turns it into a service.
What you lose with location independence is a little bit of efficiency that you would otherwise gain from making batched calls to the same server, though in my experience SOA applications lose this efficiency early in their design anyway. The benefit you gain from "patternizing" a section of your apps generally outweighs that enormously.
Having a common information model doesn't imply common data types or common classes. Simply defining the relationships between, for instance, Customer, Order, OrderItem and Product goes a long way toward common business logic and the ability for different services and applications to interoperate in an SOA environment.
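Purely to make those relationships concrete, here is one possible local rendering in C++ (the field names are invented; each service may use its own types as long as the relationships hold):

    #include <string>
    #include <vector>

    // One service's local rendering of the shared conceptual model.
    // The common information model only fixes the relationships:
    // Customer 1..* Order, Order 1..* OrderItem, OrderItem -> Product.
    struct Product   { std::string sku; std::string name; };
    struct OrderItem { std::string productSku; int quantity = 0; };
    struct Order     { int id = 0; std::vector<OrderItem> items; };
    struct Customer  { int id = 0; std::vector<Order> orders; };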
You might consider having an actual common model in some modeling language, from which concrete data types and classes could be generated for particular circumstances. One might use UML for this, but I personally prefer NORMA, an Object-Role Modeling tool. It works at the conceptual level, so it creates models that are independent of the data-store technology.
NORMA runs as an add-in to Visual Studio Standard edition or above, and out of the box it generates artifacts for several databases, as well as LINQ to SQL classes and even PHP web services, all from the same model. It is extensible, so you can generate your own artifacts from the model. And of course, the model is represented as XML, so you can do whatever you like with it.

So am I talking about a SOA here?

For a government contract we will be proposing to build a traffic monitoring architecture. We will have the following components:
Video cameras set up around the area of interest. The cameras will be aware of their location, orientation and viewing parameters.
A GIS map server which can be queried for streets, buildings, etc.
An algorithm that takes in raw video and street location information and outputs car locations.
Another algorithm that takes in car locations and very low-level street information and provides information about which cars are driving anomalously.
Another database takes in information about car locations and anomaly reports over time and can be queried for this later.
A proxy (or perhaps more accurately, a facade) is set up over the archive database and the real-time algorithms in order to provide a unified interface to the information.
A client attaches to the proxy and to the street server and paints various representations of the traffic situation on the screen.
I'm just now learning what an SOA is. Is this an ideal candidate for a Service-Oriented Architecture (SOA)? I had heard that SOA services should be stateless (or is that only RESTful services?). I had also heard that it is inadvisable to pipe one service to the next to the next, because it increases hidden complexity, and that there is something you should do to make this situation better (an "orchestration"?). The services above do appear to be modular and reusable. For instance, there will be plenty of cameras, various types of vehicle-detection and anomaly algorithms, distributed databases, and plenty of clients. I will also need the capability to handle events: for instance, I may want to register with a service and be notified whenever a big truck moves past a given point.
If this isn't ideally implemented by an SOA, then where else should I be looking? If it is ideal for an SOA, where should I start when designing it? (I'm starting basically from having read Wikipedia's SOA page.) Are there any good case studies to look at here?
Yes, SOA is ideal in this case (a complex, distributed system with a wide mix of technologies), but from the sound of it you need to do a whole lot more research to get your head around the concept. It is not a tough concept by any stretch; it's actually simple, but there is no one prescribed way to do it. I suggest going over SOA case studies for similarly sized projects, both successes and failures.
You mention a facade for one of your subsystems. Extend that same concept to the rest of your components. E.g. each service is a facade to a complex subsystem.
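As a hedged illustration of that idea (all names are invented), the proxy described in the question could expose one unified query interface while hiding whether the answer comes from the live algorithms or the archive:

    #include <vector>

    struct CarLocation {             // invented type for illustration
        double latitude  = 0.0;
        double longitude = 0.0;
        long long observedAtUnixTime = 0;
    };

    // The facade: clients see one service and never learn whether the
    // data came from the real-time pipeline or the archive database.
    class TrafficInformationService {
    public:
        std::vector<CarLocation> carLocations(long long fromUnixTime,
                                              long long toUnixTime) const {
            // Recent data comes from the live algorithms, older data
            // from the archive; the split point is an internal detail.
            if (toUnixTime >= liveWindowStart_) {
                return queryRealTimePipeline(fromUnixTime, toUnixTime);
            }
            return queryArchive(fromUnixTime, toUnixTime);
        }

    private:
        long long liveWindowStart_ = 0;

        std::vector<CarLocation> queryRealTimePipeline(long long, long long) const {
            return {};  // stub: would call the real-time subsystem
        }
        std::vector<CarLocation> queryArchive(long long, long long) const {
            return {};  // stub: would query the archive database
        }
    };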
Also, I recommend implementing a couple of different web services in your choice of technologies, abstracting arbitrary subsystems (a database should be one of the components), and then writing a client that makes use of them. Doing so will give you a lot of practical experience and insight into the concept.
Last thought: the one area where an SOA might stumble is if you have to move video data between several different services. The stateless, transactional nature of SOAs might introduce performance issues when moving very large amounts of data or when performing bulk transactions on very large data sets. You either need to keep video localized or implement a back-end subsystem (cheat) to avoid potentially nasty bottlenecks.
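As for the event requirement raised in the question ("notify me whenever a big truck moves past this point"), the usual SOA answer is a publish/subscribe interface on top of the services. A minimal, hypothetical in-process sketch (all types invented; in a real distributed system this shape would sit behind a web-service endpoint or a message broker):

    #include <functional>
    #include <string>
    #include <utility>
    #include <vector>

    struct VehicleEvent {            // invented event type for illustration
        std::string vehicleClass;    // e.g. "truck"
        double latitude  = 0.0;
        double longitude = 0.0;
    };

    // A subscription interface: clients register a callback and a filter,
    // and the service pushes matching events to them as they occur.
    class VehicleEventService {
    public:
        using Handler = std::function<void(const VehicleEvent&)>;

        void subscribe(const std::string& vehicleClass, Handler handler) {
            subscribers_.push_back({vehicleClass, std::move(handler)});
        }

        // Called by the detection pipeline whenever a vehicle is observed.
        void publish(const VehicleEvent& event) {
            for (const auto& sub : subscribers_) {
                if (sub.vehicleClass == event.vehicleClass) {
                    sub.handler(event);
                }
            }
        }

    private:
        struct Subscription {
            std::string vehicleClass;
            Handler     handler;
        };
        std::vector<Subscription> subscribers_;
    };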