Confused about the title of a state and concurrency representation in a state machine diagram

My object can be in different states at a time, depending on its variables.
So I decided to put these possible states into a global composite state with concurrency notation.
The object can only be in one state of each pair shown above at the same time.
So it can never be in any state outside that global state.
How should I title it?
Or is there some other solution?

Yes, there is another solution. The state machine itself can have multiple regions, so there is no need to define a global state. The UML specification clearly allows this:
14.2.4.3 Region
A composite State or StateMachine with Regions is shown by tiling the graph Region of the State/StateMachine using
dashed lines to divide it into Regions.
Unfortunately, the tools I know don't support more than one region in a state machine. As a workaround I would give the superfluous global state the same name as the state machine.

Related

Is rebinding a graphics pipeline in Vulkan a guaranteed no-op?

In the simplified scenario where each object to be rendered is translated into a secondary command buffer and each of those command buffers binds a graphics pipeline initially: is it a guaranteed no-op to bind the pipeline that was bound immediately before? Or is the order of execution of the secondary command buffers not guaranteed at all?
is it a guaranteed no-op to bind the pipeline that was bound immediately before?
No. In fact, in the case you're outlining, you should assume precisely the opposite. Why?
Since each of your CBs is isolated from the others, the vkCmdBindPipeline function has no way to know what was bound beforehand. Remember: the state of a command buffer that has started recording is undefined. Which means that the command buffer building code cannot make any assumptions about any state which you did not set within this CB.
In order for the driver to implement the optimization you're talking about, it would have to, at vkCmdExecuteCommands time, introspect into each secondary command buffer and start ripping out anything that is duplicated across CB boundaries.
That might be viable if vkCmdExecuteCommands has to copy all of the commands out of secondary CBs into primary ones. But that would only be reasonable for systems where secondary CBs don't exist at a hardware level and thus have to be implemented by copying their commands into the primary CB. But even in this case, implementing such culling would make the command take longer to execute compared to simply copying some tokens into the primary CB's storage.
When dealing with a low-level API, do not assume that the driver is going to use information outside of its immediate purview to optimize your code. Especially when you have the tools for doing that optimization yourself.
This is (yet another) reason why you should not give each individual object its own CB.
Or is the order of execution of the secondary command buffers not guaranteed at all?
The order of execution of commands is unchanged by their presence in CBs. However, the well-defined nature of the state these commands use is affected.
Outside of state inherited by secondary CBs, every secondary CB's state begins undefined. That's why you have to bind a pipeline for each one. Commands that rely on previously issued state only have well-defined behavior if that previously issued state is within the CB containing that command (or is inherited state).

Consistently using the value of "now" throughout the transaction

I'm looking for guidelines for using a consistent value of the current date and time throughout a transaction.
By "transaction" I loosely mean an application service method; such methods usually execute a single SQL transaction, at least in my applications.
Ambient Context
One approach described in answers to this question is to put the current date in an ambient context, e.g. DateTimeProvider, and use that instead of DateTime.UtcNow everywhere.
However, the purpose of that approach is only to make the design unit-testable, whereas I also want to prevent errors caused by unnecessarily querying DateTime.UtcNow multiple times, for example:
// In an entity constructor:
this.CreatedAt = DateTime.UtcNow;
this.ModifiedAt = DateTime.UtcNow;
This code creates an entity with slightly differing created and modified dates, whereas one expects these properties to be equal right after the entity was created.
Also, an ambient context is difficult to implement correctly in a web application, so I've come up with an alternative approach:
Method Injection + DeterministicTimeProvider
The DeterministicTimeProvider class is registered as an "instance per lifetime scope" AKA "instance per HTTP request in a web app" dependency.
It is constructor-injected to an application service and passed into constructors and methods of entities.
The IDateTimeProvider.UtcNow property is used instead of the usual DateTime.UtcNow / DateTimeOffset.UtcNow everywhere to get the current date and time.
Here is the implementation:
/// <summary>
/// Provides the current date and time.
/// The provided value is fixed when it is requested for the first time.
/// </summary>
public class DeterministicTimeProvider : IDateTimeProvider
{
    private readonly Lazy<DateTimeOffset> _lazyUtcNow =
        new Lazy<DateTimeOffset>(() => DateTimeOffset.UtcNow);

    /// <summary>
    /// Gets the current date and time in the UTC time zone.
    /// </summary>
    public DateTimeOffset UtcNow => _lazyUtcNow.Value;
}
Is this a good approach? What are the disadvantages? Are there better alternatives?
Sorry for the logical fallacy of appeal to authority here, but this is rather interesting:
John Carmack once said:
There are four principle inputs to a game: keystrokes, mouse moves, network packets, and time. (If you don't consider time an input value, think about it until you do -- it is an important concept)
Source: John Carmack's .plan posts from 1998 (scribd)
(I have always found this quote highly amusing, because the suggestion that if something does not seem right to you, you should think of it really hard until it seems right, is something that only a major geek would say.)
So, here is an idea: consider time as an input. It is probably not included in the XML that makes up the web service request (you wouldn't want it to be, anyway), but in the handler where you convert the XML to an actual request object, obtain the current time and make it part of your request object.
So, as the request object is being passed around your system during the course of processing the transaction, the time to be considered as "the current time" can always be found within the request. So, it is not "the current time" anymore, it is the request time. (The fact that it will be one and the same, or very close to one and the same, is completely irrelevant.)
This way, testing also becomes even easier: you don't have to mock the time provider interface, the time is always in the input parameters.
Also, this way, other fun things become possible, for example servicing requests to be applied retroactively, at a moment in time which is completely unrelated to the actual current moment. Think of the possibilities. (Picture of SpongeBob-with-a-rainbow goes here.)
Hmmm.. this feels like a better question for CodeReview.SE than for StackOverflow, but sure - I'll bite.
Is this a good approach?
If used correctly, in the scenario you described, this approach is reasonable. It achieves the two stated goals:
Making your code more testable. This is a common pattern I call "Mock the Clock", and is found in many well-designed apps.
Locking the time to a single value. This is less common, but your code does achieve that goal.
What are the disadvantages?
Since you are creating another new object for each request, it will create a mild amount of additional memory usage and additional work for the garbage collector. This is somewhat of a moot point since this is usually how it goes for all objects with per-request lifetime, including the controllers.
There is a tiny fraction of time being added before you take the reading from the clock, caused by the additional work being done in loading the object and from doing lazy loading. It's negligible though - probably on the order of a few milliseconds.
Since the value is locked down, there's always the risk that you (or another developer who uses your code) might introduce a subtle bug by forgetting that the value won't change until the next request. You might consider a different naming convention. For example, instead of "now", call it "requestReceivedTime" or something like that.
Similar to the previous item, there's also the risk that your provider might be loaded with the wrong lifecycle. You might use it in a new project and forget to set the instancing, loading it up as a singleton. Then the values are locked down for all requests. There's not much you can do to enforce this, so be sure to comment it well. The <summary> tag is a good place.
You may find you need the current time in a scenario where constructor injection isn't possible - such as a static method. You'll either have to refactor to use instance methods, or will have to pass either the time or the time-provider as a parameter into the static method.
Are there better alternatives?
Yes, see Mike's answer.
You might also consider Noda Time, which has a similar concept built in, via the IClock interface, and the SystemClock and FakeClock implementations. However, both of those implementations are designed to be singletons. They help with testing, but they don't achieve your second goal of locking the time down to a single value per request. You could always write an implementation that does that though.
Code looks reasonable.
Drawback - most likely the lifetime of the object will be controlled by the DI container, and hence the user of the provider can't be sure that it will always be configured correctly (per-invocation, and not any longer lifetime like app/singleton).
If you have type representing "transaction" it may be better to put "Started" time there instead.
This isn't something that can be enforced with a real-time clock and a query, or by testing; a developer may still figure out some obscure way of reaching the underlying library call...
So don't do that. Dependency injection also won't save you here; the issue is that you want a standard pattern for time at the start of the 'session.'
In my view, the fundamental problem is that you are expressing an idea, and looking for a mechanism for that. The right mechanism is to name it, and say what you mean in the name, and then set it only once. readonly is a good way to handle setting this only once in the constructor, and lets the compiler and runtime enforce what you mean which is that it is set only once.
// In an entity constructor:
this.CreatedAt = DateTime.UtcNow;

How many threads can be used (by compiler) to initialise global objects (before function main)

The question may be worded wrongly, but the idea is simple. The order of initialisation of global objects in different translation units is not guaranteed. If an application consists of two translation units - can the compiler potentially generate initialisation code that will start two threads to create the global objects in those translation units? This question is similar to this or this, but I think the answers don't match the questions.
I will rephrase the question - if a program has several global objects, will their constructors always be called from the same thread (assuming that user code doesn't start any threads before main)?
To make this question more practical - consider a case where several global objects are scattered over several translation units and each of them increments a single global counter in its constructor. If there is no guarantee that the constructors are executed in a single thread, then access to the counter must be synchronised. To me that seems like definite overkill.
UPDATE:
It looks like concurrent initialisation of globals is possible - see n2660 paper (second case)
UPDATE:
I've implemented a singleton using a singleton_registry. The idea is that the declaration using SomeSingleton = singleton<SomeClass>; registers SomeClass for later initialisation in singleton_registry, and the real singleton initialisation (with all inter-dependencies) happens at the beginning of the main function (before I've started any other threads) by singleton_registry (which is created on the stack). In this case I don't need to use DCLP. It also allows me to prepare the application configuration and disseminate it over all singletons uniformly. Another important use case is using singletons with TDD. Normally it's a pain to use singletons in unit tests, but with singleton_registry I can recreate the application's global objects for each test case.
In fact this is just a development of the idea that all singletons must be initialised at the beginning of the main function. Normally that means the singleton user has to write a custom initialisation function which handles all dependencies and prepares the proper initialisation parameters (this is what I want to avoid).
The implementation looks good except for the fact that I may have potential race conditions during singleton registration.
Several projects ago, using vxWorks and C++, a teammate used a non-thread-safe pattern from the book (are any of those patterns thread-safe?):
10% of system starts failed.
Lesson 1: if you don't explicitly control the when of a CTOR of a global object, it can change from build to build. (and we found no way to control it.)
Lesson 2: Controlling the when of CTOR is probably the easiest solution to lesson 1 (should it become a problem).
In my experience, if you must have global objects (and it seems many would discourage them), consider limiting these globals to pointers initialized to 0:
GlobalObject* globalobject = nullptr;
And only the thread assigned to initialize it will do so. The other threads/tasks can spin-wait for access.

Using Critical Sections/Semaphores in C++

I recently started using C++ instead of Delphi.
And there are some things that seem to be quite different.
For example I don't know how to initialize variables like Semaphores and CriticalSections.
So far I only know two possible ways:
1. Initializing a Critical Section in the constructor is stupid since every instance would be using its own critical section without synchronizing anything, right?
2. Using a global var and initializing it when the form is created seems not to be the perfect solution, as well.
Can anyone tell me how to achieve this?
Just a short explanation of what I need the critical section for:
I'd like to fill a ListBox from different threads.
The semaphore:
Different threads are moving the mouse; this shouldn't be interrupted.
Thanks!
Unlike Delphi, C++ has no concept of unit initialization/finalization (but you already found that out).
What we are left with is very little. You need to distinguish two things:
where you declare your variable (global, static class member, class member, local to a function, static in a function -- I guess that covers it all)
where you initialize your variable (since you are concerned with a C API you have to call the initialization function yourself)
Fact is, in your case it hardly matters where you declare your variable as long as it is accessible to all the other parts of your program that need it, and the only requirement as to where you should initialize it is: before you actually start using it (which implies, before you start other threads).
In your case I would probably use a singleton pattern. Before C++11, C++ singletons suffered from a race condition during their initialization, with no clean way around it; since C++11, a function-local static is guaranteed to be initialized exactly once, even when several threads call the function concurrently. Either way, you should ensure that your singleton is created before you start using it in a multithreaded context. A simple call to getInstance() at the start of your main() will do the trick (or anywhere else you see fit). As you see, this takes care only of where you declare your variable, not where you initialize it.
To sum it up: just do what you want (as long as it works) and stop worrying.
In my opinion you only need a critical section to synchronize updates to the list box from the various threads; the mouse will keep moving, and a semaphore doesn't fit the problem. Initialize the critical section in the constructor of the class where the list box lives, and write a method to update the list box:
// Win32 sketch; call InitializeCriticalSection(&listBoxLock) once, in the constructor
CRITICAL_SECTION listBoxLock;

void UpdateListBox()
{
    EnterCriticalSection(&listBoxLock);
    // update the list box here
    LeaveCriticalSection(&listBoxLock);
}
All the threads will call this method to update the listbox.
Information about critical sections is here:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms683472%28v=vs.85%29.aspx

Is using clojure's stm as a global state considered a good practice?

In most of my Clojure programs, and in a lot of other Clojure programs I see, there is some sort of global variable holding an atom:
(def *program-state*
  (atom {:name "Program"
         :var1 1
         :var2 "Another value"}))
And this state would be referred to occasionally in the code.
(defn program-name []
  (:name @*program-state*))
Reading this article http://misko.hevery.com/2008/07/24/how-to-write-3v1l-untestable-code/ made me rethink global state, but somehow, even though I completely agree with the article, I think it's okay to use hash-maps in atoms because it provides a common interface for manipulating global state data (analogous to using different databases to store your data).
I would like some other thoughts on this matter.
This kind of thing can be OK, but it is also often a design smell, so I would approach it with caution.
Things to think about:
Consistency - can one part of the code change the program name? If so, the program-name function will behave inconsistently from the perspective of other threads. Not good!
Testability - is this easy to test? Can one part of the test suite that changes the program name safely run concurrently with another test that reads the name?
Multiple instances - will you ever have two different parts of the application expecting to use a different program-name at the same time? If so, this is a strong hint that your mutable state should not be global.
Alternatives to consider:
Using a ref instead of an atom, you can at least ensure consistency of mutable state within transactions
Using binding you can limit mutability to a per-thread basis. This solves most of the concurrency issues and can be helpful when your global variables are being used like a set of thread-local configuration parameters.
Using immutable global state wherever you can. Does it really need to be mutable?
I think having a single global state that is occasionally updated in commutative ways is fine. When you start having two global states that need to be updated and threads start using them for communication, then I start to worry.
Maintaining a count of current global users is fine:
Any thread can inc or dec this at any time without hurting another.
If it changes out from under your thread, nothing explodes.
Maintaining the log directory is questionable:
When it changes, will all threads stop writing to the old one?
If two threads change it, will they converge?
Using this as a message queue is even more dubious.
I think it is fine to have such a global state (and in many cases it is required), but I would be careful to have the core logic of my application in functions that take the state as a parameter and return the updated state, rather than accessing the global state directly. Basically, I would prefer controlled access to the global state from a small set of functions, with everything else in the program going through that set of functions to reach the state. That would allow me to abstract away the state implementation: initially I could start with an in-memory atom, then maybe move to some persistent storage.