DryIOC Sharing Singletons between Facades

I am working on a system that processes financial data for various financial instruments on several different timeframes.
For example:
EUR/USD
- m1 Timeframe (1 Minute)
- m5 Timeframe (5 Minute)
- m15 Timeframe (15 Minute)
GBP/USD
- m1 Timeframe (1 Minute)
- m5 Timeframe (5 Minute)
- m15 Timeframe (15 Minute)
The timeframes each have a fairly complex processing pipeline, and I'm using DryIOC to route data through the pipeline, using an EventAggregator as described here: DryIOC Event Aggregator
DryIOC is perfect for this as it's super fast and can keep up with the volume of data/events I need.
I have dependencies at the instrument level that need to be shared between the different timeframes of that instrument.
I also have global dependencies, such as a broker connection manager, that need to be shared across all instruments and all timeframes.
Containers are created at runtime; I may switch on/off different instruments and timeframes, and need to make a new container.
A Facade seems perfect for this: I can start with a global container, create a Facade for any instrument that gets activated, and from that container create a container for each timeframe. Resolutions in the Facade container use the local registrations defined there and fall back to the parent when unresolved.
However, as described in the docs, Facades have their own singletons, and when I try to resolve a global dependency from a Facade as a singleton, I get a new instance.
This test fails:
[Test]
public void Test()
{
    var globalContainer = new Container();
    globalContainer.Register<IGlobalDependency, GlobalDependency>(Reuse.Singleton);

    var EURUSD_Container = new Container(rules => rules.WithFallbackContainer(globalContainer));
    EURUSD_Container.Register<IInstrumentDependency, InstrumentDependency>(Reuse.Singleton);

    var EURUSD_Timeframe_1_Container = EURUSD_Container.CreateFacade();
    EURUSD_Timeframe_1_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.Singleton);

    var EURUSD_Timeframe_2_Container = EURUSD_Container.CreateFacade();
    EURUSD_Timeframe_2_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.Singleton);

    var globalfromTimeframe1 = EURUSD_Timeframe_1_Container.Resolve<IGlobalDependency>();
    var globalfromTimeframe2 = EURUSD_Timeframe_2_Container.Resolve<IGlobalDependency>();

    Assert.AreSame(globalfromTimeframe1, globalfromTimeframe2);
}
I've spent three days battling with Facades, Scopes, NamedScopes and combinations of all these things. Scopes don't work: if I open a scope for an instrument and then a scope for each timeframe within it, and resolve with InCurrentScope, I still get a new instance, because each timeframe lives in its own scope.
Named scopes didn't work either, because I only know the instrument names at runtime, and registrations added for new instruments and timeframes collided.
How can I keep subcontainers separate, but have them share singletons with their parents?
Update:
public void ScopeTest()
{
    var globalContainer = new Container();
    globalContainer.Register<IGlobalDependency, GlobalDependency>(Reuse.Singleton);

    var EURUSD_Container = globalContainer.OpenScope("EUR/USD");
    EURUSD_Container.Register<IInstrumentDependency, InstrumentDependency>(Reuse.InCurrentNamedScope("EUR/USD"), serviceKey: "EUR/USD");

    var EURUSD_Timeframe_1_Container = EURUSD_Container.OpenScope("m1");
    EURUSD_Timeframe_1_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.InCurrentNamedScope("m1"), serviceKey: "m1");

    var EURUSD_Timeframe_2_Container = EURUSD_Timeframe_1_Container.OpenScope("m5");
    EURUSD_Timeframe_2_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.InCurrentNamedScope("m5"), serviceKey: "m5");

    var USDJPY_Container = globalContainer.OpenScope("USD/JPY");
    EURUSD_Container.Register<IInstrumentDependency, InstrumentDependency>(Reuse.InCurrentNamedScope("USD/JPY"), serviceKey: "USD/JPY");

    var USDJPY_Timeframe_1_Container = USDJPY_Container.OpenScope("m1");
    USDJPY_Timeframe_1_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.InCurrentNamedScope("m1"), serviceKey: "m1");

    var USDJPY_Timeframe_2_Container = USDJPY_Container.OpenScope("m5");
    USDJPY_Timeframe_2_Container.Register<ITimeframeDependency, TimeframeDependency>(Reuse.InCurrentNamedScope("m5"), serviceKey: "m5");

    var globalfromEURUSDTimeframe1 = EURUSD_Timeframe_1_Container.Resolve<IGlobalDependency>();
    var globalfromEURUSDTimeframe2 = EURUSD_Timeframe_2_Container.Resolve<IGlobalDependency>();
    var globalfromUSDJPYTimeframe1 = EURUSD_Timeframe_1_Container.Resolve<IGlobalDependency>();
    var globalfromUSDJPYTimeframe2 = EURUSD_Timeframe_2_Container.Resolve<IGlobalDependency>();

    Assert.AreSame(globalfromEURUSDTimeframe1, globalfromEURUSDTimeframe2);
    Assert.AreSame(globalfromUSDJPYTimeframe1, globalfromUSDJPYTimeframe2);
    Assert.AreSame(globalfromEURUSDTimeframe1, globalfromUSDJPYTimeframe2);
}
Produces the following exception:
DryIoc.ContainerException: Unable to register service Namespace.ITimeframeDependency - {DI=25, ImplType="Namespace.TimeframeDependency", Reuse=CurrentScopeReuse {Name="m1", Lifespan=100}} with duplicate key [m1]. Already registered service with same key is {ID=22.... etc.... Name="m1"
On a side note, it's annoying that you can't copy exceptions from the Visual Studio test runner.

I managed to achieve this in the end by using WithRegistrationsCopy()
e.g.
_localContainer = container.WithRegistrationsCopy();
This lets me resolve top-level singleton instances in subcontainers, while still having subcontainer-specific registrations that other containers don't know about.
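For illustration, here is a minimal sketch of the failing test rewritten with WithRegistrationsCopy() (type names are the ones from the test above; whether the copy shares the parent's singleton scope should be verified against your DryIoc version, but it matches the behaviour described here):

[Test]
public void RegistrationsCopyTest()
{
    var globalContainer = new Container();
    globalContainer.Register<IGlobalDependency, GlobalDependency>(Reuse.Singleton);

    // Each timeframe container gets a copy of the parent's registrations,
    // plus its own local ones that the other containers never see.
    var timeframe1 = globalContainer.WithRegistrationsCopy();
    timeframe1.Register<ITimeframeDependency, TimeframeDependency>(Reuse.Singleton);

    var timeframe2 = globalContainer.WithRegistrationsCopy();
    timeframe2.Register<ITimeframeDependency, TimeframeDependency>(Reuse.Singleton);

    // The copied registration resolves against the shared singleton scope,
    // so the global dependency is the same instance in both subcontainers.
    Assert.AreSame(timeframe1.Resolve<IGlobalDependency>(),
                   timeframe2.Resolve<IGlobalDependency>());
}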


Aggregate root invariant enforcement with application quotas

The application I'm working on needs to enforce the following rules (among others):
We cannot register a new user to the system if the active user quota for the tenant is exceeded.
We cannot make a new project if the project quota for the tenant is exceeded.
We cannot add more multimedia resources to any project that belongs to a tenant if the maximum storage quota defined in the tenant is exceeded
The main entities involved in this domain are:
Tenant
Project
User
Resource
As you can imagine, these are the relationship between entities:
Tenant -> Projects
Tenant -> Users
Project -> Resources
At first glance, it seems the aggregate root that will enforce those rules is the tenant:
class Tenant
  attr_accessor :users
  attr_accessor :projects

  def register_user(name, email, ...)
    raise QuotaExceededError if active_users.count >= @users_quota
    User.new(name, email, ...).tap do |user|
      active_users << user
    end
  end

  def activate_user(user_id)
    raise QuotaExceededError if active_users.count >= @users_quota
    user = users.find { |u| u.id == user_id }
    user.activate
  end

  def make_project(name, ...)
    raise QuotaExceededError if projects.count >= @projects_quota
    Project.new(name, ...).tap do |project|
      projects << project
    end
  end

  ...

  private

  def active_users
    users.select(&:active?)
  end
end
So, in the application service, we would use this as:
class ApplicationService
  def register_user(tenant_id, *user_attrs)
    transaction do
      tenant = tenants_repository.find(tenant_id, lock: true)
      tenant.register_user(*user_attrs)
      tenants_repository.save!(tenant)
    end
  end

  ...
end
The problem with this approach is that the aggregate root is huge, because it needs to load all users, projects and resources, which is not practical. It also carries heavy concurrency penalties.
An alternative would be (I'll focus on user registration):
class Tenant
  attr_accessor :total_active_users

  def register_user(name, email, ...)
    raise QuotaExceededError if total_active_users >= @users_quota
    # total_active_users += 1 maybe makes sense, although this field won't be persisted
    User.new(name, email, ...)
  end
end
class ApplicationService
  def register_user(tenant_id, *user_attrs)
    transaction do
      tenant = tenants_repository.find(tenant_id, lock: true)
      user = tenant.register_user(*user_attrs)
      users_repository.save!(user)
    end
  end

  ...
end
The case above uses a factory method in Tenant that enforces the business rules and returns the User aggregate. The main advantage over the previous implementation is that we don't need to load all users (projects and resources) in the aggregate root, only their counts. Still, for any new resource, user or project we want to add/register/make, we potentially have concurrency penalties due to the lock acquired. For example, if I'm registering a new user, we cannot make a new project at the same time.
Note also that we acquire a lock on Tenant even though we are not changing any state in it, so we don't call tenants_repository.save. The lock is used as a mutex, and we cannot take advantage of optimistic concurrency unless we decide to save the tenant (detecting a change in the total_active_users count), so that we can update the tenant version and raise an error for other concurrent changes if the version has changed, as usual.
Ideally, I'd like to get rid of those methods in the Tenant class (they also prevent us from splitting some pieces of the application into their own bounded contexts) and enforce the invariant rules in some other way that doesn't have a big concurrency impact on other entities (projects and resources), but I don't really know how to prevent two users from being registered simultaneously without using that Tenant as the aggregate root.
I'm pretty sure this is a common scenario that must have a better way to be implemented than my previous examples.
A common search term for this sort of problem: Set Validation.
If there is some invariant that must always be satisfied for an entire set, then that entire set is going to have to be part of the "same" aggregate.
Often, the invariant itself is the bit that you want to push on; does the business need this constraint strictly enforced, or is it more appropriate to loosely enforce the constraint and charge a fee when the customer exceeds its contracted limits?
With multiple sets -- each set needs to be part of an aggregate, but they don't necessarily need to be part of the same aggregate. If there is no invariant that spans multiple sets, then you can have a separate aggregate for each. Two such aggregates may be correlated, sharing the same tenant id.
It may help to review Mauro Servienti's talk All our aggregates are wrong.
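To make the separate-aggregates idea concrete, here is a hedged sketch (all type names invented for illustration) of two aggregates correlated only by tenant id, each guarding its own counted set, so registering a user and creating a project no longer contend on one big Tenant lock:

// Hypothetical C# sketch: one small aggregate per set, correlated by TenantId.
public class QuotaExceededException : Exception { }

public class TenantUsers // guards only the set of users for one tenant
{
    public int TenantId { get; }
    public int ActiveUsers { get; private set; }
    public int UsersQuota { get; }

    public TenantUsers(int tenantId, int activeUsers, int usersQuota)
    {
        TenantId = tenantId; ActiveUsers = activeUsers; UsersQuota = usersQuota;
    }

    public void RegisterUser()
    {
        if (ActiveUsers >= UsersQuota) throw new QuotaExceededException();
        ActiveUsers++; // precomputed count, persisted with the aggregate
    }
}

public class TenantProjects // a separate aggregate guards the set of projects
{
    public int TenantId { get; }
    public int Projects { get; private set; }
    public int ProjectsQuota { get; }

    public TenantProjects(int tenantId, int projects, int projectsQuota)
    {
        TenantId = tenantId; Projects = projects; ProjectsQuota = projectsQuota;
    }

    public void MakeProject()
    {
        if (Projects >= ProjectsQuota) throw new QuotaExceededException();
        Projects++;
    }
}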
An aggregate should be just an element that checks rules. It can be anything from a stateless static function to a full-state complex object; it does not need to match your persistence schema, your "real life" concepts, how you modeled your entities, or how you structure your data or views. You model the aggregate with just the data you need to check the rules, in the form that suits you best.
Do not be afraid of precomputing values and persisting them (total_active_users in this case).
My recommendation is to keep things as simple as possible and refactor (which could mean splitting, moving and/or merging things) later; once you have all the behavior modelled, it is easier to rethink and analyze for refactoring.
This would be my first approach without event sourcing:
TenantData { // just the data the aggregate needs from persistence
    int id;
    int total_active_users;
    int quota;
}

UserEntity { // the User entity
    int id;
    string name;
    date birthDate;
    // other data and/or behaviour
}

public class RegistrationAggregate {
    private TenantData fromTenant; // data from persistence

    public RegistrationAggregate(TenantData fromTenant) { // ctor
        this.fromTenant = fromTenant;
    }

    public UserRegisteredEvent registerUser(UserEntity user) {
        if (fromTenant.total_active_users >= fromTenant.quota) throw new QuotaExceededException();
        fromTenant.total_active_users++; // increase active users
        return new UserRegisteredEvent(fromTenant, user); // return system changes expressed as an event
    }
}

RegisterUserCommand { // command structure
    int tenantId;
    UserData userData; // id, name, surname, birthDate, etc.
}

class ApplicationService {
    public void registerUser(RegisterUserCommand registerUserCommand) {
        var user = new UserEntity(registerUserCommand.userData); // avoid wrong entity state; ctor fails if some data is incorrect
        RegistrationAggregate agg = aggregatesRepository.Handle(registerUserCommand); // Handle is overloaded for every command we need. Use registerUserCommand.tenantId to bring total_active_users and quota from persistence, and create a RegistrationAggregate fed with TenantData
        var userRegisteredEvent = agg.registerUser(user);
        persistence.Handle(userRegisteredEvent); // Handle is overloaded for every event we need: open a transaction, persist userRegisteredEvent.fromTenant.total_active_users for tenantId (optimistic concurrency fails if total_active_users has changed since we read it, rolling back the transaction), persist userRegisteredEvent.user in relationship with tenantId, commit the transaction
        eventBus.publish(userRegisteredEvent); // notify external sources for eventual consistency
    }
}
Read this and this for an expanded explanation.

Is there an equivalent way of doing 'lb soap' programmatically with loopback?

According to the LoopBack documentation, lb soap creates models for an underlying SOAP-based datasource. Is there a programmatic way to do this? I want to do it programmatically to facilitate dynamic SOAP consumption through dynamically created models and datasources.
Disclaimer: I am a co-author and maintainer of LoopBack.
Here is the source code implementing the command lb soap:
soap/index.js
soap/wsdl-loader.js
Here is the code that's generating models definition and method source code:
exports.generateAPICode = function generateAPICode(selectedDS, operationNames) { // eslint-disable-line max-len
  var apis = [];
  var apiData = {
    'datasource': selectedDS,
    'wsdl': selectedWsdl,
    'wsdlUrl': selectedWsdlUrl,
    'service': selectedService.$name,
    'binding': selectedBinding.$name,
    'operations': getSelectedOperations(selectedBinding, operationNames),
  };
  var code = soapGenerator.generateRemoteMethods(apiData);
  var models = soapGenerator.generateModels(apiData.wsdl, apiData.operations);
  var api = {
    code: code,
    models: models,
  };
  apis.push(api);
  return apis;
};
As you can see, most of the work is delegated to soapGenerator, which refers to loopback-soap, a lower-level module also maintained by the LoopBack team. In your application, you can use loopback-soap directly (no need to depend on our CLI tooling) and call its API to generate SOAP-related models.
Unfortunately we don't have much documentation for loopback-soap since it has been mostly an internal module so far. You will have to read the source code to build a better understanding. If you do so, then we would gladly accept contributions improving the documentation for future users.

Control number of active actors of a type

Is it possible to control the number of active actors in play? In a nutshell, I have an actor called AuthoriseVisaPaymentActor which handles the message VisaPaymentMessage. I have a parallel loop which sends 10 messages, but I am trying to create something which allows 3 actors to work simultaneously while the other 7 messages are blocked, waiting for an actor to become available. Is this possible? I am currently using a RoundRobin setup which I believe I have misunderstood.
var actor = sys.ActorOf(
Props.Create<AuthoriseVisaPaymentActor>().WithRouter(new RoundRobinPool(1)));
actor.Tell(new VisaPaymentMessage(curr.ToString(), 9.99M, "4444"));
To set up round robin groups, you need to specify the actor paths to use. This can be done either statically in your HOCON settings or dynamically in code (see below). As for messages being blocked, Akka's mailboxes already do that for you; an actor won't process a new message until the one it is currently processing has been handled. Messages are simply held in the queue until the actor is ready to handle them.
// Setup the three actors
var actor1 = sys.ActorOf(Props.Create<AuthoriseVisaPaymentActor>());
var actor2 = sys.ActorOf(Props.Create<AuthoriseVisaPaymentActor>());
var actor3 = sys.ActorOf(Props.Create<AuthoriseVisaPaymentActor>());
// Get their paths
var routees = new[] { actor1.Path.ToString(), actor2.Path.ToString(), actor3.Path.ToString() };
// Create a new actor with a router
var router = sys.ActorOf(Props.Empty.WithRouter(new RoundRobinGroup(routees)));
router.Tell(new VisaPaymentMessage(curr.ToString(), 9.99M, "4444"));
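If you don't need to create and manage the routee actors yourself, a pool router is arguably simpler: it creates and supervises the routees for you. A minimal sketch, reusing the actor and message types from the question (curr is the question's loop variable):

// A round robin *pool* creates its own routees, so three instances of
// AuthoriseVisaPaymentActor process messages concurrently while the
// remaining messages wait in the routees' mailboxes.
var router = sys.ActorOf(
    Props.Create<AuthoriseVisaPaymentActor>().WithRouter(new RoundRobinPool(3)),
    "payment-router");

for (var i = 0; i < 10; i++)
{
    router.Tell(new VisaPaymentMessage(curr.ToString(), 9.99M, "4444"));
}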

Entity Framework 6 and pessimistic concurrency

I'm working on a project to gradually phase out a legacy application.
In the process, as a temporary solution, we integrate with the legacy application through the database.
The legacy application uses transactions with serializable isolation level.
Because of the database integration with a legacy application, I am for the moment best off using the same pessimistic concurrency model and serializable isolation level.
These serializable transactions should not only be wrapped around the SaveChanges statement but should include some reads of data as well.
I do this by:
- creating a TransactionScope with serializable isolation level around my DbContext
- creating a DbContext
- doing some reads
- making some changes to objects
- calling SaveChanges on the DbContext
- committing the transaction scope (thus saving the changes)
I am under the impression that this wraps all my reads and writes into one serializable transaction and then commits.
I consider this a form of pessimistic concurrency.
However, this article, https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application, states that EF does not support pessimistic concurrency.
My questions are:
A: Does EF support my way of using a serializable transaction around reads and writes?
B: Does wrapping the reads and writes in one transaction guarantee that the data I read is not changed by others before the transaction commits?
C: This is a form of pessimistic concurrency, right?
One way to achieve pessimistic concurrency is to use something like this:
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable,
    Timeout = new TimeSpan(0, 0, 0, 10)
};

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{ /* ... stuff here ... */ }
In VS2017 it seems you have to right-click TransactionScope and let it add a reference to: Reference Assemblies\Microsoft\Framework.NETFramework\v4.6.1\System.Transactions.dll
However, if you have two threads attempt to increment the same counter, you will find one succeeds whereas the other throws a timeout after 10 seconds. The reason is that when they proceed to save changes, they both need to upgrade their lock to exclusive, but neither can, because the other transaction already holds a shared lock on the same row. SQL Server will detect the deadlock after a while and fail one of the transactions to resolve it. Failing one transaction releases its shared lock, and the second transaction can then upgrade its shared lock to an exclusive lock and proceed with execution.
The way out of this deadlocking is to provide a UPDLOCK hint to the database using something such as:
private static TestEntity GetFirstEntity(Context context)
{
    return context.TestEntities
        .SqlQuery("SELECT TOP 1 Id, Value FROM TestEntities WITH (UPDLOCK)")
        .Single();
}
This code came from Ladislav Mrnka's blog, which now appears to be unavailable. The other alternative is to resort to optimistic locking.
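For completeness, here is a minimal sketch of the optimistic alternative in EF6 (the Counter entity and Counters set are made up for illustration): a [Timestamp] rowversion column acts as the concurrency token, and a conflicting save surfaces as a DbUpdateConcurrencyException.

// Hypothetical entity; [Timestamp] marks a rowversion column that EF6
// uses as an optimistic concurrency token.
public class Counter
{
    public int Id { get; set; }
    public int Value { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// EF issues UPDATE ... WHERE Id = @id AND RowVersion = @original;
// zero rows affected means another transaction changed the row first.
using (var context = new Context())
{
    var counter = context.Counters.Single(c => c.Id == 1);
    counter.Value++;
    try
    {
        context.SaveChanges();
    }
    catch (DbUpdateConcurrencyException)
    {
        // Reload the current values and retry, or surface the conflict.
    }
}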
The document states that EF does not have built-in pessimistic concurrency support. But this does not mean you can't have pessimistic locking with EF. So YOU CAN HAVE PESSIMISTIC LOCKING WITH EF!
The recipe is simple:
- use transactions (not necessarily serializable, since that would lead to poor performance); read committed is OK to use... but it depends...
- do your changes, call DbContext.SaveChanges()
- lock your table: execute the T-SQL manually, or feel free to use the code attached below
- the given T-SQL command with those hints will keep the table locked for the duration of the transaction
- there's one thing you need to take care of: your loaded entities might be stale by the time you take the lock, so all entities from the locked table should be re-fetched (reloaded)
I did a lot of pessimistic locking, but optimistic locking is better. You can't go wrong with it.
A typical example where pessimistic locking can't help is a parent-child relation, where you might lock the parent and treat it like an aggregate (so you assume you are the only one with access to the child too). If another thread tries to access the parent object, it won't work (it will be blocked) until the first thread releases the lock on the parent table. But with an ORM, any other coder can load the child independently, and from that point two threads will make changes to the child object... With pessimistic locking you might mess up the data; with optimistic locking you'll get an exception, and you can reload valid data and try to save again...
So the code:
public static class DbContextSqlExtensions
{
    public static void LockTable<Entity>(this DbContext context) where Entity : class
    {
        var tableWithSchema = context.GetTableNameWithSchema<Entity>();
        context.Database.ExecuteSqlCommand(string.Format("SELECT null as dummy FROM {0} WITH (tablockx, holdlock)", tableWithSchema));
    }
}

public static class DbContextExtensions
{
    public static string GetTableNameWithSchema<T>(this DbContext context)
        where T : class
    {
        var entitySet = GetEntitySet<T>(context);
        if (entitySet == null)
            throw new Exception(string.Format("Unable to find entity set '{0}' in edm metadata", typeof(T).Name));
        var tableName = GetStringProperty(entitySet, "Schema") + "." + GetStringProperty(entitySet, "Table");
        return tableName;
    }

    private static EntitySet GetEntitySet<T>(DbContext context)
    {
        var type = typeof(T);
        var entityName = type.Name;
        var metadata = ((IObjectContextAdapter)context).ObjectContext.MetadataWorkspace;
        IEnumerable<EntitySet> entitySets;
        entitySets = metadata.GetItemCollection(DataSpace.SSpace)
            .GetItems<EntityContainer>()
            .Single()
            .BaseEntitySets
            .OfType<EntitySet>()
            .Where(s => !s.MetadataProperties.Contains("Type")
                || s.MetadataProperties["Type"].ToString() == "Tables");
        var entitySet = entitySets.FirstOrDefault(t => t.Name == entityName);
        return entitySet;
    }

    private static string GetStringProperty(MetadataItem entitySet, string propertyName)
    {
        MetadataProperty property;
        if (entitySet == null)
            throw new ArgumentNullException("entitySet");
        if (entitySet.MetadataProperties.TryGetValue(propertyName, false, out property))
        {
            string str = null;
            if (((property != null) &&
                (property.Value != null)) &&
                (((str = property.Value as string) != null) &&
                !string.IsNullOrEmpty(str)))
            {
                return str;
            }
        }
        return string.Empty;
    }
}
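To show how the recipe above fits together, here is a hedged usage sketch (the Context and TestEntity types are hypothetical, following the earlier snippets): open a transaction, take the table lock, re-fetch the rows you need, then change and save.

using (var context = new Context())
using (var transaction = context.Database.BeginTransaction())
{
    // Take the table lock first; tablockx + holdlock keep it for the
    // duration of the transaction.
    context.LockTable<TestEntity>();

    // Re-fetch after locking, since entities loaded earlier might be stale.
    var entity = context.TestEntities.Single(e => e.Id == 1);
    entity.Value++;

    context.SaveChanges();
    transaction.Commit(); // committing releases the table lock
}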

How to read all the items present in an AppFabric cache

I am trying to develop a tool (in Visual Studio 2010, C#) which can read all the items present in an AppFabric cache and store them in a table. I don't have to use PowerShell.
First I thought that if I could get all the regions present in the cache, I could use the DataCache.GetObjectsInRegion method to complete my task. But I was not able to get all the region names from the cache, as it does not show the user-defined region names, only the default ones, so now I am giving up on this approach.
Can anyone please guide me here? My main goal is to read all the items present in a cache.
There is no built-in method to list all items in the cache.
You're correct: it's possible to list all items using GetObjectsInRegion for a named cache. You first have to know all the region names (if used), or call GetSystemRegions to get all (default) system regions. A simple foreach will then let you list all items. When you put something into the cache without a region name, it is added to a system region.
Here is a basic example:
// Declare array for cache host(s).
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
servers[0] = new DataCacheServerEndpoint("YOURSERVERHERE", 22233);

// Set up the DataCacheFactory configuration.
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;
factoryConfig.SecurityProperties = new DataCacheSecurity(DataCacheSecurityMode.None, DataCacheProtectionLevel.None);

// Create a configured DataCacheFactory object.
DataCacheFactory mycacheFactory = new DataCacheFactory(factoryConfig);

// Get a cache client for the default cache.
DataCache myCache = mycacheFactory.GetDefaultCache(); // or change to mycacheFactory.GetCache(myNamedCache)

// Insert dummy test data.
myCache.Put("key1", "myobject1");
myCache.Put("key2", "myobject2");
myCache.Put("key3", "myobject3");

// List all items in the cache: the important part.
foreach (string region in myCache.GetSystemRegions())
{
    foreach (var kvp in myCache.GetObjectsInRegion(region))
    {
        Console.WriteLine("data item ('{0}','{1}') in region {2} of cache {3}", kvp.Key, kvp.Value.ToString(), region, "default");
    }
}