I'm trying to model a hotel room booking system with the actor pattern. Just for information, I'm using Akka.NET in .NET.
So far I have created the following actors:
1. HotelActor
2. RoomsActor (an aggregate of RoomActor)
3. BookingsActor (an aggregate of BookingActor)
4. EmployeesActor (an aggregate of EmployeeActor)
5. UIActor
Currently, as planned, I am creating the system as follows:
1. The UIActor takes the booking information (check-in, check-out, number of rooms).
2. It tells the BookingsActor about the information.
3. The BookingsActor creates/starts a new BookingActor and passes on the information.
4. On start, the BookingActor will:
4a. Schedule rooms for the booking.
4b. Tell the rooms about the schedule so that they block themselves.
4c. Schedule employee tasks for the rooms.
4d. Tell the selected employees about their tasks.
4e. Tell the system that the booking has been created.
4f. Tell the BookingsActor to restart the BookingActor at a specific time in
the future (24 hours before the actual check-in), then shut down.
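The steps above can be sketched as a message protocol. The following is a minimal sketch in plain Python (not Akka.NET C#); all message and field names are illustrative assumptions, not part of any framework:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical message types for the flow above.

@dataclass
class CreateBooking:          # UIActor -> BookingsActor (step 2)
    check_in: date
    check_out: date
    num_rooms: int

@dataclass
class BlockRooms:             # BookingActor -> rooms (step 4b)
    booking_id: str
    room_ids: List[str]
    check_in: date
    check_out: date

@dataclass
class AssignTask:             # BookingActor -> employees (step 4d)
    booking_id: str
    employee_id: str
    task: str

@dataclass
class BookingCreated:         # BookingActor -> system (step 4e)
    booking_id: str
    room_ids: List[str]

@dataclass
class WakeBookingAt:          # BookingActor -> BookingsActor (step 4f)
    booking_id: str
    wake_time: date           # 24 hours before check-in
```

Making each step an explicit, immutable message keeps the BookingActor's behavior testable independently of the actor framework.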
The problems I'm facing are:
1. How to keep the UIActor in sync with the booking information.
2. The UIActor should also be able to save information about multiple bookings (for a particular customer, etc.). How and where should that be done within the actor pattern?
3. Let's say I want information about multiple bookings from Date1 to Date2. Where should I persist this information so it can be retrieved later?
ad 1. You can see an example of the synchronized dispatcher in the Akka.NET Bootcamp; example source:
dispatcher = akka.actor.synchronized-dispatcher
# causes ChartingActor to run on the UI thread for WinForms
ad 2. The UI actor should only collect data from the user and display results; all other actions should be pushed down for processing to, say, a storage actor.
ad 3. You need a storage provider; that can be MongoDB or a SQL solution. By passing a message to the storage actor you can persist reservation data, or retrieve it when needed.
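The storage-actor idea can be sketched as follows. This is a thread-and-queue stand-in for a real Akka.NET actor, with an in-memory dict standing in for MongoDB/SQL; all names are illustrative:

```python
import queue
import threading
from dataclasses import dataclass
from datetime import date

# Hypothetical messages handled by the storage actor.

@dataclass
class PersistBooking:
    booking_id: str
    check_in: date

@dataclass
class QueryBookings:          # "bookings from Date1 to Date2"
    date_from: date
    date_to: date
    reply_to: queue.Queue     # where to send the answer

class StorageActor(threading.Thread):
    """Processes its inbox one message at a time, like an actor."""

    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()
        self._bookings = {}   # in-memory stand-in for MongoDB/SQL

    def run(self):
        while True:
            msg = self.inbox.get()
            if isinstance(msg, PersistBooking):
                self._bookings[msg.booking_id] = msg
            elif isinstance(msg, QueryBookings):
                hits = [b for b in self._bookings.values()
                        if msg.date_from <= b.check_in <= msg.date_to]
                msg.reply_to.put(hits)
```

Because the inbox is processed sequentially, a query sent after two persists is guaranteed to see both, which is the consistency property the actor model gives you for free.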
I'm building an Angular 11 web app using AppSync for the backend.
I've mentioned group chat, but basically my app has an announcement feature: a person creates announcements for a specific audience (individual members or groups of members). Whenever a receiving user opens an announcement, it has to be marked as read for that user in their UI, and the sender must also be notified that it has been opened by that particular member.
I have an idea for implementing this:
Each announcement has a "seenBy" attribute which aggregates the user IDs of those who have opened it.
Each member also has an attribute in their user object named "announcementsRead", an array of IDs of the announcements they have opened.
When gathering the list of announcements for the user in the UI, the ones whose ID is not in the member's own announcementsRead array are marked as unread.
When an announcement is clicked and opened, I make two updates: a) on the announcement object, I push the member's user ID onto the "seenBy" attribute and write it to the DB; b) on the member's user object, I add the announcement's ID to the "announcementsRead" attribute and write it to the DB.
This is just something that I came up with.
Please let me know if there are any pitfalls to this approach. Or if there are simpler ways to achieve this functionality.
I have a few concerns as well:
Let's say two users open an announcement at the same time, and both clients try to update it with an updated seenBy containing their own user ID. What happens when the two requests happen concurrently? The first user may fetch the object, then the second user fetches it immediately after; by the time the second user writes the updated attribute back to the DB, the first user has already written theirs, so the second user's write overwrites the first user's change. I am not sure of the internal mechanisms of the Amplify DataStore, but I can imagine this happening. Is it possible? If so, how do we prevent it?
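The lost update described here is real whenever a client writes the whole object back. One common fix is optimistic concurrency: each record carries a version, and a write is rejected if the version changed since the read (AppSync's version-based conflict detection and DynamoDB's atomic set `ADD` play this role server-side; those are assumptions about the backing store, so the sketch below simulates the idea in plain Python):

```python
# Simulated lost-update scenario and an optimistic-concurrency fix.
# The "_version" field and the conditional write stand in for what a
# versioned datastore would do server-side.

class VersionConflict(Exception):
    pass

announcement = {"id": "a1", "seenBy": set(), "_version": 1}

def read(doc):
    # Simulate a client fetching the record (a snapshot copy).
    return {"seenBy": set(doc["seenBy"]), "_version": doc["_version"]}

def write_seen_by(doc, snapshot, user_id):
    # Conditional write: fails if someone else updated in between.
    if doc["_version"] != snapshot["_version"]:
        raise VersionConflict
    doc["seenBy"] = snapshot["seenBy"] | {user_id}
    doc["_version"] += 1

# Two clients read the same version...
snap1 = read(announcement)
snap2 = read(announcement)
write_seen_by(announcement, snap1, "user-1")      # succeeds
try:
    write_seen_by(announcement, snap2, "user-2")  # rejected
except VersionConflict:
    snap2 = read(announcement)                    # re-read and retry
    write_seen_by(announcement, snap2, "user-2")
```

With blind read-modify-write, the second write would have silently dropped "user-1"; with the version check, the conflicting writer retries and both IDs survive.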
Is it really necessary to maintain the "announcementsRead" attribute on the user? I could generate that list in the UI each time I fetch the announcements, by checking whether the current user's ID exists in each announcement's "seenBy". That would eliminate redundant information in the DB, and would also avoid accumulating IDs of very old announcements that may have been deleted. But I'm wondering whether having it on the member actually helps in an indispensable way.
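Deriving the read flag from "seenBy" alone, as suggested here, is a one-liner. A minimal sketch (the record shapes are illustrative, not the actual AppSync schema):

```python
# Derive per-user read/unread flags in the client from "seenBy" only,
# making a separate per-user "announcementsRead" array unnecessary.

def with_read_flags(announcements, current_user_id):
    """Return copies of the announcements with a derived 'read' flag."""
    return [dict(a, read=current_user_id in a.get("seenBy", []))
            for a in announcements]
```

The trade-off: this keeps a single source of truth, but finding "all announcements unread by user X" server-side then requires scanning seenBy rather than reading one array off the user record.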
Hope my questions are clear.
I have a scenario whereby, if part of a query matches an event, I want to fetch some other events from a datastore to test against the rest of the query.
E.g. "if JANE DOE buys from my store, did she buy anything else over the last 3 years?" sort of thing.
Does Flink, Storm or WSO2 provide support for such complex event processing?
Flink can do this, but it would require that you process all events starting from the earliest that you care about (e.g. 3 years ago), so that you can construct the state for each customer. Flink then lets you manage this state (typically with RocksDB) so that you wouldn't have to replay all the events in the face of system failures.
If you can't replay all of the history, then typically you'd put this into some other store (Cassandra/HBase, Elasticsearch, etc) with the scalability and performance characteristics you need, and then use Flink's async function support to query it when you receive a new event.
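The keyed-state approach described in this answer can be sketched language-agnostically. In Flink this per-customer state would live in a keyed state backend (typically RocksDB) inside a KeyedProcessFunction; the plain-Python dict below is just a stand-in to show the shape of the logic:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Per-customer purchase history, accumulated by replaying events.
# In Flink this would be keyed state, fault-tolerant via checkpoints.

THREE_YEARS = timedelta(days=3 * 365)
purchases_by_customer = defaultdict(list)

def on_purchase(customer, item, ts):
    """Handle one event: return this customer's earlier purchases
    within the window, then record the new purchase."""
    history = purchases_by_customer[customer]
    # Evict entries older than the window, as a state TTL would.
    history[:] = [(i, t) for (i, t) in history if ts - t <= THREE_YEARS]
    earlier = list(history)
    history.append((item, ts))
    return earlier
```

When the history is too large to rebuild from the stream, the same `on_purchase` lookup becomes an async query against the external store (Cassandra, Elasticsearch, etc.), as the answer notes.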
WSO2 Stream Processor lets you implement such functionality with its incremental analytics feature. To implement the scenario you've mentioned, you can feed the events that are triggered when a customer arrives into a construct called an 'aggregation'. As you keep feeding events to an aggregation, it summarizes the data over time and saves it in a configured persistence store such as a DB.
You can query this aggregation to get the state for a given period of time. For example, the following query fetches the name, total items bought, and average transaction value within the years 2014 to 2015:
from CustomerSummaryRetrievalStream as b join CustomerAggregation as a
on a.name == b.name
within "2014-01-01 00:00:00 +05:30", "2015-01-01 00:00:00 +05:30"
per "years"
select a.name, a.total, a.avgTxValue
insert into CustomerSummaryStream;
I have an event sourced system that runs on a server with clients that need to work offline from time to time. To make this work I have the domain events streamed from the server to the client when online so that the offline database is up to date in case the client goes offline. This works just fine.
When offline the user might add a new customer with the following sequence...
Add new customer command.
Customer aggregate added.
Customer aggregate creates initial appointment aggregate.
Query of read data returns new appointment details.
Command used to modify the appointment.
When back online, I cannot replay the events to the server. Adding the new customer is fine, but the resulting new appointment has an identifier I don't know about, so replaying the appointment-update command fails because I have no idea what the correct appointment ID should be.
Any ideas?
You need to review Greg Young's talk "CQRS, not just for server systems".
Also the Stack Overflow question "Occasionally Connected CQRS Systems", and the dddcqrs topic "Merging Events in Occasionally Connected Clients".
I have no idea what the correct appointment id should be
Generate the ids when you generate the commands; you'll know what the appointment id is going to be, because you told the customer aggregate what id to use when creating the appointment.
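Concretely, the client draws the IDs (e.g. UUIDs) when it builds the commands, offline, so every later command already knows the appointment ID. A minimal sketch (command names are illustrative):

```python
import uuid
from dataclasses import dataclass

# Client-generated ids: the offline command log never has to wait for
# the server to tell it what id an aggregate got.

@dataclass(frozen=True)
class AddCustomer:
    customer_id: str
    appointment_id: str   # the id the customer aggregate must use
    name: str

@dataclass(frozen=True)
class ModifyAppointment:
    appointment_id: str
    new_time: str

def new_customer_commands(name, new_time):
    customer_id = str(uuid.uuid4())
    appointment_id = str(uuid.uuid4())   # chosen client-side, offline
    return [
        AddCustomer(customer_id, appointment_id, name),
        ModifyAppointment(appointment_id, new_time),  # id already known
    ]
```

When the client reconnects, both commands replay against the server unchanged, because the appointment ID was never invented server-side.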
I'm working on implementing a RESTful API for my web application, which is in PHP. I'm facing a problem deciding whether it is acceptable to create multiple different types of objects with a single POST call. My scenario is as follows.
1. The addEmployee API service function allows clients to create an employee inside my application by passing the data as POST parameters.
2. Employee has two dependencies in my system, Job Title and Employment Status, which are separately saved objects. So the client has to pass the Job Title name and Employment Status name along with the addEmployee POST call.
3. When a client calls the addEmployee method, it internally checks whether the given Job Title and Employment Status already exist in the system, and if so it only adds references to those existing objects within the Employee object.
4. If the given Job Title or Employment Status is not in the system, the addEmployee method will first save the Job Title and Employment Status objects and then add references to them in the Employee object.
5. There are separate API functions, addJobTitle and addEmploymentStatus, which clients can use to add more Job Titles and Employment Statuses to the system.
In the above workflow I'm not sure whether the 4th step is correct, because the internal save operation is not visible to the client, which reduces the visibility of the API. But usability-wise it's good, because the client can add an Employee with at most one web service call.
I could replace the 4th step as follows to improve visibility:
If the given Job Title or Employment Status is not in the system, the addEmployee method returns an error saying they are not available, and along with that response provides URIs to the addJobTitle and addEmploymentStatus functions, so clients can use those URIs to save the Job Titles and Employment Statuses first. After saving them, the client can call addEmployee again to add the employee with the given Job Title and Employment Status.
The 2nd approach improves the visibility of the API, but performance-wise and usability-wise it is less effective, because the client may have to call the API up to 3 times to add an Employee.
Please advise me which approach I should follow to resolve this issue.
I think the 4th step you are attempting is valid without any changes, and it is also the recommended approach.
If you consider Job Title and Employment Status, both of them are related to the Employee.
The best approach is to not expose methods in your API to add Job Titles and Employment Statuses. Because if you do, clients can keep creating them: for example, one can create the Job Title "Software Engineer" and another "SW Engineer", and before you know it you have hundreds of Job Titles. The same applies to Employment Status.
Listing methods for Job Title and Employment Status may suffice, with backend provisioning of the values (SQL or manual insert, or admin-only insert).
Finally, as you mentioned, you can reduce the number of calls and the bandwidth used, which is crucial if the API is to be used by, for example, mobile apps over wireless networks.
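The recommended 4th step amounts to a server-side get-or-create on the dependencies, invisible to the client. A sketch of the idea (the original is PHP; this Python version with dict-backed storage and normalized lookup keys is purely illustrative):

```python
# addEmployee resolves Job Title / Employment Status server-side,
# creating them only if missing, so the client needs a single call.
# Normalizing the key reduces (but cannot eliminate) near-duplicates
# like "Software Engineer" vs "SW Engineer".

job_titles = {"software engineer": 1}        # name -> id (provisioned)
employment_statuses = {"full-time": 1}
_next_id = {"title": 2, "status": 2}

def _get_or_create(table, key, counter):
    if key not in table:
        table[key] = _next_id[counter]
        _next_id[counter] += 1
    return table[key]

def add_employee(name, job_title, employment_status):
    title_id = _get_or_create(job_titles,
                              job_title.strip().lower(), "title")
    status_id = _get_or_create(employment_statuses,
                               employment_status.strip().lower(), "status")
    # The employee stores references, not copies (steps 3 and 4 above).
    return {"name": name, "job_title_id": title_id,
            "employment_status_id": status_id}
```

In a real deployment the two lookups and the insert should run inside one transaction, so concurrent addEmployee calls can't create the same title twice.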
I am currently working on a specification for a software component which will synchronize the product catalog of an ecommerce company with the Amazon Marketplace using Amazon MWS.
According to the MWS developer documentation, publishing products requires submitting up to 6 different feeds, which are processed asynchronously:
Product Feed: defines SKUs and contains descriptive data for the products
Inventory Feed: sets quantities/availability for each SKU
Price Feed: sets prices for SKUs
Image Feed: product images for each SKU
Relationship Feed: defines mappings between parent SKUs (e.g. a T-Shirt) and child SKUs (e.g. T-Shirt in a concrete size and color which is buyable)
Override Feed
My question concerns the following passage in the MWS documentation:
The Product feed is the first step in setting up your products on
Amazon. All subsequent catalog feeds are dependent upon the success of
this feed.
I am wondering what this means. There are at least two possibilities:
Do you have to wait until the Product feed is successfully processed before submitting the subsequent feeds? That would mean requesting the processing state periodically until it is finished, which may take hours depending on the feed size and server load at Amazon, and would make the process of synchronizing products more complex.
Or can you send all the feeds immediately in one sequence, with Amazon taking care that they are processed in a reasonable order? In this interpretation, the documentation would just be stating the obvious: that the success of, say, image feed processing for a particular SKU depends on the success of inserting the SKU itself.
As I understand it, for all feeds other than the Product feed, the products in question must already be in the catalogue, so your first possibility is the correct one.
However, this should only affect you on the very first run of the Product feed or when you are adding a new product. Once a product is there, you can run the feeds in any order, unless you are using PurgeAndReplace of your entire catalogue each time, which is not recommended.
The way I would plan it is this.
1) Run a Product feed of the entire catalogue the very first time and wait for it to complete.
2) Run the other feeds in any order you like.
3) Changes to products already on Amazon can now be made in any order, e.g. you can run the Price feed before the Product feed if all you are doing is amending the description data, etc.
4) When you have to add a new product, make sure you run the Product feed first, then the other feeds.
If possible, I would create a separate process for adding new products. Also, I think it will help you if you only upload changes to products rather than the entire catalogue each time. It's a bit more work for you to determine what has changed but it will speed up the feed process and mean you're not always waiting for the product feed to complete.
Yes, the Product feed is the primary feed.
You need to wait until the Product feed has completed before sending out the other feeds.
When you send a Product feed, its processing status moves through:
1) _SUBMITTED_
2) _IN_PROGRESS_
3) _DONE_
You need to wait until the status changes to _DONE_.
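The submit-then-poll workflow can be sketched generically. The `get_feed_status` callable below stands in for a call to the MWS feed-status operation (GetFeedSubmissionList); the function signature, poll interval, and status set used here are assumptions for illustration:

```python
import time

# Poll the feed's processing status until it reaches a terminal state,
# then (and only then) submit the dependent feeds.

TERMINAL = {"_DONE_", "_CANCELLED_"}

def wait_until_done(get_feed_status, feed_id,
                    poll_seconds=60, max_polls=240):
    """get_feed_status(feed_id) -> status string, e.g. "_IN_PROGRESS_"."""
    for _ in range(max_polls):
        status = get_feed_status(feed_id)
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)   # MWS throttles status requests
    raise TimeoutError(f"feed {feed_id} did not finish in time")
```

A modest poll interval matters in practice, because the status operations are throttled; polling every minute for a few hours is typically within quota.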
Thanks.