Sitecore workflow template questions

I am trying to find a way for a template that is shared across environments to use a different workflow in each environment.
For example, say I have a bike template shared between sites. One site stocks the bike in a warehouse, and a separate site is a store front that sells the bike. The approval process differs between these sites: the warehouse simply goes from Draft > Published, whereas the store front wants to check the details before displaying them to the customer, so it uses a Draft > Pending Approval > Publish workflow.
Say I already have a number of bikes defined in both sites. How can I make a change so that each site's bikes use a different workflow? If possible, I would like to avoid a solution that requires code.
I am guessing that I will need to duplicate the templates and have a separate one for each site (e.g. WH Bike and Sales Bike), which isn't really ideal either, as it means a lot of manual fixing of the existing workflow values.

Instead of using a separate workflow, it sounds like you just need a separate stage and action that is only available to your store front.
For example, your single workflow might look like this:
Stage 1: Draft
    Actions:
        Submit for Approval (secured to Store Front)
        Submit for Publish (secured to Warehouse)
Stage 2: Pending Approval
    (secured to Store Front, so as not to be visible to Warehouse)
Stage 3: Publish
If the only difference is the stages, you can definitely use security on a single workflow and flow each site's users through their own actions and stages.
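A rough sketch of how that single workflow could be laid out as items in the content tree, with read access denied per role (the role and item names here are made up):

    /sitecore/system/Workflows/Bike Workflow
        Draft                    (stage)
            Submit for Approval  (command; read denied for the Warehouse role)
            Submit for Publish   (command; read denied for the Store Front role)
        Pending Approval         (stage; read denied for the Warehouse role)
            Approve              (command; moves the item to Publish)
        Publish                  (final stage)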

You can approach this by using the Sitecore rules engine.
Take a look at the Dynamic Workflow module on the Sitecore Marketplace.
It should allow you to create the rules and execute the start-workflow action.
Taken from the module documentation:
Start workflow – moves item into a specified workflow and starts the workflow process. Example: a landing workflow used when item gets created but a specific workflow should be applied depending on item location in the content tree.
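Under that approach, a rule might read roughly like this (the site path and workflow name are made up, and the exact wording depends on the conditions and actions the module ships with):

    when an item is created
    where the item template is Bike
    and where the item path is under /sitecore/content/Storefront
    start the Storefront Approval workflow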


Release pipeline design idea

I want to build a release pipeline with two stages, Stage1 and Stage2. In each stage I need to deploy multiple customers. I am getting the customer information from a JSON file using a PowerShell script. Below are some of the questions I need to solve:
The above JSON file needs to be created dynamically using input from the customer. How do I get this input from the customer?
I am also planning to create variable groups to hold the constant data needed for each customer. Do I need to create a separate variable group for each customer?
Regarding your first question:
It is possible to allow variables to be provided by users when they launch the release job. As an alternative, you might consider creating an app in PowerApps and using the Azure DevOps connector that PowerApps provides to trigger the job. This will allow you to create a more user-friendly front-end.
Regarding your second question:
You don't necessarily need to, however you'll likely find it much easier if you use separate variable groups per customer.
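On the JSON side, a minimal sketch of the shape such a customers file might take and a loop over it (the layout and field names are assumptions, and although the asker's script is PowerShell, Python is used here just to show the shape):

    import json

    # customers.json is assumed to look like:
    # {"customers": [{"name": "Contoso", "stage": "Stage1", "region": "eu"}]}
    with open("customers.json") as f:
        config = json.load(f)

    for customer in config["customers"]:
        # One deployment step per customer; a real pipeline would invoke
        # its deployment task here instead of printing.
        print(f"Deploying {customer['name']} to {customer['stage']}")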

Using Amazon SWF and Reactive Web Application

I have a general question regarding Amazon SWF and a web application with a reactive style. For example, I have a shopping website where a user adds products to a cart, validates the quantity, and enters the shipping and billing address, followed by payment processing, order shipping, and tracking. If I implement a workflow for order fulfillment, how should this be designed in SWF? Does the order fulfillment workflow begin only after all inputs are received? How does this workflow notify the customer about the progress of the order process, any validation issues, etc.? How should this be distributed?
The simplest approach is to use SWF to perform backend order fulfillment, with a separate data store holding the order information and status. When an order is configured through the website, the data store is updated. Later, when the order is placed, a workflow instance is created for it. The workflow loads the information it needs from the data store using activities, updates the data store through activities as it progresses, and the website queries the order's status and other progress information from the data store.
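A minimal sketch of that pattern, with the SWF decider and worker plumbing omitted and all names (table, functions) hypothetical; sqlite3 stands in for the real data store:

    import sqlite3

    db = sqlite3.connect("orders.db")
    db.execute("""CREATE TABLE IF NOT EXISTS orders
                  (id TEXT PRIMARY KEY, details TEXT, status TEXT)""")

    def load_order(order_id):
        # Activity: the workflow loads order details from the shared store.
        row = db.execute("SELECT details, status FROM orders WHERE id = ?",
                         (order_id,)).fetchone()
        return {"details": row[0], "status": row[1]}

    def update_status(order_id, status):
        # Activity: the workflow writes progress back; the website polls
        # this same table to show the customer the order's current state.
        db.execute("UPDATE orders SET status = ? WHERE id = ?",
                   (status, order_id))
        db.commit()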
Another option is to use the execution state feature of SWF. See Exposing Execution State in the SWF Developer Guide.
Cadence (an open-sourced version of SWF) is going to add a query feature in the near future. It would allow synchronously querying the workflow state through the service API. It differs from execution state in that it would allow multiple query types and query parameters.

Microservices Architecture: Cross Service data sharing

Consider the following micro services for an online store project:
Users Service keeps account data about the store's users (including first name, last name, email address, etc.)
Purchase Service keeps track of details about users' purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of the purchasing user, purchased item title, and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to fetch data which the Purchase Service does not hold - for example, a user's full name.
The problem gets worse when trying to do more complicated things like search purchases by purchasing user name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service that consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given ids. That means that on every page load in the Purchase Service, I'll have to call the Users Service to get the right user names. Not ideal, but I can live with it.
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names, and returns all user details matching the criteria. At the Purchase Service, I would then map the relevant ids back to the right names and show them on the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is a sort of code smell? I would love to hear other solutions.
This seems to be a very common and central question when moving into microservices. I wish there was a good answer for that :-)
About the suggested pattern already mentioned here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you get data duplication, and you usually need some kind of event bus to share data across services.
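A minimal sketch of that denormalization, assuming a user event arrives from some bus (the event shape and table name are made up):

    import sqlite3

    # The Purchase Service keeps a local, denormalized copy of just the
    # user fields it needs for display and search.
    db = sqlite3.connect("purchases.db")
    db.execute("""CREATE TABLE IF NOT EXISTS user_names
                  (user_id TEXT PRIMARY KEY, full_name TEXT)""")

    def on_user_event(event):
        # Called for UserCreated/UserUpdated events from the event bus.
        db.execute("INSERT OR REPLACE INTO user_names (user_id, full_name) "
                   "VALUES (?, ?)",
                   (event["user_id"], event["full_name"]))
        db.commit()

    # Example event as it might arrive from the bus:
    on_user_event({"user_id": "42", "full_name": "Ada Lovelace"})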
There's another option, which is a sort of a take on the first - making the search itself as a separate service.
So in your example, you have the Users service for managing users. The Purchases service manages purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
It's totally fine to keep appropriate data in different databases; it's called Polyglot Persistence. Yes, you would want to keep user data and purchase data separate and use a message queue to sync them. Millions of users seems fine to me; that's a scalability concern, not a design issue ;-)
In the case of search, you probably want to search by more than just the username, right? So, if you use a message queue to update data between services, you can also easily route this data to Elasticsearch, for example. And from Elasticsearch's perspective, it doesn't really matter which field to index - username or product title.
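A minimal sketch of feeding such events into a search index with the v8-style Python Elasticsearch client (the index name and document shape are assumptions):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def index_purchase(purchase, full_name):
        # Store a denormalized search document combining purchase data
        # with the already-synced user name, so a search by user name
        # never needs a call back to the Users Service.
        es.index(index="purchases", id=purchase["id"], document={
            "user_name": full_name,
            "item_title": purchase["item_title"],
            "price": purchase["price"],
        })

    # Searching by user name is then a single query:
    # es.search(index="purchases", query={"match": {"user_name": "ada"}})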
I usually use both approaches. Sometimes I have another service sitting on top of x other services that combines their data. I don't really like this approach because it causes dependencies and coupling between services. So in general, in my recent projects we tried to stick to polyglot persistence.
Also consider that if you need x sub-requests over HTTP to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything possible through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of each module to others. In contrast, the modules in a manufacturing process have access to change data without possessing and controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what you need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files or database tables. My architecture shows how to accomplish this on memory via reference properties.
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", you need to be aware that this creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product-users.

How do I update web API resources from the client while also reacting in the backend?

How do you update (RESTful) resources in a web API from the client, when you also need the backend to take actions regarding these changes?
Let's say I have a RESTful web API with the following resources:
Worker - id, first_name, last_name, ...
Project - id, title, due_date, ..., worker [ref to Worker]. A project can exist only if it belongs to a worker.
In the client (which is typically a mobile app), users can retrieve a list of some worker's projects. They can then modify each project in that list (update), delete, or create new ones.
Changes must take place locally on the client side until a "send" command is dispatched; only then should the server receive the updates. Kind of like a classic form use case.
The tricky part:
I need the backend to take actions according to each change, both individually and also as a whole. For example:
A user retrieved some worker's projects list, deleted a project, and also updated the due_date of another.
In response to these changes, the backend needs to both send push notifications to all of that project's members and recalculate the relevant worker's priorities according to the total change in their projects (one was deleted, another got postponed...).
How do I achieve this in the best way?
If I update/delete/create each project by itself (with separate POSTs, PUTs, and DELETEs), when will the backend do the overall recalculation task?
If I update them all together as a bulk (with PUT), the backend will then need to work out what exactly changed (which projects were deleted, which were modified...), which is a hard chore.
Another option I heard of is to create a third utility resource, something like a "WorkerProjectUpdater", that holds the changes that need to be made, like transactions, and then have a "daemon" go through it and actually commit the changes. This is also hard to achieve, as in the real story there are many, many types of modifications, and it would be quite complex to create a resource (with a model and DB records) for every type of change.
I'm using Django with Django Rest Framework for that web service.
Appreciate your help!
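For the third option above, a minimal sketch of a batch endpoint in Django Rest Framework that applies a list of operations and then triggers one recalculation (the Project model, URL wiring, and recalculate_priorities helper are all hypothetical):

    from django.db import transaction
    from rest_framework.views import APIView
    from rest_framework.response import Response

    from myapp.models import Project                 # hypothetical model
    from myapp.tasks import recalculate_priorities   # hypothetical helper

    class ProjectBatchView(APIView):
        def post(self, request, worker_id):
            operations = request.data.get("operations", [])
            with transaction.atomic():
                for op in operations:
                    if op["action"] == "create":
                        Project.objects.create(worker_id=worker_id,
                                               **op["fields"])
                    elif op["action"] == "update":
                        Project.objects.filter(
                            id=op["id"], worker_id=worker_id
                        ).update(**op["fields"])
                    elif op["action"] == "delete":
                        Project.objects.filter(
                            id=op["id"], worker_id=worker_id
                        ).delete()
            # The whole batch has been applied, so recalculate once here;
            # this is also the natural place to fan out push notifications
            # per individual change.
            recalculate_priorities(worker_id)
            return Response({"applied": len(operations)})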

Two-staged approval process for wiki articles

I'm trying to configure a wiki to allow a two-staged approval process. The basic workflow requires something like:
A group of users submits a short form
After admin approval, a larger form becomes available to the group
The group submits the larger form
After admin approval, the page (filled by the form) becomes public
I've been looking at TikiWiki and MediaWiki for a while trying to configure each to get even close to this model, but I'm having some problems.
With TikiWiki, it seems like the approval stage should be a transition, either changing the group permissions to allow access to a new tracker or changing the form category to close one form and open the other, but I haven't been able to nail down the permissions for that configuration.
With MediaWiki, the main problem seems to be that the back-end was not made for complex permissions. I've been using SMWHalo along with SemanticForms to construct this, but I can't find anything like TikiWiki's transitions for automatically changing the permissions of either the group or the form.
I'm a bit new to wiki development, and I know there are a lot of options for wiki frameworks, so I'm asking for suggestions for a good workflow for this product. My goal is to only start touching the framework code for the final adjustments, not to start off by modifying an already well-developed code base.
You should really ask yourself why you want this and why you want this in a wiki.
A wiki's main advantage is being quick and easy, and thus encouraging to the user. Adding approval stages will discourage users from participating. The hardest part in any wiki is not preventing vandalism or false information; the hardest part is encouraging participation.
If you really need a complex approval workflow, you might want to look at CMS systems. AFAIK, TYPO3 has something like this built in.
If you really want to go with a wiki and an approval process, for DokuWiki you could have a look at the publish plugin: http://www.dokuwiki.org/plugin:publish
The FlaggedRevs extension to MediaWiki adds a basic permissions workflow:
http://www.mediawiki.org/wiki/Extension:FlaggedRevs
However, it's geared more at controlling changes to existing pages, not adding entirely new ones. You could set it up to create new pages as drafts and defaulting the public view to show only approved versions, but it sounds like you want to hide unapproved versions entirely, which would require some extra hacking (and, as Andreas says, kind of defeats the point of a wiki in the first place).