Using multiple process templates in a single VSTS instance

Is it possible to use multiple templates in a single instance of VSTS?
I have 20+ teams using VSTS that are doing different kinds of work. Given that, some teams would like to use the out-of-the-box Scrum template and some would like to use the Agile template. Can this be done, or am I limited to one template per VSTS instance?
Follow-on question: if I am limited to a single template, can I control which fields are visible in Stories and Tasks on a team-by-team basis?
Example: I create a custom field that is visible in one team's tasks but not in another team's tasks.
Thanks

Yes, it is possible to use multiple process templates.
You can create a project for each team, and in each project you can choose a different process template.
Choose the VSTS icon to open the Projects page, and then choose Create Project.
Fill out the form provided. Provide a name for your new project, select its initial source control type, select a process, and choose with whom to share the project.
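With 20+ teams, project creation can also be scripted against the VSTS/Azure DevOps REST API instead of clicking through the form. A minimal sketch in Python that only builds the request body for the create-project call; the endpoint details and the process GUIDs vary by organization, so treat the names here as assumptions to verify against your own instance:

```python
import json

def build_project_payload(name, process_template_id, source_control="Git"):
    """Request body for the 'create team project' REST call.

    process_template_id is the GUID of the process (e.g. Scrum or
    Agile); look it up in your own organization before using it.
    """
    return {
        "name": name,
        "capabilities": {
            "versioncontrol": {"sourceControlType": source_control},
            "processTemplate": {"templateTypeId": process_template_id},
        },
    }

# One project per team, each with the process that team prefers
payload = build_project_payload("TeamA", "<scrum-process-guid>")
print(json.dumps(payload, indent=2))
```

Looping this over a list of team names gives each team its own project, and therefore its own process.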

We have been asking the same question on the project I am working on: we have multiple teams that want to use multiple templates and have different-sized iterations.
The solution we have adopted is a separate project for each team, rather than a single project, combined with a data visualization tool such as Power BI to do the reporting.
Power BI has data connectors that allow direct connections to your VSTS instance, letting you pull data from multiple projects. Once the connections have been made, you can append and merge queries to produce a single query that pulls data from multiple projects.
Microsoft has documented connecting your VSTS instance to Power BI: https://learn.microsoft.com/en-us/azure/devops/report/powerbi/data-connector-connect?view=vsts
The projects themselves can also be linked: features in one project can have child work items in a different project, so no project is a complete silo.

What do you call the objects you build in the Power Platform?

In Power Apps, you build applications.
In Power Automate, you build flows.
In Power BI, you build reports and dashboards.
If you are writing about all of these things collectively, what is the correct term? Objects? Constructs? Artifacts?
I want to replace the word in italics in the following sentence:
When creating Power Platform objects, our team will follow certain naming conventions.
The common term is "artifacts". E.g.:
Admin - Users GetUserArtifactAccessAsAdmin: Returns a list of Power BI items (such as reports or dashboards) that the specified user has access to.
Solutions are the mechanism for implementing ALM; you use them to distribute components across environments through export and import. A component represents an artifact used in your application and something that you can potentially customize. Anything that can be included in a solution is a component, such as tables, columns, canvas and model-driven apps, Power Automate flows, chatbots, charts, and plug-ins.
Dataverse stores all the artifacts, including solutions.
Overview of application lifecycle management with Microsoft Power Platform

Django project-apps: What's your approach about implementing a real database scheme?

I've read articles and posts about what a project and an app are in Django, and they basically end up using the typical example of polls and users. However, a real program generally uses a complex relational database, so its design gravitates around this RDB, and the eternal conflict arises once again: which parts should be considered an application, and which should be considered components of that application?
Let's take as an example this RDB (courtesy of Visual Paradigm):
I could consider the whole set to be one application, or consider every entity to be its own application; the outlook looks gray. The only thing I'm sure about is this:
$ django-admin startproject movie_rental
So I wish to learn from the expertise of all of you: What approach (not necessarily those mentioned before) would you use to create applications based on this RDB for a Django project?
Thanks in advance.
PS1: More details about my request
When programming something, I follow these steps:
Understand the context of what you are going to program,
Identify the main actors and objects in this context,
If needed, make a UML diagram,
Design a solid relational-database diagram (solid = constraints, triggers, procedures, etc.),
Create the relational database,
Start coding... suffer and enjoy.
When I learn something new, I hope it follows these same steps, so I understand where it is heading.
When reading articles and posts (and watching videos), almost all of them omit steps 1 to 5 (because they choose simple demo apps), and when programming they take the easy route and don't show other situations or the many features Django supposedly offers (reusability, pluggability, etc.).
With this request, I want to know what criteria experienced Django programmers use to decide which applications to create based on this sample RDB diagram.
With the (2) answers obtained so far, "application" for...
brandonris1 is about features/services
Jeff Hui is about implementing entities of a DB
James Bennett is about every action on an object; he likes creating many apps
Conclusion so far: Django application is a personal creed.
My initial request was about creating applications, but since models have been mentioned, I have another question: with a legacy relational database (as shown in the picture), is it possible to create a Django project with multiple apps? I ask because in every Django demo project I have seen, each app has a model with its own tables, giving the impression that tables do not interact with those of other applications.
I hope my request is more clear. Thanks again for your help.
It seems you are trying to decide between building a single monolithic application and microservices. Both approaches have their pros and cons.
For example, a single monolithic application is a good solution if you have limited support resources and do not need to develop new features in fast sprints across the different areas of the application (i.e. film-management features vs. staff-management features).
One major downside of large monolithic applications is that eventually their feature sets grow too large, and with each new feature there is a significant amount of regression testing to ensure there are no negative repercussions in other areas of the application.
Your other option is to go with a microservice strategy. In this case, you would divide these entities amongst a series of smaller services and provide them each methods to integrate/communicate with each other (APIs).
Example:
- Film Service
- Customer Service
- Staff Service
The benefit of this approach is that it lets you separate capabilities and features by specific service area, reducing risk and regression testing across the application when new features are deployed or there is a catastrophic issue (i.e. the DB goes down).
The downside is that under a true microservice architecture all resources are separated, so you need unique resources (i.e. databases, servers) for each service, increasing your operating cost.
Either option can work; the choice depends entirely on your support model and expected volumes. Hope this helps.
ADDITIONAL DETAIL:
After reading through your additional details: since this DB already exists and my assumption is that you cannot migrate it, you still have the same choice between a monolithic application and a microservices architecture.
For both approaches, you would need to connect your Django web app to the specific DB you are already using. I can't speak for every connector out there, but I know the MySQL backend allows Django to read from a pre-existing DB to generate the models.py file for the application (via the inspectdb management command). As part of that, there is a managed model option which defines whether Django is responsible for actually managing the DB tables themselves.
The only thing this changes from an architecture perspective is: how many times do you want to code this connection?
If you only want to do it once and fully comply with DRY, you can build a monolithic application, knowing that as new features become required, application-wide regression testing will be an absolute requirement.
If you want ultimate flexibility for future changes to this collection of features and don't mind recoding that connection across multiple apps, while reducing the need for application-wide regression testing as new features arrive, a microservice architecture is more appropriate.
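To make that concrete, here is a sketch of what a model generated from an existing table might look like (table and field names are hypothetical; the workflow assumed is Django's `inspectdb` command, which writes such models for you). The `managed = False` Meta option is what tells Django not to create or alter the table itself:

```python
# Typically produced by: python manage.py inspectdb > films/models.py
from django.db import models

class Film(models.Model):
    title = models.CharField(max_length=255)
    release_year = models.IntegerField(blank=True, null=True)

    class Meta:
        managed = False    # Django never creates, drops, or alters this table
        db_table = 'film'  # maps the model onto the pre-existing table
```

This fragment belongs inside a configured Django project; the point is only that the generated models map onto the legacy schema rather than define a new one.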

Release pipeline design idea

I want to build a release pipeline with two stages, Stage1 and Stage2. In each stage I need to deploy to multiple customers. I get the customer information from a JSON file using a PowerShell script. Below are some of the questions I need to solve:
The JSON file above needs to be created dynamically using input from the customer. How do I get input from the customer?
I am also planning to create variable groups to hold the constant data needed for each customer. Do I need to create a separate variable group for each customer?
Regarding your first question:
It is possible to allow variables to be provided by users when they launch the release job. As an alternative, you might consider creating an app in Power Apps and using the Azure DevOps connector that Power Apps provides to trigger the job. This will allow you to create a more user-friendly front end.
Regarding your second question:
You don't necessarily need to; however, you'll likely find it much easier if you use a separate variable group per customer.
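On naming those per-customer groups, one convention (my assumption, not something the question requires) is to derive one variable-group name per customer from the same JSON file the pipeline already reads, so each stage can link the right group predictably. The questioner uses PowerShell; the same idea sketched in Python:

```python
import json

# Hypothetical customer file in the shape the question describes
customers_json = """
{"customers": [
  {"name": "Contoso", "region": "EU"},
  {"name": "Fabrikam", "region": "US"}
]}
"""

def variable_group_names(raw, prefix="vg"):
    """One variable group per customer, named predictably."""
    data = json.loads(raw)
    return [f"{prefix}-{c['name'].lower()}" for c in data["customers"]]

print(variable_group_names(customers_json))  # ['vg-contoso', 'vg-fabrikam']
```

Adding a customer then means adding one JSON entry and one matching variable group, with no pipeline changes.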

How to set up a Django project so that two independent Django apps read and write to the same models

I have to create a Django project which has two distinct parts. The producer provides some UI to the user and eventually writes to some models. The consumer uses this data, does some processing, and shows some pretty graphs, etc.
Technically, these are completely isolated apps used by completely distinct user set. I would love to make them separate django projects altogether but unfortunately they share the db structure. The database is effectively a pipe that connects the two.
How should I structure the project? From what I read, I should ideally have isolated models per app. Unfortunately that is not possible.
Any recommendations?
If you define an "app" and use it inside your Django project, I assume you list it in INSTALLED_APPS in settings.py to make it known within your environment.
Looked at from this angle, it's the same as using "django-social-auth" in two different projects/services that share the same DB. I can't judge whether sharing the same DB is common or uncommon; it's a design decision you have to make and be happy with.
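One way to realize that idea (a sketch with hypothetical app and model names) is to put the common models into a single reusable app and list it in the INSTALLED_APPS of both the producer and the consumer project, so both sides import identical model definitions:

```python
# settings.py of BOTH the producer project and the consumer project
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "shared_models",  # hypothetical app that owns the common models
    # ...producer- or consumer-specific apps follow here
]

# Both projects then import the same definitions, e.g.:
#   from shared_models.models import Measurement
# and read/write the same tables through the same ORM layer.
```

The shared app becomes the single source of truth for the schema, while each project keeps its own views, templates, and user base.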
If you would just like to have staff users and web users separated in your admin, please have a look at
separating-staff-and-user-accounts

Managing Multiple teams when we have single / multiple development servers in Siebel

We have several teams working on different work packages that involve the same objects, with a single development server. My question: how can we manage this kind of situation without losing resource time? To elaborate, I have two teams working on the Account BC with different change orders and the same release date, and I want the work to be done in parallel. What are the best ways to handle this situation? My own answer is that we need to wait and it is not possible. Does anyone have a solution for this situation?
Have both teams develop against an offline copy of the Account BC, sharing the object between themselves as a SIF file. Merge the two streams together with the Tools archive import function.
Create a new object manager for the server whose SRF points to one other than siebel_sia. Create as many users and mobile clients as there are developers. Give them the extracted database under their client names. Have one team work on the main object manager (_enu) and the other work on (_custom), bouncing individual object managers as the development cycle continues.