Release pipeline design idea - build

I want to build a release pipeline with two stages, Stage1 and Stage2. In each stage I need to deploy to multiple customers. I am getting the customer information from a JSON file using a PowerShell script. Below are some of the questions I need to solve:
The JSON file needs to be created dynamically using input from the customer. How do I get input from the customer?
I am also planning to create variable groups to hold the constant data needed for each customer. Do I need to create a separate variable group for each customer?

Regarding your first question:
It is possible to allow variables to be provided by users when they launch the release job. As an alternative, you might consider creating an app in PowerApps and using the Azure DevOps connector that PowerApps provides to trigger the job. This will allow you to create a more user-friendly front-end.
Regarding your second question:
You don't necessarily need to; however, you'll likely find it much easier if you use a separate variable group per customer.
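To make the JSON-driven part of this more concrete, here is a minimal sketch of the per-customer loop the question describes. The asker is using PowerShell; Python is used here purely for illustration, and the file name, JSON shape, and deploy step are all assumptions rather than anything from the original setup.

import json

# Assumed shape of the dynamically generated file, e.g.:
# {"customers": [{"name": "contoso", "region": "westeurope"}, ...]}
with open("customers.json") as f:
    customers = json.load(f)["customers"]

def deploy(customer, stage):
    # Placeholder for whatever the release stage actually does per customer.
    print(f"[{stage}] deploying for {customer['name']} in {customer['region']}")

for stage in ("Stage1", "Stage2"):
    for customer in customers:
        deploy(customer, stage)

Constants that do not change between releases (connection strings, resource names, and so on) would come from that customer's variable group rather than from the generated JSON file.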


Possible to route multiple projects to a Cloud Function endpoint?

I have a SaaS billing model and each user has their own GCP project. This is similar to this reddit thread, which asks:
I'm thinking about selling a SaaS service. I've decided every customer will get their own GCP project. Every customer will have a bunch of Cloud Run services, a Cloud SQL database, and some users in Identity Platform. I know the default project limit is around 12 and it can be increased by filling in a form.
This works for something like BigQuery, where each user's Dataset or Table will be created within their own GCP project, and thus their billing (and data) will be segmented under their project.
However, I also have some shared endpoints on Google Cloud Functions. For example, let's say I have general/shared endpoints to do something like "export data". The query to grab the data will of course hit the correct GCP project, but the export (or some other data processing task) may be very expensive; some exports might take over an hour to write the data when dealing with billions of rows. What would be the suggested way to set this up so the end user pays for their own computation? I imagine an endpoint such as www.example.com/api/export is just going to run under the main project account, and we wouldn't want, for example, 1000 different Cloud Functions that do the same thing just to have each one under its respective project.
What might be a solution to this? In a way, I suppose I'm looking for something like this, where the requester pays.
You would probably need to record how long each function call took, and save that data somewhere before exiting the shared function.
The only alternative would be to split the function for each client, and use billing labels to help with allocation.
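As a rough sketch of the first suggestion (record the duration before exiting), the shared HTTP function could time its own work and write a usage record keyed by customer before returning. The helper names, the customer_id parameter, and where the record is stored (BigQuery, Firestore, a billing table, ...) are all assumptions:

import time

def run_export(customer_id):
    # Placeholder for the expensive, shared export work (query, write files, ...).
    return {"customer": customer_id, "status": "done"}

def save_usage_record(customer_id, duration_seconds):
    # Placeholder: persist the measurement somewhere you can aggregate later
    # (BigQuery, Firestore, ...) so the cost can be allocated back to the customer.
    print(f"usage: customer={customer_id} duration={duration_seconds:.2f}s")

def export_data(request):
    # HTTP Cloud Function entry point: do the work, then record how long it took.
    customer_id = request.args.get("customer_id", "unknown")
    started = time.monotonic()
    try:
        return run_export(customer_id)
    finally:
        save_usage_record(customer_id, time.monotonic() - started)

Whether you turn those durations into an internal charge or use them together with billing labels is then an allocation question rather than an infrastructure one.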

AWS service for managing state data - DynamoDB/Step Functions/SQS?

I am building a desktop-on-demand solution using the AWS WorkSpaces product, and I am trying to understand which AWS service best fits my requirements for managing state data for new users.
In a nutshell, the solution will create a new AWS WorkSpace (virtual desktop instance) for a user when multiple conditions are met and checks are satisfied. These tasks would be handled by multiple Lambda functions.
DynamoDB would be used as a central point for storing configuration details such as user data, user group data, and deployed virtual desktop data.
The logic for desktop creation would be implemented using Step Functions, roughly as follows:
An event hook comes from the Identity Management system, firing a Lambda function that checks whether the user's desktop already exists in the DynamoDB table
If it does not exist, another Lambda creates an AWS AD Connector
Once this is done, another Lambda builds a custom image for the new desktop if needed
Another Lambda pulls the latest data from the Identity Management system and updates the DynamoDB table for users and groups
Other Lambda functions may be fired as dependencies
To ensure we have a transactional mechanism, we only deploy a new desktop when all conditions are met. I can think of a few ways of implementing this check:
Use a DynamoDB table for keeping state data. When all attributes in the item are in the expected state, the desktop can be created. If any Lambda fails or produces data that does not fit, don't create the desktop.
Just use Step Functions and design its logic flow so that all conditions must be satisfied before the desktop is created
Someone suggested using an SQS queue, but I don't see how this could be used for my purpose.
What is the best way to keep this data?
Step Functions is the method I would use for this. The DynamoDB solution would also work, but this seems like exactly the sort of thing Step Functions was designed to handle.
I agree that SQS would not be a correct solution.
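For what it's worth, here is a very rough sketch (in Python with boto3) of how the "all checks must pass before the desktop is created" gate looks as a Step Functions state machine: each check is a Task state, and because a failed Task fails the whole execution by default, the final deploy state is only reached when everything before it succeeded. The Lambda and IAM role ARNs and the state names are placeholders, not the actual functions described above.

import json
import boto3

definition = {
    "StartAt": "CheckDesktopExists",
    "States": {
        "CheckDesktopExists": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:check-desktop",
            "Next": "DesktopAlreadyExists",
        },
        "DesktopAlreadyExists": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.exists", "BooleanEquals": True, "Next": "Done"}
            ],
            "Default": "CreateAdConnector",
        },
        "CreateAdConnector": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:create-ad-connector",
            "Next": "BuildImage",
        },
        "BuildImage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:build-image",
            "Next": "DeployDesktop",
        },
        "DeployDesktop": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:deploy-desktop",
            "End": True,
        },
        "Done": {"Type": "Succeed"},
    },
}

# Register the state machine; the role ARN must allow invoking the Lambdas.
sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="desktop-provisioning",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsDesktopRole",
)

You can still keep DynamoDB as the system of record for users, groups and desktops; it just no longer has to double as the state machine.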

Using Multiple Process templates in single VSTS instance

Is it possible to use multiple templates in a single instance of VSTS?
I have 20+ teams using VSTS that are doing different kinds of work. Given that, some teams would like to use the out-of-the-box Scrum template and some of the teams would like to use the Agile template. Can this be done, or am I limited to one template per VSTS instance?
A follow-on question: if I am limited to a single template, can I control which fields are visible in Stories and Tasks on a team-by-team basis?
Example -- I create a custom field that is visible in one team's Tasks but is not visible in a different team's Tasks.
Thanks
Yes, it is possible to use multiple process templates.
You can create a project for each team, and in each project you can choose a different process template.
Choose the VSTS icon to open the Projects page, and then choose Create Project.
Fill out the form provided. Provide a name for your new project, select its initial source control type, select a process, and choose with whom to share the project.
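If you would rather script this than click through the UI (for 20+ teams that may be worth it), the VSTS REST API can look up a process template and queue project creation with it. This is only a sketch: the organization URL and personal access token are placeholders, and the exact api-version your instance accepts may differ.

import base64
import json
import urllib.request

ORG_URL = "https://dev.azure.com/your-org"  # placeholder organization URL
PAT = "your-personal-access-token"          # placeholder PAT with project-creation rights

def call(method, url, body=None):
    # Minimal helper for authenticated JSON calls to the VSTS REST API.
    req = urllib.request.Request(url, method=method)
    token = base64.b64encode(f":{PAT}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    data = json.dumps(body).encode() if body is not None else None
    with urllib.request.urlopen(req, data) as resp:
        return json.load(resp)

# Find the templateTypeId of the process each team wants (e.g. Agile or Scrum).
processes = call("GET", f"{ORG_URL}/_apis/process/processes?api-version=5.0")
agile_id = next(p["id"] for p in processes["value"] if p["name"] == "Agile")

# Queue creation of a project that uses that process.
call("POST", f"{ORG_URL}/_apis/projects?api-version=5.0", {
    "name": "Team-A",
    "capabilities": {
        "versioncontrol": {"sourceControlType": "Git"},
        "processTemplate": {"templateTypeId": agile_id},
    },
})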
We have been asking the same question on the project I am working on - we have multiple teams who want to use different templates and have different-sized iterations.
The solution we have adopted is to use a separate project for each team, rather than a single project, and then use a data visualization tool, such as Power BI, to complete the reporting.
Power BI has data connectors that allow direct connections to your VSTS instance, allowing you to pull data from multiple projects. Once the connections have been made, you can append and merge queries to produce a single query that pulls data from multiple projects.
Microsoft has documented connecting your VSTS instance to Power BI: https://learn.microsoft.com/en-us/azure/devops/report/powerbi/data-connector-connect?view=vsts
The projects themselves can also be linked; Features in one project can have child work items in a different project, meaning each project is not in a complete silo.

Sitecore workflow template questions

Currently I am trying to find a way to have a template that is used across a shared environment but is capable of having a different workflow in use for each environment.
For example, say I have a bike template shared between sites. I have one site that stocks the bike in a warehouse and a separate site that is a store front to sell the bike. The approval process will be different for these sites: the warehouse will simply go from Draft > Published, whereas the store front wants to check over the details before displaying them to the customer, so they use a Draft > Pending Approval > Publish workflow.
Say I already have a bunch of bikes defined in both sites; how can I make a change so that the bikes in each site use a different workflow? If possible, I would like to avoid a solution that requires code.
I am guessing that I will need to duplicate the templates and have a separate one for each site (e.g. WH Bike and Sales Bike), which isn't really ideal either, as this means lots of manual fixing of the existing workflow values.
Instead of using a separate workflow, it sounds like you just need a separate stage and action that is only available to your store front.
For example, your single workflow might look like this:
Stage 1: Draft
  Actions:
    Submit for Approval (secured to Store Front)
    Submit for Publish (secured to Warehouse)
Stage 2: Pending Approval
  Secured to Store Front, so as not to be visible to Warehouse
Stage 3: Publish
If the only difference is the stages, you can definitely go with security to use a single workflow and flow users through their own actions and stages.
You can approach this by using the Sitecore Rules Engine.
You can take a look at the Dynamic Workflow module in the Sitecore Marketplace.
It should allow you to create the rules and execute the start workflow action.
Taken from the module documentation:
Start workflow – moves item into a specified workflow and starts the workflow process. Example: a landing workflow used when item gets created but a specific workflow should be applied depending on item location in the content tree.

How to update a Fusion Table dynamically from Python

I am working on a health care project. We have a device which continuously generates values for the fields ACTIVITY and FREQUENCY. The values need to be updated continuously from Python to a Google Fusion Table.
The question is quite broad; you probably want to have a look at the documentation of the Google Fusion Tables API if you haven't so far: https://developers.google.com/fusiontables/docs/v1/using
Also it may be worth checking the quota section to make sure that Google Fusion Tables is indeed what you want to use:
https://developers.google.com/fusiontables/docs/v1/using#quota
I'll be glad to try to help if you come up with more specific questions :)
EDIT: since there are quite a few questions around the topic, I'll add some "hints".
A (Google Fusion) table belongs to a Google account. Your script must therefore include a step where it asks for your permission to modify data attached to your Google account. You can therefore see your script as a web application which needs authorization to achieve its goal. This web application will use the Google Fusion Tables API and must therefore be registered in the Google API Console. You will find details about the registration and authentication process with a Python script here:
https://developers.google.com/fusiontables/docs/articles/oauthfusiontables?hl=fr
I just checked that this works and you can insert rows into a table thereafter, so you may want to have a quick look at my script. Note that you can use neither my application credentials (which are, by the way, not included) nor my table, as you are not authorized to edit it (it's mine!). So you must download your own application credentials from the Google API Console after having registered, and adapt the script so it loads your credentials. Also, the script does not create a table (as of now), so as a first step you can create a table with two columns in the UI and copy-paste the table id into the script so it knows which table to write to. Here's the script (sorry, it's a bit of a mess right now; I'll clean it up as soon as I can):
https://github.com/etiennecha/master_code/blob/master/code_fusion_tables/code_test_fusion_tables.py
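For the insert step itself, once the OAuth flow above has given you an access token, each new reading can be written with a single SQL INSERT against the v1 query endpoint. This is only a sketch: the table id and token are placeholders, and the column names are assumed to match the ACTIVITY/FREQUENCY fields from the question.

import urllib.parse
import urllib.request

ACCESS_TOKEN = "ya29.your-oauth-access-token"  # placeholder: obtained via the OAuth flow above
TABLE_ID = "your-fusion-table-id"              # placeholder: copy it from the table you created

def insert_row(activity, frequency):
    # One row per reading, via the Fusion Tables v1 SQL endpoint.
    sql = f"INSERT INTO {TABLE_ID} (ACTIVITY, FREQUENCY) VALUES ('{activity}', {frequency})"
    url = "https://www.googleapis.com/fusiontables/v1/query?" + urllib.parse.urlencode({"sql": sql})
    req = urllib.request.Request(url, method="POST",
                                 headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Call this in a loop as the device produces new values.
insert_row("walking", 72)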
Hope this helps.