How to get the queries to which a TFS work-item belongs? - tfs-workitem

Normally my team works with a big list of work items, and there are also lots of queries in Team Explorer.
Given the ID of a work item, how can I know which queries it belongs to?

Queries are like SQL statements, so you need to execute each query and check whether the work item appears in its results.
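If you want to automate that check, a rough sketch against the TFS/Azure DevOps REST API in Python could look like the following. Treat everything here as an assumption to adapt: the collection/project URL, the api-version (it varies by TFS release), the personal access token, and the fact that only flat (non-tree) queries are handled.

import requests
from requests.auth import HTTPBasicAuth

# Placeholders: adjust the collection/project URL, api-version and PAT
# to match your TFS / Azure DevOps installation.
BASE = "https://tfs.example.com/DefaultCollection/MyProject/_apis/wit"
AUTH = HTTPBasicAuth("", "my-personal-access-token")

def queries_containing(work_item_id):
    """Execute every stored query and return the paths of those whose
    results contain the given work item (assumes flat queries)."""
    matches = []
    # $depth expands query folders; raise it if your hierarchy is deeper
    tree = requests.get(f"{BASE}/queries?$depth=2&api-version=4.1", auth=AUTH).json()

    def walk(node):
        for child in node.get("children", []):
            if child.get("isFolder"):
                walk(child)
            else:
                # Run the stored query by its id and inspect the result set
                result = requests.get(
                    f"{BASE}/wiql/{child['id']}?api-version=4.1", auth=AUTH
                ).json()
                ids = {wi["id"] for wi in result.get("workItems", [])}
                if work_item_id in ids:
                    matches.append(child["path"])

    for root in tree.get("value", []):
        walk(root)
    return matches

print(queries_containing(1234))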

Related

How to do "Select COUNT(*)" in DynamoDB from the AWS management console or any other 3rd party GUI?

Hi, I'm trying to query some tables in DynamoDB. However, from what I've read, I can only do it using some code or from the CLI. Is there a way to do complex queries from the GUI? I tried playing with it but can't seem to figure out how to do a simple COUNT(*). Please help.
Go to the DynamoDB console
Select the table that you want to count
Go to the "Overview" page/tab
In table properties, click on Manage Live Count
Click Start Scan
This will give you the count of items in the table at that moment. Just be warned that this count is eventually consistent, which means that if someone is performing changes in the table at that exact moment, your end result may not be exact (but probably very close to reality).
Digressing a little bit (only in case you're new to DynamoDB):
DynamoDB is a NoSQL database. It doesn't support the same commands that are common in SQL databases, mainly because it doesn't provide the same consistency model that SQL databases do.
In SQL databases, when you send a count(*) query, your RDBMS makes some very educated guesses and takes some shortcuts to discover the number of rows in the table. It does that because reading your entire table to give you this answer would take too much time.
DynamoDB has no means to make these educated guesses. When you want to know how many items a table has, the only option is to read all of them, counting one by one. That is exactly what the command mentioned at the beginning of this answer does: it scans the entire table, counting all the items one by one.
Because of that, performing this task will bill you for reading the entire table (DynamoDB bills you per read and write). And maybe after you start the scan, someone puts another item in the table while you are still counting. In that case it will not restart the count, because by design DynamoDB is eventually consistent.
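For completeness, the same full-table count can be reproduced in code, e.g. with Python and boto3 (the table name is a placeholder); it makes the read-everything behaviour described above explicit:

import boto3

# Scan with Select='COUNT' reads (and bills) every item, returning only
# the running count instead of the items themselves.
table = boto3.resource("dynamodb").Table("my-table")

total = 0
kwargs = {"Select": "COUNT"}
while True:
    page = table.scan(**kwargs)
    total += page["Count"]
    # Scans are paginated: follow LastEvaluatedKey until it disappears
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

print(f"Item count (eventually consistent): {total}")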

Problems loading data into Analysis Services Model

I’m building a model in Azure Analysis Services. The model should contain only data for the last 3 months and is processed every day.
I have a separate date dimension that is related to a fact table through a date key. I’m using a Power Query to load only the last 3 months into the date dimension. In the Power Query that loads the fact table, I used Table.NestedJoin to load only the rows that have a matching value in the date table.
When I do this, the processing of the model takes forever. After some troubleshooting I saw that the query Analysis Services uses to retrieve data from the SQL database retrieves all rows. So, am I correct in saying that AS loads all the data before it merges the rows? Is there a way to change this? Or is there a better way to achieve my solution?
Kind regards,
Joins are super slow in Power Query. You should avoid them if you can do the work in the data source or use normal relationships in the data model instead.
Also, you can set up the date dimension in DAX and dynamically populate it to contain only the dates present in the fact table.
As for the load of all the data: it could be because the data is fetched as is, and only then does Power Query apply the transformations (the join).
You can modify the query in the Power Query Editor / Advanced Editor to add a WHERE clause directly in the source query.
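To illustrate the principle (in Python rather than M, since the exact M code depends on your source): the point is that the filter runs on the SQL Server side, so only the last 3 months of rows ever leave the database. The DSN, table and column names below are made up.

import pyodbc
from datetime import date, timedelta

# Placeholder connection and schema; the point is the WHERE clause,
# which makes the server return only the rows you actually need.
conn = pyodbc.connect("DSN=MyWarehouse")
cutoff = date.today() - timedelta(days=90)

rows = conn.cursor().execute(
    "SELECT * FROM FactSales WHERE SaleDate >= ?", cutoff
).fetchall()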

Conditional Relationships between Two Tables in Django?

The following image shows a rough draft of my proposed database structure that I will develop for Django. Briefly, I have a list of ocean Buoys, which have child tables of their forecast conditions and observed conditions. I'd like Users to be able to keep a log of their surf sessions (surfLogs table) in which they input their location, the time of the surf session, and their own rating.
I'd like the program to then look in the buoysConditions table for the buoy nearest the user's logged location and time and append to the surfLog table the relevant buoyConditions. This will allow the user to keep track of what conditions work best for them (and also eventually create notifications for the user automatically).
I don't know what the name for this process of joining the tables is, so I'm having some trouble finding documentation on it. I think in SQL it's termed a join or update. How is this accomplished with Django?
Thanks!
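In Django terms, the structure being described might look roughly like this. This is a sketch only: model and field names are hypothetical, and the "join" in question becomes a nullable ForeignKey that gets filled in once the nearest conditions are looked up.

from django.conf import settings
from django.db import models

class Buoy(models.Model):
    name = models.CharField(max_length=100)
    latitude = models.FloatField()
    longitude = models.FloatField()

class BuoyCondition(models.Model):
    buoy = models.ForeignKey(Buoy, on_delete=models.CASCADE)
    observed_at = models.DateTimeField()
    wave_height = models.FloatField()

class SurfLog(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    session_time = models.DateTimeField()
    rating = models.IntegerField()
    conditions = models.ForeignKey(
        BuoyCondition, null=True, blank=True, on_delete=models.SET_NULL
    )

    def attach_nearest_conditions(self, buoy):
        # Take the most recent reading at or before the session time;
        # picking the nearest buoy by location is left out of the sketch
        self.conditions = (
            BuoyCondition.objects
            .filter(buoy=buoy, observed_at__lte=self.session_time)
            .order_by("-observed_at")
            .first()
        )
        self.save()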

Model-based caching instead of view-based caching in Django

I am working on a Django application. The main task of the application is to provide suggestions like "Should I go outside today?". There is only a single endpoint for getting the suggestion, such as example.com/.
The main logic for providing the suggestion is:
Does the user have any pending tasks today? (queried from UserTaskModel)
Is today's weather comfortable? (calculated from the weather forecast)
If two users fetch data on the same date, the UserTask queries will be different, but the weather forecast query will be the same. If I use Django's view-based caching, the weather forecast query will be executed for each user, while I want to cache the weather data once per date for all users. It could be implemented by creating a separate view for the weather, but I don't want to add another endpoint just for the weather.
Django's cache set/get methods can be used for this task. But is this the best way to do this type of task? In my example I use a simple weather calculation that depends only on the date, but is this technique also good for complex queries?
As you said, cache set/get is your solution, but note these two things:
Assuming you want to cache the weather for each day, set the expiry time to 24h or more (so the cache won't expire too soon)
Also, your cache key should be something like weather_2019_09_22
I think creating a utility class can work really well. Here is a runnable version of the idea, using Django's cache framework (the forecast calculation itself is left as a stub):

from datetime import date
from django.core.cache import cache

class WeatherCache:
    def get(self):
        today = date.today()
        key = f"weather_{today:%Y_%m_%d}"  # e.g. weather_2019_09_22
        forecast = cache.get(key)
        if forecast is None:
            forecast = self._calculate_forecast(today)
            cache.set(key, forecast, timeout=60 * 60 * 24)  # keep for 24h
        return forecast

    def _calculate_forecast(self, for_date):
        ...  # run the (possibly expensive) weather query here
Another idea is simply creating a model and putting the forecasts there; the good point is that you will keep a history of forecasts, which may be useful for later queries (and the table won't grow quickly, so you don't need to worry about its size).
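A minimal sketch of that model-based alternative (the model and the calculate_weather helper are hypothetical):

from datetime import date
from django.db import models

class DailyForecast(models.Model):
    date = models.DateField(unique=True)
    is_comfortable = models.BooleanField()

def forecast_for_today():
    today = date.today()
    try:
        # Serve the stored forecast if we already computed it today
        return DailyForecast.objects.get(date=today)
    except DailyForecast.DoesNotExist:
        # The first request of the day pays the calculation cost; the
        # unique=True constraint guards against duplicate rows
        return DailyForecast.objects.create(
            date=today, is_comfortable=calculate_weather()  # hypothetical helper
        )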

Can Spanner maintain indexes to easily count analytics queries of my data?

I'd like to give my partners the results of simple COUNT(*) ... GROUP BY items.color type queries, and perhaps joins over items and orders or some such. I'd like query response times to be sub-second (on the order of a second, at worst) and to scale to billions of rows counted.
My current approach is to either back up my GCDatastore data and load it into BigQuery to provide daily analytics, or to use GCDataflow to maintain a set of pre-defined counters.
Is this a use case Spanner is designed for, if I transition my backend from Datastore to Spanner?
Today, running counting queries in Cloud Spanner requires a full table scan. Depending on the size of the table this could take more than a second.
One thing you could do is to track the count in a separate table, and whenever you update the items table, update the count in the same transaction.
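A sketch of that counter-table pattern with the Python Spanner client (instance, database, table and column names are all placeholders, and the counter row is assumed to already exist):

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")

def add_item(transaction, item_id, color):
    # Insert the item and bump its color's counter atomically: both DML
    # statements commit (or roll back) in the same transaction.
    transaction.execute_update(
        "INSERT INTO items (item_id, color) VALUES (@id, @color)",
        params={"id": item_id, "color": color},
        param_types={"id": spanner.param_types.INT64,
                     "color": spanner.param_types.STRING},
    )
    transaction.execute_update(
        "UPDATE item_counts SET n = n + 1 WHERE color = @color",
        params={"color": color},
        param_types={"color": spanner.param_types.STRING},
    )

database.run_in_transaction(add_item, 42, "blue")

# The per-color count is now a single-row read instead of a full scan:
# SELECT n FROM item_counts WHERE color = 'blue'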