We are trying to update around 3000 rows in a table of roughly 300,000+ entries, with each update value being unique. The problem is that it takes around an hour, and we want to see how we can optimize the query. Is there any way to create a temporary subset of the table that still references the original table?
Currently, we are doing this (x3000):
UPDATE table SET col1 = newVal1 WHERE col1 = oldVal1
UPDATE table SET col1 = newVal2 WHERE col1 = oldVal2
UPDATE table SET col1 = newVal3 WHERE col1 = oldVal3
.... etc (x3000)
There is probably an issue with updating a column that we are also searching on in the WHERE clause, which puts extra load on the query, but this is a necessity as it's the only way to identify the exact row.
We are using MySQL Workbench and the InnoDB engine.
We are debating whether indexing col1 would be effective, since we are also writing over col1.
We have also tried to use a View of just the rows we want to update, but there is no difference in how long it takes.
Any suggestions?
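One approach worth trying, as a sketch only (the staging-table name, column type, and values below are assumptions), is to load all 3000 old/new pairs into a temporary table once and then apply them with a single joined UPDATE, so the work happens in one statement instead of 3000 round trips:

-- Staging table holding the (old, new) pairs; adjust the type to match col1
CREATE TEMPORARY TABLE updates_tmp (
    old_val VARCHAR(255) NOT NULL,
    new_val VARCHAR(255) NOT NULL,
    PRIMARY KEY (old_val)    -- lets the join seek rather than scan
) ENGINE=InnoDB;

-- Bulk-load the pairs in a few multi-row INSERTs (or LOAD DATA INFILE)
INSERT INTO updates_tmp (old_val, new_val) VALUES
    ('oldVal1', 'newVal1'),
    ('oldVal2', 'newVal2'),
    ('oldVal3', 'newVal3');    -- ... etc (x3000)

-- One UPDATE, one transaction
UPDATE `table` t
JOIN updates_tmp u ON t.col1 = u.old_val
SET t.col1 = u.new_val;

An index on col1 should help the join locate each row, though it also has to be maintained as col1 is rewritten, so whether it pays off depends on how often this runs.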
I have a DAX table "Sumtable" that calculates the number of rows in "Maintable" for two cases: (1) all rows in Maintable, and (2) the subset of rows where Cat = "A".
Sumtable = {
("Full set", COUNTROWS(Maintable)),
("Subset", COUNTROWS(FILTER(Maintable, Maintable[Cat] = "A")))
}
I want to visualize Sumtable and make it respond to filter settings on Maintable. For example, when I select Maintable[Sex] = "Male", the totals in Sumtable should reflect that. What is the right way to accomplish this?
Just make these two measures instead of a calculated table.
If you want these measures to appear under Sumtable, you can create a one-row, one-hidden-column table with the two measures, instead of two calculated columns.
And when you hide all the non-measure columns on a table, it becomes a "Measure Table" with a differentiated icon.
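A minimal sketch of the two measures (the measure names are placeholders):
Full Set Rows = COUNTROWS ( Maintable )    -- name is a placeholder
Subset Rows = CALCULATE ( COUNTROWS ( Maintable ), Maintable[Cat] = "A" )
Because measures are evaluated in the current filter context, selecting Maintable[Sex] = "Male" on the report page will automatically be reflected in both values.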
Solved it with this: analytics-tuts.com/bar-chart-using-measures-in-power-bi
I am new to SQLAlchemy. I have a raw SQL query that I need to execute with bind parameters. For the rows resulting from the query, I need to update a particular column value. How do I do this in an efficient way?
Below are the columns in my table metrics
TABLE
id,total,pass,fail,category,ref_id
query = "Select * from table where id in(select max(id) from table ...)"
sql = text(query)
result = db.engine.execute(sql, CATEGORY=category)
for row in result:
# update here
So I have this complex query that I need to execute as an inline query. Let's say I get three rows from my query and I need to update ref_id for all 3 rows with a value. How can I achieve this, preferably as a bulk update?
I am using Python 2.7, SQLAlchemy==0.9.9, SQLAlchemy-Utils==0.29.8.
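A minimal sketch of one way to do this with the textual API, assuming the result rows expose an id column and that new_ref_id holds the value to write (the SELECT below is a stand-in for the real query):

from sqlalchemy import text

# Run the SELECT and collect the primary keys of the matching rows
select_sql = text("SELECT * FROM metrics WHERE category = :category")
rows = db.engine.execute(select_sql, category=category).fetchall()
ids = [row.id for row in rows]

# new_ref_id is a placeholder for the value you want to write.
# One parameterised UPDATE executed with a list of parameter sets, so the
# driver runs it as a single executemany() rather than a Python-level loop.
update_sql = text("UPDATE metrics SET ref_id = :ref_id WHERE id = :id")
db.engine.execute(update_sql, [{"ref_id": new_ref_id, "id": row_id} for row_id in ids])

If the new ref_id can be derived inside SQL, a single UPDATE ... WHERE id IN (subquery) avoids pulling the rows into Python at all.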
Trying to sort a column in my custom date table (a CSV file) by a calculated column in the same table, but I am seeing an error. The calculated column does not reference the column I wish to sort by. Here's the DAX for the calculated column:
PeriodOffset =
Dates[Period] + Dates[FiscalYear] * 13
- CALCULATE ( VALUES ( Dates[Period] ), Dates[Date] = TODAY () )
- CALCULATE ( VALUES ( Dates[FiscalYear] ), Dates[Date] = TODAY () ) * 13
My date table has every date from 2003/4 to 2034/35, along with custom period numbers, calendar and fiscal years, etc. The column I am trying to sort is called PeriodFiscalYear. Each value in that column maps to only one value in the PeriodOffset column, so the usual many-to-one sort-by-column restriction isn't the cause.
The weird thing is, I have had this working in a previous report. In this instance I was simply trying to recreate the functionality, but it won't work. Even stranger, if I create the PeriodFiscalYear column as a calculated column (currently it's hard-coded in the CSV file), it works! So I have a sort-of workaround; I would just like to understand what is going on.
Thanks
I believe this has to do with the fact that data columns are sorted when data is ingested into Power BI, whereas calculated columns are only calculated at a later time.
Therefore:
you can sort a data column only by other data columns (because calculated columns have not been calculated yet)
you can sort a calculated column by both data columns and calculated columns
Solution:
A) PeriodFiscalYear becomes a calculated column (a sketch follows below)
B) PeriodOffset becomes a data column (either in your CSV or in Power Query)
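For option A, a hypothetical sketch of such a calculated column (the real definition depends on how PeriodFiscalYear is encoded in the CSV; this just rebuilds the label from the period and fiscal year columns):
PeriodFiscalYear = "P" & Dates[Period] & " " & Dates[FiscalYear]    -- hypothetical; match your CSV's format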
I actually figured this out. The problem was with my data model - I had a circular relationship in there, as I was deriving the Period column in one table using my calendar table and then linking them back in the relationship!
I created a linking table with the keys in both to make the relationship, then hid it.
Thanks
It's probably extremely simple, but I can't find an answer. I have created a new column and I would like to use DAX to fill the column with hardcoded values.
I can write Column = 10 and I will get a column of 10s, but let's say my table has 3 rows and I would like to insert a column with the values [10, 17, 155]. How can I do that?
Try using the DATATABLE function:
Table = DATATABLE("Column Name",INTEGER,{{10},{17},{155}})
You can also put more columns with their own data if you want to; see the documentation:
https://learn.microsoft.com/en-us/dax/datatable-function
Assuming your table has a primary key column, say, ID, you could create a new table with just the column you want to manually input.
ID Value
---------
1 10
2 17
3 155
You can create this table either through the Enter Data button or using the DAX DATATABLE function as @Deltapimol suggests.
Once you have this table, you can create a relationship to your existing table in the data model. At that point you can either use this new table in your report to get the values you need, or, if you really need them in the existing table for some reason, pull them over using the RELATED function in a calculated column.
Table1 = GENERATESERIES(1, 3)
Table2 = DATATABLE(
"ID", INTEGER,
"Value" INTEGER,
{{1, 10},{2, 17},{3, 155}}
)
Now you can create a relationship from Table1 to Table2[ID] and then define a calculated column on Table1 as follows:
ValueFromTable2 = RELATED(Table2[Value])
If you don't want to create a relationship, then you could use the LOOKUPVALUE function instead in a calculated column on Table1, as sketched below.
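A sketch of that alternative, using the names from the example above (GENERATESERIES names its single output column [Value]):
ValueFromTable2 = LOOKUPVALUE ( Table2[Value], Table2[ID], Table1[Value] )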
This might be very basic but I am new to PowerBI.
How do I get the average of the values for each unique ID into another table?
For example, my Table 1 has multiple rows per ID. I have created another table of unique IDs which I am planning to use to join to the other table.
I want a calculated column in Table 2 which will give me the average value for the respective ID from Table 1.
How do I get a calculated column like that?
Instead of creating a new table with the averages per ID and then joining on that, you could also do it directly with a calculated column using the following DAX expression:
Average by ID = CALCULATE(AVERAGE('Table 1'[Values]),ALLEXCEPT('Table 1','Table 1'[ID]))
Not exactly what you asked for, but maybe it's useful anyway.
How's it going?
The quickest way I can think of doing this would be to use
SUMMARIZECOLUMNS
You can accomplish this by creating another table based on your initial fact table like so:
Table 2 =
SUMMARIZECOLUMNS ( 'Table 1'[ID], "Avg", AVERAGE ( 'Table 1'[Values] ) )
Once this table has been created, you can create a relationship.
This will work in either SSAS or in PowerBI directly.
Hope this helps!! Have a good one!!