I have two tables like the following:
On server:
| Orders Table | OrderDetails Table |
|--------------|--------------------|
| Id           | Id                 |
| OrderDate    | OrderId            |
| ServerName   | Product            |
|              | Quantity           |
On client:
| Orders Table | OrderDetails Table |
|--------------|--------------------|
| Id           | Id                 |
| OrderDate    | OrderId            |
|              | Product            |
|              | Quantity           |
|              | ClientName         |
I need to sync [Server].[Orders Table].[ServerName] to [Client].[OrderDetails Table].[ClientName].
The Question:
What is the correct and efficient way of doing this?
I know that deprovisioning and then re-provisioning with a different configuration is one way of doing it; I just want to know the recommended way.
EDIT:
The other columns of each table should sync normally ([Server].[Orders Table].[Id] to [Client].[Orders Table].[Id], and so on).
Also, the mapping strategy sometimes changes based on the data row (depending on which side is sending/receiving).
Sync Framework is not an ETL tool; simply put, its database sync is per table.
If you really want to force it to do what you want, you can intercept the ChangesSelected event for the OrderDetails table, look up the extra column from the other table, and then dynamically add the column to the change dataset before it gets applied on the other side.
See this link on how to manipulate the change dataset.
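A rough C# sketch of that interception, under the assumption of a SqlSyncProvider and the Sync Framework 2.1 event shape (LookupServerName is a hypothetical helper you would write against the Orders table):

var provider = new SqlSyncProvider("MyScope", serverConnection);

provider.ChangesSelected += (sender, e) =>
{
    // The change dataset holds the rows selected for this sync batch.
    DataSet changes = e.Context.DataSet;
    if (!changes.Tables.Contains("OrderDetails")) return;

    DataTable details = changes.Tables["OrderDetails"];

    // Add the extra column dynamically so it travels with the batch.
    if (!details.Columns.Contains("ClientName"))
        details.Columns.Add("ClientName", typeof(string));

    foreach (DataRow row in details.Rows)
    {
        // Hypothetical lookup of Orders.ServerName for this row's OrderId.
        row["ClientName"] = LookupServerName(row["OrderId"]);
    }
};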
I am new to DynamoDB and trying to model/index a single-table design for tracking a single number value, but with separate update entries for each time the number is changed. Every time the number value is updated, an update entry should be saved to the table (as its own item, not as a property on a single "tracked number" item), and of course the tracked number value should be updated too. I want to be able to query and sort the update entries by date and by number change value. And of course I want to be able to look up the current number value.
What is a good single-table design for this data and access pattern? I have tried a couple of different ways, but find myself blocked because I'm either unable to read/write a global secondary index, or I always get back the item with the current number value when attempting to query just the update entries themselves. I could very easily create separate tables for the tracked number and the number updates, but that seems to go against DynamoDB single-table principles.
You could create a table with two types of entries:
1. The current amount, with the value's ID as partition key, the literal string CURRENT_AMOUNT as sort key, and an attribute current_amount that contains the actual value.
2. The update entries, with the value's ID (same as in 1.) as partition key, the timestamp as sort key, and two attributes new_amount and old_amount to represent the changed values.
This way, you can retrieve:
The current amount of an item: pk={{ID}} AND sk="CURRENT_AMOUNT"
The history of an item: pk={{ID}} AND sk < "CURRENT_AMOUNT" (a Query key condition can't use <>, but ISO-8601 timestamps sort before the literal CURRENT_AMOUNT, so < works here)
Both the current amount and the history of an item at once: pk={{ID}}
Additional access patterns can potentially be satisfied using secondary indexes.
Here's an example of what the table could look like with some entries:
-------------------------------------------------------------------------
| pk  | sk                   | current_amount | new_amount | old_amount |
-------------------------------------------------------------------------
| ID1 | "CURRENT_AMOUNT"     | 7              |            |            |
| ID1 | 2020-12-19T14:01:42Z |                | 7          | 5          |
| ID1 | 2020-12-17T19:07:32Z |                | 5          | 9          |
-------------------------------------------------------------------------
| ID2 | "CURRENT_AMOUNT"     | 3              |            |            |
| ID2 | 2020-12-19T08:01:12Z |                | 3          | 7          |
-------------------------------------------------------------------------
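A minimal boto3 sketch of those three queries (the table name is hypothetical; the keys follow the design above):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("tracked_numbers")  # hypothetical name

# 1. The current amount of an item
current = table.query(
    KeyConditionExpression=Key("pk").eq("ID1") & Key("sk").eq("CURRENT_AMOUNT")
)["Items"]

# 2. The history of an item; ISO-8601 timestamps sort before "CURRENT_AMOUNT",
#    so a "less than" condition returns only the update entries.
history = table.query(
    KeyConditionExpression=Key("pk").eq("ID1") & Key("sk").lt("CURRENT_AMOUNT"),
    ScanIndexForward=False,  # newest first
)["Items"]

# 3. Both the current amount and the history at once
everything = table.query(
    KeyConditionExpression=Key("pk").eq("ID1")
)["Items"]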
I have the following columns in my dataset:
_____________________________________________________________________________________
| ... | Start Date | Start Time | End Date | End Time | Production Start Date | ... |
|_____|____________|____________|__________|__________|_______________________|_____|
| ... | 01022020   | 180000     | 02022020 | 190000   | 01022020              | ... |
Sometimes the Start Date, Start Time, etc. values are blank, but the Production Start Date values are always populated.
When the Start Date is empty (NULL), for example, I want my dataset (or ideally, graph) to read the Production Start Date.
How can I achieve this in Power BI?
I know I can make a conditional column and, within that, determine which column to read data from, but is there any way to add a condition to the existing Start Date column instead? I couldn't see such an option in the context menu or the subsequent ribbon options.
Is my only option to create a custom conditional column instead?
As @Andrey Nikolov mentioned in the comments, the only ways you can achieve this are to:
1. Create a calculated DAX column.
2. Create a custom column in query mode (M).
3. Edit the original source table.
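For example, option 1 could be a calculated column like this DAX sketch (the table name 'Data' is an assumption):

Start Date (Effective) =
IF (
    ISBLANK ( 'Data'[Start Date] ),
    'Data'[Production Start Date],  -- fall back when Start Date is blank
    'Data'[Start Date]
)

You can then point the graph at Start Date (Effective) instead of Start Date.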
Below is the schema of my BigQuery table. I am selecting sentence_id, store, and BU_Model and inserting the data into another table in BigQuery. The data types for the new table generated are integer, repeated, and repeated respectively.
I want to flatten/unnest the repeated fields so that they are created as STRING fields in my second table. How could this be achieved using standard SQL?
+- sentences: record (repeated)
| |- sentence_id: integer
| |- autodetected_language: string
| |- processed_language: string
| +- attributes: record
| | |- agent_rating: integer
| | |- store: string (repeated)
| +- classifications: record
| | |- BU_Model: string (repeated)
The query that I am using to create the second table is below. I want to be able to query BU_Model as a STRING column.
SELECT sentence_id, a.attributes.store, a.classifications.BU_Model
FROM staging_table, UNNEST(sentences) a
The expected output should look like this:
Staging table:
| sentence_id | store    | BU_Model |
|-------------|----------|----------|
| 41783851    | regions  | Apparel  |
|             | district | Footwear |
| 12864656    | regions  |          |
|             | district |          |
Final target table:
| sentence_id | store    | BU_Model |
|-------------|----------|----------|
| 41783851    | regions  | Apparel  |
| 41783851    | regions  | Footwear |
| 41783851    | district | Apparel  |
| 41783851    | district | Footwear |
| 12864656    | regions  |          |
| 12864656    | district |          |
I tried the query below and it seems to work as expected, but this means I would have to unnest every repeated field. My table in BigQuery has 50+ repeated columns. Is there an easier way around this?
SELECT
  sentence_id,
  flattened_stores,
  flattened_Model
FROM `staging`
LEFT JOIN UNNEST(sentences) a
LEFT JOIN UNNEST(a.attributes.store) AS flattened_stores
LEFT JOIN UNNEST(a.classifications.BU_Model) AS flattened_Model
Assuming you still want three columns in your output, with the arrays flattened into strings:
SELECT
  sentence_id,
  ARRAY_TO_STRING(a.attributes.store, ',') AS store,
  ARRAY_TO_STRING(a.classifications.BU_Model, ',') AS BU_Model
FROM staging_table, UNNEST(sentences) a
UPDATE to address recent changes in the question:
In BigQuery standard SQL, using LEFT JOIN UNNEST() (as you did in your last query) is the most reasonable way to get the result you want.
In BigQuery legacy SQL, you can use the FLATTEN syntax, but it has the same drawback: it needs to be repeated for all 50+ columns.
Very simplified example:
#legacySQL
SELECT sentence_id, store, BU_Model
FROM (FLATTEN([project:dataset.stage], BU_Model))
Conclusion: I would go with the LEFT JOIN UNNEST() approach.
Is there any possibility to save the previous data before it is overwritten by a refresh?
Steps I have done:
Created a table and appended it to table A.
Created a column called DateTime with the function
DateTime.LocalNow()
Now I have the problem of how to save the previous data before the refresh phase: I need to preserve both the previous data and its timestamps.
Example:
Before refreshing:
Table A:
| Columnname x | DateTime         | ... |
|--------------|------------------|-----|
| value        | 23.03.2016 23:00 |     |

New Table:
| Columnname x | DateTime         | ... |
|--------------|------------------|-----|
| value        | 23.03.2016 23:00 |     |

After refreshing:
Table A:
| Columnname x | DateTime         | ... |
|--------------|------------------|-----|
| value        | 23.03.2016 23:00 |     |
| value 2      | 23.03.2016 23:01 |     |

New Table:
| Columnname x | DateTime         | ... |
|--------------|------------------|-----|
| value        | 23.03.2016 23:00 |     |
| value 2      | 23.03.2016 23:01 |     |
Incremental refreshes in the Power BI Service or Power BI Desktop aren't currently supported, but please vote for this feature. (Update: see that link for info on a preview feature that does this.)
If you need this behavior, you need to load these rows into a database and then incrementally load the database. The load into Power BI will still be a full load of the table(s).
This is now available in Power BI Premium.
From the docs:
Incremental refresh enables very large datasets in the Power BI Premium service with the following benefits:
- Refreshes are faster. Only data that has changed needs to be refreshed. For example, refresh only the last 5 days of a 10-year dataset.
- Refreshes are more reliable. For example, it is not necessary to maintain long-running connections to volatile source systems.
- Resource consumption is reduced. Less data to refresh reduces overall consumption of memory and other resources.
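Setting it up happens in Power Query: you define the reserved RangeStart/RangeEnd datetime parameters, filter the table on them, and then configure the incremental refresh policy on the table. A minimal M sketch (the source and table names here are assumptions, reusing the DateTime column from the question):

let
    Source = Sql.Database("myserver", "mydb"),  // hypothetical source
    TableA = Source{[Schema = "dbo", Item = "TableA"]}[Data],
    // RangeStart/RangeEnd are the reserved parameters incremental refresh binds
    Filtered = Table.SelectRows(
        TableA,
        each [DateTime] >= RangeStart and [DateTime] < RangeEnd
    )
in
    Filtered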
Hopefully this makes sense; I will probably just keep mulling this over until I figure it out. I have a table formatted in such a way that a specific date may have more than one record assigned. Each record is a plant, so the structure of that table looks like the pinkish table in the image below. However, when using the Google Charts API, the data needs to be in the format of the blue table for a line chart, which I have working.
I am looking to create a graph in the Google Charts API, similar to the Excel graph, using the pink table, where on one date, e.g. 01/02/2003, there are three species recorded (A, B, C) with values 1, 2, 3. I thought a scatter chart might work, but it didn't.
What ties these together is the CenterID: all these records belong to the XXX CenterID. Each record with its species has a SheetID that groups them together; for example, for SheetID = 23, all those species were recorded on the same date.
Looking for suggestions, whether Google Charts API or PHP amendments. My PHP is below (I will switch to json_encode eventually).
$sql = "SELECT * FROM userrecords";
$stmt = $conn->prepare($sql);
$stmt->execute();
$data = $stmt->fetchAll();
foreach ($data as $row)
{
$dateArray = explode('-', $row['eventdate']);
$year = $dateArray[0];
$month= $dateArray[1] - 1;
$day= $dateArray[2];
$dataArray[] = "[new Date ($year, $month, $day), {$row['scientificname']}, {$row['category_of_taxom']}]";
To get that chart, where the dates are the series instead of the axis values, you need to change the way you are pulling your data. Assuming your database is structured like the pink table, you need to pivot the data on the date column instead of the species column to create a structure like this:
| Species | 01/02/2003 | 01/03/2003 | 01/04/2003 |
|---------|------------|------------|------------|
| A | 1 | 2 | 3 |
| B | 3 | 1 | 4 |
| C | 1 | 3 | 5 |
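A rough PHP sketch of that pivot, building on the question's loop (the numeric value column is hypothetical, since the original query selects * and the exact column name isn't shown):

$dates = array();
$pivot = array();

foreach ($data as $row) {
    $date    = $row['eventdate'];
    $species = $row['scientificname'];
    $value   = $row['value'];          // hypothetical numeric value column

    $dates[$date] = true;              // collect the distinct dates
    $pivot[$species][$date] = $value;  // group values by species, then date
}

$dates = array_keys($dates);
sort($dates);

// Header row: 'Species' followed by one column per date
$table = array(array_merge(array('Species'), $dates));

foreach ($pivot as $species => $valuesByDate) {
    $rowOut = array($species);
    foreach ($dates as $date) {
        $rowOut[] = isset($valuesByDate[$date]) ? $valuesByDate[$date] : null;
    }
    $table[] = $rowOut;
}

echo json_encode($table);  // google.visualization.arrayToDataTable() can consume this

Each species then becomes one series, with the dates as columns, matching the pivoted structure above.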