I am trying to create a strategy for intraday charts. One of the entry conditions of this strategy must look at a daily MA condition. How can I add this daily condition to my intraday strategy? I tried, but it took the data from the current chart's timeframe. Can anyone help me?
Example:
Daily condition => close > ta.sma(close, 20)
If the condition is met on the daily chart, then the strategy must check the second condition, which is on the intraday timeframe:
Intraday condition => close > ta.sma(close, 10)
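You can pull the daily values into an intraday script with request.security. Below is a minimal Pine v5 sketch; the strategy title, the order id, and the [1] offset with lookahead off (a common trick to avoid repainting) are my own choices, not anything from your code:

//@version=5
strategy("Daily + intraday MA filter", overlay=true)

// Daily condition: request the daily close and its 20-bar SMA.
// Using close[1] / sma[1] with lookahead off returns the last *confirmed*
// daily values, which avoids repainting on realtime bars.
dailyClose = request.security(syminfo.tickerid, "D", close[1], lookahead=barmerge.lookahead_off)
dailySma = request.security(syminfo.tickerid, "D", ta.sma(close, 20)[1], lookahead=barmerge.lookahead_off)
dailyCondition = dailyClose > dailySma

// Intraday condition, evaluated on the chart's own timeframe
intradayCondition = close > ta.sma(close, 10)

if dailyCondition and intradayCondition
    strategy.entry("Long", strategy.long)

The key point is that the expression passed to request.security is evaluated on the "D" timeframe, so ta.sma(close, 20) there is the daily 20-bar SMA rather than the chart-timeframe one.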
I have a chart in Excel that I wish to replicate in Power BI.
My Excel chart is a bar chart with a dated axis and 2 series, running up to March, but with data only up until now.
Each bar series has a trendline, which forecasts the trend up to the end of the financial year.
In Power BI I have tried to replicate this, but I cannot seem to add a forecast from the analytics tab unless my chart is a line graph.
So I now have 2 line series in a chart like so:
I have added a trendline to both, but there is no option to add the forecast line unless I have only 1 data series.
Having removed a series, I can now toggle on the forecast line in the analytics tab.
So I now have 1 line series in my line graph like so:
I actually need the data and the corresponding forecast for both data series, so I would have series 1 & 2 plotted as above, on the same graph together.
Is there a way to do this in the analytics tab?
If not, how should I go about it? I was thinking I could instead use DAX to forecast until March 2023, then drag that line into the line graph and format it as dashed?
Thanks
Here is some sample data, step by step, showing what I'm looking for (a very simple thing to do in Excel!):
This is the same data, pivoted with the 2 series mapped out on a line graph and a trendline added for each:
I just want to be able to show a trendline extending and forecasting forward, past the months I already have, like I have done in Excel.
Yeah, I think your best solution here would be to use a "what if" parameter in DAX to get the result you need.
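If the what-if parameter doesn't get you all the way there, one DAX fallback (a rough sketch only; 'Date'[MonthIndex], [Series 1 Total] and similar names are assumptions about your model) is to fit a least-squares line over the months that have actuals and return the fitted value for every month on the axis, so the line extends past the last data point:

Trend Series 1 =
VAR Known =
    FILTER (
        ALLSELECTED ( 'Date'[MonthIndex] ),
        NOT ISBLANK ( CALCULATE ( [Series 1 Total] ) )
    )
VAR N = COUNTROWS ( Known )
VAR SumX = SUMX ( Known, 'Date'[MonthIndex] )
VAR SumY = SUMX ( Known, CALCULATE ( [Series 1 Total] ) )
VAR SumXY = SUMX ( Known, 'Date'[MonthIndex] * CALCULATE ( [Series 1 Total] ) )
VAR SumX2 = SUMX ( Known, 'Date'[MonthIndex] ^ 2 )
VAR Slope = DIVIDE ( N * SumXY - SumX * SumY, N * SumX2 - SumX ^ 2 )
VAR Intercept = DIVIDE ( SumY - Slope * SumX, N )
RETURN
    Slope * SELECTEDVALUE ( 'Date'[MonthIndex] ) + Intercept

A duplicate measure for series 2 gives you both forecasts on one chart, and each trend measure can be formatted as a dashed line, as you suggested.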
For a forecast line to be available on a line chart, the following requirements must be met; otherwise you can't see this option on the analytics tab.
You need to change your datasets and data types accordingly.
The last requirement especially applies to you, because there is not a one-day period between your data values; they are in fact organized monthly.
I am trying to create a forecast, but this is the error that I get:
I am working with about 300,000 rows of data. Most of the report has already been built. My data just doesn't contain certain dates. How can I solve this issue?
So the issue boils down to the problem of how to create an evenly spaced timeline. You can easily achieve this in Power Query (a sketch follows the steps below):
Create a separate daily date table.
Outer join your observations onto the dates, which will give you null for the unobserved days.
Apply the "fill down" operation on your values column, so the last value is repeated until a new observation appears.
This evenly distributed time series is suitable for ML forecasting, at least when it comes to predicting trends. But the real power of this feature in Power BI is in predicting seasonality, and you most likely won't get that right with the above interpolation.
I currently have the following data in Power BI:
These are totals for multiple entities that are under the same location (ID 1).
What I would like to produce is the following:
This is the summation of the 3 totals over time, with the start and end dates of when they applied.
I will eventually try to use this to show a trend chart over time of how the totals changed.
Is something like this even possible in Power BI and/or DAX: first producing these results and then plotting a trend line like that? The trend line in this example would have just the 3 data points.
The only thing I can think of right now is to extrapolate each range out to 1 day at a time per the original screenshot's rows, and make the granularity of the chart daily instead of ranges like this. Then the summation becomes a lot simpler, as it's just by ID and Date. My only concern is the data volume that would be produced by extrapolating it out by days like that.
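To avoid that volume, what I was hoping is possible is something like the following measure (a rough sketch only; Entities[StartDate], Entities[EndDate], Entities[Total] and a standalone 'Date' table stand in for my real model), which for each date on the axis would sum the Total of every row whose start/end range covers that date:

Active Total =
// assumes 'Date' is a standalone date table with no relationship to Entities,
// so the FILTER below does the range test itself
VAR CurrentDate = MAX ( 'Date'[Date] )
RETURN
    CALCULATE (
        SUM ( Entities[Total] ),
        FILTER (
            Entities,
            Entities[StartDate] <= CurrentDate
                && Entities[EndDate] >= CurrentDate
        )
    )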
I am currently working on an ETL pipeline that uses BigQuery to store staging data, and then uses Dataprep to transform the data and store it in new BigQuery tables for production.
We have been struggling to find the most cost-effective way to apply these transforms to a small selection of the data, typically only the last X days from the current max date in the staging data table. For example, we need to calculate the max available date in the staging data and then retrieve all rows within the past 3 days of that date. Unfortunately, we can't rely on the 'max date' in the staging data always being up to date (this data is brought in from third-party APIs of varying quality and reliability).
At first I tried applying these transforms directly in Dataprep: getting the max date, creating a comparison column using DATEDIFF, and then discarding rows more than 3 days older than this max date. This proved very time-consuming and inefficient in terms of cost.
The next thing we tried was filtering the data down in BigQuery views, which would then be used as the initial datasets for the Dataprep flows (so the data is pre-filtered before Dataprep applies any transforms). We first tried doing this dynamically in BigQuery, like so:
WITH latest_partitiontime AS (
  SELECT _PARTITIONTIME AS pt
  FROM `{project}.{dataset}.{table}`
  GROUP BY _PARTITIONTIME
  ORDER BY _PARTITIONTIME DESC
  LIMIT 1
)
SELECT {columns}
FROM `{project}.{dataset}.{table}`
WHERE _PARTITIONTIME >= (SELECT pt FROM latest_partitiontime)
But on previewing the GB processed/estimated cost of the query, it seems very inefficient and expensive.
The next thing we tried was hard-coding the date, which for some reason is a lot cheaper/quicker:
SELECT {columns}
FROM `{project}.{dataset}.{table}`
WHERE _PARTITIONTIME >= '2018-08-08'
So our current plan is to maintain a view for each table and update the hard-coded date in the view SQL via the Python SDK each time the staging data load successfully completes (https://cloud.google.com/bigquery/docs/managing-views).
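For illustration, the view-update step we have in mind looks roughly like this (a sketch using the google-cloud-bigquery client; the project/dataset/view names are placeholders):

# update the view to hard-code a fresh cutoff date after each staging load
from datetime import date, timedelta
from google.cloud import bigquery

client = bigquery.Client()
view = client.get_table("my_project.my_dataset.staging_last_3_days")

# in practice the cutoff would come from the max _PARTITIONTIME in staging,
# not from today's date
cutoff = (date.today() - timedelta(days=3)).isoformat()
view.view_query = f"""
SELECT *
FROM `my_project.my_dataset.staging_table`
WHERE _PARTITIONTIME >= TIMESTAMP('{cutoff}')
"""
client.update_table(view, ["view_query"])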
It feels like we are potentially missing a much easier/more efficient solution to this problem, so I wanted to ask:
Is it more cost-effective to carry out this initial filtering by date in Dataprep or in BigQuery?
What is the most cost-effective way of filtering the data in the chosen product?
Are you familiar with the MERGE statement in standard SQL and the recently released clustering feature? MERGE could actually merge your data, and you can further customize it to read only certain partitions.
Example from the manual:
MERGE dataset.DetailedInventory T
USING dataset.Inventory S
ON T.product = S.product
WHEN NOT MATCHED AND quantity < 20 THEN
  INSERT (product, quantity, supply_constrained, comments)
  VALUES (product, quantity, true, ARRAY<STRUCT<created DATE, comment STRING>>[(DATE('2016-01-01'), 'comment1')])
WHEN NOT MATCHED THEN
  INSERT (product, quantity, supply_constrained)
  VALUES (product, quantity, false)
Hint: you can partition by null and leverage only the 'clustering level'.
I have 40 million rows in my dataset. Each day I may get an extra 100 rows. Obviously I don't want to have to import the whole 40 million each time I do a data refresh. Is it possible to do an incremental refresh where only the new rows are added?
I don't think incremental update as you describe it is possible yet.
It looks like you can push rows with the Power BI REST API, if you're happy to switch to that.
However, you might find this workaround useful:
Split your table and query into two: one where date <= 'somedate' and one where date > 'somedate'.
Add an "empty query" and use Table.Combine to join your two subtables; use this as your main table (a sketch follows these steps).
Whenever you need to refresh, only refresh the second query (the one with date > 'somedate').
Every once in a while, when that second query starts taking a long time, change somedate to the current date and do a full refresh.
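A minimal M sketch of the combine step; "HistoricRows" and "RecentRows" are assumed names for the two date-filtered queries described above:

// main table: the static history plus the small, frequently refreshed tail
let
    Combined = Table.Combine({HistoricRows, RecentRows})
in
    Combined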
The feature has now been implemented and is called Incremental refresh. Currently it is a Premium-only feature.