Updating the value of items in a list in NetLogo

I have 4 producers, each with attributes such as their new product's price, size, and customer rating. I defined a list for each attribute:
set att-price ((list p1-pr p2-pr p3-pr p4-pr)) ;prices of all the products of 4 producers
set att-size ((list p1-sz p2-sz p3-sz p4-sz))
set att-rates ((list p1-rt p2-rt p3-rt p4-rt))
As time passes, the prices get updated, so I defined this to make that happen:
set (item 0 att-price) (item 0 att-price) * 0.20 ; changes in the price of product of producer one
set (item 1 att-price) (item 1 att-price) * 0.08
set (item 3 att-price) (item 3 att-price) * 0.43
But it gives an error: "This isn't what you can "set" on!"
How can I update those items then?
Thanks

You use replace-item for this. For instance:
set att-price replace-item 0 att-price (0.2 * item 0 att-price)
That is, rather than setting an item of the list in place, we build a new list with the item replaced, and then set our list variable to that new list.
If you want to replace all items at once, you can use map. For instance, it looks like you might have a list of price ratios by which your prices change:
let ratios [ 0.2 1.0 0.08 0.43 ]
set att-price (map [ [ price ratio ] -> price * ratio ] att-price ratios)
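For readers less familiar with NetLogo, here is a minimal Python sketch of the same two ideas (replace one item by building a new list, or map over the whole list). The values are made up for illustration:

```python
# Python sketch of NetLogo's replace-item and map: lists are updated by
# building a new list, not by assigning to "item i" in place.
def replace_item(i, lst, value):
    # like NetLogo's replace-item: a new list with position i swapped out
    return lst[:i] + [value] + lst[i + 1:]

att_price = [100.0, 50.0, 80.0, 120.0]          # hypothetical prices
att_price = replace_item(0, att_price, 0.2 * att_price[0])

# the map version: scale every price by its own ratio
ratios = [0.2, 1.0, 0.08, 0.43]
scaled = [price * ratio for price, ratio in zip([100.0, 50.0, 80.0, 120.0], ratios)]
```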

Related

Redshift - SQL - Cumulative average for grouped results

I currently have a table like so:
Date   customer_id   sales
1/1    1             1
1/1    1             1
1/1    1             1
1/1    2             1
1/2    2             3
1/2    2             1
1/2    1             2
1/2    1             1
1/3    1             2
1/3    2             2
1/3    2             3
1/3    2             3
This eventually gets aggregated by the customer_id to get total_sales like so:
customer_id   total_sales
1             8
2             13
I then calculate one metric based off of this table, average_sales, which is defined as:
sum(total_sales) / count(distinct customer_id)
This would result in average_sales of 10.5 based on the information above.
However, I need to find a way to calculate this average but for each day on a cumulative basis like so:
Date 1/1 would be sum(total sales) for 1/1 / count(distinct customer_ids) for 1/1
Date 1/2 would be sum(total sales) for 1/1-1/2 / count(distinct customer_ids) for 1/1-1/2
Date 1/3 would be sum(total sales) for 1/1-1/3 / count(distinct customer_ids) for 1/1-1/3
The final day(1/3) should be equal to the overall average metric of 10.5.
Final table should look like this:
Date   average_sales
1/1    2     (4/2)
1/2    5.5   (11/2)
1/3    10.5  (21/2)
I've tried multiple things thus far with grouping/window functions but can't seem to get the right numbers. Any help would be greatly appreciated!
The main problem is that you can't use COUNT(DISTINCT) with a window.
But, there's a hacky way to calculate it anyway.
Work out the first date each customer id appears
Rank the customers in order of when they first appeared
MAX(cust_rank) is then the number of customers seen to date
This gives...
WITH
check_first_date AS
(
SELECT
*,
MIN(date_id) OVER (PARTITION BY cust_id) AS cust_id_first_date
FROM
example
),
rank_customers_by_time AS
(
SELECT
*,
DENSE_RANK() OVER (ORDER BY cust_id_first_date, cust_id) AS cust_rank
FROM
check_first_date
)
SELECT
date_id,
MAX(MAX(cust_rank)) OVER (ORDER BY date_id) AS customers_to_date,
SUM(SUM(sales)) OVER (ORDER BY date_id) AS sales_to_date
FROM
rank_customers_by_time
GROUP BY
date_id
ORDER BY
date_id
Then you can divide one by the other.
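As a sanity check of the trick, here is the same logic in plain Python over the question's sample data: a customer is counted once, on first appearance, and both running totals accumulate by date:

```python
rows = [  # (date_id, cust_id, sales) -- the question's sample data
    ("1/1", 1, 1), ("1/1", 1, 1), ("1/1", 1, 1), ("1/1", 2, 1),
    ("1/2", 2, 3), ("1/2", 2, 1), ("1/2", 1, 2), ("1/2", 1, 1),
    ("1/3", 1, 2), ("1/3", 2, 2), ("1/3", 2, 3), ("1/3", 2, 3),
]

seen = set()
custs_to_date = sales_to_date = 0
avg_by_date = {}
for date, cust, sales in rows:   # rows are already in date order
    if cust not in seen:         # count a customer once, on first appearance
        seen.add(cust)
        custs_to_date += 1
    sales_to_date += sales
    avg_by_date[date] = sales_to_date / custs_to_date  # last write per date wins
```

The final dictionary holds exactly the expected 2, 5.5 and 10.5.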
There are other ways to do the count-distinct over time, such as using correlated sub-queries. I suspect (I haven't tested) that it's even slower though.
SELECT
date_id,
(
SELECT COUNT(DISTINCT lookup.cust_id)
FROM example AS lookup
WHERE lookup.date_id <= example.date_id
)
AS customers_to_date,
SUM(SUM(sales)) OVER (ORDER BY date_id) AS sales_to_date
FROM
example
GROUP BY
date_id
ORDER BY
date_id
Here is a demo (using PostgreSQL, as the closest approximation to Redshift) with slightly different data to show that it works even when customer ids appear 'out of order'.
https://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=a5a37f3337e42123424c5cf1dbfe0152
EDIT: An even shorter (faster?) version with windows
For each customer_id, identify its first row (this implicitly requires the rows to have a unique id).
Then sum up the number of first rows that have occurred to date...
WITH
check_first_occurrence AS
(
SELECT
*,
MIN(id) OVER (PARTITION BY cust_id) AS cust_id_first_id
FROM
example
)
SELECT
date_id,
SUM(SUM(CASE WHEN id = cust_id_first_id THEN 1 ELSE 0 END)) OVER (ORDER BY date_id) AS customers_to_date,
SUM(SUM(sales )) OVER (ORDER BY date_id) AS sales_to_date
FROM
check_first_occurrence
GROUP BY
date_id
ORDER BY
date_id
https://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=94e5fb624a89170aaf819e2b3ccd01d6
This version should be significantly more friendly to Redshift's horizontal scaling, assuming, for example, that you distribute by customer and sort by date.

DAX, Power BI, summarize table based on two columns

Let's say that table looks like this :
id   step   time
1    a      0.5
1    a      0.7
1    b      1
1    b      1.5
2    a      0.9
2    a      0.8
It's super simplified, but as you can see, the same id and step can appear more than once.
The question is how to create a measure in Power BI (DAX) to sum up time under two or more conditions without listing all steps and ids (for example, for id "1" step "a" occurs twice, so the sum should be 1.2; for step "b" it is 2.5, etc.).
There is a nice function SUMMARIZE. You can create a table using this function:
Table 2 = SUMMARIZE(ALL('Table'), [id], [step], "time", SUM('Table'[time]))
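What SUMMARIZE is doing here is essentially a group-by-and-sum; a plain Python sketch over the question's sample rows makes the expected totals explicit:

```python
# Group the sample rows by (id, step) and sum the time column,
# mirroring SUMMARIZE(ALL('Table'), [id], [step], "time", SUM('Table'[time])).
from collections import defaultdict

rows = [(1, "a", 0.5), (1, "a", 0.7), (1, "b", 1.0),
        (1, "b", 1.5), (2, "a", 0.9), (2, "a", 0.8)]

total_time = defaultdict(float)
for id_, step, time in rows:
    total_time[(id_, step)] += time
```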

Calculate cumulative % based on sum of next row

I want to calculate a percentage as described below. It's a little bit tricky, and I'm stuck and not sure how to do it. Thanks to anyone who can help.
I have a few records in the table below, grouped by Range:
Range Count
0-10 50
10-20 12
20-30 9
30-40 0
40-50 0
50-60 1
60-70 4
70-80 45
80-90 16
90-100 7
Other 1
I want one more column holding the cumulative %, i.e. the running sum of Count divided by the overall total (145), something like below:
Range Count Cumulative % of Range
0-10 50 34.5% (which is 50/145)
10-20 12 42.7% (which is 62/145)
20-30 9 48.9% (which is 71/145)
30-40 0 48.9% (which is 71/145)
40-50 0 48.9% (which is 71/145)
50-60 1 49.6% (which is 72/145)
60-70 4 52.4% (which is 76/145)
70-80 45 83.4% (which is 121/145)
80-90 16 94.5% (which is 137/145)
90-100 7 99.3% (which is 144/145)
Other 1 100.0% (which is 145/145)
Follow the steps below.
1st step - Create a sort column from your Range column. I have replaced the "Other" value with 9999; you can use any bigger number that is unlikely to appear in your dataset. Convert this new column into a whole number:
Sort Column = if(Sickness[Range] = "Other",9999,CONVERT(LEFT(Sickness[Range],SEARCH("-",Sickness[Range],1,LEN(Sickness[Range])+1)-1),INTEGER))
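The sort-column logic, sketched in Python for clarity: take the number before the dash, and map "Other" to a large sentinel so it sorts last (9999 is an arbitrary choice, matching the DAX above):

```python
def sort_key(range_label):
    # "Other" gets a sentinel bigger than any real lower bound
    if range_label == "Other":
        return 9999
    # "0-10" -> 0, "10-20" -> 10, ...
    return int(range_label.split("-")[0])
```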
2nd step - Use the measure below to get the value:
Measure =
var RunningTotal = CALCULATE(SUM(Sickness[Count]),FILTER(all(Sickness),Sickness[Sort Column] <= MAX(Sickness[Sort Column])))
var totalSum = CALCULATE(SUM(Sickness[Count]),ALL())
Return
RunningTotal/totalSum
The output of this measure matches the required table exactly.
A cumulative calculation always requires an ordering. That said, if the values shown in your "Range" column are the real ones, an ascending sort on that text field already keeps the rows in the expected order, so the following will also work. Do the following to get your desired output.
Create the following measure-
count_percentage =
VAR total_of_count =
CALCULATE(
SUM(your_table_name[Count]),
ALL(your_table_name)
)
VAR cumulative_count =
CALCULATE(
SUM(your_table_name[Count]),
FILTER(
ALL(your_table_name),
your_table_name[Range] <= MIN(your_table_name[Range])
)
)
RETURN cumulative_count/total_of_count
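The arithmetic both measures perform is just a running total divided by the grand total; here it is checked in plain Python on the question's counts (note the question's 42.7% figure looks truncated; rounding 62/145 gives 42.8%):

```python
counts = [50, 12, 9, 0, 0, 1, 4, 45, 16, 7, 1]  # 0-10 ... 90-100, Other
total = sum(counts)  # 145

running = 0
running_totals = []
for c in counts:
    running += c
    running_totals.append(running)

# cumulative percentage of the grand total, rounded to one decimal
cumulative_pct = [round(100 * r / total, 1) for r in running_totals]
```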

How to incorporate thresholds for limit check with a provided static value to compare against?

Here is an example:
Column 1   Column 2   Column 3
2.99       4          Price OK
1.99       4          Price below limit
12.99
5.99       6          Price OK
1.99       6          Price below limit
8.99       6          Price OK
For Power BI context: Column 2 is a custom column from Power Query. The goal is to set a threshold value for the pack size in Column 2. In this instance, a pack size of 4 needs to check for a minimum price of $2.99 (higher is OK; below that, the result should be "Price below limit"); where Column 2 is blank, the result should also be blank. For a pack size of 6, the minimum price to check for is 5.99.
Is there a decent way to go about this?
Let's do this in two steps. First, create a column MinPrice that defines your minimum prices.
if [Column 2] = 4 then 2.99
else if [Column 2] = 6 then 5.99
else null
Then create a column that compares the actual price to the minimum:
if [Column 1] = null or [Column 2] = null then null
else if [Column 1] < [MinPrice] then "Price below limit"
else "Price OK"
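The same two-step check can be sketched in Python, with the thresholds kept in a dict (in the spirit of the reference-table idea); `None` stands in for Power Query's null, and the thresholds are the question's values:

```python
# Assumed pack-size thresholds from the question: 4 -> 2.99, 6 -> 5.99.
MIN_PRICE = {4: 2.99, 6: 5.99}

def price_check(price, pack_size):
    if price is None or pack_size is None:
        return None                      # blank inputs give a blank result
    floor = MIN_PRICE.get(pack_size)
    if floor is None:
        return None                      # no rule for this pack size
    return "Price below limit" if price < floor else "Price OK"
```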
If you have a bunch of unique values in Column 2 that you need rules for, then instead of the first step above, create a reference table that you can merge onto your original table, expanding the MinPrice column:
Column2 MinPrice
-----------------
4 2.99
6 5.99
8 7.99
...

Power BI Measure Calculate %

I need assistance, please, to calculate percentages based on the tables below.
Table 1 (fixed table - total vehicles per branch)
Table 2 (distinct count - total vehicles based on vehicle registration; the current measure calculation is shown below)
Table 3 (percentage of Table 2 / Table 1, using the branch as the unique identifier)
Table 3 (Percentage of Table 2 / Table 1, using Branch Table as Unique Identifier)
Branch 1 = 5
Branch 2 = 5
Branch 3 = 10
Branch 4 = 7
Count Loads = DISTINCTCOUNT('DEL-MAN-REP'[Registration Number])
Vehicles 1 = 5
Vehicles 2 = 3
Vehicles 3 = 1
Vehicles 4 = 4
Branch 1 % = 100%
Branch 2 % = 60%
Branch 3 % = 10%
Branch 4 % = 57%
Any assistance will be appreciated.
If you are in the Data view of Power BI (the matrix-looking icon in the side bar), there is a button in the 'Modeling' ribbon called 'New Column'. Click it and it will create a column named "Column" with the formula set to "Column =". Changing "Column" in the formula changes the column name.
Now type the name of the column that holds the 'Vehicles' data and divide it by the 'Branch' data. Finally, press the '%' button under Formatting in the Modeling tab.
If for whatever reason a branch has no vehicles (0), you will need an error handler, so it would look like IFERROR(Vehicles/Branch, 0).
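The IFERROR fallback amounts to a guarded division; a minimal Python sketch of the same idea (function name is hypothetical):

```python
def branch_pct(vehicles, branch_total):
    # like IFERROR(Vehicles / Branch, 0): fall back to 0 when the
    # branch total is zero and the division would fail
    try:
        return vehicles / branch_total
    except ZeroDivisionError:
        return 0
```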