I am working on a project where I am trying to distribute data across the months between two dates (Start Date & End Date) using an S-curve, with little to no knowledge about the subject.
Here is the scenario: I have multiple projects in my data set, each with 3 phases (Phase1, Phase2, Phase3). Every phase has its own start and end date and a budget amount allotted to that phase, as follows:
Projects Phases Budget START END
------------------------------------------------------
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23
Example 1 Phase2 2,000.00 01-Mar-23 01-Apr-24
Example 1 Phase3 5,000.00 01-Apr-24 01-Jan-27
Example 2 Phase1 3,000.00 01-Feb-22 01-Mar-23
Example 2 Phase2 5,000.00 01-Mar-23 01-Oct-23
Example 2 Phase3 9,000.00 01-Oct-23 01-Jan-26
I have created a Calendar table with a date column (month-year), taking the minimum month-year from [StartDate] and the maximum from [EndDate], and then created a new table cross-joining my project table and the date table to spread each project and phase across all the months between its start and end dates:
Projects Phases Budget START END Date
--------------------------------------------------------------
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Jan-23
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Feb-23
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Mar-23
I can't paste all of the data here, but the idea should be clear: the rows continue across all the months between each phase's start and end dates.
Now what I want to achieve is to distribute the Budget across all the months for each project and its phases using an S-curve distribution. I am using the standard S-curve equation to distribute my data, as discussed here: https://blog.arkieva.com/basics-on-s-curves/
The end result would look as follows (just an idea, not real results):
Projects Phases Budget START END Date Budget Per Month
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Jan-23 200
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Feb-23 700
Example 1 Phase1 1,000.00 01-Jan-23 01-Mar-23 Mar-23 100
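For context, a common form of the S-curve (and the kind of cumulative curve the linked post describes) is the logistic function: the cumulative share completed at elapsed fraction x of a phase is S(x) = 1 / (1 + exp(-k * (x - 0.5))), so each month's budget is the phase Budget times the difference in S between the end and the start of that month, rescaled so the whole phase adds up to 1. Below is a minimal DAX sketch of that idea as a calculated column on the cross-joined table; the table name 'Spread', the assumption that [Date] holds the first day of each month, and the steepness constant k are all placeholders to adapt, not part of the original question.

Budget Per Month =
-- Sketch only: 'Spread' is assumed to be the cross-joined table described above;
-- k is an assumed steepness constant.
VAR k = 10
VAR StartD = 'Spread'[START]
VAR EndD = 'Spread'[END]
VAR TotalDays = DATEDIFF ( StartD, EndD, DAY )
-- clip the current month to the phase window
VAR MonthFrom = MAX ( 'Spread'[Date], StartD )
VAR MonthTo = MIN ( EDATE ( 'Spread'[Date], 1 ), EndD )
-- elapsed fraction of the phase at the start and at the end of this month
VAR x0 = DIVIDE ( DATEDIFF ( StartD, MonthFrom, DAY ), TotalDays )
VAR x1 = DIVIDE ( DATEDIFF ( StartD, MonthTo, DAY ), TotalDays )
-- logistic S-curve values, rescaled so the whole phase sums to the full budget
VAR S0 = 1 / ( 1 + EXP ( -k * ( x0 - 0.5 ) ) )
VAR S1 = 1 / ( 1 + EXP ( -k * ( x1 - 0.5 ) ) )
VAR SMin = 1 / ( 1 + EXP ( k * 0.5 ) )
VAR SMax = 1 / ( 1 + EXP ( -k * 0.5 ) )
RETURN
    'Spread'[Budget] * DIVIDE ( S1 - S0, SMax - SMin )

Larger values of k concentrate more of the budget in the middle months; values around 6 to 10 give the classic slow-fast-slow shape.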
Outlet ID   Outlet Name     Order Date      Product     Qty   Net Value
------------------------------------------------------------------------
Mum_1       Prime Traders   12th Oct 2022   RoundBox    3     300
Mum_4       Avon Trading    13th Oct 2022   Slice 100   10    1000
I have date-wise transaction data for the past 20 months for retail outlets.
Any outlet that has been billed in the last 3 months can be classified as an 'Available Outlet'.
Eg: Available outlets for Sept 2022 are the ones that have been billed at least once across July, August & Sept 2022.
Similarly, I need a month-wise availability count in a column chart. Can someone please guide me on how to write a DAX measure for this?
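One possible shape for such a measure, assuming a fact table 'Sales' related to a marked date table 'Calendar' (both names and the column names are placeholders for whatever the actual model uses), is a distinct count of outlets over a rolling 3-month window:

Available Outlets =
-- Count outlets billed at least once in the 3-month window ending with the month in context.
-- 'Sales', 'Calendar' and the column names are assumptions; adjust to your model.
VAR PeriodEnd = MAX ( 'Calendar'[Date] )
VAR PeriodStart = EOMONTH ( PeriodEnd, -3 ) + 1
RETURN
    CALCULATE (
        DISTINCTCOUNT ( 'Sales'[Outlet ID] ),
        DATESBETWEEN ( 'Calendar'[Date], PeriodStart, PeriodEnd )
    )

With the calendar month on the axis of a column chart and this measure as the value, the September 2022 column would count outlets billed at least once between 1 July 2022 and 30 September 2022.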
I am trying to plot a bar graph for both the Sept and Oct waves. As you can see in the example data below, id identifies the individuals who are surveyed across time. On one graph I need to plot Sept in-house, Oct in-house, Sept out-house, and Oct out-house, and just show the proportion of people who said yes in each of those; not all the categories have to be taken into account.
I also have to show whiskers for the 95% confidence intervals of each of the respective categories.
* Example generated by -dataex-. For more info, type help dataex
clear
input float(id sept_outhouse sept_inhouse oct_outhouse oct_inhouse)
1 1 1 1 1
2 2 2 2 2
3 3 3 3 3
4 4 3 3 3
5 4 4 3 3
6 4 4 3 3
7 4 4 4 1
8 1 1 1 1
9 1 1 1 1
10 1 1 1 1
end
label values sept_outhouse codes
label values sept_inhouse codes
label values oct_outhouse codes
label values oct_inhouse codes
label def codes 1 "yes", modify
label def codes 2 "no", modify
label def codes 3 "don't know", modify
label def codes 4 "refused", modify
save tokenexample, replace
rename (*house) (house*)
reshape long house, i(id) j(which) string
replace which = subinstr(proper(which), "_", " ", .)
gen yes = house == 1
label def WHICH 1 "Sept Out" 2 "Sept In" 3 "Oct Out" 4 "Oct In"
encode which, gen(WHICH) label(WHICH)
statsby, by(WHICH) clear: ci proportion yes, jeffreys
set scheme s1color
twoway scatter mean WHICH ///
|| rspike ub lb WHICH, xla(1/4, noticks valuelabel) xsc(r(0.9 4.1)) ///
xtitle("") legend(off) subtitle(Proportion Yes with 95% confidence interval)
This has to be solved backwards.
The means and confidence intervals have to be plotted using twoway, as graph bar is a dead end here because it does not allow whiskers.
The confidence limits have to be put in variables before the graphics. Some graph commands, notably graph bar, will calculate means for you, but as said that is a dead end. So, we need to calculate the means too.
To do that you need an indicator variable for Yes.
The best way I know to get the results then is to reshape to a different structure and then apply ci proportion under statsby.
As a detail, the option jeffreys is explicit as a signal that there are different methods for the confidence interval calculation. You should choose one knowingly.
I need to build a report on the relationship between sick leaves (days) and man-years. The data is at monthly level, covers four years, and looks like this (there are also separate columns for year and business unit):
Month       Sick leaves (days)   Man-years
--------------------------------------------
January     35                   1.5
February    0                    1.63
March       87                   1.63
April       60                   2.4
May         44                   2.6
June        0                    1.8
July        0                    1.4
August      51                   1.7
September   22                   1.6
October     64                   1.9
November    70                   2.2
December    55                   2
It has to be possible for the user to filter by year, month, and business unit, and to get the sick leave days during the filtered time period (and in the selected business unit) compared to the total sum of man-years in the same period (and unit). Calculated from the test data above, the desired result is 488 / 22.36 = 21.82.
However, I have not managed to do what I want. The main problem is that the calculation takes into account only the months with nonzero sick leave days and ignores the man-years of the months with zero sick leave days (in the example data: February, June, July). I have tried several alternative functions (ALL, ALLSELECTED, FILTER…), but the results remain poor, so any information about a better solution will be highly appreciated.
It sounds like this has to do with the way DAX handles blanks (https://www.sqlbi.com/articles/blank-handling-in-dax/). Your context is probably filtering out the rows with blank values for "Sick-days". How to resolve this depends on how your data are structured, but you could try using variables to change your filter context or use "IF ( ISBLANK ( ... ) )" to make sure you're counting the blank rows.
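As a rough illustration of the variables approach (the table and column names here are assumptions, not taken from the question), computing the two sums independently and only then dividing keeps the man-years of months whose sick-leave value is zero or blank:

Sick Days per Man-Year =
-- Both sums see the same filter context (year / month / business unit slicers),
-- so months with zero or blank sick-leave days still contribute their man-years.
-- 'Absence' and its column names are placeholders; adjust to your model.
VAR SickDays = SUM ( 'Absence'[Sick leave days] )
VAR ManYears = SUM ( 'Absence'[Man-years] )
RETURN
    DIVIDE ( SickDays, ManYears )

With the sample data above and no filters applied, this returns 488 / 22.36 ≈ 21.82.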
I am using Stata and I have 6 years of daily returns for stocks that individuals hold in their portfolios. I would like to aggregate the daily returns to monthly portfolio returns. In some instances, the individual may hold more than one stock in the portfolio. I am struggling with writing the code to do this.
For a visual, my data looks like this:
I would like the results to look like this:
Where individual 2's portfolio return for the month of December 1996 is calculated as: 0.3 * 0.0031 + 0.7 * 0.0076 = 0.00625.
I have tried the collapse command such as
collapse Return, by (ID Year Month)
but this does not provide the same return that I calculated out in Excel.
I am able to make a weighted portfolio return for all the days using
bysort ID year month: gen wt_return = stock_weight * monthly_return
But this gives me daily returns. My trouble is then aggregating them into one return for the corresponding month.
As for the specifics, I would like to calculate the monthly portfolio return as the product of 1 + the weighted daily returns. As a last resort, the mean return for the month could work.
You don't show the monthly portfolio return for person 2 in 1991. Your initial example data doesn't show stock weights, but the desired example data does. Your variable Monthly Return is not reproducible. You should take the time to verify that your question is clear when posting: it is supposed to be clear to the public who will read it, not only to you.
I didn't bother checking whether your computations are correct, but below is what I understand you want. The procedure is simply to compute a weighted return and then add the weighted returns up by person-year-month groups. (I assume the stock weights apply to the stocks on a daily basis, which is what your example data implies.)
clear all
set more off
input ///
perid year month day str3 stockid return stockw
1 1991 1 1 "ABC" .01 1
1 1991 1 2 "ABC" .02 1
1 1991 1 3 "ABC" -.01 1
1 1991 1 31 "ABC" .004 1
1 1996 12 31 "ABC" .002 1
2 1991 1 1 "ABC" .01 .3
2 1991 1 2 "ABC" .02 .3
2 1996 12 31 "ABC" .004 .3
2 1991 1 1 "XYZ" .001 .7
2 1991 1 2 "XYZ" .004 .7
2 1996 12 31 "XYZ" .021 .7
end
* create weighted return
gen returnw = return * stockw
sort perid year month day
list, sepby(perid year month day)
* sum weighted returns by person, year, month
collapse (sum) returnw, by (perid year month)
list, sepby(perid)
If you want collapse to sum, then you must indicate it with (sum) (although I'm not clear whether this is what you want). By default, it computes the mean. Read help collapse thoroughly.
I have the following dataset (individual level data):
pid year state income
1 2000 il 100
2 2000 ms 200
3 2000 al 30
4 2000 dc 400
5 2000 ri 205
1 2001 il 120
2 2001 ms 230
3 2001 al 50
4 2001 dc 400
5 2001 ri 235
.........etc.......
I need to estimate average income for each state in each year and create a new dataset that would look like this:
state year average_income
ar 2000 150
ar 2001 200
ar 2002 250
il 2000 150
il 2001 160
il 2002 160
...........etc...............
I already have code that runs perfectly fine (using two loops). However, I would like to know whether there is a better way to do this in Stata, something like an SQL-style query.
This is shorter code than any suggested so far:
collapse average_income=income, by(state year)
This shouldn't need two loops, or any loops for that matter. There are in fact more efficient ways to do this. When you are repeating an operation on many groups, the bysort command is useful:
bysort year state: egen average_income = mean(income)
You also don't have to create a new dataset; you can just prune this one and save it. Start by keeping only the variables you want (state, year, and average_income), then get rid of duplicates:
keep state year average_income
duplicates drop
save "mynewdataset.dta"
You have the SQL tag on the question. This is a basic aggregation query in SQL:
select state, year, avg(income) as average_income
from t
group by state, year;
How you put this into a table depends on your database. One of the following typically works:
create table NewTable as
select state, year, avg(income) as average_income
from t
group by state, year;
Or:
select state, year, avg(income) as average_income
into NewTable
from t
group by state, year;