Instead of creating a graph, I need to simply output a number that is the average, max or min of some date range supplied. I have had good success with the following code:
rrdtool graph a.png --start=1325484000 --end=1364472365 DEF:power=/data1/bpoll/rrd/ws3/pdu/pdu316/a.rrd:ct12:AVERAGE 'PRINT:power:AVERAGE:%2.1lf'
However, the documentation says that specifying the CF (in this case AVERAGE) is deprecated, yet I am completely lost as to the new format; I can't seem to wrap my head around it. If I leave out the CF, it errors. Where exactly am I going wrong here?
PRINT:power:AVERAGE:%2.1lf
This is the 'old style' syntax, where you pass a dataset and the consolidation function to the PRINT directive.
With the new format you use a VDEF, and so do not need a consolidation function in the PRINT itself, as a VDEF is single-valued. However, you need to define the VDEF beforehand.
This is the new format:
VDEF:avgpower:power,AVERAGE
PRINT:avgpower:%2.1lf
In this example, we define a new VDEF value, avgpower, and print that. It has the same effect as the old-syntax code above, but it is in the new syntax, which also lets us add modifiers to the PRINT statement, such as :strftime to print the point in time of a maximum, and so on.
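Combined with the DEF from the question above, the whole command would then look something like this (same file, data source, and time range; only the PRINT part changes):
rrdtool graph a.png --start=1325484000 --end=1364472365 \
    DEF:power=/data1/bpoll/rrd/ws3/pdu/pdu316/a.rrd:ct12:AVERAGE \
    VDEF:avgpower=power,AVERAGE \
    'PRINT:avgpower:%2.1lf'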
I'm making a time-spending tracker based on the work I do every hour of the day.
Suppose I have 28 types of work listed in my tracker (a list I also have to extend from time to time), and I have predefined about 8 significance values that I relate to these 28 types of work.
As soon as I enter a type of work in one cell, I want the adjacent cell to be populated automatically with the significance value (from the range of 8 values) that I have predefined for it.
Every time I input a new or previously used type of work, the adjacent cell should automatically be matched with its relevant significance value and populated in real time.
I know how to do it using IF, IFS, and IF_OR conditions, but given the ever-expanding types of work and significance values, those formulas will become very big, complicated, and repetitive. I feel there's a more efficient way to achieve this. Also, I don't want the value to be selected from a drop-down list.
Guys, please help me out with the most efficient way to handle this. TUIA :)
Also, I've added a snapshot and a sample sheet describing the problem.
XLOOKUP() may work. Try-
=XLOOKUP(D2,A2:A,B2:B)
Or the FILTER() function, like:
=FILTER(B2:B,A2:A=D2)
You can use this formula for a whole column:
=INDEX(IFERROR(VLOOKUP(C14:C,A2:B9,2,0)))
Adapt the ranges to your actual tables so that the second argument covers all the potential work types and their significance values.
This is the formula that worked for me (for anybody's reference):
I created another reference sheet listing the types of work and their significance. From that sheet I look the value up with VLOOKUP, FILTER, or XLOOKUP, and I'm using Google Forms to input my data.
=ARRAYFORMULA(IFS(ROW(D:D)=1,"Significance",A:A="","",TRUE,VLOOKUP(D:D,Reference!$A:$B,2,0)))
I am new to VVC and I am going through the reference software's code trying to understand it. I have encoded and decoded videos using the reference software. I want to extract bitstream information from it: I want to know how many bits each macroblock takes. I am not sure which class I should be working with; for now I am looking at mv.cpp, QuantRDOQ.cpp, and TrQuant.cpp.
I am afraid of messing the code up completely, and I don't know where to add which lines of code.
[Screenshots: results before and after calculating and displaying the difference]
P.S. The linked pictures are from after my problem was solved; I attached them because of my query in the comments.
As the error says, getNumBins() is not supported by the CABAC estimator. So you should make sure you call it "only" during the encoding, and not during the RDO.
This should do the job:
if (isEncoding())
{
  before = m_BinEncoder.getNumBins();   // bins written so far, before this CU
}
coding_unit( cu, partitioner, cuCtx );  // encode the current CU
if (isEncoding())
{
  after = m_BinEncoder.getNumBins();    // bins written after this CU
  diff  = after - before;               // bins spent on this CU
}
The simplest solution that I'm aware of is at the encoder side.
The trick is to compute the difference in the number of written bits "before" and "after" encoding a Coding Unit (CU) (aka macroblock). This stuff happens in the CABACWriter.cpp file.
You should go to the coding_tree() function, where the coding_unit() function is called; coding_unit() is responsible for context-coding all syntax elements of the current CU.
There, you may call the function getNumBins() twice: once before and once after coding_unit(). The difference of the two values should do the job for you.
In Stata, is there a way to redirect the data that a command produces into a table instead of a graph?
Example: if someone created a normal probability plot of data with the pnorm var_name command, is there a way to redirect the data so that instead of appearing in a graph, it appears in a table?
To add to #Noobie's answer:
Different commands work in different ways. There's no better short summary.
What you can look out for includes
generate() options that produce new variables. (There is no absolute rule that the options have this name, but that or a similar name is the most common single variety.)
Options that allow saving results to new datasets.
Saved results, especially those visible after return list or ereturn list. These can be quite elaborate, e.g. saving of matrices of counts after tabulate.
More broadly, Stata commands aren't functions! One characteristic of a function, as so named in many languages or programs, is that there is a result, with special cases where the result is void or null. There clearly are statistical programs which in broad terms hinge on calling functions which have results, and what you see displayed is often a side-effect of that. Stata commands don't work like that in the sense that the results of a program can be various. In the case of commands designed just to show something, the "result" may be a display. It's worth noting that Mata, which underlies and underpins Stata, is more recognisably a C-like language, with (e.g.) many matrix extensions, which is based on functions (and much else).
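To make the saved-results point above concrete, here is a minimal sketch for a do-file, using Stata's shipped auto dataset (the matrix name counts is arbitrary); matcell() and the r() results are documented features of tabulate:
sysuse auto, clear
tabulate rep78, matcell(counts)   // one-way table; also saves the cell counts in a matrix
return list                       // scalars the command left behind, e.g. r(N) and r(r)
matrix list counts                // the counts, now available for further work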
Yes and no. It really depends on the command you are using. You should look at the help files first.
For instance, pnorm does not allow that. You can create the data yourself using the formula for pnorm described in the help file, where the cumulative distribution at some point is plotted against the so-called plotting position.
Other Stata commands allow you to generate the points directly. This is the case for kdensity for instance.
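For example, kdensity's documented generate() and nograph options put the plotted points into new variables instead of drawing a graph. A minimal sketch (the variable names x_pt and dens_pt are arbitrary):
sysuse auto, clear
kdensity price, generate(x_pt dens_pt) nograph   // x_pt: evaluation points, dens_pt: density estimates
list x_pt dens_pt in 1/5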
The answers in topics with similar titles haven't given me much of a resolution to my particular problem, but possibly I am not asking the right question. It might help knowing I'm an absolute noob when it comes to spreadsheets, so finding my way around is next to nil.
Currently I can set a basic function in the first cell, A1: =ROW()
Simple right? Well now here comes the complication. If I click on the bottom right of the cell and start dragging I can then apply that very same function to a whole range of cells. Let's say I apply it from A1:A10. Every cell within this group now has the same function.
Hooray! We did it, right? I applied a function to a range of cells each with their own output. But wait, if I then go back to the original cell and change its formula none of the other cells change with it. GRRRRR!!!!
There are a couple of fixes I've come up with but don't necessarily know how to implement. The first is to have every cell link back to the original cell and reference its function. This would be useful if I wanted to randomly scatter dependent cells about the document. The other would be much more useful in an orderly group where you know the exact dimensions by specifying in the original cell the size of the array you want to apply the function to.
With that said, let me hear your thoughts.
The closest I've come to an answer is to use FORMULA() which returns the formula used by a cell as text. Unfortunately all answers on evaluating the text resort to scripting. How strange! I thought something like this would be common. Might as well get to scripting.
Hold on, I may have spoken too soon. An array can be made with =MUNIT(), but it's only square. Drats!
Ok... I'm hoping the zebra stripes will eventually become their own answer unless someone else beats me to it. So, a simple array can be made with ={1,2;3,4}, where commas separate values by column and semicolons separate values by row, except that to generate it you have to press Control+Shift+Enter (because reasons?). I'm thinking now that I'll need functions that can generate lists of values based on a single function for each row, and pray that it'll work. So, back to looking. (Wow, this is taking forever.)
The way I was hypothesizing can't even generate a 1x1, e.g., ={ROW()} returns Err:512 which is a formula overflow.
Alright, in summary so far I've narrowed down the two options,
1) link every cell to the original formula
2) populate an array with a single formula
each with their own incomplete answer,
a) use FORMULA() to return the formula of a cell as text
b) create a hypothetical array like so ={LIST_OF_VALUES()}
These both require a strange form of the nonexistent EVALUATE() function to 'function' correctly. Isn't that fun?
Google Sheets handles case (b) by allowing ={ROW()} followed by Control+Shift+Enter to generate =ArrayFormula({ROW()}). A general way to fill an arbitrarily sized array from a single formula doesn't seem to exist in the world of spreadsheets. That's very saddening, because I can't think of a much better tool for what I want to do. Copy-paste it is, until I need to use macros.
Depending on your specific use case, creating a user-defined function may help:
use the Basic IDE to create your function;
apply it to any cells on any sheet;
modifying the Basic code will affect all cells where the function is used.
I've elaborated the steps in an answer on superuser.
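As a minimal sketch of what such a user-defined function can look like (the function name and logic below are invented purely for illustration), in the Basic IDE you would define something like:
Function TIMESTEN(nValue As Double) As Double
    ' hypothetical example: return ten times the value it is given
    TIMESTEN = nValue * 10
End Function
and then call it from any cell as =TIMESTEN(A1). Changing the Basic code later changes the behaviour everywhere the function is used.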
Sure, you could write some complex code to update functions, but wouldn't the easy way be just to drag it to the same range of cells the same way you did before? It should properly overwrite the existing code in there, and if it doesn't, you can just as easily delete the outdated code and drag the new code in.
Probably the best approach is to simply drag the amended formula over the range of cells (as advised by OldBunny2800). This is less error prone and easier to maintain than a custom macro.
Another option would be to use an array function. Then you only have to edit the function once, and the same edit will be automatically applied to the whole range of cells in that array function.
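The ={ROW()} experiments in the question point in this direction: entered as an array formula over a range, one formula fills the whole range, and editing that single formula updates every cell. A minimal sketch (the range A1:A10 is arbitrary): in Google Sheets, put
=ARRAYFORMULA(ROW(A1:A10))
in A1 and it fills A1:A10 with 1 through 10. In LibreOffice Calc, the equivalent is to select A1:A10, type =ROW(A1:A10), and confirm with Ctrl+Shift+Enter.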
I'm just starting to explore xlrd and, to be honest, am pretty new to programming altogether. I have been working through some of its simple examples and can't get this simple code to work:
import xlrd
book=open_workbook('C:\\Users\\M\\Documents\\trial.xlsx')
sheet=book.sheet_by_index(1)
cell=sheet.cell(0,0)
print cell
I get an error: list index out of range (referring to the 2nd to last bit of code)
I cut and pasted most of the code from the pdf...any help?
You say:
I get an error: list index out of range (referring to the 2nd to last
bit of code)
I doubt it. How many sheets are there in the file? I suspect that there is only one sheet. Indexing in Python starts from 0, not 1. Please edit your question to show the full traceback and the full error message. I suspect that it will show that the IndexError occurs in the 3rd-last line:
sheet=book.sheet_by_index(1)
I would play around with it in the console.
Execute each statement one at a time and then view the result of each. The sheet indexes count from 0, so if you only have one worksheet then you're asking for the second one, and that will give you a list index out of range error.
Another thing that you might be missing is that not all cells exist if they don't have data in them. Some do, but some don't. Basically, the cells that exist from xlrd's standpoint are the ones in the matrix nrows x ncols.
Another thing is that if you actually want the values out of the cells, use the cell_value method. That will return you either a string or a float.
Side note, you could write your path like so: 'C:/Users/M/Documents/trial.xlsx'. Python will handle the / vs \ on the backend perfectly and you won't have to screw around with escape characters.
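Putting those suggestions together, a minimal sketch (the path is the one from the question; sheet_by_index(0) assumes the first sheet is the one wanted):
import xlrd

book = xlrd.open_workbook('C:/Users/M/Documents/trial.xlsx')
print(book.nsheets)               # how many sheets the workbook actually has
print(book.sheet_names())         # their names, indexed from 0

sheet = book.sheet_by_index(0)    # first sheet; index 1 would be the second sheet
print(sheet.nrows)                # rows that exist from xlrd's standpoint
print(sheet.ncols)                # columns that exist
print(sheet.cell_value(0, 0))     # value of A1, returned as a string or a number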