Bifacial Radiance error when importing weather data - pvlib

I keep getting the error:
"Error! Solar altitude is -19 < -6 degrees and Idh = 26 > 10 W/m^2 on day 1 !Ibn is 0. Attempting to continue" for almost all days when I tried to import the TMY file from Meteonorm.
Is there a reason behind it?
Also, if we leave the "Mod Wanted" and "Row Wanted" parameters blank, will it report the average of all the modules?

This happens for the very early and late hours where there are irradiance values but the sun is below the horizon. Another particularity is that hourly data can be left- or right-labeled (i.e., data timestamped at 8 AM might be the average of 8 to 9 AM, or of 7 to 8 AM), and the sun position can then be calculated for 8:30 AM, 8 AM, or 7:30 AM. Previously our default was 7:30 AM, but if the sun had not risen yet, that caused issues. Besides the fact that you can now select whether the data is right-labeled or left-labeled, the sunrise and sunset hours get checked and the calculation time adjusted, so that, for example, if the sun rose at 7:40 AM and the data at the 8 AM timestamp covers 7 to 8 AM, the sun position will be calculated at the midpoint between sunrise and the hour, i.e. 7:50 AM.
In general, that error doesn't prevent you from continuing, and the irradiance contribution from those hours is minimal.
Regarding the second question: no, if module wanted and row wanted are not specified, the center of the array is chosen and sampled. It is not the average of all modules.
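If it helps to see that timestamp correction spelled out, here is a minimal sketch of the logic described above (plain Python with datetime arithmetic only; it is not the actual bifacial_radiance implementation, and the function name sun_position_time and its arguments are my own):

from datetime import datetime, timedelta

def sun_position_time(timestamp, interval_minutes=60, label="right",
                      sunrise=None, sunset=None):
    # "right" label: the record stamped at `timestamp` covers the interval
    # ending at `timestamp`; "left": it covers the interval starting there.
    if label == "right":
        start = timestamp - timedelta(minutes=interval_minutes)
        end = timestamp
    else:
        start = timestamp
        end = timestamp + timedelta(minutes=interval_minutes)

    midpoint = start + (end - start) / 2   # default: middle of the interval

    # If the sun rises inside the interval, evaluate at the midpoint between
    # sunrise and the end of the interval instead (sunrise 7:40 with a
    # 7-8 AM interval -> 7:50), and likewise for an interval containing sunset.
    if sunrise is not None and start < sunrise < end:
        midpoint = sunrise + (end - sunrise) / 2
    if sunset is not None and start < sunset < end:
        midpoint = start + (sunset - start) / 2
    return midpoint

# Right-labeled 8 AM record with sunrise at 7:40 -> evaluate at 7:50
print(sun_position_time(datetime(2021, 6, 1, 8, 0),
                        sunrise=datetime(2021, 6, 1, 7, 40)))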

Auth.NET API Refund more than 180 days

I have a customer who wants us to roll back all monthly charges for the last 18 months and redo them on a different card (long story; they feel they have a legitimate ask).
I see in the documentation that the 'default' is that I can refund up to 180 days back.
Is there any way to go beyond 180 days on refunds if 180 is the default? Is there a way to change that 180 day limit to 18 months, at least, for a short time?
I have no problem doing credits, and reauthing on a different card.
I just need to know how to get past the 180 days, if that's even possible.
I do have all auth codes and transaction ids that I need to go back 18 months.
Authorize.net calls this Expanded Credit Capabilities. The client would need to complete the form provided in the link.

RRD database does not store historical data

I don't think it's a bug, but it's tough to find the correct answer on the Internet to understand what's happening. I created an RRD (1-minute step) database with 3 RRAs:
RRA:AVERAGE:0.5:1m:1d
RRA:AVERAGE:0.5:1h:6M
RRA:AVERAGE:0.5:1d:1y
So I assume that when I update the data points I should be able to save 1 year of data. However, I can see only 24 hours of data, no matter how long I keep emitting data points to the RRD database.
This is the rrdtool info output from one RRD database I created: https://gist.github.com/meow-watermelon/206a10a83c937c771f6cfc5fa7a2e948
Is there anything I missed, or any unknown corner case that I hit, which caused only 24 hours of data to be shown?
Thanks.
The RRA consolidated data points (CDPs) are only written to the RRA when there are enough primary data points to make one. Thus, with a 1-minute step and an xff of 0.5, you would need to be collecting data every minute for more than 12 hours (plus 1 minute!) to make up a full CDP for the 1-day RRA.
In addition, CDPs are updated on boundaries relative to UTC; this means that for your largest (1d) RRA, you would need to have at least 12 hours of data collected in the 24 hours prior to 00:00 UTC, and then the next update would write the CDP.
This means that you should collect data at the standard interval (60s) for more than 24 hours before you can be certain of seeing your CDP appear in the coarsest RRA; the best test is to collect data every minute for 48 hours and then check your 1d-granularity RRA.
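If it helps to see the timing arithmetic, here is a rough sketch of when the first 1-day CDP can appear (plain Python that only mimics the xff and UTC-boundary rules described above; it is not rrdtool itself, and first_daily_cdp is just an illustrative name):

from datetime import datetime, timezone

CDP_STEP = 24 * 60 * 60   # the 1d RRA consolidates 1440 one-minute PDPs
XFF = 0.5                 # at least half of those PDPs must be known

def first_daily_cdp(start):
    # CDP boundaries fall on multiples of CDP_STEP since the Unix epoch,
    # i.e. 00:00 UTC for a 1-day RRA. Assuming one update per minute with
    # no gaps from `start` onward, find the first boundary whose window
    # holds enough data (>= XFF) for a CDP to be written.
    epoch = start.timestamp()
    boundary = (int(epoch) // CDP_STEP + 1) * CDP_STEP
    while True:
        window_start = boundary - CDP_STEP
        known = (boundary - max(epoch, window_start)) / CDP_STEP
        if known >= XFF:
            return datetime.fromtimestamp(boundary, tz=timezone.utc)
        boundary += CDP_STEP

# Start feeding data at 18:00 UTC: the first day only has 6 h (< 50%) of
# data, so the first 1d CDP does not appear until the second 00:00 UTC.
print(first_daily_cdp(datetime(2023, 5, 1, 18, 0, tzinfo=timezone.utc)))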

Strange behaviour of CPU balance in t2.2xlarge AWS EC2 instance [duplicate]

I am using a T2.medium instance. A third of the day I am doing intensive statistical calculations, and I figured that for the remaining 2/3 of the time I would "earn" credits at a rate of 24 per hour.
But that is not happening. This is my usage the last two days:
And this is my credit account:
I hadn't used it for (more than) a day until yesterday at 6 pm. I used it intensively for five hours. Then I would expect my "account" to accumulate 24 credits per hour, but for 9-10 hours almost nothing happens, then it accumulates as expected for 9 hours, and then it goes flat again.
I am unable to figure out what is going on and whether it is a fault. Does anyone have a good explanation?
EDIT: I have included a week of activity below. I still can't figure out the algorithm:
Update: The rules used to calculate t2 CPU credit balances appear to have changed such that the issue prompting this question should no longer have an impact.
Based on customer feedback, we’ve updated T2 instances with a new CPU Credit allocation policy that is the same as or better than the previous policy in all cases.
...
Now, earned CPU Credits do not expire until the instance is terminated or stopped. A T2 instance can still earn up to the same maximum level allowed by the instance size. The CPUCreditBalance will now increase anytime the current CPUCreditUsage is below the baseline and can grow to the maximum allowed for the instance size
https://forums.aws.amazon.com/ann.jspa?annID=5196
h/t: Last Week in AWS for the update.
The original answer follows.
This question has caused me quite a bit of mental anguish over the last few hours, because the graphs almost make sense, based on what I know about t2 instances. Almost, but not quite, and I couldn't put my finger on the problem. That's the worst kind. Particularly being a huge fan of the value proposition offered by t2 machines.
But I did finally figure out what's going on here.
There's one concept of CPU credits the documentation doesn't seem to explain, but the math works out, and the explanation holds up nicely under real-world observations:
The most recently earned CPU credits are spent first, not last.
Does order matter? It does.
For testing, I used a t2.micro (primarily because I had an idle one that had been running for several days, and needed something to do, and I didn't want the extra "initial" credits of a new instance to cloud up the observations) but all instance types in the t2 class have similar behavior.
By way of background: in the t2 class, CPU credits are earned at different rates, but CPU credits are used at the same rate for all instance types in the class:
A CPU Credit provides the performance of a full CPU core for one minute.
The t2.micro and t2.small have only one core, so they can burn up to 1 credit per minute or 60 credits per hour, at 100% CPU utilization. The t2.medium and t2.large are dual core, so they can burn up to 2 credits per minute, or 120 credits per hour, at 100% CPU utilization on both cores.
If 1 credit = 100% of 1 core for 1 minute, then 1 credit is also equal to 20% of 1 core for 5 minutes. Since the Cloudwatch graph interval is in 5 minute increments, I set up the following test:
On a t2.micro that has been running for several weeks with essentially no load, I installed lookbusy, a handy utility that allows you to make a machine "look busy" with parameters you specify -- e.g., keep the CPU at 20% utilization.
$ screen -S eat_cpu
$ ./lookbusy -v -c 20 -r fixed
This does exactly what you'd expect, burning 1 CPU credit every 5 minutes. The "CPU Credit Usage" graph confirms this, showing 1 credit being used every 5 minutes. (The CPU Utilization graph, and top, both confirm the 20%.)
But what's happening to my credit balance? It's being depleted by 1 credit every 5 minutes. That seems wrong, doesn't it? I mean, yes, I just said that's how many I'm using, but... I'm also supposed to be earning 6 credits per hour, so I should only be depleting my balance by a net of 0.5 credits every 5 minutes, right?
Hold on... checking the numbers, again: I'm earning 6 per hour, spending 12 per hour, so, yes... that seems like it should be a net decrease of only 6 per hour, not 12... right? Clearly, something doesn't add up the way I expected, because my balance is definitely going down by 12 per hour, and my CPU is definitely only running at 20%.
I seem to be earning no credits to offset my usage. How is that possible?
Unless...
Unused earned credits from a given 5 minute interval expire 24 hours after they are earned
Well, 24 hours ago, my instance was completely idle. During that hour, I earned 6 credits that I... didn't (?) use. Am I not using them now? Shouldn't I be?
any expired credits are removed from the CPU credit balance at that time, before any newly earned credits are added
Crud. Could this be related? This hour, I earned 6 new credits. But right before that, I lost 6 credits from 24 hours ago. Then I spent 12 credits this hour... so my balance went down by 6, up by 6, and down by another 12. Well, that explains the -12 change for the hour, but...
Can that be the reason?
I'm a voracious reader of documentation, so I knew about the expiring credits aspect... but I assumed all along that this was nothing more than the reason an idle instance hovers near its maximum balance, and did not have any other significance. How could it? If I have less than the maximum (6 x 24 = 144 for a t2.micro), then how could I have credits that need to expire?
If my credits from 24 hours ago are always counting against me, wouldn't my balance tend toward zero, regardless of what I do?
Unless...
After tossing and turning most of the night while contemplating sliding around piles of imaginary tokens (representing CPU credits) on an imaginary table top (representing time)... I realized that the "expiration" rule would cause exactly the behavior we observe if, counter-intuitively, credits are not spent in the order in which they are earned (FIFO), but rather in the reverse order (LIFO).
Following that line of reasoning, the explanation for what my 20% CPU test is actually doing is this, where the first hour of my test was "hour 0" --
     | spends 6+6 credits  | expire 6 credits
test | earned this many    | earned this many
hour | hours before hour 0 | hours before hour 0
-----+---------------------+--------------------
  0  |  -1,  -2            |  -24
  1  |  -3,  -4            |  -23
  2  |  -5,  -6            |  -22
  3  |  -7,  -8            |  -21
  4  |  -9, -10            |  -20
  5  | -11, -12            |  -19
  6  | -13, -14            |  -18
  7  | -15, -16            |  -17
And they meet in the middle.
Is this genuine, or am I guessing? I'm not guessing, and here's the evidence:
After 8 hours, my CPU credit usage graph remains solid, still holding steady at 1 credit per 5 minutes, but after the same 8 hours, my CPU credit balance finally begins to deplete at the (slower) rate I originally expected: 0.5 credits every 5 minutes.
Apparently, as I worked backward in time, spending previously earned credits "newest first," I caught up with my old credits that were about to expire, finally reaching the point where I was using them before they had a chance to expire. Now, I have no credits that are 24 hours old, and so no credits are expiring -- so I am no longer losing credits before new credits are earned. I am now able to keep the 6 that I earn per hour, because I used up the old ones, decreasing the net impact to my credit balance to the expected level.
This explains the only reservation I had about the graphs in the question: why, when utilization drops off, does it take so long for the balance to rebound?
The TL;DR answer is this: the balance doesn't rebound immediately, after a burst of heavy utilization, because you still have unused credits from 24 hours prior, which are canceling out your newly-earned credits, until you reach the point in time when you don't have any 24-hour-old unused credits. When that happens, your credit balance increases again.
Leave the instance completely idle for 24 hours and you will eventually see the balance steadily (for the most part) rise to the maximum again, as expected. Anything less than 24 hours completely idle will cause your balance to remain perpetually somewhere below the max.
My test script eventually depleted my credit balance almost all the way down. When I killed the process eating the CPU, the credit balance began to recover immediately, at the expected rate of 6 credits per hour.
Conversely, when I took a different machine that had seen low utilization for 24 hours, ran its CPU at 100% for a few minutes, and then took it back to idle, the credits did not begin to accumulate immediately... being offset by old, expiring ones.
Quotes are from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html.
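To make the LIFO-plus-expiry bookkeeping above a little more concrete, here is a tiny sketch that just walks the two pointers from the hour-by-hour table: spending works backward from the most recently banked pre-test hour at two hours per test hour, while expiry works forward from the oldest hour, and they meet after about 8 hours. This is my own toy restatement of the table, not anything AWS publishes:

# Before the test, the bank holds 6 credits for each of the previous 24
# idle hours (hours -24 .. -1). Each test hour spends 12 credits
# newest-first from that pre-test bank and expires the 6 credits that
# have just turned 24 hours old.
newest_unspent = -1    # spending walks backward, two hours per test hour
oldest_alive = -24     # expiry walks forward, one hour per test hour

for hour in range(24):
    print(f"hour {hour}: spends credits from hours {newest_unspent} and "
          f"{newest_unspent - 1}, expires credits from hour {oldest_alive}")
    newest_unspent -= 2
    oldest_alive += 1
    if newest_unspent <= oldest_alive:
        print(f"pre-test credits exhausted after {hour + 1} hours; from "
              "here on the balance only drops by the net 6 credits/hour")
        break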


Trouble calculating correct decimal digits

I am trying to create a program that will do some simple calculations, but am having trouble with the program not doing the correct math, or placing the decimal correctly, or something. Some other people I asked cannot figure it out either.
Here is the code: http://pastie.org/887352
When you enter the following data:
Weekly Wage: 500
Raise: 3
Years Employed: 8
It outputs the following data:
Year    Annual Salary
1       $26000.00
2       $26780.00
3       $27560.00
4       $28340.00
5       $29120.00
6       $29900.00
7       $30680.00
8       $31460.00
And it should be outputting:
Year    Annual Salary
1       $26000.00
2       $26780.00
3       $27583.40
4       $28410.90
5       $29263.23
6       $30141.13
7       $31045.36
8       $31976.72
Here is the full description of the task:
8.17 (Pay Raise Calculator Application) Develop an application that computes the amount of money an employee makes each year over a user-specified number of years. Assume the employee receives a pay raise once every year. The user specifies in the application the initial weekly salary, the amount of the raise (in percent per year) and the number of years for which the amounts earned will be calculated. The application should run as shown in Fig. 8.22 in your text. (Fig. 8.22 is the output I posted above as what my program should be producing.)
Opening the template source code file. Open the PayRaise.cpp file in your text editor or IDE.
Defining variables and prompting the user for input. To store the raise percentage and years of employment that the user inputs, define int variables rate and years, in main after line 12. Also define double variable wage to store the user’s annual wage. Then, insert statements that prompt the user for the raise percentage, years of employment and starting weekly wage. Store the values typed at the keyboard in the rate, years and wage variables, respectively. To find the annual wage, multiply the new wage by 52 (the number of weeks per year) and store the result in wage.
Displaying a table header and formatting output. Use the left and setw stream manipulators to display a table header as shown in Fig. 8.22 in your text. The first column should be six characters wide. Then use the fixed and setprecision stream manipulators to format floating-point values with two positions to the right of the decimal point.
Writing a for statement header. Insert a for statement. Before the first semicolon in the for statement header, define and initialize the variable counter to 1. Before the second semicolon, enter a loop-continuation condition that will cause the for statement to loop until counter has reached the number of years entered. After the second semicolon, enter the increment of counter so that the for statement executes once for each number of years.
Calculating the pay raise. In the body of the for statement, display the value of counter in the first column and the value of wage in the second column. Then calculate the new weekly wage for the following year, and store the resulting value in the wage variable. To do this, add 1 to the percentage increase (be sure to divide the percentage by 100.0 ) and multiply the result by the current value in wage.
Save, compile and run the application. Input a raise percentage and a number of years for the wage increase. View the results to ensure that the correct years are displayed and that the future wage results are correct.
Close the Command Prompt window.
We can not figure it out! Any help would be greatly appreciated, thanks!
Do not store money as floating point. This will end only in tears. Store money as an integral number of cents.
The reason for this is that floating point math on a computer is necessarily inexact. You know that 0.40 / 2 = 0.20, but it's entirely possible that the computer will say it is 0.19999999999999, and that is not an error. The internal representation of floating point numbers makes it impossible for a computer to exactly represent some fractions, much like you cannot write out an exact decimal representation of 1/3 (without an infinite amount of paper).
When you are dealing with numbers that have fractional parts and for which inexactness is not acceptable (e.g. money), you must compute using fixed-point math. In general, you might use a fixed point library, but for an assignment like this, if you're not allowed to do so, an int that stores a number of pennies will do just fine, so long as you understand how integer division works. You will have to write more math code and account for the rounding yourself, though. But that's what you want. You want absolute control over rounding.
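As a concrete illustration of the integer-cents idea (sketched in Python for brevity; the helper names are mine, and the same approach carries straight over to the C++ assignment):

def dollars_to_cents(text):
    # Parse a dollar amount like "500" or "500.25" into an integer number
    # of cents, so no binary floating-point representation is ever involved.
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2] or "0")

def apply_raise(cents, percent):
    # Increase `cents` by an integer percentage, rounded to the nearest
    # cent. All arithmetic stays in integers, so rounding is explicit.
    return (cents * (100 + percent) + 50) // 100

weekly = dollars_to_cents("500")               # 50000 cents
annual = weekly * 52                           # 2600000 cents
print(f"${annual // 100}.{annual % 100:02d}")  # $26000.00
weekly = apply_raise(weekly, 3)                # 3% raise, whole cents
print(weekly)                                  # 51500 cents = $515.00/week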
I changed your for loop to this:
cout << (i+1) << " $" << wage*52 << "\n";   // print this year's annual salary
wage = wage * (1 + (raise/100.0));          // then compound the weekly wage for next year
And it worked! It looks like you misread the wording of the problem.
I think that the intention is to receive a 3% raise each year, but you are actually only adding 3% of the starting salary ($780 in this case) each year. You may want to explore modifying the wage value on each pass in the loop (I won't present a solution as I suspect that this is a homework problem, yes?).
The best way to catch this sort of problem is to run it in a debugger and step through each line looking for when the results don't match your expectations. It's usually pretty easy at that point to figure out where your logic went astray.
Your problem is that your program ignores compounding. You are calculating the dollar value of the raise once, and using that for each increase. Once you get your first raise, the value of your second raise needs to be calculated based on your new wage, not your original wage.
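To see the difference numerically, here is a short sketch (Python, purely for illustration; plain floating point is fine here because it only demonstrates the compounding logic, not production money handling) that reproduces both tables from the question: one column adds the flat $780 raise every year, the other compounds the weekly wage by 3% each year as the assignment intends:

weekly, raise_pct, years = 500.0, 3, 8

flat = weekly * 52                    # buggy behaviour: raise computed once
flat_step = flat * raise_pct / 100    # 3% of the starting salary = $780
compounded_weekly = weekly

print(f"{'Year':<6}{'Flat raise':>14}{'Compounded':>14}")
for year in range(1, years + 1):
    compounded = compounded_weekly * 52
    print(f"{year:<6}{flat:>14.2f}{compounded:>14.2f}")
    flat += flat_step                              # always adds $780
    compounded_weekly *= 1 + raise_pct / 100.0     # 3% of the current wage

# Year 8 comes out to 31460.00 with the flat raise and 31976.72 with
# compounding, matching the two tables in the question.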