I recently took an AWS exam (Certified Developer - Associate). As you may know, the scoring range is from 100 to 1000, and the minimum score to pass the exam is 720.
Unfortunately I scored 615 points, which means I did not pass the exam. AWS e-mailed me my score, but the e-mail contains no transcription/percentage breakdown for each part of the test, so I am not able to see which topics I need to study more before my next attempt.
Has anyone here taken this exam? If so, could you please tell me how I can work out how many more questions I would have needed to answer correctly in order to pass?
The exam consists of 65 questions in total. With a passing score of 72% (720/1000), that amounts to approximately 47 questions (65 * 0.72).
A score of 615 means you had about 40 questions right (65 * 0.615), so you would have needed to answer roughly 7 more questions correctly to pass. This is only an estimate, as AWS may change the passing threshold over time.
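If it helps, here is the same back-of-the-envelope estimate as a few lines of Python. It assumes the scaled score maps roughly linearly onto the fraction of correct answers, which AWS does not guarantee:

    total_questions = 65

    # Rough estimate only: assumes the scaled score (100-1000) maps linearly
    # onto the fraction of correctly answered questions.
    to_pass = round(total_questions * 720 / 1000)   # ~47 correct answers needed
    achieved = round(total_questions * 615 / 1000)  # ~40 correct answers
    print(to_pass - achieved)                       # ~7 more correct answers to pass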
615 is not too bad. I would suggest you take some more practice exams and focus on the topics that need more attention according to those tests. To be safe, I would only take the real exam once you score about 80% on practice exams.
I have a customer who wants us to roll back all monthly charges for the last 18 months, and redo them on a different card (long story, they feel they have a legitimate ask)
I see in the documentation that the 'default' is that I can refund up to 180 days back.
Is there any way to go beyond 180 days on refunds if 180 is the default? Is there a way to change that 180 day limit to 18 months, at least, for a short time?
I have no problem doing credits, and reauthing on a different card.
I just need to know how to get past the 180 days, if that's even possible.
I do have all auth codes and transaction ids that I need to go back 18 months.
Authorize.net calls this Expanded Credit Capabilities. The client would need to complete the form provided in the link.
For a project I am working on, which uses annual financial report data (of multiple categories) from companies that have either been successful or gone bust/into liquidation, I previously created a (fairly well performing) model on AWS SageMaker using a multiple linear regression algorithm (specifically, the AWS stock algorithm for logistic regression/classification problems, the 'Linear Learner' algorithm).
This model just produces a simple "company is in good health" or "company looks like it will go bust" binary prediction, based on one set of annual data fed in; e.g.
query input: {"data": [{
    "Gross Revenue": -4000,
    "Balance Sheet": 10000,
    "Creditors": 4000,
    "Debts": 1000000
}]}
inference output: "in good health" / "in bad health"
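(For context on how I query it: a deployed Linear Learner endpoint is called through the SageMaker runtime. A minimal sketch with boto3 is below; the endpoint name is made up, the features are sent as CSV in the training column order, and the exact response shape may differ between algorithm versions.)

    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    # Features in the same column order used at training time (toy values).
    payload = "-4000,10000,4000,1000000"

    response = runtime.invoke_endpoint(
        EndpointName="company-health-linear-learner",  # assumed endpoint name
        ContentType="text/csv",
        Body=payload,
    )
    result = json.loads(response["Body"].read())
    # A Linear Learner binary classifier typically returns something like:
    # {"predictions": [{"score": 0.83, "predicted_label": 1.0}]}
    print(result)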
I trained this model by ignoring which year each company's values were from and piling in all of the annual financial report data (i.e. one year's financial data for one company = one input line) for the training, along with the label of "good" or "bad" - a good company is one which has existed for a while but hasn't gone bust, a bad company is one which was eventually found to have gone bust; e.g.:
label | Gross Revenue | Balance Sheet | Creditors | Debts
good  | 10000         | 20000         | 0         | 0
bad   | 0             | 5             | 100       | 10000
bad   | 20000         | 0             | 4         | 100000000
I hence used these multiple features (gross revenue, balance sheet...) along with the label (good/bad) in my training input, to create my first model.
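(In case the setup matters: as far as I understand, Linear Learner's CSV training mode wants the label in the first column, numeric values only, and no header row, so I assembled the training input roughly like this. The column names are just my own placeholders.)

    import pandas as pd

    # Toy rows matching the table above; column names are placeholders.
    df = pd.DataFrame(
        [
            ["good", 10000, 20000, 0, 0],
            ["bad", 0, 5, 100, 10000],
            ["bad", 20000, 0, 4, 100000000],
        ],
        columns=["label", "gross_revenue", "balance_sheet", "creditors", "debts"],
    )

    # Label first, numeric only, no header - the format Linear Learner's
    # CSV training channel expects (as far as I understand it).
    df["label"] = df["label"].map({"good": 0, "bad": 1})
    df.to_csv("train.csv", header=False, index=False)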
I would like to use the same features as before as input (gross revenue, balance sheet...) but over multiple years; e.g. take the values from 2020 & 2019 and use these (along with the eventual company status of "good" or "bad") as the single input for my new model. However, I'm unsure of the following:
Is this an inappropriate use of logistic regression machine learning? i.e. is there a more suitable algorithm I should consider?
Is it fine, or terribly wrong, to just use the same technique as before but combine the data for both years into one input line, like the table below (a rough sketch of this flattening follows the table)?
label | Gross Revenue (2019) | Balance Sheet (2019) | Creditors (2019) | Debts (2019) | Gross Revenue (2020) | Balance Sheet (2020) | Creditors (2020) | Debts (2020)
good  | 10000 | 20000 | 0    | 0      | 30000 | 10000 | 40  | 500
bad   | 100   | 50    | 200  | 50000  | 100   | 5     | 100 | 10000
bad   | 5000  | 0     | 2000 | 800000 | 2000  | 0     | 4   | 100000000
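(A rough sketch of the flattening I have in mind, using pandas; the company names, column names and values are just placeholders.)

    import pandas as pd

    # Assumed long format: one row per company per year.
    long_df = pd.DataFrame(
        {
            "company": ["A", "A", "B", "B"],
            "year": [2019, 2020, 2019, 2020],
            "gross_revenue": [10000, 30000, 100, 100],
            "balance_sheet": [20000, 10000, 50, 5],
            "creditors": [0, 40, 200, 100],
            "debts": [0, 500, 50000, 10000],
            "label": ["good", "good", "bad", "bad"],
        }
    )

    # Pivot to one row per company, with one column per (feature, year) pair.
    features = long_df.pivot(
        index="company",
        columns="year",
        values=["gross_revenue", "balance_sheet", "creditors", "debts"],
    )
    features.columns = [f"{name}_{year}" for name, year in features.columns]

    # Attach the per-company label and inspect the wide training set.
    wide_df = features.join(long_df.groupby("company")["label"].first())
    print(wide_df)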
I would personally expect that a company which has gotten worse over time (i.e. whose finances are worse in 2020 than in 2019) should be more likely to be found "bad"/likely to go bust. So I would hope that, if I feed in data like the above example (i.e. the earlier year's data comes before the later year's data on an input line), my training job ends up creating a model which gives greater weighting to the earlier year's data when making predictions.
Any advice or tips would be greatly appreciated - I'm pretty new to machine learning and would like to learn more
UPDATE:
Using a Long Short-Term Memory recurrent neural network (LSTM RNN) is one potential route I think I could take, but this seems to be commonly used with multivariate data over many dates; my data only has 2 or 3 dates' worth of multivariate data per company. I would want to train on the data I have for all the companies, over the few dates' worth of data there are.
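(For the LSTM idea, my understanding is that the input would be shaped as (companies, time steps, features per year); here is a minimal Keras sketch with placeholder data and an assumed 2 time steps of 4 features each.)

    import numpy as np
    from tensorflow import keras

    # Placeholder data: 100 companies, 2 yearly time steps, 4 features each.
    X = np.random.rand(100, 2, 4).astype("float32")
    y = np.random.randint(0, 2, size=(100,))          # 0 = good, 1 = bad

    model = keras.Sequential([
        keras.layers.Input(shape=(2, 4)),             # (time steps, features)
        keras.layers.LSTM(16),
        keras.layers.Dense(1, activation="sigmoid"),  # probability of "bad"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=10, batch_size=16)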
I once developed a so-called genetic time series in R: I used a genetic algorithm which sorted out the best solutions from multivariate data, which were then fitted on a VAR in differences or a VECM. Your data seems more macroeconomic or financial than user-centric, and VAR or VECM seems appropriate. (It is certainly possible to treat time-series data in the same way so that LSTM or other approaches can be used, but these are very common.) However, I do not know whether VAR in differences or VECM works with binary classification labels. Perhaps if you calculated a metric (continuous) outcome, which you later encode into a categorical label (or label it first as categorical), then VAR or VECM may also be appropriate.
However, you could add up all the yearly data points into one data point per firm to forecast its survival, but you would lose a lot of insight. If you are interested in time-series ML, which works a little differently than neural networks or elastic net (which could also be used with time series), let me know and we can work something out, or I'll paste you some sources.
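To give you an idea of what I mean by a VAR in differences, here is a minimal sketch with statsmodels. Note the caveat: the series below is a made-up, longer per-firm series, since a VAR is not estimable on only the 2-3 yearly points you mention.

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Placeholder multivariate series (levels) for a single firm.
    df = pd.DataFrame({
        "gross_revenue": [100, 120, 90, 80, 70, 60, 40, 30],
        "balance_sheet": [200, 210, 180, 150, 140, 120, 100, 90],
        "debts":         [50, 55, 80, 100, 130, 150, 180, 200],
    })

    # "VAR in differences": difference the levels first, then fit the VAR.
    diffed = df.diff().dropna()
    results = VAR(diffed).fit(maxlags=1)

    # One-step-ahead forecast of the differenced series.
    print(results.forecast(diffed.values[-results.k_ar:], steps=1))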
Summary:
1.) It is possible to use LSTM or elastic net (time points may be dummies or treated as a cross-sectional panel), or you can use VAR in differences or VECM with a slightly different outcome variable.
2.) It is possible, but you will lose information over time.
All the best,
Patrick
This is my first post on Stack Overflow. I want to learn how to code and develop software. I've enrolled in computer science at my local community college, and I have a question about my 'flowchart.' My question is: does my flowchart adhere to what the question is asking? Here's the question:
Draw a flowchart that would provide a workable solution to the following problem.
Management would like a printed report that shows the bonus pay awarded based on the number of years a person has worked for the company, as well as the total bonus pay.
The data file is on a disk.
The file contains the required fields (date of hire and total annual pay) and may include fields that are not needed for this problem.
The bonus for those with 30 or more years of service is 10% of total annual pay.
The bonus for those with at least 20 years of service but less than 30 years is 6% of total annual pay.
The bonus for those with at least 5 years of service but less than 20 years is 3% of total annual pay.
An employee who has not worked for the company for at least 5 years receives a bonus of $200.
I've bounced the flowcharts I've done off Reddit, but I literally have no frame of reference until I get further into the courses, so I need someone to do a once-over and confirm whether the flowchart works...
Your flowchart is a good start. The things I would add are arrows showing the direction of the flow, as well as a block that shows where and how you would print each bonus.
There are also some good flowcharting applications, like Lucidchart, that are easy to use.
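One way to sanity-check the decision branches before (or after) drawing them is to write the same rules as plain code. Here is a rough Python sketch of the bonus logic from the problem statement; it assumes years of service has already been derived from the date of hire and skips the file-reading part.

    def bonus_for(years_of_service, total_annual_pay):
        """Bonus rules taken from the problem statement."""
        if years_of_service >= 30:
            return total_annual_pay * 0.10
        elif years_of_service >= 20:
            return total_annual_pay * 0.06
        elif years_of_service >= 5:
            return total_annual_pay * 0.03
        else:
            return 200.00

    # Each branch above maps to one decision diamond in the flowchart;
    # the report would also accumulate the bonuses into a running total.
    employees = [(32, 60000), (12, 45000), (2, 30000)]      # (years, annual pay)
    print(sum(bonus_for(y, pay) for y, pay in employees))   # 6000 + 1350 + 200 = 7550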
After hours of searching the web (including SO), I am requesting advice from the community. RRD seems to be the right tool for this, but I have not been able to get a straight answer so far.
My question is: is it possible to get RRD to output a graph for the day that averages data from the past year?
In other words, I want the "view span" to be one day long, but the "data span" to extend over the last 12 months, so that the value at 6pm is computed as the average of ALL traffic measured at 6pm over the last 12 months.
Any hints, or instructions welcomed!
There is no direct way to create such a graph, but at least in theory it would be possible to build such a chart using multiple DEF lines together with the SHIFT operation ... you would have to use a program to generate the necessary command line, though.
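As a rough illustration of what that generating program could look like (the RRD file name, DS name, and time-spec strings below are assumptions and would need adjusting; 365 DEF lines will also make the graph slow to render):

    import subprocess

    RRD, DS = "traffic.rrd", "traffic"   # assumed RRD file and data source name
    DAYS = 365                           # one DEF per day over the last year

    args = ["rrdtool", "graph", "day-average.png",
            "--start", "end-1d", "--end", "now"]

    # One DEF per past day, SHIFTed forward so every day lines up on "today".
    names = []
    for i in range(1, DAYS + 1):
        name = f"d{i}"
        args += [
            f"DEF:{name}={RRD}:{DS}:AVERAGE:start=now-{i + 1}d:end=now-{i}d",
            f"SHIFT:{name}:{i * 86400}",
        ]
        names.append(name)

    # Average all shifted series in RPN: d1,d2,+,d3,+,...,365,/
    rpn = names[0] + "".join(f",{n},+" for n in names[1:]) + f",{DAYS},/"
    args += [f"CDEF:avg={rpn}", "LINE1:avg#0000FF:average day over the last year"]

    subprocess.run(args, check=True)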
I've seen this twice now, and I just don't understand it. When calculating a "Finance Charge" for a fixed-rate loan, applications make the user enter all possible loan amounts and their associated finance charges. Even though these rates are calculable (30%), the application makes the user fill out a table like this:
Loan Amount | Finance Charge
100         | 30
105         | 31.5
etc., with the loan amounts being provided from $5 to $1500 in $5 increments.
We are starting a new initiative to rebuild this system. Is there a valid reason for doing a rate table this way? I would imagine that we should keep a simple interest field, and calculate it every time we need it.
I'm really at a loss as to why anyone would hardcode a table like that instead of calculating...I mean, computers are kind of designed to do stuff like this. Right?
It looks like compound interest where you're generously rounding up. The 100 case + 1 is pretty boring. But the 105 case + 1 is interesting.
T[0] = FC[105] => 31.5
T[1] = FC[136.5] => ?
Where does 136.5 hit -- 135 or 140? At 140, you've made an extra $1.05.
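In numbers (a quick sketch, assuming the table effectively rounds the compounded balance up to the next $5 bucket before applying the 30% rate):

    import math

    RATE, BUCKET = 0.30, 5

    def table_charge(amount):
        # Round up to the next $5 loan-amount bucket, then read the 30% charge.
        return math.ceil(amount / BUCKET) * BUCKET * RATE

    def calculated_charge(amount):
        return amount * RATE

    # 105 plus its 31.50 charge compounds to 136.50; the table bumps it to 140.
    print(table_charge(136.5))       # 42.0
    print(calculated_charge(136.5))  # 40.95 -> the extra $1.05 mentioned above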
Or... If the rates were ever not calculable, that would be one reason for this implementation.
Or... the other reason (and the one I would go for if annoyed enough) would be that these rates were constantly changing, the developer got fed up with it, and gave the end users an interface where they could set the rates on their own. The $5 buckets seem outrageous, but maybe they were real jerks...