In MFC, what is the BEST way to convert a month string to an int, e.g. April to 4? "BEST" here could mean shortest in code, fastest in execution or least memory usage.
I doubt this would be the fastest in execution or least memory usage, but I feel it is pretty short and simple.
int ToNumber(LPCTSTR lpMonthName)
{
    COleDateTime datetime;
    // Note the space before "2000": without it the parser sees "1 April2000".
    datetime.ParseDateTime(CString(_T("1 ")) + lpMonthName + _T(" 2000"),
                           VAR_DATEVALUEONLY, LANG_USER_DEFAULT);
    return datetime.GetMonth();
}
I got the idea from how I've seen it done in C#:
DateTime.ParseExact(month, "MMMM", CultureInfo.CurrentCulture ).Month
I was reviewing my handouts for our algorithm class and I started to think about this question:
Given different types of coins with different values, find all coin configurations to add up to a certain sum without duplication.
During class, we solved the problem to find the number of all possible ways for a sum and the least number of coins for a sum. However, we never tried to actually find the solutions.
I was thinking about solving this problem with dynamic programming.
I came up with a recursive version (for simplicity I only collect the solutions):
void solve(vector<string>& result, const string& currSoln, int index, int target, const vector<int>& coins)
{
    if(target < 0)
    {
        return;
    }
    if(target == 0)
    {
        result.push_back(currSoln);
        return; // without this we would keep recursing on already-negative targets
    }
    for(int i = index; i < (int)coins.size(); ++i)
    {
        stringstream ss;
        ss << coins[i];
        string newCurrSoln = currSoln + ss.str() + " ";
        solve(result, newCurrSoln, i, target - coins[i], coins);
    }
}
However, I got stuck when trying to use DP to solve the problem.
I have 2 major obstacles:
I don't know what data structure I should use to store previous answers
I don't know what my bottom-up procedure (using loops to replace the recursion) should look like.
Any help is welcome, and some code would be appreciated!
Thank you for your time.
In a dp solution you generate a set of intermediate states, and count how many ways there are to get to each one. Then your answer is the count that wound up in a success state.
So, for change counting, the states are that you got to a specific amount of change. The counts are the number of ways of making change. And the success state is that you made the correct amount of change.
To go from counting solutions to enumerating them you need to keep those intermediate states, and also keep a record in each state of all of the states that transitioned to that one - and information about how. (In the case of change counting, the how would be which coin you added.)
Now with that information you can start from the success state and recursively go backwards through the dp data structures to actually find the solutions rather than the count. The good news is that all of your recursive work is efficient - you're always only looking at paths that succeed so waste no time on things that won't work. But if there are a billion solutions, then there is no royal shortcut that makes printing out a billion solutions fast.
If you wish to be a little clever, though, you can turn this into a usable enumeration. You can, for instance, say "I know there are 4323431 solutions, what is the 432134'th one?" And finding that solution will be quick.
It is immediately obvious that you can take a dynamic programming approach. What isn't obvious is that in most cases (depending on the denominations of the coins) you can use the greedy algorithm, which is likely to be more efficient. See Cormen, Leiserson, Rivest, Stein: Introduction to Algorithms, 2nd ed., Problem 16-1.
We have a bespoke datetime C++ class which represents time as the number of seconds passed since the epoch, stored as an int64. The class provides a number of helper functions to read and write various datetime formats.
Unfortunately it can't handle dates before the epoch, because its methods rely on gmtime() and mktime() for many operations, and on our Windows system those don't support dates before the epoch. Does anyone know of replacements for gmtime() and mktime() which support negative values on Windows?
An example of this limitation is our application's inability to store birthdays before 1970; that's because every date has to go through this class.
I may not be clear on what I am asking, because of my limited knowledge of datetime implementation and my reluctance to dig into that huge legacy class, so if you feel this question could be framed another way, or that I should be looking for something different, feel free to suggest it.
You could use Boost.DateTime, or use the Win32 APIs directly rather than the CRT.
It's likely that you have a lot of testing ahead of you to ensure that handling of data does not change in your rework. Make sure you have exhaustive unit tests in place for the library as it stands before you begin any refactoring.
If you have to consider your values being valid across multiple different locations in the world, use UTC time as your canonical form and translate to/from local time as needed for sensible input/display.
Maybe you've solved this problem already since it was years ago, but you could also use ICU. Examples at: http://userguide.icu-project.org/datetime/calendar/examples
Coming soon to a std::lib implementation near you:
#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    std::cout << "Valid range is ["
              << sys_days{year::min()/January/1} + 0us << ", "
              << sys_days{year::max()/December/31} + 23h + 59min + 59s + 999'999us
              << "]\n";
}
Output:
Valid range is [-32767-01-01 00:00:00.000000, 32767-12-31 23:59:59.999999]
Preview available here.
I have a super-simple class representing a decimal # with fixed precision, and when I want to format it I do something like this:
assert(d.DENOMINATOR == 1000000);
char buf[100];
sprintf(buf, "%d.%06d", d._value / d.DENOMINATOR, d._value % d.DENOMINATOR);
Astonishingly (to me at least) this does not work. The %06d term comes out all 0s even when d.DENOMINATOR does not evenly divide d._value. And if I throw an extra %d in the format string, I see the right value show up in the third spot -- so it's like something is secretly creating an extra argument between my two.
If I compute the two terms outside of the call to sprintf, everything behaves how I expect. I thought to reproduce this with a more simple test case:
char testa[200];
char testb[200];
int x = 12345, y = 1000;
sprintf(testa, "%d.%03d", x/y, x%y);
int term1 = x/y, term2 = x%y;
sprintf(testb, "%d.%03d", term1, term2);
...but this works properly. So I'm completely baffled as to exactly what's going on, how to avoid it in the future, etc. Can anyone shed light on this for me?
(EDIT: Problem ended up being that d._value and d.DENOMINATOR are both long longs so %d doesn't suffice. Thanks very much to Serge's comment below which pointed to the problem, and Mark's answer submitted shortly thereafter.)
Almost certainly your term components are a 64-bit type (perhaps long on a 64-bit system) which is getting passed into the non-type-safe sprintf. Thus when you create an intermediate int the size is right and it works fine.
g++ will warn about this and many other useful things with -Wall. The preferred solution is of course to use C++ iostreams for your formatting as they're totally type safe.
The alternate solution is to cast the result of your expression to the type that you told sprintf to expect so it pulls the proper number of bytes out of memory.
Finally, never use sprintf when almost every compiler supports snprintf which prevents all sorts of silly mistakes. Your code is fine now but when someone modifies it later and it runs off the end of the buffer you may spend days tracking down the corruption.
I'm dealing with the problem of reading a 64bit unsigned integer unsigned long long from a string. My code should work both for GCC 4.3 and Visual Studio 2010.
I read this question and its answers on the topic: Read 64 bit integer string from file, and thought that strtoull would do the job just fine and more efficiently than a std::stringstream. Unfortunately, strtoull is not available in Visual Studio's stdlib.h.
So I wrote a short templated function:
template <typename T>
T ToNumber(const std::string& Str)
{
T Number;
std::stringstream S(Str);
S >> Number;
return Number;
}
unsigned long long N = ToNumber<unsigned long long>("1234567890123456789");
I'm worried about the efficiency of this solution, so: is there a better option in this scenario?
See http://social.msdn.microsoft.com/Forums/en-US/vclanguage/thread/d69a6afe-6558-4913-afb0-616f00805229/
"It's called _strtoui64(), _wcstoui64() and _tcstoui64(). Plus the _l versions for custom locales.
Hans Passant."
By the way, the way to Google things like this is to notice that Google automatically assumes you're wrong (just like newer versions of Visual Studio) and searches for something else instead, so be sure to click the link to search for what you actually told it to search for.
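A small portability shim (a sketch; the MSVC names are the ones quoted above, and the function name here is illustrative) keeps call sites uniform across both compilers:

```cpp
#include <cstdlib>
#include <string>

// Parse an unsigned 64-bit decimal string. MSVC spells the function
// _strtoui64; other toolchains provide the standard strtoull.
unsigned long long ParseU64(const std::string& s)
{
#ifdef _MSC_VER
    return _strtoui64(s.c_str(), NULL, 10);
#else
    return strtoull(s.c_str(), NULL, 10);
#endif
}
```

Error handling (the endptr argument, ERANGE) is omitted for brevity but worth adding in real code.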
Of course you can easily enough write your own function to handle simple decimal strings. The standard functions handle various alternatives according to numeric base and locale, which make them slow in any case.
Yes, stringstream will add a heap allocation atop all that. No, performance really doesn't matter until you can tell the difference.
There is a faster option, to use the deprecated std::strstream class which does not own its buffer (hence does not make a copy or perform an allocation). I wouldn't call that "better" though.
You could parse the string 9 digits at a time starting from the rear and multiplying by 1 000 000 000 ^ i, i.e. (last 9 digits * 1) + (next 9 digits * 1 000 000 000) + ... For "1234567890123456789" that works out to
(1000000000000000000) + (234567890000000000) + (123456789)
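A sketch of that chunked parse (going front-to-back is a little simpler than starting from the rear, and each chunk of at most 9 digits fits comfortably in an unsigned long):

```cpp
#include <cstdlib>
#include <string>

// Parse a decimal string 9 digits at a time:
// result = result * 10^chunk_length + chunk_value.
unsigned long long ParseChunked(const std::string& s)
{
    static const unsigned long long kPow10[10] = {
        1ULL, 10ULL, 100ULL, 1000ULL, 10000ULL, 100000ULL,
        1000000ULL, 10000000ULL, 100000000ULL, 1000000000ULL
    };
    unsigned long long result = 0;
    for (std::string::size_type i = 0; i < s.size(); i += 9) {
        std::string chunk = s.substr(i, 9);          // at most 9 digits
        result = result * kPow10[chunk.size()]
               + std::strtoul(chunk.c_str(), NULL, 10);
    }
    return result;
}
```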
I know that you can get the digits of a number using modulus and division. The following is how I've done it in the past (pseudocode, so as to make students reading this do some work for their homework assignment):
int pointer getDigits(int number)
    initialize int pointer to array of some size
    initialize int i to zero
    while number is greater than zero
        store result of number mod 10 in array at index i
        divide number by 10 and store result in number
        increment i
    return int pointer
Anyway, I was wondering if there is a better, more efficient way to accomplish this task? If not, is there any alternative methods for this task, avoiding the use of strings? C-style or otherwise?
Thanks. I ask because I'm going to be wanting to do this in a personal project of mine, and I would like to do it as efficiently as possible.
Any help and/or insight is greatly appreciated.
The time it takes to extract the digits will be dwarfed by the time required to dynamically allocate the array. Consider returning the result in a struct:
struct extracted_digits
{
int number_of_digits;
char digits[12];
};
You'll want to pick a suitable value for the maximum number of digits (12 here, which is enough for a 32-bit integer). Alternatively, you could return a std::array<char, 12> and encode the terminal by using an invalid value (so, after the last value, store a 10 or something else that isn't a digit).
Depending on whether you want to handle negative values, you'll also have to decide how to report the unary minus (-).
Unless you want the representation of the number in a base that's a power of 2, that's about the only way to do it.
Smacks of premature optimisation. If profiling proves it matters, then be sure to compare your algo to itoa - internally it may use some CPU instructions that you don't have explicit access to from C++, and which your compiler's optimiser may not be clever enough to employ (e.g. AAM, which divs while saving the mod result). Experiment (and benchmark) coding the assembler yourself. You might dig around for assembly implementations of ITOA (which isn't identical to what you're asking for, but might suggest the optimal CPU instructions).
By "avoiding the use of strings", I'm going to assume you're doing this because a string-only representation is pretty inefficient if you want an integer value.
To that end, I'm going to suggest a slightly unorthodox approach which may be suitable. Don't store them in one form, store them in both. The code below is in C - it will work in C++, but you may want to consider using C++ equivalents - the idea behind it doesn't change, however.
By "storing both forms", I mean you can have a structure like:
typedef struct {
    int ival;
    char sval[sizeof("-2147483648")]; // enough for 32-bits
    int dirtyS;
} tIntStr;
and pass around this structure (or its address) rather than the integer itself.
By having macros or inline functions like:
inline void intstrSetI (tIntStr *is, int ival) {
    is->ival = ival;
    is->dirtyS = 1;
}

inline char *intstrGetS (tIntStr *is) {
    if (is->dirtyS) {
        sprintf (is->sval, "%d", is->ival);
        is->dirtyS = 0;
    }
    return is->sval;
}
Then, to set the value, you would use:
tIntStr is;
intstrSetI (&is, 42);
And whenever you wanted the string representation:
printf ("%s\n", intstrGetS(&is));
fprintf (logFile, "%s\n", intstrGetS(&is));
This has the advantage of calculating the string representation only when needed: in the sequence above, the printf recalculates only if the value is dirty, and the fprintf immediately after it never has to.
This is similar to a trick I use in SQL with precomputed columns and triggers. The idea there is that you only perform calculations when needed. So an extra column holding the indexed, lowercased last name, along with an insert/update trigger to maintain it, is usually a lot more efficient than select lower(non_lowercased_last_name). That's because it amortises the cost of the calculation (done at write time) across all reads.
In that sense, there's little advantage if your code profile is set-int/use-string/set-int/use-string.... But, if it's set-int/use-string/use-string/use-string/use-string..., you'll get a performance boost.
Granted this has a cost, at the bare minimum extra storage required, but most performance issues boil down to a space/time trade-off.
And, if you really want to avoid strings, you can still use the same method (calculate only when needed), it's just that the calculation (and structure) will be different.
As an aside: you may well want to use the library functions to do this rather than handcrafting your own code. Library functions will normally be heavily optimised, possibly more so than your compiler can make from your code (although that's not guaranteed of course).
It's also likely that an itoa, if you have one, will outperform sprintf("%d"), given its more limited use case. You should, however, measure, not guess! That goes for the library functions, but also for this entire solution (and the others).
It's fairly trivial to see that a base-100 solution could work as well, using the "digits" 00-99. In each iteration, you'd do a %100 to produce such a digit pair, thus halving the number of steps. The tradeoff is that your digit table is now 200 bytes instead of 10. Still, it easily fits in L1 cache (obviously, this only applies if you're converting a lot of numbers; otherwise efficiency is moot anyway). Also, you might end up with a leading zero, as in "0128".
Yes, there is a more efficient way, but it is not portable. Intel's FPU has a special BCD number format. So all you have to do is call the corresponding assembler instruction, which converts ST(0) to BCD format and stores the result in memory. The instruction is FBSTP.
Mathematically speaking, for a nonzero integer a the number of decimal digits is 1 + int(log10(abs(a))), plus (a < 0) if you count the minus sign (a = 0 needs a special case).
You will not use strings but go through floating point and the log function. If your platform has any kind of FP accelerator (every PC or similar does), that will not be a big deal, and it will beat any "string-based" algorithm (which is nothing more than an iterative divide-by-ten and count).