This question already has answers here:
Platform-independent GUID generation in C++?
(6 answers)
Closed 6 years ago.
I'm trying to generate a GUID in a platform-agnostic manner; most search results suggest using Boost or platform-specific libraries. I remember once coming across the following snippet and I was wondering whether this is a reliable way of generating GUIDs:
unsigned int generateGuid()
{
    char c;
    return (unsigned int)&c;
}
More specifically, does this guarantee a unique value every time? And if not, what are some good lightweight, cross-platform approaches to doing this?
A basic example:
#include <boost/uuid/uuid.hpp>            // uuid class
#include <boost/uuid/uuid_generators.hpp> // generators
#include <boost/uuid/uuid_io.hpp>         // streaming operators etc.
#include <iostream>

int main() {
    boost::uuids::uuid uuid = boost::uuids::random_generator()();
    std::cout << uuid << std::endl;
}
Example output:
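(illustrative only; the generator produces a different random UUID on every run)
7f2d1b4e-9c3a-4f6d-8e5b-0a1c2d3e4f5a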
No, it is not. This function returns some address from the stack, depending on where it is called. In two subsequent calls, or in a tight loop, it will always be the same address. For example, in Visual Studio the default stack size is 1 MB, I think, so in the best case you will get one million unique values. A typical program does not use more than 1 KB of stack, so in that case you will get at most one thousand unique values.
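A quick sketch (not part of the original question or answer) that demonstrates the problem; the cast goes through std::uintptr_t only so the snippet also compiles on 64-bit targets:
#include <cstdint>
#include <iostream>

unsigned int generateGuid()
{
    char c;
    // Truncating a stack address to unsigned int, shown only to illustrate the flaw.
    return static_cast<unsigned int>(reinterpret_cast<std::uintptr_t>(&c));
}

int main()
{
    // Both calls run at the same stack depth, so they print the same "GUID".
    std::cout << generateGuid() << '\n';
    std::cout << generateGuid() << '\n';
}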
This question already has answers here:
Using Bit Fields to save memory
(2 answers)
When is it worthwhile to use bit fields?
(11 answers)
How slow are bit fields in C++
(3 answers)
C/C++: Force Bit Field Order and Alignment
(7 answers)
Declare a bit in C++
(5 answers)
Closed 2 years ago.
I wanted to create a bit-sized variable to store the only two possible values, 0 and 1.
My initial approach was to create a class with member variables of type int storing the value 0 or 1.
But then I found that I could also use a bit field and create my own struct with a custom number of bits for each member. My question is: does using a struct with 1-bit fields give better memory usage, and hence a faster implementation, than the class approach for an array of structs of about 4000 x 4000?
The code:
#include <iostream>
using namespace std;

struct maze
{
    unsigned int top    : 1;
    unsigned int right  : 1;
    unsigned int bottom : 1;
    unsigned int left   : 1;
};

int main()
{
    maze access;
    cout << sizeof(access);
    access.top = 1;
    access.right = 1;
    access.bottom = 1;
    access.left = 1;
    cout << endl << sizeof(access);
    return 0;
}
Edit:
I think I have found the answer: https://stackoverflow.com/a/46544000/13868755
It is really hard to speculate about performance in this case, and there is a fair chance you'll make it slower.
Unless this is a proven bottleneck, you should focus on writing readable and testable code. Does the struct storing the four values make the code cleaner? If so, use the struct. Are the values actually boolean, i.e. true or false only? Use bool to make that clear to the reader.
This will likely be just as performant as your current implementation and take as much space.
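For reference, a minimal sketch of that bool-based struct (the name Cell is illustrative, not from the question):
#include <iostream>

struct Cell
{
    bool top = false;
    bool right = false;
    bool bottom = false;
    bool left = false;
};

int main()
{
    Cell c;
    c.top = true;
    // Typically prints 4: one byte per bool (the exact size is implementation-defined).
    std::cout << sizeof(Cell) << '\n';
}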
To gauge the performance you need working code to benchmark, as the performance implications will differ between applications; guessing beforehand, though a fun thought experiment, isn't really useful in practice.
An exception is when you know you have a lot of data (gigabytes, say) and you are memory-constrained. In that case you should indeed prioritize memory over both code readability and CPU usage, and going straight to std::bitset looks like a promising option, with good memory-usage guarantees and proven correctness.
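A sketch of what the std::bitset route could look like, assuming a fixed 4000 x 4000 grid with four wall bits per cell (all names here are illustrative, not from the question):
#include <bitset>
#include <cstddef>
#include <iostream>
#include <memory>

constexpr std::size_t kWidth = 4000;
constexpr std::size_t kHeight = 4000;
constexpr std::size_t kBitsPerCell = 4;             // top, right, bottom, left

using Grid = std::bitset<kWidth * kHeight * kBitsPerCell>;

inline std::size_t bitIndex(std::size_t x, std::size_t y, std::size_t wall)
{
    return (y * kWidth + x) * kBitsPerCell + wall;  // wall: 0=top .. 3=left
}

int main()
{
    // About 8 MB of bits; allocate on the heap so we do not overflow the stack.
    auto grid = std::make_unique<Grid>();
    grid->set(bitIndex(10, 20, 0));                 // mark "top" of cell (10, 20)
    std::cout << grid->test(bitIndex(10, 20, 0)) << '\n';
}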
If memory is only a secondary concern, simply using packed structs/arrays (look for compiler options for that) should be sufficient and way simpler and cleaner to write.
This question already has answers here:
Is floating point math broken?
(31 answers)
Math precision requirements of C and C++ standard
(1 answer)
Closed 4 years ago.
I have a program that was giving slightly different results under Android and Windows. As I validate the output data against a binary file containing the expected results, the difference, even if very small (a rounding issue), is annoying and I must find a way to fix it.
Here is a sample program:
#include <iostream>
#include <iomanip>
#include <bitset>
#include <cmath>

int main( int argc, char* argv[] )
{
    // this value was identified as producing different results when passed to std::exp
    unsigned char val[] = {158, 141, 250, 206, 70, 125, 31, 192};
    double var = *((double*)val); // type-punning via cast; std::memcpy would be the well-defined way
    std::cout << std::setprecision(30);
    std::cout << "var is " << var << std::endl;
    double exp_var = std::exp(var);
    std::cout << "std::exp(var) is " << exp_var << std::endl;
}
Under Windows, compiled with Visual 2015, I get the output:
var is -7.87234042553191493141184764681
std::exp(var) is 0.00038114128472300899284561093161
Under Android/armv7, compiled with g++ NDK r11b, I get the output:
var is -7.87234042553191493141184764681
std::exp(var) is 0.000381141284723008938635502307335
So the results differ starting at around 1e-20:
PC: 0.00038114128472300899284561093161
Android: 0.000381141284723008938635502307335
Note that my program does a lot of math operations, and I only noticed std::exp producing different results for the same input, and only for some specific input values (I did not investigate whether those values share a common property); for most of them, the results are identical.
Is this behaviour somewhat "expected"? Is there no guarantee of getting the same result in some situations?
Is there some compiler flag that could fix that?
Or do I need to round my result so it ends up the same on both platforms? Then what would be a good strategy for rounding? Rounding arbitrarily at 1e-20 would lose too much information if the input var is very small.
Edit: I do not consider my question a duplicate of Is floating point math broken?. I get exactly the same result on both platforms; only std::exp produces different results for some specific values.
The standard does not define how the exp function (or any other math library function¹) should be implemented, thus each library implementation may use a different computing method.
For instance, the Android C library (bionic) uses an approximation of exp(r) by a special rational function on the interval [0,0.34658] and scales back the result.
Probably the Microsoft library uses a different computing method (I could not find information about it), thus resulting in different results.
Also, the libraries could take a dynamic-load strategy (i.e. load a .dll containing the actual implementation) in order to leverage hardware-specific features, making the result even more unpredictable, even when using the same compiler.
In order to get the same implementation in both (all) platforms, you could use your own implementation of the exp function, thus not relying on the different implementations of the different libraries.
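For illustration only, a naive sketch of such a shared implementation: a truncated Taylor series, far less accurate and slower than any libm exp and unusable for large arguments, but identical on every platform that compiles it:
#include <cstdio>

// Naive exp: sum the Taylor series for positive arguments and take the
// reciprocal for negative ones (avoids cancellation). 60 terms are plenty
// for moderate |x|.
double my_exp(double x)
{
    if (x < 0.0)
        return 1.0 / my_exp(-x);
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < 60; ++n)
    {
        term *= x / n;
        sum  += term;
    }
    return sum;
}

int main()
{
    std::printf("%.30g\n", my_exp(-7.87234042553191493141184764681));
}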
Take into account that the processors may also take different rounding approaches, which would likewise yield a different result.
¹ There are some exceptions to this, for instance the sqrt function, std::fma, some rounding functions, and basic arithmetic operations.
This question already has answers here:
How to store extremely large numbers?
(4 answers)
Closed 5 years ago.
Is there any way to store a 1000-digit number in C++? I tried storing it in an unsigned long double, but it is still too large for that type.
You may find your answer here: How to store extremely large numbers? The GMP answer sounds right, i.e. this is what it does with the digits of pi: https://gmplib.org/pi-with-gmp.html
You have to implement it yourself or use a library for it. In particular I love GMP: https://gmplib.org/ , which is a C implementation of big integers/floats and has a C++ wrapper.
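A minimal sketch of the C++ wrapper in use (assuming GMP is installed; link with -lgmpxx -lgmp):
#include <gmpxx.h>     // GMP C++ wrapper
#include <iostream>

int main()
{
    // mpz_class holds arbitrarily large integers; a 1000-digit decimal string
    // can be passed straight to the constructor (shortened here for the example).
    mpz_class n("123456789012345678901234567890", 10);
    n = n * n + 1;     // the usual operators work
    std::cout << n << '\n';
}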
Use a custom class for your number, something like this:
#include <vector>
#include <iostream>

class large_num {
private:
    int digits;            // The number of digits in the large number
    std::vector<int> num;  // The vector holding the digits of the number
public:
    // Implement the constructor, destructor, helper functions etc.
};
For a very large number, just add each digit to the vector. For example, if the number is 123456, you call num.push_back() for each digit 1, 2, ..., 6. You can store extremely large numbers this way.
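A minimal sketch of that idea, reading the digits from a string (the number here is shortened for the example):
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::string input = "123456";        // could just as well be 1000 digits
    std::vector<int> num;
    for (char ch : input)
        num.push_back(ch - '0');         // one decimal digit per element

    for (int d : num)
        std::cout << d;                  // arithmetic would be done digit by digit, as by hand
    std::cout << '\n';
}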
You should try this one:
http://sourceforge.net/projects/libbigint/
It is a great library for things like this.
You can also use Boost:
http://www.boost.org/doc/libs/1_53_0/libs/multiprecision/doc/html/boost_multiprecision/intro.html
Also, one of the most common ones is
https://gmplib.org/
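For example, a sketch with Boost.Multiprecision's header-only cpp_int (450! has roughly 1000 decimal digits):
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    // cpp_int grows as needed, so it easily holds a 1000-digit number.
    boost::multiprecision::cpp_int n = 1;
    for (int i = 2; i <= 450; ++i)
        n *= i;
    std::cout << n << '\n';
}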
It depends on the usage. If you need to do computations on it, you should probably go with a big-integer library. If not, and the only aim is storing it, store it in an array with each digit in one array element.
This question already has answers here:
What is the most random function in C++?
Closed 11 years ago.
In C++, is this safe:
#include <cstdlib>   // srand, rand
#include <cstddef>   // size_t
#include <ctime>     // time

int main() {
    srand(time(0));
    unsigned char arr[10];
    for (size_t i = 0; i < sizeof(arr); ++i)
        arr[i] = (unsigned char)rand();
}
Is there a better way to randomly fill a byte array in a platform-independent way? Failing that, is there a better way to do this on Windows? (I know rand() isn't a very good PRNG; I'm just using it as an example.)
Thank you!
What about using Boost.Random? The generators it provides can be passed to std::generate to fill your array; it's platform-independent and header-only. I would give a code sample, but I have no Boost install available at the moment.
Edit: as mentioned, in C++0x (the upcoming standard) you can use tr1::random, which is essentially the Boost library becoming part of the standard C++ library.
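For reference, a sketch of that approach using the now-standard <random> header together with std::generate (names chosen here are illustrative):
#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

int main()
{
    std::array<unsigned char, 10> arr;
    std::mt19937 engine{std::random_device{}()};        // seeded Mersenne Twister
    std::uniform_int_distribution<int> byte(0, 255);    // char types are not allowed as the distribution type

    std::generate(arr.begin(), arr.end(),
                  [&] { return static_cast<unsigned char>(byte(engine)); });

    for (unsigned char b : arr)
        std::printf("%02x ", b);
    std::printf("\n");
}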
This thread is ok.
How to get Processor and Motherboard Id?
I want to get the processor ID using C++ code, without using WMI or any third-party library.
Or anything on a computer that turns out to be unique.
One option is the Ethernet (MAC) address, but that again is removable on some machines. I want to use this mostly for licensing purposes.
Is the processor ID unique and available on all major processors?
I had a similar problem lately and did the following. First I gathered some unique system-identification values:
GetVolumeInformation for HDD serial number
GetComputerName (this of course is not unique, but our system was using the computer names to identify clients on a LAN, so it was good for me)
__cpuid (and specifically the PSN - processor serial number field)
GetAdaptersInfo for MAC addresses
I took these values and combined them in an arbitrary but deterministic way (read the update below!) (adding, xoring, dividing and keeping the remainder, etc.). Iterate over the values as if they were strings and be creative. In the end you will get a byte sequence which you can transform into the ASCII range of letters and numbers to get a unique, "readable" code that doesn't look like noise.
Another approach is to simply concatenate these values and then "cover them up" by xoring something over them (and maybe transforming them to letters again).
I'm saying it's unique because at least one of the inputs is supposed to be unique (the MAC address). Of course you need some understanding of number theory not to blow away this uniqueness, but it should be good enough anyway.
Important update: Since this post I have learned a few things about cryptography, and I am of the opinion that making up an arbitrary combination (essentially your own hash) is almost certainly a bad idea. Hash functions used in practice are constructed to be well behaved (as in a low probability of collisions) and hard to break (i.e. it is hard to construct a value that has the same hash value as another). Constructing such a function is a very hard computer-science problem, and unless you are qualified you shouldn't attempt it. The correct approach is to concatenate whatever information you can collect about the hardware (e.g. the items listed above) and use a cryptographic hash or digital signature to get a verifiable and secure output. Do not implement the cryptographic algorithms yourself either; there are lots of vulnerability pitfalls that take a lot of knowledge to avoid. Use a well-known and trusted library for the implementation of the algorithms.
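A sketch of that recommendation, assuming OpenSSL is available (link with -lcrypto); the helper name hardwareFingerprint and the separator are made up for illustration:
#include <openssl/evp.h>   // OpenSSL's one-shot digest API
#include <cstdio>
#include <string>

// Hypothetical helper (not from the original answer): concatenate whatever
// hardware strings were collected and hash them with a trusted library
// (SHA-256 here) instead of a home-grown mixing scheme.
std::string hardwareFingerprint(const std::string& volumeSerial,
                                const std::string& macAddress,
                                const std::string& cpuInfo)
{
    const std::string blob = volumeSerial + "|" + macAddress + "|" + cpuInfo;

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_Digest(blob.data(), blob.size(), digest, &len, EVP_sha256(), nullptr);

    std::string hex;
    char buf[3];
    for (unsigned int i = 0; i < len; ++i)
    {
        std::snprintf(buf, sizeof(buf), "%02x", static_cast<unsigned>(digest[i]));
        hex += buf;
    }
    return hex;   // 64 hex characters, stable for the same inputs
}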
If you're using Visual Studio, Microsoft provides the __cpuid intrinsic in the <intrin.h> header. There is an example on the linked MSDN page.
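A minimal MSVC-only sketch of calling the intrinsic (CPUID leaf 1); note that this returns the family/model/stepping signature, not a unique per-chip serial number on modern CPUs:
#include <intrin.h>
#include <cstdio>

int main()
{
    int regs[4] = {0};        // EAX, EBX, ECX, EDX
    __cpuid(regs, 1);         // leaf 1: processor info and feature bits
    std::printf("signature: %08X\n", static_cast<unsigned>(regs[0]));
}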
Hm...
There are special libraries that generate a unique ID based on the installed hardware (so for a given computer this ID is always the same). Most of them take the motherboard ID + HDD ID + CPU ID and mix these values.
Why reinvent the wheel? Why not use these libraries? Any serious reason?
You can use the command line:
wmic cpu list full
wmic baseboard list full
Or the WMI interface:
#include <wmi.hpp>
#include <wmiexception.hpp>
#include <wmiresult.hpp>
#include <../src/wmi.cpp>
#include <../src/wmiresult.cpp> // used
#pragma comment(lib, "wbemuuid.lib")
struct Win32_WmiCpu
{
    void setProperties(const WmiResult& result, std::size_t index)
    {
        // EXAMPLE EXTRACTING PROPERTY TO CLASS
        result.extract(index, "ProcessorId", (*this).m_cpuID);
    }
    static std::string getWmiClassName()
    {
        return "Win32_Processor";
    }
    std::string m_cpuID;
    // All the other properties you wish to read from WMI
}; // end struct Win32_WmiCpu
struct Win32_WmiMotherBoard
{
    void setProperties(const WmiResult& result, std::size_t index)
    {
        // EXAMPLE EXTRACTING PROPERTY TO CLASS
        result.extract(index, "SerialNumber", (*this).m_mBId);
    }
    static std::string getWmiClassName()
    {
        return "Win32_BaseBoard";
    }
    std::string m_mBId;
}; // end struct Win32_WmiMotherBoard
// 'ret' below is assumed to be a result struct with char-array members m_cpu and m_mb, defined elsewhere.
try
{
    const Win32_WmiCpu cpu = Wmi::retrieveWmi<Win32_WmiCpu>();
    strncpy_s(ret.m_cpu, cpu.m_cpuID.c_str(), _TRUNCATE);
}
catch (const Wmi::WmiException& )
{
}
try
{
    const Win32_WmiMotherBoard mb = Wmi::retrieveWmi<Win32_WmiMotherBoard>();
    strncpy_s(ret.m_mb, mb.m_mBId.c_str(), _TRUNCATE);
}
catch (const Wmi::WmiException& )
{
}