How to deal with a 128-bit variable in the MinGW 32-bit compiler for encryption (Diffie-Hellman algorithm) in Qt - C++

I want to use the equation below in my code:
A = g^a mod p; // g raised to the power a, modulo p
(something like 2^5 % 3 = 32 % 3 = 2)
(This is the core computation of the Diffie-Hellman key exchange.)
Where:
^ is (power)
g is the fixed number 0x05,
a is a 128-bit (16-byte) randomly generated number,
p is a fixed 128-bit (16-byte) hex number, something like 0xD4A283974897234CE908B3478387A3.
I am using:
Qt 4.8.7
Compiler: MinGW 32-bit (checked with Boost 1.70)
The solutions I found that didn't work for me are listed below:
One can use __int128, but to support that one needs a recent GCC or the MinGW 64-bit compiler, neither of which I am using.
A recent version of Qt has the QSslDiffieHellmanParameters class, but again it is not supported in our Qt version.
I found libraries such as boost/multiprecision/cpp_int.hpp (Boost 1.70) that provide types such as int128_t and int256_t, but due to a compiler issue or something else we are not able to store a 128-bit number, meaning if I do:
int128_t ptval128 = 0xAB1232423243434343BAE3453345E34B;
cout << "ptval128 = " << std::hex << ptval128 << endl;
// prints only 0xAB12324232434343 -- half the digits
I tried using Bigint, which is much more useful, but again 5^(128-bit number) is far too big: it takes hours to compute (I waited 1 hour and 16 minutes, then killed the application).
Pseudocode for what I am trying to compute:
int myGval = 0x05;
128_bit_data_type myPVal = 0xD4A283974897234CE908B3478387A3;
128_bit_data_type 128_bit_variable = 128_bit_random_data;
myVal = (myGval)^(128_bit_variable) % (myPVal);

That is not how to do modular exponentiation! The first problem is that 5 ^ 128_bit_variable is huge, so big that it won't fit into memory on any computer available today. To keep the required storage within bounds, you have to take the remainder % myPVal after every operation.
The second problem is that you can't compute 5 ^ 128_bit_variable simply by multiplying 5 by itself 128_bit_variable times -- that would take longer than the age of the universe. You need to use an exponentiation ladder, which requires just 128 squarings and at most 128 multiplications. See the Wikipedia article on exponentiation by squaring for the details. In the end, the operation 5 ^ 128_bit_number should take a fraction of a second.
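For reference, here is one way that advice could look in code -- a minimal sketch of square-and-multiply using Boost.Multiprecision (untested against the asker's exact MinGW32/Boost 1.70 combination). It also sidesteps the truncation symptom above: an integer literal wider than 64 bits is truncated by the compiler before cpp_int ever sees it, so the 128-bit constants must be passed as strings.

// Minimal sketch of square-and-multiply modular exponentiation with
// Boost.Multiprecision (header-only). uint256_t gives intermediate
// products room: the product of two 128-bit values fits in 256 bits.
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::uint256_t;

uint256_t mod_pow(uint256_t base, uint256_t exp, const uint256_t& mod)
{
    uint256_t result = 1;
    base %= mod;                          // reduce once up front
    while (exp > 0) {
        if ((exp & 1) != 0)               // low bit set: multiply this factor in
            result = (result * base) % mod;
        base = (base * base) % mod;       // square for the next bit
        exp >>= 1;
    }
    return result;
}

int main()
{
    // Constants wider than 64 bits are given as strings, not literals.
    uint256_t g = 5;
    uint256_t p("0xD4A283974897234CE908B3478387A3");
    uint256_t a("0xAB1232423243434343BAE3453345E34B"); // the random exponent
    std::cout << std::hex << mod_pow(g, a, p) << std::endl;
}

With only 128 squarings and at most 128 multiplications, this should finish in well under a second even on a 32-bit target.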

Related

How do programs calculate square roots?

I understand that this is a pretty math-y question, but how do programs compute square roots? From what I've read, this is usually native to the CPU of a device, but I need to be able to do it myself, probably in C++ (although that's irrelevant).
The reason I need to know about this specifically is that I have an intranet server and I am getting started with crowdsourcing. For this, I am going to start by finding a lot of digits of a certain square root, like sqrt(17) or something.
The extent of what Python provides is just math.sqrt().
I am going to make a client that can work with other identical clients, so I need complete control over the math. Heck, this question might not even have an answer, but thanks for your help anyway.
[edit]
I got it working; this is the 'final' product of it (many thanks to @djhaskin987):
def square_root(number):
    old_guess = 1
    guess = 2
    guesssquared = 0
    while round(guesssquared, 10) != round(number, 10):
        old_guess = guess
        guess = ((number / guess) + guess) / 2
        print(guess)
        guesssquared = guess * guess
    return guess

solution = square_root(7)  # finds square root of 7
print(solution)
Computers use a method that people have actually been using since Babylonian times:
def square_root(number):
    old_guess = 1
    guess = 2
    while old_guess != guess:
        old_guess = guess
        guess = ((number / guess) + guess) / 2
    return guess
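For comparison, a minimal C++ sketch of the same Babylonian (Newton) iteration; the relative-tolerance stopping rule is an addition here, guarding against the rare case where an exact equality test on doubles oscillates between two neighbouring values and never terminates:

// Babylonian/Newton iteration in C++ (assumes number > 0).
#include <cmath>
#include <cstdio>

double square_root(double number)
{
    double old_guess = 0.0;
    double guess = number / 2.0 + 1.0;              // any positive start converges
    while (std::fabs(guess - old_guess) > 1e-12 * guess) {
        old_guess = guess;
        guess = (number / guess + guess) / 2.0;     // average guess and number/guess
    }
    return guess;
}

int main()
{
    std::printf("%.15f\n", square_root(17.0));      // ~4.123105625617661
}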
x86 has several square-root instructions, starting with FSQRT for floating point.
In general, if your function is too complicated or has no direct implementation, and is C^∞ ("infinitely" differentiable), you can expand it into a polynomial via Taylor expansion. This is extremely common in HPC.
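As a toy illustration of that idea (not how a production math library is actually built -- those use range reduction plus carefully fitted minimax polynomials), here are the first terms of the Taylor series for sqrt(1 + x) around x = 0:

#include <cmath>
#include <cstdio>

// sqrt(1 + x) = 1 + x/2 - x^2/8 + x^3/16 - ...   (accurate only for small |x|)
double sqrt1p_taylor(double x)
{
    return 1.0 + x / 2.0 - x * x / 8.0 + x * x * x / 16.0;
}

int main()
{
    double x = 0.1;
    std::printf("taylor: %.10f  libm: %.10f\n",
                sqrt1p_taylor(x), std::sqrt(1.0 + x));
}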

Converting bits from one array to another?

I am building a library for this vacuum fluorescent display. It's a very simple interface and I have all the features working.
The problem I am having now is that I am trying to make the code as compact as possible, but the custom character loading is not intuitive. That is, the bitmap for the font maps to completely different bits and bytes on the display itself. In the IEE VFD datasheet, when you scroll down you can see that the bits are mapped all over the place.
The code I have so far works like so:
// input: the font bitmap, the bit from that line of the bitmap,
// and the bit it needs to go to
static unsigned char VFD_CONVERT(const unsigned char* font, unsigned char from, unsigned char to)
{
    return ((*font >> from) & 0x01) << to;
    //return (*font & (1 << from)) ? (1 << to) : 0;
}

// macros to make it easier to read and see
#define CM_01 font+0, 4
#define CM_02 font+0, 3
#define CM_03 font+0, 2
#define CM_04 font+0, 1
#define CM_05 font+0, 0

// one of the 7 lines I have to send
o = VFD_CONVERT(CM_07, 6) | VFD_CONVERT(CM_13, 5) | VFD_CONVERT(CM_30, 4) | VFD_CONVERT(CM_23, 3) | VFD_CONVERT(CM_04, 2) | VFD_CONVERT(CM_14, 1) | VFD_CONVERT(CM_33, 0);
send(o);
This is obviously not all the code. You can see the rest in my Google Code repository, but it should give you some idea of what I am doing.
So my question is: is there a better way to optimize this or do the translation?
Changing the return statement in VFD_CONVERT makes GCC go crazy (-O1, -O2, -O3, and -Os all do it) and expands the code to 1400 bytes. If I use the return statement with the inline if, it reduces to 800 bytes. I have been going through the generated asm, and currently I am tempted to just write it all in asm, as I am starting to think the compiler doesn't know what it is doing. But then maybe it's me who doesn't know what I'm doing, and that confuses the compiler.
As a side note, the code works: both return statements upload the custom character and it gets displayed (with a weird bug where I have to send it twice, but that's a separate issue).
First of all, you should file a bug report against GCC with a minimal example, since -Os should never generate larger code than -O0. Then, I suggest storing the permutation in a table, like this:
const char perm[][7] = {{ 7, 13, 30, 23, 4, 14, 33 }, ...
with special values indicating a fixed zero or one bit. That'll also make your code more readable.
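Sketched out, the table-driven version might look like the following; the (byte, bit) pairs are placeholders, since the real mapping has to come from the VFD datasheet:

#include <stdint.h>

// Where each output bit comes from in the font bitmap.
// byte == 0xFF means "emit a constant 0 bit". Values are placeholders.
struct src_bit { uint8_t byte; uint8_t bit; };

static const struct src_bit line_map[7][7] = {
    { {0,4}, {1,3}, {4,2}, {3,1}, {0,1}, {1,6}, {4,5} },  // output line 0
    // ... the remaining six lines, taken from the datasheet ...
};

static uint8_t vfd_line(const uint8_t *font, int line)
{
    uint8_t out = 0;
    for (int i = 0; i < 7; ++i) {
        const struct src_bit s = line_map[line][i];
        if (s.byte == 0xFF)
            continue;                               // fixed-zero bit
        out |= ((font[s.byte] >> s.bit) & 1u) << (6 - i);
    }
    return out;
}

Each of the 7 hand-written VFD_CONVERT chains then collapses to send(vfd_line(font, n)); whether GCC produces tighter code this way is worth measuring, but the mapping certainly becomes easier to audit against the datasheet.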

Retrieve RAM info on a Mac?

I need to retrieve the total amount of RAM present in a system and the total RAM currently being used, so I can calculate a percentage. This is similar to: Retrieve system information on MacOS X?
However, in that question the best answer suggests getting RAM info by reading from:
/usr/bin/vm_stat
Due to the nature of my program, I found out that I cannot read from that file -- I require a method that provides RAM info without simply opening a file and reading from it. I am looking for function calls, something like getTotalRam() and getRamInUse().
I obviously do not expect it to be that simple, but I was looking for a solution other than reading from a file.
I am running Mac OS X Snow Leopard, but would prefer a solution that works across all current Mac OS X platforms (i.e. including Lion).
Solutions can be in C++, C or Obj-C; however, C++ would be the best option in my case, so if possible please provide it in C++.
Getting the machine's physical memory is simple with sysctl:
#include <sys/types.h>
#include <sys/sysctl.h>

int mib[] = { CTL_HW, HW_MEMSIZE };
int64_t value = 0;
size_t length = sizeof(value);

if (-1 == sysctl(mib, 2, &value, &length, NULL, 0)) {
    // An error occurred
}
// Physical memory is now in value
VM stats are only slightly trickier:
#include <mach/mach.h>

mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
vm_statistics_data_t vmstat;
if (KERN_SUCCESS != host_statistics(mach_host_self(), HOST_VM_INFO, (host_info_t)&vmstat, &count)) {
    // An error occurred
}
You can then use the data in vmstat to get the information you'd like, here as fractions of the total:
double total = vmstat.wire_count + vmstat.active_count + vmstat.inactive_count + vmstat.free_count;
double wired = vmstat.wire_count / total;
double active = vmstat.active_count / total;
double inactive = vmstat.inactive_count / total;
double free = vmstat.free_count / total;
There is also a 64-bit version of the interface.
You're not supposed to read from /usr/bin/vm_stat; rather, you're supposed to run it -- it is a program. Look at the first four lines of output:
Pages free:        1880145.
Pages active:        49962.
Pages inactive:      43609.
Pages wired down:   123353.
Add the numbers in the right column and multiply by the system page size (as returned by getpagesize()) and you get the total amount of physical memory in the system in bytes. With the sample above: 2,097,069 pages * 4,096 bytes per page ≈ 8 GB.
vm_stat isn't setuid on Mac OS, so I assume there is a non-privileged API somewhere to access this information and that vm_stat is using it. But I don't know what that interface is.
You can figure out the answer to this question by looking at the source of the top command. You can download the source from http://opensource.apple.com/. The 10.7.2 source is available as an archive here or in browsable form here. I recommend downloading the archive and opening top.xcodeproj so you can use Xcode to find definitions (command-clicking in Xcode is very useful).
The top command displays physical memory (RAM) numbers after the label "PhysMem". Searching the project for that string, we find it in the function update_physmem in globalstats.c. It computes the used and free memory numbers from the vm_stat member of struct libtop_tsamp_t.
You can command-click on "vm_stat" to find its declaration as a member of libtop_tsamp_t in libtop.h. It is declared as type vm_statistics_data_t. Command-clicking that jumps to its definition in /usr/include/mach/vm_statistics.h.
Searching the project for "vm_stat", we find that it is filled in by the function libtop_tsamp_update_vm_stats in libtop.c:
mach_msg_type_number_t count = sizeof(tsamp->vm_stat) / sizeof(natural_t);
kr = host_statistics(libtop_port, HOST_VM_INFO, (host_info_t)&tsamp->vm_stat, &count);
if (kr != KERN_SUCCESS) {
    return kr;
}
You will need to figure out how libtop_port is set if you want to call host_statistics. I'm sure you can figure that out for yourself.
It's been 4 years, but I just wanted to add some extra info on calculating total RAM.
To get the total RAM, we should also consider "Pages occupied by compressor" and "Pages speculative", in addition to Kyle Jones' answer.
You can check out this post for where the problem occurs.
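For completeness, a sketch using the 64-bit interface mentioned earlier, which is where those extra counters live (field names are from <mach/vm_statistics.h>; exactly which counters must be summed to reproduce the installed RAM varies between macOS versions, so treat any such total as approximate):

#include <mach/mach.h>
#include <cstdio>

int main()
{
    vm_statistics64_data_t vmstat;
    mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
    if (host_statistics64(mach_host_self(), HOST_VM_INFO64,
                          (host_info64_t)&vmstat, &count) != KERN_SUCCESS) {
        std::fprintf(stderr, "host_statistics64 failed\n");
        return 1;
    }
    // The counters the 32-bit interface does not expose:
    std::printf("compressor pages:  %u\n", vmstat.compressor_page_count);
    std::printf("speculative pages: %u\n", vmstat.speculative_count);
}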

Superscript in C++ console output

I'd like to have my program output "cm2" (cm squared).
How do I make a superscript 2?
As Zan said, it depends what character encoding your standard output supports. If it supports Unicode, you can use the encoding for ² (U+00B2). If it supports the same Unicode encoding for source files and standard output, you can just embed it in the file. For example, my GNU/Linux system uses UTF-8 for both, so this works fine:
#include <iostream>

int main()
{
    std::cout << "cm²" << std::endl;
}
This is not something C++ can do on its own.
You would need to use a specific feature of your console system.
I am not aware of any consoles or terminals that implement superscript. I might be wrong, though.
I was trying to accomplish this task for the purpose of making a quadratic equation solver. Writing ax² inside a cout << by holding ALT while typing 253 displayed properly in the source code only, BUT NOT in the console. When running the program, it appeared as a light colored rectangle instead of a superscript 2.
A simple solution to this seems to be casting the integer 253 to a char, like this: (char)253. (Note that 253 is the superscript-two character in the OEM code pages 437/850 used by the Windows console; it is not standard ASCII, which only covers 0-127.)
Because our professor discourages us from using 'magic numbers', I declared it as a constant: const int superScriptTwo = 253; // code page 437/850 value of superscript two.
Then, where I wanted the superscript 2 to appear in the console, I cast my variable to a char like this:
cout << "f(x) = ax" << (char)superScriptTwo << " + bx + c";
and it displayed perfectly.
Perhaps it's even easier just to create it as a char to begin with, and not worry about casting it. This code will also print a superscript 2 to the console when compiled and run in VS2013 on my Lenovo running Windows 7:
char ssTwo = 253;
cout << ssTwo << endl;
I hope someone will find this useful. This is my first post, ever, so I do apologize in advance if I accidentally violated any Stack Overflow protocols for answering a question posted 5+ years ago. Any such occurrence was not intentional.
Yes, I agree with Zan.
Basic C++ does not have any built-in functionality to print superscripts or subscripts; what you can print depends on the output encoding.
std::cout << "cm\u00B2";
writes cm² (U+00B2 is the Unicode code point SUPERSCRIPT TWO), provided the execution and console encodings can represent it.
For superscripting or subscripting on the Windows console you can instead use the character's code-page value. For a superscript 2 that value is 253 in code pages 437/850 (often mislabeled as an "ASCII value"; to type the character in an editor, hold Alt and type 253).
E.g.: cout << "x²";
So now it should display x² on the black screen.
Why don't you try the code-page value?
Declare a character, give it the value 253 (superscript two in the console's code page 437/850), and then print the character.
So your code would go like this:
char ch = 253;
cout << "cm" << ch;
This prints cm² on a console using that code page.
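One more option that avoids code-page guesswork is to print the UTF-8 bytes for '²' and, on Windows, switch the console to the UTF-8 code page first. A sketch (SetConsoleOutputCP is a documented Win32 call, but whether the console font actually renders '²' still depends on the user's setup):

#include <iostream>
#ifdef _WIN32
#include <windows.h>
#endif

int main()
{
#ifdef _WIN32
    SetConsoleOutputCP(CP_UTF8);             // tell the console the bytes are UTF-8
#endif
    std::cout << "cm\xC2\xB2" << std::endl;  // 0xC2 0xB2 = UTF-8 for U+00B2 '²'
}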

what the snippet of code does

I would like to know what this snippet of code does:
Drive[0] = 'A';
Drive[1] = ':';
Drive[2] = '\\';
Drive[3] = 0;

DriveMask = GetLogicalDrives();

for (anIndex = 0; anIndex < 26; anIndex++)
{
    if (DriveMask & 1)
    {
        Drive[0] = 'A' + anIndex;
    }
    DriveMask >>= 1;
}
Please let me know your answer.
Thank you for taking the time to read my post.
It checks if the lowest bit is set, i.e. if there is an A: drive. See GetLogicalDrives.
It's enumerating all the possible attached drives between A:\ and Z:\ and checking to see whether they're removable (e.g. CD, floppy).
It loops 26 times, and each time
DriveMask >>= 1;
shifts the bitmask right by 1 bit, so that each logical drive can be tested in succession via
if (DriveMask & 1)
GetDriveType() requires a drive path, so the label is constructed by adding the loop count to the letter 'A' (giving A, B, C, ..., Z) and leaving the previously-initialized ":\" part in place.
In C++ the & is a bitwise AND.
So take the value DriveMask and AND it bitwise with 0x00000001. The result is 1 if the number is odd (the only way for a number to be odd is for its least significant bit to be 1). Since anything ANDed with 0 is 0, this zeroes out every bit except the least significant one. If that bit is 1, the result is 1 and the condition evaluates to true.
Otherwise the result is 0, and you don't enter the if.
It checks if the number is odd.
& is a bitwise AND operation:
  0101 (5)
& 0001 (1)
= 0001 (1 -- true)

  1110 (14)
& 0001 (1)
= 0000 (0 -- false)
In this case, GetLogicalDrives returns a number whose bits indicate the presence of certain drives. The least significant bit (2^0 = 1) indicates the A: drive.
The expression DriveMask & 1 tests whether the result of a bitwise AND between DriveMask and 0x00000001 is non-zero. Thus it is checking whether DriveMask is odd, i.e. whether its lowest bit is set.
Actually, the API returns its reply in binary format. Here is what MSDN says about it:
"If the function succeeds, the return value is a bitmask representing the currently available disk drives. Bit position 0 (the least-significant bit) is drive A, bit position 1 is drive B, bit position 2 is drive C, and so on."
That means the condition
if (DriveMask & 1)
{
}
tests bit 0, i.e. checks whether drive A is present.
The condition checks for the presence of a given drive.
The GetLogicalDrives function returns the set of logical drives, where each drive is encoded as one bit (a binary digit, either 0 or 1). The drive labels start at "A" in bit 0 (the least significant bit). The bit is 1 if the drive is present, else it is 0. The & in the above code is a bitwise AND operation used to test bit 0. Essentially this code checks whether the system has an "A:\" drive.
This piece of code does not actually do anything, in the common understanding of the word do. It contains only non-modifying query-type operations with no side effects, i.e. it makes some queries and verifies some conditions, but it takes no action based on the results of those conditions.
In other words, if this code were fed into some hypothetical super-optimizing compiler that also knows the Windows API, that compiler would simply throw out (optimize away) the entire thing, since it doesn't do anything.
Apparently the code you provided is not the whole code. Without the whole thing, it is impossible to say what it was supposed to do. However, if we guess that some useful functionality was supposed to be present between the {} in the following if:
if (GetDriveType(Drive) == DRIVE_REMOVABLE)
{
    // Actually DO something here
}
then we can make an educated guess about what it was supposed to do. This code iterates through all possible single-letter drive designations in a Windows system. It checks whether a logical drive designated by each letter is present. If the drive is present, it checks whether it works with removable media. And, finally, if that is true, it does something useful that you are not showing us. I don't know what that was. Nobody does.
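Putting those guesses together, a runnable sketch of what the complete program probably looked like (the printf in the innermost block is a placeholder for whatever the original actually did):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char Drive[4] = "A:\\";                 // "A:\" -- the letter gets patched below
    DWORD DriveMask = GetLogicalDrives();

    for (int anIndex = 0; anIndex < 26; anIndex++) {
        if (DriveMask & 1) {                // bit set: this drive letter exists
            Drive[0] = (char)('A' + anIndex);
            if (GetDriveTypeA(Drive) == DRIVE_REMOVABLE) {
                printf("%s is removable\n", Drive);   // placeholder action
            }
        }
        DriveMask >>= 1;                    // move the next drive's bit into place
    }
    return 0;
}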