Installed a C compiler; now how can I import number theory functions (primes)?

I have installed the Code::Blocks IDE with a C compiler. Now I would like to use it for number theory. I need to import functions like is_probableprime or factor for checking extremely large numbers. Can I import the GWNUM library, and if so, how do I do it?

Related

C++20 conditional import statements

Is there a way to conditionally use import statements with the C++20 modules feature?
// What I would like to express (using _WIN32 as a stand-in for an OS check):
#ifdef _WIN32
import std.io;
#else
import <iostream>;
#endif
You use macros, just like you would for most other conditional compilation operations. And yes, this means that modules have to be built differently for different command line options. But that was always going to be the case.
Also FYI: std.io is a thing provided by MSVC, not Windows. And you should avoid using it due to its non-standard (and likely not-going-to-be-standard) status. If you must use one of MSVC's standard library modules, use import std;.

D compiler profiling

How do I figure out which part of my D code takes a long time to compile?
I tried to use valgrind, but the method names were not very insightful: 87% of the time was spent in <cycle 7>, and 40% of the time in _D4ddmd5lexer5Lexer4scanMFPS4ddmd6tokens5TokenZv.
I'm looking for something like: 40% of the time was spent on xy.d; of that, 80% went to compiling various instantiations of template xyz, because it spent 99% of its time in memcpy.
I'm interested in profiling both DMD and LDC.
As the D compiler front end is written in D, profiling it with conventional tools will be rather hard compared to something like C++. I have had some success using tools like gdb and valgrind on Linux and tools like VisualD on Windows; Mac users are kind of SOL.
You have five other options:
Stop trying to find the specific function in the compiler and turn to common knowledge about the problem (see below)
Use a tool like https://github.com/CyberShadow/DBuildStat. It doesn't give you the exact answer you're asking about, but if you're trying to get a large project to compile faster it's better than nothing.
Use the -v flag to try and see which parts of your program take a while. Granted, this is a very brute-force approach and can take some time.
Modify the makefile of the DMD front-end to use the -profile switch. Every time you run DMD you will get a profile file with a lot of information. Granted, I don't think this has ever been tried. Your mileage may vary.
Try to ask the LDC team about this on their Github issues page. IIRC they made a patched version for profiling that they used for the Weka.io codebase.
When I say turn to common knowledge, I mean that your slow compilation is likely due to a few common problems. For example, when an SQL query is taking too long, my first reaction is not to profile the MySQL server code. Here are a couple of the most common issues:
CTFE, while it speeds up your runtime, is slow. Especially if you're doing recursive templates like allSatisfy or using functions like ctRegex. If you're doing heavy CTFE and you want faster compiles at the price of possibly slower code, consider switching those to run-time calls.
DMD doesn't (yet) ignore symbols which aren't used in your program, meaning if you import a module, code-gen will happen for all of the functions in the module. This is true even for selective imports. If you don't use them the linker will prune the functions from the resulting executable, but the compiler still took time to compile them. Avoid imports like import std.algorithm; or import std.range;. Instead use package specific imports like import std.algorithm.iteration : map;.

Reusing compiled Theano functions

Suppose I have implemented the following function in Theano:
import theano.tensor as T
from theano import function

x = T.dscalar('x')        # symbolic double-precision scalars
y = T.dscalar('y')
z = x + y                 # builds the expression graph; nothing runs yet
f = function([x, y], z)   # the graph is optimized and compiled here
f(2, 3)                   # -> array(5.0)
When I run this, a graph of computations is constructed, and the function gets optimized and compiled.
How can I reuse this compiled chunk of code from within a Python script and/or a C++ application?
EDIT:
The goal is to construct a deep learning network and reuse it in a final C++ app.
Currently this isn't possible. There is a user who modified Theano to allow pickling the Theano function, but during unpickling the graph is re-optimized anyway.
There is a pull request that allows Theano to generate a C++ library. The user can then compile it themselves and use it as a normal C++ library. The library links against the Python library and requires NumPy to be installed. But this isn't ready for broad usage.
What is your goal? To save on compilation time? If so, Theano already caches the C++ modules it compiles, so the next time they are reused, compilation will be faster. But for a big graph, the optimization phase is always redone, as noted above, and this can take a significant amount of time.
This is something that we are working on. Make sure to use the latest Theano release (0.6), as it compiles faster. The development version is also a little faster.
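As a rough illustration of that caching behaviour, here is a minimal sketch (assuming a standard Theano install): within one process the compiled function is simply reused, and across runs the second compilation of the same graph should hit Theano's on-disk C++ cache, while graph optimization is still redone.
import time
import theano.tensor as T
from theano import function

x = T.dscalar('x')
y = T.dscalar('y')

t0 = time.time()
f = function([x, y], x + y)   # compiles, or reuses the cached C++ module
print('build time: %.3fs' % (time.time() - t0))

# Within this process, calling the compiled function costs no recompilation:
print(f(2, 3))    # -> 5.0
print(f(10, 20))  # -> 30.0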

import function Matlab Coder and C++ executable

Is there any workaround for using the "import" function when converting a Matlab *.m file to a C++ executable?
Matlab gives me this response: "Import statements are currently unsupported." and I just wanted to know if I was SOL or not.
Thanks
import makes Java classes available to Matlab programs. Since doing so requires an actual running Java Runtime Environment, I think it would be very costly to provide this functionality in generated C++ code, whereas a JRE is always present when running the original m-file. I would therefore interpret the error message to say exactly what it says: "unsupported".
To be more precise and give references: MATLAB Language Features Supported for C/C++ Code Generation explicitly says that Java is not supported, but Matlab classes are. Moreover, import is not contained in the list of Functions Supported for C/C++ Code Generation.

Why does trivial loop in python run so much slower than the same in C++? And how to optimize that? [duplicate]

This question already has answers here:
Why are Python Programs often slower than the Equivalent Program Written in C or C++?
Simply run a near-empty for loop in Python and in C++ (as follows); the speeds are very different, with the Python version more than a hundred times slower.
a = 0
for i in xrange(large_const):
    a += 1

int a = 0;
for (int i = 0; i < large_const; i++)
    a += 1;
Plus, what can I do to optimize the speed of the Python version?
(Addition:
I made a bad example in the first version of this question. I don't really mean that a = 1, such that a C/C++ compiler could optimize it away; I mean that the loop itself consumes a lot of resources (maybe I should have used a += 1 as the example). And what I mean by "how to optimize" is: if the for loop is as simple as a += 1, how could it run at a speed similar to C/C++? In my practice I use NumPy, so I can't use PyPy any more (for now); are there general methods for making loops much faster (such as using a generator when building a list)?
)
A smart C compiler can probably optimize your loop away by recognizing that at the end, a will always equal large_const. Python can't do that: when iterating over an xrange, it needs to keep calling __next__ on the iterator until StopIteration is raised, and it can't know whether __next__ will have side effects until it actually calls it, so there is no way to optimize the loop away. The take-away message is that it is much harder to optimize a Python "compiler" than a C compiler, because Python is such a dynamic language that the compiler would need to know how an object will behave in every circumstance. In C, that's much easier, because C knows exactly what type every object is ahead of time.
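To make that concrete, here is a minimal sketch (with a hypothetical Noisy class, written for Python 3) of why the loop body can look dead even though the iteration itself is not:
class Noisy:
    """Hypothetical iterable whose __next__ has a visible side effect."""
    def __init__(self, n):
        self.i = 0
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        print('side effect!')   # deleting the loop would delete this, too
        return self.i

a = 0
for i in Noisy(3):   # Python must call __next__ each time around
    a += 1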
Of course, compiler aside, Python needs to do a lot more work. In C, you're working with base types using operations supported by hardware instructions. In Python, the interpreter is executing byte-code one instruction at a time in software; clearly that is going to take longer than machine-level instructions. And the data model (e.g. calling __next__ over and over again) also leads to a lot of function calls that C doesn't need to make. Of course, Python does all of this to be much more flexible than a compiled language can be.
The typical way to speed up Python code is to use libraries or intrinsic functions that provide a high-level interface to low-level compiled code. scipy and numpy are excellent examples of this kind of library. Other things you can look into are PyPy, which includes a JIT compiler (you probably won't reach native speeds, but it will probably beat CPython, the most common implementation), or writing extensions in C/Fortran using the CPython API, Cython, or f2py for performance-critical sections of code.
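For example, here is a minimal sketch (assuming numpy is installed) of pushing the question's loop into a single compiled call:
import timeit

setup = 'import numpy as np; n = 10**7'
py_loop = """
a = 0
for i in range(n):
    a += 1
"""
np_call = 'a = np.ones(n, dtype=np.int64).sum()'

print(timeit.timeit(py_loop, setup=setup, number=1))  # interpreted loop
print(timeit.timeit(np_call, setup=setup, number=1))  # one compiled call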
Simply because Python is a higher-level language and has to do many more different things on every iteration (like acquiring locks, resolving variables, etc.).
“How to optimise” is a very vague question. There is no “general” way to optimise any Python program (everything possible was already done by the developers of Python). Your particular example can be optimised this way:
a = 1
That's what any C compiler will do, by the way.
If your program works with numeric data, then using numpy and its vectorised routines often gives you a great performance boost, as it does everything in pure C (using C loops, not Python ones) and doesn't have to take the interpreter lock and all that machinery.
Python is (usually) an interpreted language: at runtime the source is compiled into bytecode, which the interpreter then executes instruction by instruction in software.
C is (usually) a compiled language, so by the time you're running it you're working with pure machine code.
Python will never be as fast as C, for that reason.
Edit: in fact, CPython compiles your source into bytecode, not C code; that's what those .pyc files contain.
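You can inspect that bytecode yourself with the standard dis module; a quick sketch:
import dis

def loop(n):
    a = 0
    for i in range(n):
        a += 1
    return a

dis.dis(loop)   # prints the bytecode instructions the interpreter executes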
As you go more abstract, speed goes down. The fastest code is assembly written directly by hand.
Read this question: Why are Python Programs often slower than the Equivalent Program Written in C or C++?