OpenCL, float vs uint, expected performance gain?

What would be the expected performance gain from using (u)int16 instead of float in an OpenCL kernel, if any?
I expect the memory transfers to be roughly halved, but what about the device load?
Strangely, I can hardly find any benchmarks or documentation on the subject (or maybe my Google-fu is just failing me...).
I'm working on image processing (mostly filtering). Precision is not that critical; indeed, the result of several kernel operations is cast to a char. We have narrowed down several operations where using shorter data types is acceptable, so I was wondering whether those operations can be sped up by using shorter data where precision is not critical.
Thanks for your help.

GPUs tend to handle floating-point operations better than integer ones. For example, some have extra pipelines for floating-point ops, and making everything integer just reduces the GPU's throughput. Data copy may not be your bottleneck, and halving the amount by using 16-bit integers may not help. Moreover, on integrated GPUs like Intel's or AMD's you can get zero-copy behavior, so the effect of image or buffer size is minimal (to a point).
Also, you might look into 16-bit floating-point (half) support. That gets you the best of both worlds: half the data, with floating-point numbers.
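As a rough illustration of the "store in 16 bits, compute in float" pattern, here is a minimal sketch; the kernel name, the gain parameter and the data layout are made up, and the OpenCL C source is shown as a C++ raw string ready to hand to clCreateProgramWithSource:

// Hypothetical kernel: 16-bit unsigned storage, float arithmetic, 8-bit output.
static const char* kScaleKernelSrc = R"CLC(
__kernel void scale_u16_to_u8(__global const ushort* src,
                              __global uchar*        dst,
                              const float            gain,
                              const int              n)
{
    int i = get_global_id(0);
    if (i >= n) return;
    float v = convert_float(src[i]) * gain;  // do the math in float
    dst[i]  = convert_uchar_sat(v);          // saturate to 0..255 on store
}
)CLC";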

Related

GPU HLSL compute shader warnings int and uint division

I keep getting warnings from compute shader compilation recommending that I use uints instead of ints when dividing.
Going by the data type alone I would assume uints are faster; however, various tests online seem to point to the contrary. Perhaps this contradiction only applies on the CPU side, and GPU parallelisation gives some advantage I'm not aware of?
(Or is it just bad advice?)
I know that this is an extremely late answer, but this is a question that has come up for me as well, and I wanted to provide some information for anyone who sees this in the future.
I recently found this resource - https://arxiv.org/pdf/1905.08778.pdf
The table at the bottom lists the latency of basic operations on several graphics cards. There is a small but consistent saving to be found by using uints on all measured hardware. However, what the warning doesn't state is that the greater optimization is to be found by replacing division with multiplication if at all possible.
https://www.slideshare.net/DevCentralAMD/lowlevel-shader-optimization-for-nextgen-and-dx11-by-emil-persson states that type conversion is a full-rate operation like int/float subtraction, addition, and multiplication, whereas division is very slow.
I've seen it suggested that to improve performance, one should convert to float, divide, then convert back to int, but as shown in the first source, this will at best give you small gains and at worst actually decrease performance.
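To make the division-versus-multiplication point concrete, here is a small sketch in plain C++ rather than HLSL (the function names and constants are made up): when the divisor is known ahead of time, a float division can become a multiply by a precomputed reciprocal, and an unsigned division by a power of two reduces to a shift.

// Illustrative only: trade a division for a multiply when the divisor is known.
float normalize(float x)
{
    const float kInvWidth = 1.0f / 1024.0f;  // reciprocal folded at compile time
    return x * kInvWidth;                    // multiply instead of x / 1024.0f
}

unsigned bucket(unsigned i)
{
    return i >> 4;                           // unsigned divide by 16 becomes a shift
}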
You are correct that this differs from the performance of the same operations on the CPU, although I'm not entirely certain why.
Looking at https://www.agner.org/optimize/instruction_tables.pdf it appears that which operation is faster (MUL vs IMUL) varies from CPU to CPU; in a few at the top of the list IMUL is actually faster, despite a higher instruction count. Other CPUs don't show any distinction between MUL and IMUL at all.
TL;DR uint division is faster on the GPU, but on the CPU YMMV

OpenCL, half vs float performance

I'm currently working on an application that requires large amounts of data to be stored and processed (~4 GB as floats).
Since the precision of the individual variables is of less importance (I know that they'll be bounded), I saw that I could use OpenCL's half instead of float, since that would really decrease the amount of memory.
My question is twofold.
Is there any performance hit to using half instead of float? (I'd imagine graphics cards are built for float operations.)
Is there a performance hit for mixing floats and halves in calculations (i.e., a float times a half)?
Sincerely,
Andreas Falkenstrøm Mieritz
ARM CPUs and GPUs have native support for half in their ALUs so you'll get close to double speed, plus substantial savings in energy consumption. Edit: The same goes for PowerVR GPUs.
Desktop hardware only supports half in the load/store and texturing units, AFAIK. Even so, I'd expect half textures to perform better than float textures or buffers on any GPU. Particularly if you can make some clever use of texture filtering.
OpenCL kernels are almost always bound by memory bandwidth or PCIe transfer speed. If you are converting a decent chunk of your data to half floats, this will enable faster transfers of your values, almost certainly faster on any platform/device.
As far as performance, half is rarely worse than float. I am fairly sure that any device which supports half will do computations as fast as it would with float. Again, even if there is a slight overhead here, you will more than make up for it in your far-superior transfer times.
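A minimal sketch of the "store as half, compute as float" approach: vload_half and vstore_half are core OpenCL C built-ins that convert to and from float on load/store, so the arithmetic stays in 32-bit float, and the cl_khr_fp16 extension is only needed if you want to do arithmetic directly in half. The saxpy-style kernel is just an example, again given as a C++ raw string for the host code:

// Hypothetical kernel: buffers hold 16-bit half floats, math runs in 32-bit float.
static const char* kSaxpyHalfSrc = R"CLC(
__kernel void saxpy_half(__global const half* x,
                         __global half*       y,
                         const float          a)
{
    size_t i = get_global_id(0);
    float xv = vload_half(i, x);     // half -> float on load
    float yv = vload_half(i, y);
    vstore_half(a * xv + yv, i, y);  // float -> half on store
}
)CLC";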

x86 4byte floats vs. 8byte doubles (vs. long long)?

We have a measurement data processing application and currently all data is held as C++ float which means 32bit/4byte on our x86/Windows platform. (32bit Windows Application).
Since precision is becoming an issue, there have been discussions to move to another datatype. The options currently discussed are switching to double (8byte) or implementing a fixed decimal type on top of __int64 (8byte).
The reason the fixed-decimal solution using __int64 as underlying type is even discussed is that someone claimed that double performance is (still) significantly worse than processing floats and that we might see significant performance benefits using a native integer type to store our numbers. (Note that we really would be fine with fixed decimal precision, although the code would obviously become more complex.)
Obviously we need to benchmark in the end, but I would like to ask whether the statement that doubles are worse holds any truth on modern processors. I guess for large arrays doubles may mess up cache hits more than floats, but otherwise I really fail to see how they could differ in performance.
It depends on what you do. Additions, subtractions and multiplies on double are just as fast as on float on current x86 and POWER architecture processors. Divisions, square roots and transcendental functions (exp, log, sin, cos, etc.) are usually notably slower with double arguments, since their runtime is dependent on the desired accuracy.
If you go fixed point, multiplies and divisions need to be implemented with long integer multiply / divide instructions which are usually slower than arithmetic on doubles (since processors aren't optimized as much for it). Even more so if you're running in 32 bit mode where a long 64 bit multiply with 128 bit results needs to be synthesized from several 32-bit long multiplies!
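To illustrate the widening-multiply cost, here is a minimal Q32.32 fixed-point multiply sketch; it assumes the non-standard __int128 extension available in GCC/Clang, and on a 32-bit target that 128-bit product would itself be synthesized from several 32-bit multiplies:

#include <cstdint>

// Q32.32 fixed point: value = raw / 2^32. Illustrative sketch, no overflow checks.
using fixed64 = int64_t;

fixed64 fixed_mul(fixed64 a, fixed64 b)
{
    // 64x64 -> 128-bit product, then drop the extra 32 fractional bits.
    __int128 wide = static_cast<__int128>(a) * static_cast<__int128>(b);
    return static_cast<fixed64>(wide >> 32);
}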
Cache utilization is a red herring here. 64-bit integers and doubles are the same size - if you need more than 32 bits, you're gonna eat that penalty no matter what.
Look it up. Both AMD and Intel publish the instruction latencies for their CPUs in freely available PDF documents on their websites.
However, for the most part, performance won't be significantly different, for a couple of reasons:
when using the x87 FPU instead of SSE, all floating point operations are calculated at 80 bits precision internally, and then rounded off, which means that the actual computation is equally expensive for all floating-point types. The only cost is really memory-related then (in terms of CPU cache and memory bandwidth usage, and that's only an issue in float vs double, but irrelevant if you're comparing to int64)
with or without SSE, nearly all floating-point operations are pipelined. When using SSE, the double instructions may (I haven't looked this up) have a higher latency than their float equivalents, but the throughput is the same, so it should be possible to achieve similar performance with doubles.
It's also not a given that a fixed-point datatype would actually be faster either. It might, but the overhead of keeping this datatype consistent after some operations might outweigh the savings. Floating-point operations are fairly cheap on a modern CPU. They have a bit of latency, but as mentioned before, they're generally pipelined, potentially hiding this cost.
So my advice:
Write some quick tests. It shouldn't be that hard to write a program that performs a number of floating-point ops and then measures how much slower the double version is relative to the float one (a rough sketch follows below).
Look it up in the manuals, and see for yourself if there's any significant performance difference between float and double computations
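For what it's worth, a minimal sketch of such a quick test, using std::chrono; the loop body and iteration count are arbitrary, and a serious benchmark would need to control optimization flags and vectorization:

#include <chrono>
#include <cstddef>
#include <cstdio>

// Very rough float-vs-double timing sketch; the numbers it prints are only indicative.
template <typename T>
double time_muladd(std::size_t iters)
{
    T acc = T(1.0001);
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iters; ++i)
        acc = acc * T(1.0000001) + T(1e-9);   // dependent multiply-add chain
    auto stop = std::chrono::steady_clock::now();
    std::printf("result: %g\n", static_cast<double>(acc));  // keep the result live
    return std::chrono::duration<double>(stop - start).count();
}

int main()
{
    const std::size_t n = 100000000;
    std::printf("float:  %f s\n", time_muladd<float>(n));
    std::printf("double: %f s\n", time_muladd<double>(n));
}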
I have trouble understanding the rationale "since double is slower than float, we'll use 64-bit int". Guessing at performance has always been a black art requiring a lot of experience; on today's hardware it is even worse, considering the number of factors to take into account. Even measuring is difficult. I know of several cases where micro-benchmarks pointed to one solution but in-context measurement showed that another was better.
First note that two of the factors which have been given to explain the claimed slower performance of double compared to float are not pertinent here: the bandwidth needed will be the same for double as for 64-bit int, and SSE2 vectorization would give an advantage to double...
Then consider that using integer computation will increase the pressure on the integer registers and execution units while the floating-point ones apparently sit idle. (I've already seen cases where doing integer computation in double was a win, attributed to the additional execution units made available.)
So I doubt that rolling your own fixed-point arithmetic would be advantageous over using double (but I could be shown wrong by measurements).
Implementing 64-bit fixed point isn't really fun, especially for more complex functions like Sqrt or logarithm. Integers will probably still be a bit faster for simple operations like additions. And you'll need to deal with integer overflows. And you need to be careful when implementing rounding, or errors can easily accumulate.
We're implementing fixed point in a C# project because we need determinism, which floating point on .NET doesn't guarantee. And it's relatively painful. Some formula contained x^3 and, bang, integer overflow. Unless you have really compelling reasons not to, use float or double instead of fixed point.
SIMD instructions from SSE2 complicate the comparison further, since they allow operating on several floating-point numbers (4 floats or 2 doubles) at the same time. I'd use double and try to take advantage of these instructions. So double will probably be significantly slower than float, but comparing with ints is difficult, and I'd prefer float/double over fixed point in most scenarios.
It's always best to measure instead of guess. Yes, on many architectures, calculations on doubles process twice the data of calculations on floats (and long doubles are slower still). However, as other answers, and comments on this answer, have pointed out, the x86 architecture doesn't follow the same rules as, say, ARM processors, SPARC processors, etc. On x86, with the x87 FPU, floats, doubles and long doubles are all converted to long doubles for computation. I should have known this, because the conversion causes x86 results to be more accurate than SPARC, and Sun went through a lot of trouble to get the less accurate results for Java, sparking some debate (note that that page is from 1998; things have since changed).
Additionally, calculations on doubles are built in to the CPU where calculations on a fixed decimal datatype would be written in software and potentially slower.
You should be able to find a decent fixed sized decimal library and compare.
With various SIMD instruction sets you can perform 4 single-precision floating-point operations at the same cost as one; essentially you pack 4 floats into a single 128-bit register. When switching to doubles you can only pack 2 doubles into these registers, and hence you can only do two operations at the same time.
As many people have said, a 64bit int is probably not worth it if double is an option. At least when SSE is available. This might be different on micro controllers of various kinds but I guess that is not your application. If you need additional precision in long sums of floats, you should keep in mind that this operation is sometimes problematic with floats and doubles and would be more exact on integers.
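For reference, the width difference mentioned above shows up directly at the SSE intrinsics level; a minimal sketch (the function names and the assumption of unaligned loads are illustrative):

#include <emmintrin.h>  // SSE2

// One SSE register holds 4 floats but only 2 doubles, so each instruction
// does twice as many single-precision additions as double-precision ones.
void add4f(const float* a, const float* b, float* out)
{
    __m128 r = _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b));   // 4 floats at once
    _mm_storeu_ps(out, r);
}

void add2d(const double* a, const double* b, double* out)
{
    __m128d r = _mm_add_pd(_mm_loadu_pd(a), _mm_loadu_pd(b));  // 2 doubles at once
    _mm_storeu_pd(out, r);
}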

Overhead of casting double to float?

So I have megabytes of data stored as doubles that need to be sent over a network... now I don't need the precision that a double offers, so I want to convert these to a float before sending them over the network. What is the overhead of simply doing:
float myFloat = (float)myDouble;
I'll be doing this operation several million times every few seconds and don't want to slow anything down. Thanks
EDIT: My platform is x64 with Visual Studio 2008.
EDIT 2: I have no control over how they are stored.
As Michael Burr said, while the overhead strongly depends on your platform, the overhead is definitely less than the time needed to send them over the wire.
a rough estimate:
an excellent Gigabit wire carries about 800 Mbit/s of payload, i.e. roughly 25M floats/second.
On a 2 GHz single core, that gives you a whopping 80 clock cycles per value converted to break even; anything less, and you will save time. That should be more than enough on all architectures :)
A simple load-store cycle (barring all caching delays) should be below 5 cycles per value. With instruction interleaving, SIMD extensions and/or parallelizing on multiple cores, you are likely to do multiple conversions in a single cycle.
Also, the receiver will be happy having to handle only half the data. Remember that memory access time is nonlinear.
The only thing arguing against the conversion would be if the transfer needs to have minimal CPU load: a modern architecture could transfer the data from disk/memory to the bus without CPU intervention. However, with the above numbers I'd say that doesn't matter in practice.
[edit]
I checked some numbers: the 387 coprocessor would indeed have taken around 70 cycles for a load-store cycle. On the original Pentium, you are down to 3 cycles without any parallelization.
So, unless you run a gigabit network on a 386...
It's going to depend on your implementation of C++ libraries. Test it and see.
Even if it does take time this will not be the slow point in your application.
Your FPU can do the conversion a lot quicker than it can send network traffic (so the bottleneck here will more than likely be the write to the socket).
But as with all things like this, measure it and see.
Personally I don't think any time spent here will affect the real time spent sending the data.
Assuming that you're talking about a significant number of packets to ship the data (a reasonable assumption if you're sending millions of values) casting the doubles to float will likely reduce the number of network packets by about half (assuming sizeof(double)==8 and sizeof(float)==4).
Almost certainly the savings in network traffic will dominate whatever time is spent performing the conversion. But as everyone says, measuring some tests will be the proof of the pudding.
Bearing in mind that most compilers deal with doubles a lot more efficiently than floats -- many promote float to double before performing operations on them -- I'd consider taking the block of data, ZIPping/compressing it, then sending the compressed block across. Depending on what your data looks like, you could get 60-90% compression, vs. the 50% you'd get converting 8-byte values to four bytes.
You don't have any choice but to measure them yourself and see. You could use timers to measure them. Looks like someone has already implemented a neat C++ timer class.
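Here is a minimal measurement sketch along those lines, using std::chrono rather than any particular timer class; the buffer size and fill value are arbitrary:

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> src(10000000, 3.141592653589793);
    std::vector<float>  dst(src.size());

    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<float>(src[i]);   // the conversion being measured
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = stop - start;
    std::printf("%zu conversions in %f s\n", dst.size(), elapsed.count());
}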
I think this cast is a lot cheaper than you think, since it doesn't really involve any heavy calculation: the hardware re-biases the exponent and rounds the mantissa down to fewer bits, and on x86 with SSE2 it's a single instruction (CVTSD2SS).
It will also depend on the CPU and what floating point support it has. In the bad old days (1980s), processors supported integer operations only. Floating point math had to be emulated in software. A separate chip for floating point (a coprocessor) could be bought separately.
Modern CPUs now have SIMD instructions, so large amounts of floating point data can be processed at once. These instructions include MMX, SSE, 3DNow! and the like. Your compiler may know how to make use of these instructions, but you may need to write your code in a particular way, and turn on the right options.
Finally, the fastest way to process floating point data is in a video card. A fairly new language called OpenCL lets you send tasks to the video card to be processed there.
It all depends on how much performance you need.

Should I use double or float?

What are the advantages and disadvantages of using one instead of the other in C++?
If you want to know the true answer, you should read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
In short, although double allows for higher precision in its representation, for certain calculations it would produce larger errors. The "right" choice is: use as much precision as you need but not more and choose the right algorithm.
Many compilers do extended floating-point math in "non-strict" mode anyway (i.e. they use a wider floating-point type available in hardware, e.g. 80-bit or 128-bit floating point); this should be taken into account as well. In practice, you can hardly see any difference in speed; both are native to the hardware anyway.
Unless you have some specific reason to do otherwise, use double.
Perhaps surprisingly, it is double and not float that is the "normal" floating-point type in C (and C++). The standard math functions such as sin and log take doubles as arguments, and return doubles. A normal floating-point literal, as when you write 3.14 in your program, has the type double. Not float.
On typical modern computers, doubles can be just as fast as floats, or even faster, so performance is usually not a factor to consider, even for large calculations. (And those would have to be large calculations, or performance shouldn't even enter your mind. My new i7 desktop computer can do six billion multiplications of doubles in one second.)
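A small illustration of those defaults (the extra digits are printed only to make the precision difference visible):

#include <cmath>
#include <cstdio>

int main()
{
    auto x = 3.14;        // the literal is a double, not a float
    float  f = 0.1f;      // nearest float to 0.1
    double d = 0.1;       // nearest double to 0.1

    std::printf("%.17f\n%.17f\n", static_cast<double>(f), d);
    std::printf("%f\n", std::sin(x));   // std::sin takes and returns double
}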
This question is impossible to answer since there is no context to the question. Here are some things that can affect the choice:
Compiler implementation of floats, doubles and long doubles. The C++ standard states:
There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double.
So, all three can be the same size in memory.
Presence of an FPU. Not all CPUs have FPUs and sometimes the floating point types are emulated and sometimes the floating point types are just not supported.
FPU Architecture. The IA32's FPU is 80bit internally - 32 bit and 64 bit floats are expanded to 80bit on load and reduced on store. There's also SIMD which can do four 32bit floats or two 64bit floats in parallel. Use of SIMD is not defined in the standard so it would require a compiler that does more complex analysis to determine if SIMD can be used, or requires the use of special functions (libraries or intrinsics). The upshot of the 80bit internal format is that you can get slightly different results depending on how often the data is saved to RAM (thus, losing precision). For this reason, compilers don't optimise floating point code particularly well.
Memory bandwidth. If a double requires more storage than a float, then it will take longer to read the data. That's the naive answer. On a modern IA32, it all depends on where the data is coming from. If it's in L1 cache, the load is negligible provided the data comes from a single cache line. If it spans more than one cache line there's a small overhead. If it's from L2, it takes a while longer; if it's in RAM then it's longer still; and finally, if it's on disk it's a huge time. So the choice of float or double is less important than the way the data is used. If you want to do a small calculation on lots of sequential data, a small data type is preferable. Doing a lot of computation on a small data set would allow you to use bigger data types without any significant effect. If you're accessing the data very randomly, then the choice of data size is unimportant - data is loaded in pages / cache lines. So even if you only want a byte from RAM, you could get 32 bytes transferred (this is very dependent on the architecture of the system). On top of all of this, the CPU/FPU is pipelined and superscalar, so even though a load may take several cycles, the CPU/FPU could be busy doing something else (a multiply for instance) that hides the load time to a degree.
The standard does not enforce any particular format for floating point values.
If you have a specification, then that will guide you to the optimal choice. Otherwise, it's down to experience as to what to use.
double is more precise but is stored in 8 bytes; float is only 4 bytes, so less room and less precision.
You should be very careful if you have both double and float in your application. I had a bug due to that in the past. One part of the code was using float while the rest of the code was using double. Copying double to float and then float back to double can cause precision errors that have a big impact. In my case, it was a chemical factory... hopefully it didn't have dramatic consequences :)
I think it is because of this kind of bug that the Ariane 5 rocket exploded years ago!!!
Think carefully about the type to be used for a variable.
I personally go for double all the time until I see some bottleneck. Then I consider moving to float or optimizing some other part.
This depends on how the compiler implements double. It's legal for double and float to have the same size and precision (and they do on some systems).
That being said, if they are indeed different, the main issue is precision. A double has much higher precision due to its larger size. If the numbers you are using will commonly exceed the range or precision of a float, then use a double.
Several other people have mentioned performance issues. That would be exactly last on my list of considerations. Correctness should be your #1 consideration.
I think regardless of the differences (which as everyone points out, floats take up less space and are in general faster)... does anyone ever suffer performance issues using double? I say use double... and if later on you decide "wow, this is really slow"... find your performance bottleneck (which is probably not the fact you used double). THEN, if it's still too slow for you, see where you can sacrifice some precision and use float.
Use whichever precision is required to achieve the appropriate results. If you then find that your code isn't performing as well as you'd like (you used profiling correct?) take a look at:
Intel 64 and IA-32 Architectures Optimization Reference Manual
Software Optimization for the AMD64 Processor
It depends highly on the CPU; the most obvious trade-offs are between precision and memory. With GBs of RAM, memory is not much of an issue, so it's generally better to use doubles.
As for performance, it depends highly on the CPU. floats will usually get better performance than doubles on a 32-bit machine. On 64-bit, doubles are sometimes faster, since double is (usually) the native size. Still, what will matter much more than your choice of data types is whether or not you can take advantage of SIMD instructions on your processor.
double has higher precision, whereas floats take up less memory and are faster. In general you should use float unless you have a case where it isn't accurate enough.
The main difference between float and double is precision. Wikipedia has more info about
Single precision (float) and Double precision.