C++ difference between array of structs and equal arrays [closed] - c++

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Is there any difference in memory usage / execution speed between e.g.
struct test
{
    int a;
    float b;
    char c;
};
test ar[30];
and
int arr1[30];
float arr2[30];
char arr3[30];
? Let's pretend that we are not talking about convenience or other programmer-side concerns, only execution speed and memory usage.

In terms of memory layout, definitely.
When you allocate test ar[30] you are actually allocating:
int - float - char - (padding) - int - float - char - ...
While in your second example you are allocating:
int - int - int - .... - float - float - ... - char - ...
So the layout in memory is completely different, which will have an impact on your performance (depending on your access pattern, of course).

In terms of execution performance (speed), there is a difference because of the CPU cache, even if you ask the compiler to optimize.
If all the members of a given struct are accessed close together in time, locality improves and you get fewer cache misses.

In terms of memory size, the compiler may add padding to the struct for alignment, so it is possible that sizeof(ar) > sizeof(arr1) + sizeof(arr2) + sizeof(arr3)

Related

Modern way to use SIMD instructions in C++ using g++ [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I'm building a ray tracer and in many cases I need to make additions and multiplications on three floats.
In such cases I've been doing it the naive way:
class Color
{
    float mR, mG, mB;
    ...
    Color operator+(const Color &color) const
    {
        return Color(mR + color.mR,
                     mG + color.mG,
                     mB + color.mB);
    }
    Color operator*(const Color &color) const
    {
        return Color(mR * color.mR / COLOR_MAX,
                     mG * color.mG / COLOR_MAX,
                     mB * color.mB / COLOR_MAX);
    }
};
This would also happen in equivalent classes such as Point or Vect3.
Then I heard about SIMD instructions and they look like a good fit for what I'm doing. So, of course, I googled it and found this piece of code:
typedef int v4sf __attribute__((mode(V4SF))); // Vector of four single floats
union f4vector
{
    v4sf v;
    float f[4];
};
Which, first of all, uses an extra fourth float I don't need right now. But then gcc warns me that:
specifying vector types with __attribute__ ((mode)) is deprecated
I'd like to know how to do this in C++14 (if it even makes a difference at all) and I can't seem to find any other way of doing it.

Iterative or Recursive Factorial [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've implemented factorial recursively this way:
int factorial(int x){
    if (x <= 0)
        return 1;
    else
        return (x * factorial(x - 1));
}
int _tmain(int argc, _TCHAR* argv[])
{
    std::cout << "Please enter your number :::";
    int x;
    std::cin >> x;
    std::cout << "factorial(" << x << ") = " << factorial(x);
    getchar(); getchar();
}
Which way of implementing such code is more useful: writing it with iteration and loops, or writing it recursively as above?
It depends on the number itself.
For normal-range numbers, a recursive solution can be used. Since it builds on previous values to compute later ones, it provides the value of factorial(n-1) along the way:
factorial(n) = factorial(n-1) * n
However, since recursion uses the call stack, it will eventually overflow if your calls go deeper than the stack size. Moreover, a recursive solution can perform worse because of the register saving and restoring done at each level of the recursive call.
In such cases, the iterative approach is safer.
Have a look at this comparison.
Hope it helps!
In C++ a recursive implementation is usually more expensive than an iterative one.
The reason is that each call creates a new stack frame holding data such as the return address, local variables, and registers that need to be saved. A recursive implementation therefore requires an amount of memory linear in the depth of the nested calls, while an iterative implementation uses a constant amount of memory.
Your specific function is not strictly tail-recursive (the multiplication happens after the recursive call returns), but compilers can often transform this kind of accumulating recursion into a loop, replacing the calls with direct jumps and avoiding the extra call-stack memory.
For more detailed explanation you might want to read:
Is recursion ever faster than looping?

Defining variables vs calculating on the fly [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
In terms of readability and memory usage/processing speed, is it better to define a variable, modify it, and output the variable, or just to output the result? e.g.:
int a = 1, b = 2, c;
c = a+b;
std::cout << c << std::endl;
vs
int a = 1, b = 2;
std::cout << a+b << std::endl;
Thanks
Well, with this example, processing speed and space are negligible: so small, and so few instructions.
But in the grand scheme of things the answer is -- well, it depends.
The term "better" is in the eye of the beholder. What is better for one program might not be better for another (this includes readability). What works in one instance may not work in another. Or in the end the difference may be negligible: arithmetic instructions are fast, and the int, double, char, and float data types are relatively small and well defined, so you know how much memory you are taking up.
You also do not say whether these variables are declared on the stack or on the heap. If on the stack, it hardly matters that you declared them, because the memory is released when the function they live in returns. If on the heap, you may not want millions of variables just sitting there -- but then again, you may need them.
So it is decided almost entirely on a case-by-case basis when dealing with bigger projects.
And you tell me what is better here?
int result = (3434*234 + 3423 - 4/3*23 < 233 + 5435*234 + 342) ? (int)(234 + 234 + 234234/34) : (int)(2 + 3*234);
std::cout << result << std::endl;
OR
double x = 3434*234 + 3423 - 4/3*23;
double y = 233 + 5435*234 + 342;
double a = 234 + 234 + 234234/34;
double b = 2 + 3*234;
int result = 0;
if (x > y) result = a;
else result = b;
std::cout << result << std::endl;
In the end these do the same thing with negligible difference, but which one is easier to read?
Your question on memory is easy to answer: each int variable typically takes a few bytes (four on most platforms; a byte stores 8 bits, or binary digits). That is almost no memory, so defining a, b, and c is at most negligibly slower than just calculating a + b -- and an optimizing compiler will usually eliminate the named temporary entirely. Makes sense?

difference between literal types and variable types in c++ [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I was studying C++ via cplusplus.com and came across something like 75u, which seems to describe an unsigned constant.
What has got me confused is: what's the point of declaring a constant to be unsigned when there is already a provision to declare the variable to which 75 will be assigned as unsigned?
Simpler said:
Why would you specifically add a u to a number when assigning it to (for example) an unsigned int?
What's the difference between
unsigned int i = 75;
and
unsigned int i = 75u;
That's because the type of the variable on the left-hand side of the = (in an assignment) has nothing to do with how the expression on the right-hand side is evaluated.
This seems to surprise many new programmers, but it's still true.
Something like this:
const float two_thirds = 2 / 3; /* Bad code! */
does not assign 0.6666667 to two_thirds; since both 2 and 3 are int literals, the expression is evaluated using integer math, yielding 0.
You need:
const float two_thirds = 2.f / 3;
to force the expression to be evaluated as float. Similar reasoning applies to the u suffix: it fixes the type of the literal itself in the expression, independent of what the result is later assigned to, and unsigned has a different range and different arithmetic rules than signed.

Shift operator fast or not fast? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Which of the two following statements is faster, and why?
In which cases should one be preferred over the other?
double x = (a + b) >> 1;
double x = (a + b) / 2.0;
These do different things, so pick the one with the functionality you want: truncating the result down, or keeping the 0.5 fraction.
"Premature optimization is the root of all evil." Use what is more readable. When you have a performance issue, first look at your algorithms and data structures, where you can get the biggest performance gains; then run a profiler and optimize where necessary.
As others have said, the two statements produce different results, especially if (a + b) is an odd value.
Also, according to the language, a and b must be integral values in order to satisfy the shifting operation.
If a and b differ in type between the two statements, you are comparing apples to elephants.
Given this demo program:
#include <iostream>
#include <cstdlib>
#include <cmath>
using std::cout;
using std::endl;
int main(void)
{
    const unsigned int a = 5;
    const unsigned int b = 8;

    double x = (a + b) >> 1;
    cout << x << endl;

    double y = (a + b) / 2.0;
    cout << y << endl;

    return EXIT_SUCCESS;
}
The output:
6
6.5
Based on this experiment, the comparison is apples to oranges: the statement involving shifting is a different operation than dividing by a floating-point number.
As far as speed goes, the second statement is slower because the expression (a + b) must be converted to double before the division is applied. The division is floating point, which may be slow on platforms without hardware floating-point support.
You should not concern yourself on the execution speed of either statement. Of more importance is the correctness and robustness of the program. For example, the two statements above provide different results, which is a very important concern for correctness.
Most users would rather wait for a program to produce correct results than have a fast program producing incorrect results or behavior (nobody is in a hurry for a program to crash).
Management would rather you spend time completing the program than wasting time optimizing portions of the program that are executed infrequently.
If a or b is a double or float, the shift will not even compile: the built-in shift operators require integral operands.