Use of new in C++

Let's assume I pass a new object to a function like this:
loadContainer->addControlView( new BmpView( BMP_PICTURE ) );
Now, I want to change a specific characteristic of the BmpView before I pass it to addControlView. I do that like this:
Control* newView = new BmpView( BMP_PICTURE );
newView->changeColor( WHITE );
loadContainer->addControlView( newView );
Does this create an extra temporary/local object? Or is there an equal amount of memory allocated in both cases?

The only additional memory allocated in your second version is the pointer newView itself, whose size is small and is not affected by the actual size of the BmpView. Memory for the BmpView is not allocated twice.
I'm not considering any memory overhead of calling changeColor, which I assume wasn't the point of this question.

In both cases there is a single call to new, hence the amount of memory used is equal. (This is without speculating about any allocation possibly requested by the BmpView constructor, changeColor(), etc.)
However, you may wish to refactor your code to ensure exception safety and avoid potential leaks, thereby keeping the amount of memory used under control:
// C++11
std::unique_ptr<Control> newView(new BmpView(BMP_PICTURE));
// C++14, preferred
//auto newView = std::make_unique<BmpView>(BMP_PICTURE);
newView->changeColor( WHITE );
loadContainer->addControlView( newView.release() );
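As a further step, if you also control addControlView, you can make the ownership transfer explicit in the signature. This is only a sketch: the unique_ptr-taking overload and the stand-in classes below are hypothetical, not part of the original API:
#include <memory>
#include <utility>

// Hypothetical stand-ins for the real classes, for illustration only.
struct Control { virtual ~Control() = default; };
struct BmpView : Control {
    explicit BmpView(int /*bitmapId*/) {}
    void changeColor(int /*color*/) {}
};
struct Container {
    // Hypothetical signature: the container takes ownership explicitly.
    void addControlView(std::unique_ptr<Control> view) { /* store it... */ }
};

int main() {
    Container loadContainer;
    auto newView = std::make_unique<BmpView>(0 /* BMP_PICTURE */);
    newView->changeColor(1 /* WHITE */);
    // std::move makes the ownership transfer visible at the call site.
    loadContainer.addControlView(std::move(newView));
}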

Reference for the code/assembly below:
https://godbolt.org/g/8DgmC1
#include <cstdio>

class ValueClass {
public:
    // Class content not important...
    int someValue;
};

void PrintValueClass(ValueClass* ptr) {
    printf("%d\n", ptr->someValue);
}

int main() {
    PrintValueClass(new ValueClass());

    ValueClass* pValueClass = new ValueClass();
    pValueClass->someValue = 55;
    PrintValueClass(pValueClass);
    return 1;
}
Compiled assembly (PrintValueClass redacted as not important to the question at hand):
Example where you pass the (new ValueClass) directly to the function.
main:
push rbp
mov rbp, rsp
mov edi, 4
call operator new(unsigned long)
mov DWORD PTR [rax], 0
mov rdi, rax
call PrintValueClass(ValueClass*)
mov eax, 1
pop rbp
ret
Example where you create a local variable holding the pointer, do something to it, and then pass it to the function.
main:
push rbp
mov rbp, rsp
sub rsp, 16
mov edi, 4
call operator new(unsigned long)
mov DWORD PTR [rax], 0
mov QWORD PTR [rbp-8], rax
mov rax, QWORD PTR [rbp-8]
mov DWORD PTR [rax], 55
mov rax, QWORD PTR [rbp-8]
mov rdi, rax
call PrintValueClass(ValueClass*)
mov eax, 1
leave
ret
Before diving into the assembly: if your question is whether the new operation occurs twice when you store the pointer in a variable first, the answer is no. As the assembly shows, the new 'function' is only called once, so only sizeof(ValueClass) bytes are ever allocated through the heap allocation function. But to answer the question fully, even if this wasn't exactly what was asked: is extra memory used? Technically yes, realistically no.
The only difference between these two pieces of code is the stack 'allocation', noted by the sub rsp, 16, which essentially means 'reserve' 16 bytes on the stack for local variables. So truly the only difference here is 16 bytes, and even that will vary depending on which compiler you use, which architecture you target, and many other factors.
At the end of the day, I would go as far as to say you would never care about the extra 16 bytes.

Why are parameters allocated below the frame pointer instead of above?

I have tried to understand this based on a square function in C++ at godbolt.org. Clearly, the return value, the parameter, and the local variable all use "rbp - offset" addressing in this function.
Could someone please explain how this is possible?
What, then, would rbp + offset refer to in this case?
int square(int num) {
    int n = 5; // just to test how locals are treated with the frame pointer
    return num * num;
}
Compiler (x86-64 gcc 11.1)
Generated Assembly:
square(int):
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-20], edi ; both the param and the local var
mov DWORD PTR [rbp-4], 5    ; use rbp-offset addressing
mov eax, DWORD PTR [rbp-20]
imul eax, eax
pop rbp
ret
This is one of those cases where it’s handy to distinguish between parameters and arguments. In short: arguments are the values given by the caller, while parameters are the variables holding them.
When square is called, the caller places the argument in the rdi register, in accordance with the standard x86-64 calling convention. square then allocates a local variable, the parameter, and places the argument in the parameter. This allows the parameter to be used like any other variable: it can be read, written to, have its address taken, and so on. Since in this case it's the callee that allocated the memory for the parameter, it necessarily has to reside below the frame pointer.
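A minimal illustration of why the parameter needs addressable storage (a sketch; compile it at -O0 in the same godbolt setup to see the spill):
// Taking the parameter's address forces it to live in memory, not just in
// the edi register; at -O0, GCC spills the incoming value below rbp for this.
int square_via_pointer(int num) {
    int* p = &num; // num must have an address for this to work
    return *p * *p;
}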
With an ABI where arguments are passed on the stack, the callee would be able to reuse the stack slot containing the argument as the parameter. This is exactly what happens on x86-32 (pass -m32 to see yourself):
square(int): # #square(int)
push ebp
mov ebp, esp
push eax
mov eax, dword ptr [ebp + 8]
mov dword ptr [ebp - 4], 5
mov eax, dword ptr [ebp + 8]
imul eax, dword ptr [ebp + 8]
add esp, 4
pop ebp
ret
Of course, if you enabled optimisations, the compiler would not bother with allocating a parameter on the stack in the callee; it would just use the value in the register directly:
square(int): # #square(int)
mov eax, edi
imul eax, edi
ret
GCC allows "leaf" functions, those that don't call other functions, to skip creating a stack frame; the free stack space is fair game for such functions to use as they wish.

Compiler stops optimizing unused string away when adding characters

I am curious why the following piece of code:
#include <string>

int main()
{
    std::string a = "ABCDEFGHIJKLMNO";
}
when compiled with -O3 yields the following code:
main: # #main
xor eax, eax
ret
(I perfectly understand that there is no need for the unused a so the compiler can entirely omit it from the generated code)
However the following program:
#include <string>

int main()
{
    std::string a = "ABCDEFGHIJKLMNOP"; // <-- !!! One Extra P
}
yields:
main: # #main
push rbx
sub rsp, 48
lea rbx, [rsp + 32]
mov qword ptr [rsp + 16], rbx
mov qword ptr [rsp + 8], 16
lea rdi, [rsp + 16]
lea rsi, [rsp + 8]
xor edx, edx
call std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_create(unsigned long&, unsigned long)
mov qword ptr [rsp + 16], rax
mov rcx, qword ptr [rsp + 8]
mov qword ptr [rsp + 32], rcx
movups xmm0, xmmword ptr [rip + .L.str]
movups xmmword ptr [rax], xmm0
mov qword ptr [rsp + 24], rcx
mov rax, qword ptr [rsp + 16]
mov byte ptr [rax + rcx], 0
mov rdi, qword ptr [rsp + 16]
cmp rdi, rbx
je .LBB0_3
call operator delete(void*)
.LBB0_3:
xor eax, eax
add rsp, 48
pop rbx
ret
mov rdi, rax
call _Unwind_Resume
.L.str:
.asciz "ABCDEFGHIJKLMNOP"
when compiled with the same -O3. I don't understand why it does not recognize that a is still unused, even though the string is just one byte longer.
This question is relevant to gcc 9.1 and clang 8.0 (online: https://gcc.godbolt.org/z/p1Z8Ns), because other compilers in my observation either drop the unused variable entirely (ellcc) or generate code for it regardless of the length of the string.
This is due to the small string optimization. When the string data is less than or equal to 16 characters, including the null terminator, it is stored in a buffer local to the std::string object itself. Otherwise, the string allocates memory on the heap and stores its data there.
The first string "ABCDEFGHIJKLMNO" plus the null terminator is exactly of size 16. Adding "P" makes it exceed the buffer, hence new is called internally, potentially leading to a system call. The compiler can optimize something away only if it can ensure that there are no side effects. A system call probably makes this impossible; by contrast, changing a buffer local to the object under construction allows for such a side-effect analysis.
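You can observe the threshold from code as well. A quick sketch; the exact capacity values are libstdc++ implementation details, not guaranteed by the standard:
#include <iostream>
#include <string>

int main() {
    std::string fits   = "ABCDEFGHIJKLMNO";  // 15 chars + '\0' = 16: local buffer
    std::string spills = "ABCDEFGHIJKLMNOP"; // 16 chars + '\0' = 17: heap-allocated
    // With libstdc++ 9.1 this typically prints 15, then 16 (or more).
    std::cout << fits.capacity() << '\n' << spills.capacity() << '\n';
}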
Tracing the local buffer in libstdc++, version 9.1, reveals these parts of bits/basic_string.h:
template<typename _CharT, typename _Traits, typename _Alloc>
class basic_string
{
    // ...
    enum { _S_local_capacity = 15 / sizeof(_CharT) };

    union
    {
        _CharT _M_local_buf[_S_local_capacity + 1];
        size_type _M_allocated_capacity;
    };
    // ...
};
which lets you spot the local buffer size _S_local_capacity and the local buffer itself (_M_local_buf). When the constructor triggers basic_string::_M_construct being called, you have in bits/basic_string.tcc:
void _M_construct(_InIterator __beg, _InIterator __end, ...)
{
    size_type __len = 0;
    size_type __capacity = size_type(_S_local_capacity);

    while (__beg != __end && __len < __capacity)
    {
        _M_data()[__len++] = *__beg;
        ++__beg;
    }
where the local buffer is filled with its content. Right after this part, we get to the branch where the local capacity is exhausted: new storage is allocated (through the allocator in _M_create), the local buffer is copied into the new storage, and the rest of the initializing argument is appended:
while (__beg != __end)
{
    if (__len == __capacity)
    {
        // Allocate more space.
        __capacity = __len + 1;
        pointer __another = _M_create(__capacity, __len);
        this->_S_copy(__another, _M_data(), __len);
        _M_dispose();
        _M_data(__another);
        _M_capacity(__capacity);
    }
    _M_data()[__len++] = *__beg;
    ++__beg;
}
As a side note, small string optimization is quite a topic of its own. To get a feeling for how tweaking individual bits can make a difference at large scale, I'd recommend this talk. It also covers how the std::string implementation that ships with gcc (libstdc++) works and how it has changed over time to match newer versions of the standard.
I was surprised the compiler saw through a std::string constructor/destructor pair until I saw your second example. It didn't. What you're seeing here is small string optimization and corresponding optimizations from the compiler around that.
Small string optimization is when the std::string object itself is big enough to hold the contents of the string, plus a size and possibly a discriminating bit that indicates whether the string is operating in small or big string mode. In such a case, no dynamic allocation occurs and the string is stored in the std::string object itself.
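One way to see the in-object buffer reflected in the type itself (a sketch; the value is specific to libstdc++ on x86-64):
#include <iostream>
#include <string>

int main() {
    // libstdc++ layout: data pointer (8) + size (8) + a 16-byte union of
    // local buffer / allocated capacity.
    std::cout << sizeof(std::string) << '\n'; // typically prints 32
}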
Compilers are really bad at eliding unneeded allocations and deallocations; they are treated almost as if they had side effects and are thus impossible to elide. When you go over the small string optimization threshold, a dynamic allocation occurs and the result is what you see.
As an example
void foo() {
    delete new int;
}
is the simplest, dumbest allocation/deallocation pair possible, yet gcc emits this assembly even under -O3:
sub rsp, 8
mov edi, 4
call operator new(unsigned long)
mov esi, 4
add rsp, 8
mov rdi, rax
jmp operator delete(void*, unsigned long)
While the accepted answer is valid, since C++14 it's actually the case that new and delete calls can be optimized away. See this arcane wording on cppreference:
New-expressions are allowed to elide ... allocations made through replaceable allocation functions. In case of elision, the storage may be provided by the compiler without making the call to an allocation function (this also permits optimizing out unused new-expression).
...
Note that this optimization is only permitted when new-expressions are
used, not any other methods to call a replaceable allocation function:
delete[] new int[10]; can be optimized out, but operator
delete(operator new(10)); cannot.
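The distinction the quote draws can be written out directly. A sketch of the two forms; whether the first pair is actually elided still depends on the compiler:
#include <new>

// A new-expression pair: the standard permits eliding this allocation.
void elidable() {
    delete[] new int[10];
}

// A direct call to the replaceable allocation function: not elidable.
void not_elidable() {
    ::operator delete(::operator new(10));
}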
This actually allows compilers to completely drop your local std::string even if it's very long. In fact, clang++ with libc++ already does this (GodBolt), since libc++ uses the built-ins __new and __delete in its implementation of std::string; that's "storage provided by the compiler". Thus, we get:
main():
xor eax, eax
ret
with basically any-length unused string.
GCC doesn't do this yet, but I've recently opened bug reports about it; see this SO answer for links.

C++ access object via pointer instead of direct access

In some code I have seen the following:
(&object)->something.
Is there any advantage to object.something ?
Does the compiler somehow optimize such code, or is it faster in any way?
If operator& is not overloaded, it's essentially the same (https://godbolt.org/g/iPTjRY):
auto v_1 = f_1.get();
auto v_2 = (&f_1)->get();
resolved to pretty much the same:
lea rax, [rbp-12] ; load object address
mov rdi, rax ; move object address into rdi, not sure why not just: 'lea rdi, [rbp-12]'
call Foo::get() const ; invoke the subroutine
mov DWORD PTR [rbp-4], eax ; save the result at [rbp-4]
(already with no optimizations they are the same; with optimizations turned on, the calls get discarded entirely, so that's left for the curious reader)
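For completeness, here is a sketch of the case the answer sets aside: once operator& is overloaded, the two spellings stop being equivalent, because (&object) now invokes the overload. The Counted class is made up for illustration:
#include <iostream>

struct Counted {
    int value = 42;
    int get() const { return value; }
    // Overloaded address-of: an observable side effect on every use.
    Counted* operator&() {
        std::cout << "operator& called\n";
        return this;
    }
};

int main() {
    Counted c;
    std::cout << c.get() << '\n';     // direct access, no operator& involved
    std::cout << (&c)->get() << '\n'; // goes through the overloaded operator&
}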

Optimization of raw new[]/delete[] vs std::vector

Let's mess around with very basic dynamically allocated memory. We take a vector of 3, set its elements and return the sum of the vector.
In the first test case I used a raw pointer with new[]/delete[]. In the second I used std::vector:
#include <vector>

int main()
{
    //int *v = new int[3]; // (1)
    auto v = std::vector<int>(3); // (2)

    for (int i = 0; i < 3; ++i)
        v[i] = i + 1;

    int s = 0;
    for (int i = 0; i < 3; ++i)
        s += v[i];

    //delete[] v; // (1)
    return s;
}
Assembly of (1) (new[]/delete[])
main: # #main
mov eax, 6
ret
Assembly of (2) (std::vector)
main: # #main
push rax
mov edi, 12
call operator new(unsigned long)
mov qword ptr [rax], 0
movabs rcx, 8589934593
mov qword ptr [rax], rcx
mov dword ptr [rax + 8], 3
test rax, rax
je .LBB0_2
mov rdi, rax
call operator delete(void*)
.LBB0_2: # %std::vector<int, std::allocator<int> >::~vector() [clone .exit]
mov eax, 6
pop rdx
ret
Both outputs taken from https://gcc.godbolt.org/ with -std=c++14 -O3
In both versions the returned value is computed at compile time so we see just mov eax, 6; ret.
With the raw new[]/delete[], the dynamic allocation was completely removed. With std::vector, however, the memory is allocated, set, and freed.
This happens even with an unused variable auto v = std::vector<int>(3): a call to new, the memory is set, and then a call to delete.
I realize this is most likely a near impossible answer to give, but maybe someone has some insights and some interesting answers might pop out.
What are the contributing factors that don't allow compiler optimizations to remove the memory allocation in the std::vector case, like in the raw memory allocation case?
When using a pointer to a dynamically allocated array (directly using new[] and delete[]), the compiler optimized away the calls to operator new and operator delete even though they have observable side effects. This optimization is allowed by the C++ standard section 5.3.4 paragraph 10:
An implementation is allowed to omit a call to a replaceable global
allocation function (18.6.1.1, 18.6.1.2). When it does so, the storage
is instead provided by the implementation or...
I'll show the rest of the sentence, which is crucial, at the end.
This optimization is relatively new because it was first allowed in C++14 (proposal N3664). Clang has supported it since 3.4. The latest version of gcc, namely 5.3.0, doesn't take advantage of this relaxation of the as-if rule. It produces the following code:
main:
sub rsp, 8
mov edi, 12
call operator new[](unsigned long)
mov DWORD PTR [rax], 1
mov DWORD PTR [rax+4], 2
mov rdi, rax
mov DWORD PTR [rax+8], 3
call operator delete[](void*)
mov eax, 6
add rsp, 8
ret
MSVC 2013 also doesn't support this optimization. It produces the following code:
main:
sub rsp,28h
mov ecx,0Ch
call operator new[] ()
mov rcx,rax
mov dword ptr [rax],1
mov dword ptr [rax+4],2
mov dword ptr [rax+8],3
call operator delete[] ()
mov eax,6
add rsp,28h
ret
I currently don't have access to MSVC 2015 Update 1 and therefore I don't know whether it supports this optimization or not.
Finally, here is the assembly code generated by icc 13.0.1:
main:
push rbp
mov rbp, rsp
and rsp, -128
sub rsp, 128
mov edi, 3
call __intel_new_proc_init
stmxcsr DWORD PTR [rsp]
mov edi, 12
or DWORD PTR [rsp], 32832
ldmxcsr DWORD PTR [rsp]
call operator new[](unsigned long)
mov rdi, rax
mov DWORD PTR [rax], 1
mov DWORD PTR [4+rax], 2
mov DWORD PTR [8+rax], 3
call operator delete[](void*)
mov eax, 6
mov rsp, rbp
pop rbp
ret
Clearly, it doesn't support this optimization. I don't have access to the latest version of icc, namely 16.0.
All of these code snippets have been produced with optimizations enabled.
When using std::vector, none of these compilers optimized away the allocation. When a compiler doesn't perform an optimization, it's either because it cannot for some reason or because the optimization is just not yet supported.
What are the contributing factors that don't allow compiler
optimizations to remove the memory allocation in the std::vector case,
like in the raw memory allocation case?
The compiler didn't perform the optimization because it's not allowed to. To see this, let's see the rest of the sentence of paragraph 10 from 5.3.4:
An implementation is allowed to omit a call to a replaceable global
allocation function (18.6.1.1, 18.6.1.2). When it does so, the storage
is instead provided by the implementation or provided by extending the
allocation of another new-expression.
What this is saying is that you can omit a call to a replaceable global allocation function only if it originated from a new-expression. A new-expression is defined in paragraph 1 of the same section.
The following expression
new int[3]
is a new-expression and therefore the compiler is allowed to optimize away the associated allocation function call.
On the other hand, the following expression:
::operator new(12)
is NOT a new-expression (see 5.3.4 paragraph 1). It is just a function call expression; in other words, it is treated as a typical function call. This call cannot be optimized away because it's imported from another shared library (even if you linked the runtime statically, the function itself calls another imported function).
The default allocator used by std::vector allocates memory using ::operator new and therefore the compiler is not allowed to optimize it away.
Let's test this. Here's the code:
int main()
{
    int *v = (int*)::operator new(12);

    for (int i = 0; i < 3; ++i)
        v[i] = i + 1;

    int s = 0;
    for (int i = 0; i < 3; ++i)
        s += v[i];

    delete v;
    return s;
}
By compiling using Clang 3.7, we get the following assembly code:
main: # #main
push rax
mov edi, 12
call operator new(unsigned long)
movabs rcx, 8589934593
mov qword ptr [rax], rcx
mov dword ptr [rax + 8], 3
test rax, rax
je .LBB0_2
mov rdi, rax
call operator delete(void*)
.LBB0_2:
mov eax, 6
pop rdx
ret
This is exactly the same as the assembly code generated when using std::vector, except for the mov qword ptr [rax], 0, which comes from the constructor of std::vector (the compiler should have removed it but failed to do so because of a flaw in its optimization algorithms).

Do iota, generate, and a hand rolled loop all perform the same?

Is there a performance difference between these three ways of populating a vector?
#include <vector>
#include <numeric>
#include <algorithm>
#include <iterator>

int main()
{
    std::vector<int> v(10);
    std::iota(v.begin(), v.end(), 0);

    std::vector<int> v2(10);
    int i = 0;
    std::generate(v2.begin(), v2.end(), [&i](){ return i++; });

    std::vector<int> v3(10);
    i = 0;
    for (auto& j : v3)
    {
        j = i++;
    }

    return 0;
}
I know that they all produce the same results, I am interested only to know if there is a speed difference for larger vectors. Would the answer be different for a different type?
We can look at the output assembly (I used gcc.godbolt.org, gcc -O3, with your code):
1) First version, with std::iota:
main:
sub rsp, 8
mov edi, 40
call operator new(unsigned long)
mov DWORD PTR [rax], 0
mov DWORD PTR [rax+4], 1
mov rdi, rax
mov DWORD PTR [rax+8], 2
mov DWORD PTR [rax+12], 3
mov DWORD PTR [rax+16], 4
mov DWORD PTR [rax+20], 5
mov DWORD PTR [rax+24], 6
mov DWORD PTR [rax+28], 7
mov DWORD PTR [rax+32], 8
mov DWORD PTR [rax+36], 9
call operator delete(void*)
xor eax, eax
add rsp, 8
ret
2) Version with std::generate and the lambda:
main:
sub rsp, 8
mov edi, 40
call operator new(unsigned long)
mov DWORD PTR [rax], 0
mov DWORD PTR [rax+4], 1
mov rdi, rax
mov DWORD PTR [rax+8], 2
mov DWORD PTR [rax+12], 3
mov DWORD PTR [rax+16], 4
mov DWORD PTR [rax+20], 5
mov DWORD PTR [rax+24], 6
mov DWORD PTR [rax+28], 7
mov DWORD PTR [rax+32], 8
mov DWORD PTR [rax+36], 9
call operator delete(void*)
xor eax, eax
add rsp, 8
ret
3) And the last version, with the hand-written loop:
main:
sub rsp, 8
mov edi, 40
call operator new(unsigned long)
mov DWORD PTR [rax], 0
mov DWORD PTR [rax+4], 1
mov rdi, rax
mov DWORD PTR [rax+8], 2
mov DWORD PTR [rax+12], 3
mov DWORD PTR [rax+16], 4
mov DWORD PTR [rax+20], 5
mov DWORD PTR [rax+24], 6
mov DWORD PTR [rax+28], 7
mov DWORD PTR [rax+32], 8
mov DWORD PTR [rax+36], 9
call operator delete(void*)
xor eax, eax
add rsp, 8
ret
Conclusion:
As expected, all three generate the same assembly (all unrolled) with a decent compiler, optimizations enabled.
So no, there is no performance difference.
Note:
I also compared the assemblies using vectors large enough that the loops are not unrolled (I don't know GCC's heuristics, but unrolling stopped for sizes of roughly 15 and above).
In that case the assembly is still identical for all 3 cases. I won't copy the output here since it doesn't add much to the answer, but the takeaway is that compilers are really very good at optimizing this kind of code.
The proper way to find out is to measure and/or compare the generated code, of course. Since std::vector<T> uses contiguous memory for objects of type T, compilers are likely to see through all 3 versions of the loops and generate nearly identical code. Also, there is fairly little a smart implementation can do for the specific algorithms in your setup. Things would be different, e.g., when using std::deque<T>, where algorithms could process segments individually to improve performance (I'm not aware of any implementation which actually does so).
If performance is your biggest concern and you are using large vectors, you might want to avoid creating a large vector up front, as this will probably touch all the memory even though it is about to be overwritten. Instead, you'd construct an empty vector, reserve() sufficient memory, and then use a suitable target iterator (e.g., std::back_inserter(v)); the approaches would need to be adapted accordingly. When constructing the objects inside the algorithm, the algorithms can actually apply some smarts which a naive loop using, e.g., push_back() or a suitable appending iterator probably doesn't: since the algorithms can see how many objects they are going to create, they can hoist the check against the capacity out of the loop (although this needs some special access through the iterator type). Even without such an optimization in the algorithm, I would expect that doing a single pass over the vector has a much bigger benefit for performance than any tweaks in the algorithms.
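A sketch of the reserve()-plus-appending-iterator variant described above, using std::generate_n so the algorithm still knows how many elements it will create:
#include <vector>
#include <algorithm>
#include <iterator>

int main() {
    std::vector<int> v;
    v.reserve(10); // one allocation, but no value-initialization pass
    int i = 0;
    std::generate_n(std::back_inserter(v), 10, [&i] { return i++; });
    return 0;
}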
You forgot to mention one more standard algorithm: std::for_each.
For example
std::vector<int> v4(10);
int i = 0;
std::for_each(v4.begin(), v4.end(), [&i](int& item){ item = i++; });
There is no essential difference between the algorithms and the range-based for statement; in fact, they duplicate each other. For example, the range-based for statement uses the same methods begin() and end().
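For reference, the range-based for statement in the question desugars to roughly the following (a sketch of the expansion the standard specifies):
#include <vector>

int main() {
    std::vector<int> v3(10);
    int i = 0;
    // Roughly what `for (auto& j : v3) { j = i++; }` expands to:
    {
        auto&& __range = v3;
        auto __begin = __range.begin();
        auto __end = __range.end();
        for (; __begin != __end; ++__begin) {
            auto& j = *__begin;
            j = i++;
        }
    }
    return 0;
}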
So it would be better to pay attention to expressiveness. In this case I would prefer std::iota.
Also, it might be interesting to read about my proposal for algorithm std::iota. Though the general text is written in Russian, you will be able to read it using, for example, Google Translate.