What is the purpose of std::launder? - c++

P0137 introduces the function template std::launder and makes many, many changes to the standard in the sections concerning unions, lifetime, and pointers.
What is the problem this paper is solving? What are the changes to the language that I have to be aware of? And what are we laundering?

std::launder is aptly named, though only if you know what it's for. It performs memory laundering.
Consider the example in the paper:
struct X { const int n; };
union U { X x; float f; };
...
U u = {{ 1 }};
That statement performs aggregate initialization, initializing the first member of U with {1}.
Because n is a const variable, the compiler is free to assume that u.x.n shall always be 1.
So what happens if we do this:
X *p = new (&u.x) X {2};
Because X is trivial, we need not destroy the old object before creating a new one in its place, so this is perfectly legal code. The new object will have its n member be 2.
So tell me... what will u.x.n return?
The obvious answer will be 2. But that's wrong, because the compiler is allowed to assume that a truly const variable (not merely a const&, but an object variable declared const) will never change. But we just changed it.
[basic.life]/8 spells out the circumstances when it is OK to access the newly created object through variables/pointers/references to the old one. And having a const member is one of the disqualifying factors.
So... how can we talk about u.x.n properly?
We have to launder our memory:
assert(*std::launder(&u.x.n) == 2); //Will be true.
Money laundering is used to prevent people from tracing where you got your money from. Memory laundering is used to prevent the compiler from tracing where you got your object from, thus forcing it to avoid any optimizations that may no longer apply.
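Putting the pieces together, here is a minimal complete sketch of the example (C++17; the commented-out line is the UB described above):
#include <cassert>
#include <new>

struct X { const int n; };
union U { X x; float f; };

int main() {
    U u = {{ 1 }};
    X* p = new (&u.x) X{2};              // reuse the storage; the old X is gone
    assert(p->n == 2);                   // OK: access through the pointer to the new object
    assert(*std::launder(&u.x.n) == 2);  // OK: laundered access
    // assert(u.x.n == 2);               // UB: u.x still refers to the replaced object
}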
Another of the disqualifying factors is if you change the type of the object. std::launder can help here too:
alignas(int) char data[sizeof(int)];
new(&data) int;
int *p = std::launder(reinterpret_cast<int*>(&data));
[basic.life]/8 tells us that, if you allocate a new object in the storage of the old one, you cannot access the new object through pointers to the old. launder allows us to side-step that.

std::launder is a misnomer. This function performs the opposite of laundering: it soils the pointed-to memory, to remove any expectation the compiler might have regarding the pointed-to value. It precludes any compiler optimizations based on such expectations.
Thus in NicolBolas' answer, the compiler might be assuming that some memory holds some constant value, or is uninitialized. You're telling the compiler: "That place is (now) soiled, don't make that assumption".
If you're wondering why the compiler would always stick to its naive expectations in the first place, and would need you to conspicuously soil things for it, you might want to read this discussion:
Why introduce `std::launder` rather than have the compiler take care of it?
... which led me to this view of what std::launder means.

I think there are two purposes of std::launder.
A barrier for constant folding/propagation, including devirtualization.
A barrier for fine-grained object-structure-based alias analysis.
Barrier for overaggressive constant folding/propagation (abandoned)
Historically, the C++ standard allowed compilers to assume that the value of a const-qualified or reference non-static data member, once obtained, is immutable, even if its containing object is non-const and may be reused by placement new.
In C++17/P0137R1, std::launder is introduced as a functionality that disables the aforementioned (mis-)optimization (CWG 1776), which is needed for std::optional. And as discussed in P0532R0, portable implementations of std::vector and std::deque may also need std::launder, even if they are C++98 components.
Fortunately, such (mis-)optimization is forbidden by RU007 (included in P1971R0 and C++20). AFAIK there's no compiler performing this (mis-)optimization.
Barrier for devirtualization
A virtual table pointer (vptr) can be considered constant during the lifetime of its containing polymorphic object, which is needed for devirtualization. Given that the vptr is not a non-static data member, compilers are still allowed, in some cases, to perform devirtualization based on the assumption that the vptr has not changed (i.e., either the object is still within its lifetime, or it has been reused by a new object of the same dynamic type).
For some unusual uses that replace a polymorphic object with a new object of different dynamic type (shown here), std::launder is needed as a barrier for devirtualization.
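A minimal sketch of that pattern (assuming sizeof(B) == sizeof(A), which holds here on typical ABIs but is not guaranteed):
#include <new>

struct A { virtual int f() { return 1; } };
struct B : A { int f() override { return 2; } };

int replace(A& a) {
    a.~A();
    new (&a) B;                    // same storage, different dynamic type
    return std::launder(&a)->f();  // barrier: calls B::f
    // return a.f();               // UB: may be devirtualized to A::f
}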
IIUC Clang implemented std::launder (__builtin_launder) with these semantics (LLVM-D40218).
Barrier for object-structure-based alias analysis
P0137R1 also changes the C++ object model by introducing pointer-interconvertibility. IIUC such a change enables some "object-structure-based alias analysis" proposed in N4303.
As a result, P0137R1 makes it undefined to directly dereference a pointer reinterpret_cast'd from an unsigned char [N] array, even if the array is providing storage for another object of the correct type. std::launder is then needed to access the nested object.
This kind of alias analysis seems overaggressive and may break many useful code bases. AFAIK it's currently not implemented by any compiler.
Relation to type-based alias analysis/strict aliasing
IIUC std::launder and type-based alias analysis/strict aliasing are unrelated. std::launder requires that a living object of the correct type be at the provided address.
However, it seems that they are accidentally made related in Clang (LLVM-D47607).

Related

Why introduce `std::launder` rather than have the compiler take care of it?

I've just read
What is the purpose of std::launder?
and frankly, I am left scratching my head.
Let's start with the second example in NicolBolas' accepted answer:
aligned_storage<sizeof(int), alignof(int)>::type data;
new(&data) int;
int *p = std::launder(reinterpret_cast<int*>(&data));
[basic.life]/8 tells us that, if you allocate a new object in the storage of the old one, you cannot access the new object through pointers to the old. std::launder allows us to side-step that.
So, why not just change the language standard so that accessing data through a reinterpret_cast<int*>(&data) is valid/appropriate? In real life, money laundering is a way to hide reality from the law. But we don't have anything to hide - we're doing something perfectly legitimate here. So why can't the compiler just change its behavior to its std::launder() behavior when it notices we're accessing data this way?
On to the first example:
X *p = new (&u.x) X {2};
Because X is trivial, we need not destroy the old object before creating a new one in its place, so this is perfectly legal code. The new object will have its n member be 2.
So tell me... what will u.x.n return?
The obvious answer will be 2. But that's wrong, because the compiler is allowed to assume that a truly const variable (not merely a const&, but an object variable declared const) will never change. But we just changed it.
So why not make the compiler not be allowed to make the assumption when we write this kind of code, accessing the constant field through the pointer?
Why is it reasonable to have this pseudo-function for punching a hole in formal language semantics, rather than setting the semantics to what they need to be depending on whether or not the code does something like in these examples?
depending on whether or not the code does something like in these examples
Because the compiler cannot always know when data is being accessed "this way".
As things currently stand, the compiler is allowed to assume that, for the following code:
struct foo{ int const x; };
void some_func(foo*);
int bar() {
    foo f { 123 };
    some_func(&f);
    return f.x;
}
bar will always return 123. The compiler may generate code that actually accesses the object. But the object model does not require this. f.x is a const object (not a reference/pointer to const), and therefore it cannot be changed. And f is required to always name the same object (indeed, these are the parts of the standard you would have to change). Therefore, the value of f.x cannot be changed by any non-UB means.
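To see why the compiler cannot know, consider a hypothetical body for some_func that legally replaces the object under the rules this answer describes (a sketch, reusing the foo from above):
#include <new>

void some_func(foo* p) {
    p->~foo();
    new (p) foo{456};  // a new foo now occupies f's storage
}
// Even so, the caller's f.x names the old object, and reading it after the
// replacement is UB, so the compiler may still fold bar()'s return value to 123.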
Why is it reasonable to have this pseudo-function for punching a hole in formal language semantics
This was actually discussed. That paper brings up how long these issues have existed (i.e., since C++03) and how often optimizations made possible by this object model have been employed.
The proposal was rejected on the grounds that it would not actually fix the problem. From this trip report:
However, during discussion it came to light that the proposed alternative would not handle all affected scenarios (particularly scenarios where vtable pointers are in play), and it did not gain consensus.
The report doesn't go into any particular detail on the matter, and the discussions in question are not publicly available. But the proposal itself does point out that it wouldn't allow devirtualizing a second virtual function call, as the first call may have built a new object. So even P0532 would not make launder unnecessary, merely less necessary.
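The second-call problem, sketched (hypothetical example):
struct Base { virtual void f() {} };

void two_calls(Base* p) {
    p->f();  // this call may end the lifetime of *p and placement-new an
             // object of a different polymorphic type into the same storage
    p->f();  // caching the vptr loaded for the first call here is only sound
             // under the current rules; P0532's alternative could not preserve that
}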
I take the opposite approach. I don't like that C++ just plays around with memory like this, or at least allows you to. The compiler should throw an error when trying to create a new object at an address that contains a constant value or an object with a const member. If the programmer says const, it's const. Sorry.

QByteArray access and strict aliasing [duplicate]

The accepted answer to What is the strict aliasing rule? mentions that you can use char * to alias another type but not the other way.
It doesn't make sense to me — if we have two pointers, one of type char * and another of type struct something * pointing to the same location, how is it possible that the first aliases the second but the second doesn't alias the first?
if we have two pointers, one of type char * and another of type struct something * pointing to the same location, how is it possible that the first aliases the second but the second doesn't alias the first?
It does, but that's not the point.
The point is that if you have one or more struct somethings then you may use a char* to read their constituent bytes, but if you have one or more chars then you may not use a struct something* to read them.
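A minimal sketch of that asymmetry (hypothetical types):
struct something { int member; };

void ok(something& s) {
    char* bytes = reinterpret_cast<char*>(&s);
    char first = bytes[0];  // OK: the bytes of any object may be read via char*
    (void)first;
}

void not_ok() {
    char buf[sizeof(something)];
    something* p = reinterpret_cast<something*>(buf);
    // int v = p->member;   // UB: the storage contains chars, not a something
    (void)p;
}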
The wording in the referenced answer is slightly erroneous, so let’s get that ironed out first:
One object never aliases another object, but two pointers can “alias” the same object (meaning, the pointers point to the same memory location — as M.M. pointed out, this is still not 100% correct wording but you get the idea). Also, the standard itself doesn’t (to the best of my knowledge) actually talk about strict aliasing at all, but merely lays out rules that govern through which kinds of expressions an object may be accessed or not. Compiler flags like -fno-strict-aliasing tell the compiler whether it can assume the programmer followed those rules (so it can perform optimizations based on that assumption) or not.
Now to your question: Any object can be accessed through a pointer to char, but a char object (especially a char array) may not be accessed through most other pointer types.
Based on that, the compiler is required to make the following assumptions:
If the type of the actual object itself is not known, both char* and T* pointers may always point to the same object (alias each other) — symmetric relationship.
If types T1 and T2 are not “related” and neither is char, then T1* and T2* may never point to the same object — symmetric relationship.
A char* pointer may point to a char object or an object of any type T.
A T* pointer may not point to a char object — asymmetric relationship.
I believe the main rationale behind the asymmetric rules about accessing objects through pointers is that a char array might not satisfy the alignment requirements of, e.g., an int.
So, even without compiler optimizations based on the strict aliasing rule, writing an int to the location of a 4-byte char array at addresses 0x1, 0x2, 0x3, 0x4, for instance, will — in the best case — result in poor performance and — in the worst case — access a different memory location, because the CPU instructions might ignore the lowest two address bits when writing a 4-byte value (so here this might result in a write to 0x0, 0x1, 0x2, and 0x3).
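If you do need an int to live inside a char buffer, over-aligning the buffer and actually creating the object avoids the alignment problem just described (a sketch):
#include <new>

alignas(int) char buf[sizeof(int)];  // storage suitably aligned for an int
int* p = new (buf) int(7);           // an int object now lives in the buffer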
Please also be aware that the meaning of “related” differs between C and C++, but that is not relevant to your question.
if we have two pointers, one of type char * and another of type struct something * pointing to the same location, how is it possible that the first aliases the second but the second doesn't alias the first?
Pointers don't alias each other; that's sloppy use of language. Aliasing is when an lvalue is used to access an object of a different type. (Dereferencing a pointer gives an lvalue).
In your example, what's important is the type of the object being aliased. For a concrete example let's say that the object is a double. Accessing the double by dereferencing a char * pointing at the double is fine because the strict aliasing rule permits this. However, accessing a double by dereferencing a struct something * is not permitted (unless, arguably, the struct starts with double!).
If the compiler is looking at a function which takes char * and struct something *, and it does not have available the information about the object being pointed to (this is actually unlikely as aliasing passes are done at a whole-program optimization stage); then it would have to allow for the possibility that the object might actually be a struct something *, so no optimization could be done inside this function.
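Concretely, inside such a function the char* write forces a reload (a sketch):
struct something { int member; };

int f(something* s, char* c) {
    int before = s->member;
    *c = 0;                     // char* may alias anything, including s->member
    return s->member - before;  // must reload s->member; not foldable to 0
}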
Many aspects of the C++ Standard are derived from the C Standard, which needs to be understood in the historical context when it was written. If the C Standard were being written to describe a new language which included type-based aliasing, rather than describing an existing language which was designed around the idea that accesses to lvalues were accesses to bit patterns stored in memory, there would be no reason to give any kind of privileged status to the type used for storing characters in a string. Having explicit operations to treat regions of storage as bit patterns would allow optimizations to be simultaneously more effective and safer. Had the C Standard been written in such fashion, the C++ Standard presumably would have been likewise.
As it is, however, the Standard was written to describe a language in which a very common idiom was to copy the values of objects by copying all of the bytes thereof, and the authors of the Standard wanted to allow such constructs to be usable within portable programs.
Further, the authors of the Standard intended that implementations process many non-portable constructs "in a documented manner characteristic of the environment" in cases where doing so would be useful, but waived jurisdiction over when that should happen, since compiler writers were expected to understand their customers' and prospective customers' needs far better than the Committee ever could.
Suppose that in one compilation unit, one has the function:
void copy_thing(char *dest, char *src, int size)
{
    while(size--)
        *(char volatile *)(dest++) = *(char volatile*)(src++);
}
and in another compilation unit:
float f1,f2;

float test(void)
{
    f1 = 1.0f;
    f2 = 2.0f;
    copy_thing((char*)&f2, (char*)&f1, sizeof f1);
    return f2;
}
I think there would have been a consensus among Committee members that no quality implementation should treat the fact that copy_thing never writes to an object of type float as an invitation to assume that the return value will always be 2.0f. There are many things about the above code that should prevent or discourage an implementation from consolidating the read of f2 with the preceding write, with or without a special rule regarding character types, but different implementations would have different reasons for their forbearance.
It would be difficult to describe a set of rules which would require that all implementations process the above code correctly without blocking some existing or plausible implementations from implementing what would otherwise be useful optimizations. An implementation that treated all inter-module calls as opaque would handle such code correctly even if it was oblivious to the fact that a cast from T1 to T2 is a sign that an access to a T2 may affect a T1, or the fact that a volatile access might affect other objects in ways a compiler shouldn't expect to understand. An implementation that performed cross-module in-lining and was oblivious to the implications of typecasts or volatile would process such code correctly if it refrained from making any aliasing assumptions about accesses via character pointers.
The Committee wanted to recognize something in the above construct that compilers would be required to treat as implying that f2 might be modified, since the alternative would be to view such a construct as Undefined Behavior despite the fact that it should be usable within portable programs. Choosing the access via character pointer as the aspect that forces the issue was never intended to imply that compilers should be oblivious to everything else, even though unfortunately some compiler writers interpret the Standard as an invitation to do just that.

Can we detect "trivial relocatability" in C++17?

In future standards of C++, we will have the concept of "trivial relocatability", which means we can simply copy the bytes from one object to an uninitialized chunk of memory and ignore/zero out the bytes of the original object.
This way, we imitate the C-style way of copying/moving objects around.
In future standards, we will probably have something like std::is_trivially_relocatable<type> as a type trait. Currently, the closest thing we have is std::is_pod<type>, which will be deprecated in C++20.
My question is, do we have a way in the current standard (C++17) to figure out if the object is trivially relocatable?
For example, std::unique_ptr<type> can be moved around by copying its bytes to a new memory address and zeroing out the original bytes, but std::is_pod_v<std::unique_ptr<int>> is false.
Also, the standard currently mandates that every uninitialized chunk of memory must pass through a constructor in order to be considered a valid C++ object. Even if we can somehow figure out that an object is trivially relocatable, just moving the bytes is still UB according to the standard.
So another question is: even if we can detect trivial relocatability, how can we implement trivial relocation without causing UB? Simply calling memcpy + memset(src,0,...) and casting the memory address to the right type is UB.
Thanks!
The whole point of trivial-relocatability would seem to be to enable byte-wise moving of objects even in the presence of a non-trivial move constructor or move assignment operator. Even in the current proposal P1144R3, this ultimately requires that a user manually mark types for which this is possible. For a compiler to figure out whether a given type is trivially-relocatable in general is most likely equivalent to solving the halting problem (it would have to understand and reason about what an arbitrary, potentially user-defined move constructor or move assignment operator does)…
It is, of course, possible that you define your own is_trivially_relocatable trait that defaults to std::is_trivially_copyable and have the user specialize it for types that should specifically be considered trivially-relocatable. Even this is problematic, however, because there's no way to automatically propagate this property to types that are composed of trivially-relocatable types…
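A minimal sketch of such a hand-rolled trait (the unique_ptr specialization is a manual warrant, correct on typical implementations but not guaranteed):
#include <memory>
#include <type_traits>

template <class T>
struct is_trivially_relocatable : std::is_trivially_copyable<T> {};

// Manual warrant; nothing propagates this to types composed of unique_ptrs.
template <class U>
struct is_trivially_relocatable<std::unique_ptr<U>> : std::true_type {};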
Even for trivially-copyable types, you can't just copy the bytes of the object representation to some random memory location and cast the address to a pointer to the type of the original object. Since an object was never created, that pointer will not point to an object. And attempting to access the object that pointer doesn't point to will result in undefined behavior. Trivial copyability means you can copy the bytes of the object representation from one existing object to another existing object and rely on that making the value of the one object equal to the value of the other [basic.types]/3.
To do this for trivially-relocating some object would mean that you have to first construct an object of the given type at your target location, then copy the bytes of the original object into that, and then modify the original object in a way equivalent to what would have happened if you had moved from that object. Which is essentially a complicated way of just moving the object…
There's a reason a proposal to add the concept of trivial-relocatability to the language exists: because you currently just can't do it from within the language itself…
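That "complicated way of just moving the object" described above, written out as a hypothetical helper:
#include <new>
#include <utility>

template <class T>
T* relocate_at(T* src, void* dst) {
    T* result = ::new (dst) T(std::move(*src));  // construct at the target...
    src->~T();                                   // ...then end the source's lifetime
    return result;
}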
Note that, despite all this, just because the compiler frontend cannot avoid generating constructor calls doesn't mean the optimizer cannot eliminate unnecessary loads and stores. Let's have a look at what code the compiler generates for your example of moving a std::vector or std::unique_ptr:
auto test1(void* dest, std::vector<int>& src)
{
    return new (dest) std::vector<int>(std::move(src));
}

auto test2(void* dest, std::unique_ptr<int>& src)
{
    return new (dest) std::unique_ptr<int>(std::move(src));
}
As you can see, just doing an actual move often already boils down to just copying and overwriting some bytes, even for non-trivial types…
Author of P1144 here; somehow I'm just seeing this SO question now!
std::is_trivially_relocatable<T> is proposed for some-future-version-of-C++, but I don't predict it'll get in anytime soon (definitely not C++23, I bet not C++26, quite possibly not ever). The paper (P1144R6, June 2022) ought to answer a lot of your questions, especially the ones where people are correctly answering that if you could already implement this in present-day C++, we wouldn't need a proposal. See also my 2019 C++Now talk.
Michael Kenzel's answer says that P1144 "ultimately requires that a user manually mark types for which [trivial relocation] is possible"; I want to point out that that's kind of the opposite of the point. The state of the art for trivial relocatability is manual marking ("warranting") of each and every such type; for example, in Folly, you'd say
struct Widget {
    std::string s;
    std::vector<int> v;
};
FOLLY_ASSUME_FBVECTOR_COMPATIBLE(Widget);
And this is a problem, because the average industry programmer shouldn't be bothered with trying to figure out if std::string is trivially relocatable on their library of choice. (The annotation above is wrong on 1.5 of the big 3 vendors!) Even Folly's own maintainers can't get these manual annotations right 100% of the time.
So the idea of P1144 is that the compiler can just take care of it for you. Your job changes from dangerously warranting things-you-don't-necessarily-know, to merely (and optionally) verifying things-you-want-to-be-true via static_assert (Godbolt):
struct Widget {
    std::string s;
    std::vector<int> v;
};
static_assert(std::is_trivially_relocatable_v<Widget>);

struct Gadget {
    std::string s;
    std::list<int> v;
};
static_assert(!std::is_trivially_relocatable_v<Gadget>);
In your (OP's) specific use-case, it sounds like you need to find out whether a given lambda type is trivially relocatable (Godbolt):
void f(std::list<int> v) {
    auto widget = [&]() { return v; };
    auto gadget = [=]() { return v; };
    static_assert(std::is_trivially_relocatable_v<decltype(widget)>);
    static_assert(!std::is_trivially_relocatable_v<decltype(gadget)>);
}
This is something you can't really do at all with Folly/BSL/EASTL, because their warranting mechanisms work only on named types at the global scope. You can't exactly FOLLY_ASSUME_FBVECTOR_COMPATIBLE(decltype(widget)).
Inside a std::function-like type, you're correct that it would be useful to know whether the captured type is trivially relocatable or not. But since you can't know that, the next best thing (and what you should do in practice) is to check std::is_trivially_copyable. That's the currently blessed type trait that literally means "This type is safe to memcpy, safe to skip the destructor of" — basically all the things you're going to be doing with it. Even if you knew that the type was exactly std::unique_ptr<int>, or whatever, it would still be undefined behavior to memcpy it in present-day C++, because the current standard says that you're not allowed to memcpy types that aren't trivially copyable.
(Btw, technically, P1144 doesn't change that fact. P1144 merely says that the implementation is allowed to elide the effects of relocation, which is a huge wink-and-nod to implementors that they should just use memcpy. But even P1144R6 doesn't make it legal for ordinary non-implementor programmers to memcpy non-trivially-copyable types: it leaves the door open for some compiler to implement, and some library implementation to use, a __builtin_trivial_relocate function that is in some magical sense distinguishable from a plain old memcpy.)
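In practice, a std::function-like wrapper can dispatch on that blessed trait (a sketch of the approach just described; relocate_capture is a hypothetical name):
#include <cstring>
#include <new>
#include <type_traits>
#include <utility>

template <class T>
void relocate_capture(void* dst, T* src) {
    if constexpr (std::is_trivially_copyable_v<T>) {
        std::memcpy(dst, src, sizeof(T));  // blessed: copy bytes, skip the destructor
    } else {
        ::new (dst) T(std::move(*src));    // otherwise: a genuine move...
        src->~T();                         // ...plus destruction of the source
    }
}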
Finally, your last paragraph refers to memcpy + memset(src,0,...). That's wrong. Trivial relocation is tantamount to just memcpy. If you care about the state of the source object afterward — if you care that it's all-zero-bytes, for example — then that must mean you're going to look at it again, which means you aren't actually treating it as destroyed, which means you aren't actually doing the semantics of a relocate here. "Copy and null out the source" is more often the semantics of a move. The point of relocation is to avoid that extra work.

Existence of objects created in C functions

It has been established (see below) that placement new is required to create objects:
int* p = (int*)malloc(sizeof(int));
*p = 42; // illegal, there isn't an int
Yet that is a pretty standard way of creating objects in C.
The question is, does the int exist if it is created in C, and returned to C++?
In other words, is the following guaranteed to be legal? Assume int is the same for C and C++.
foo.h
#ifdef __cplusplus
extern "C" {
#endif
int* foo(void);
#ifdef __cplusplus
}
#endif
foo.c
#include "foo.h"
#include <stdlib.h>
int* foo(void) {
    return malloc(sizeof(int));
}
main.cpp
#include "foo.h"
#include <cstdlib>

int main() {
    int* p = foo();
    *p = 42;
    std::free(p);
}
Links to discussions on mandatory nature of placement new:
Is placement new legally required for putting an int into a char array?
https://stackoverflow.com/a/46841038/4832499
https://groups.google.com/a/isocpp.org/forum/#!msg/std-discussion/rt2ivJnc4hg/Lr541AYgCQAJ
https://www.reddit.com/r/cpp/comments/5fk3wn/undefined_behavior_with_reinterpret_cast/dal28n0/
reinterpret_cast creating a trivially default-constructible object
Yes! But only because int is a fundamental type. Its initialization is a vacuous operation:
[dcl.init]/7:
To default-initialize an object of type T means:
If T is a (possibly cv-qualified) class type, constructors are considered. The applicable constructors are enumerated
([over.match.ctor]), and the best one for the initializer () is chosen
through overload resolution. The constructor thus selected is called,
with an empty argument list, to initialize the object.
If T is an array type, each element is default-initialized.
Otherwise, no initialization is performed.
Emphasis mine. Since "not initializing" an int is akin to default-initializing it, its lifetime begins once storage is allocated:
[basic.life]/1:
The lifetime of an object or reference is a runtime property of the
object or reference. An object is said to have non-vacuous
initialization if it is of a class or aggregate type and it or one of
its subobjects is initialized by a constructor other than a trivial
default constructor. The lifetime of an object of type T begins when:
storage with the proper alignment and size for type T is obtained, and
if the object has non-vacuous initialization, its initialization is complete,
Allocation of storage can be done in any way acceptable by the C++ standard. Yes, even just calling malloc. Compiling C code with a C++ compiler would be a very bad idea otherwise. And yet, the C++ FAQ has been suggesting it for years.
In addition, since the C++ standard defers to the C standard where malloc is concerned, I think that wording should be brought forth as well. And here it is:
7.22.3.4 The malloc function - Paragraph 2:
The malloc function allocates space for an object whose size is
specified by size and whose value is indeterminate.
The "value is indeterminate" part kinda indicates there's an object there. Otherwise, how could it have any value, let alone an indeterminate one?
I think the question is badly posed. In C++ we only have the concepts of translation units and linkage, the latter simply meaning under which circumstances names declared in different TUs refer to the same entity or not.
Virtually nothing is said about the linking process as such, the correctness of which must be guaranteed by the compiler/linker anyway; even if the code snippets above were purely C++ sources (with malloc replaced with a nice new int) the result would still be implementation defined (e.g., consider object files compiled with incompatible compiler options/ABIs/runtimes).
So, either we talk in full generality and conclude that any program made of more than one TU is potentially wrong, or we must take for granted that the linking process is 'valid' (only the implementation knows) and hence take for granted that if a function from some source language (in this case C) promises to return a 'pointer to an existing int', then the same function in the destination language (C++) must still return a 'pointer to an existing int' (otherwise, following [dcl.link], we couldn't say that the linkage has been 'achieved', returning to the no man's land).
So, in my opinion, the real problem is assessing what an 'existing' int is in C and C++, comparatively. As I read the corresponding standards, in both languages an int's lifetime basically begins when storage is reserved for it: in the OP's case of an allocated (in C) / dynamic (in C++) storage duration object, this occurs (on the C side) when the effective type of the lvalue *pointer_to_int becomes int (e.g., when it's assigned a value; until then, the not-yet-an-int may trap(*)).
This does not happen in the OP's case: the malloc result has no effective type yet. So, that int exists neither in C nor in C++; it's just a reinterpreted pointer.
That said, the C++ part of the OP's code assigns just after returning from foo(); if this was intended, then we could say that, given that malloc() in C++ is required to have C semantics, a placement new on the C++ side would suffice to make it valid (as the provided links show).
So, summarizing, either the C code should be fixed to return a pointer to an existing int (by assigning to it) or the C++ code should be fixed by adding placement new, as sketched after the footnote. (Sorry for the lengthy arguing ... :))
(*) Here I'm not claiming that the only issue is the existence of trap representations; if it were, one could argue that the result of foo() is an indeterminate value on the C++ side, hence something that you can safely assign to. Clearly this is not the case, because there are also aliasing rules to take into account ...
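The C++-side fix described above, sketched (keeping foo() as-is on the C side):
#include "foo.h"
#include <cstdlib>
#include <new>

int main() {
    int* p = new (foo()) int;  // placement new begins the int's lifetime
    *p = 42;
    std::free(p);
}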
I can identify two parts of this question that should be addressed separately.
Object lifetime
It has been established (see below) placement new is required to create objects
I posit that this area of the standard contains ambiguity, omission, contradiction, and/or gratuitous incompatibility with existing practice, and should therefore be considered broken.
The only people who should be interested in what a broken part of the standard actually says are the people responsible for fixing the breakage. Other people (language users and language implementors alike) should defer to existing practice and common sense. Both of which say that one does not need new to create an int, malloc is enough.
This document identifies the problem and proposes a fix (thanks T.C. for the link)
C compatibility
Assume int is the same for C and C++
It is not enough to assume that.
One also needs to assume that int* is the same, that the same memory is accessible by C and C++ functions linked together in a program, and that the C++ implementation does not define the semantics of calls to functions written in the C programming language to be wiping your hard drive and stealing your girlfriend. In other words, that C and C++ implementations are compatible enough.
None of this is stipulated by the standard or should be assumed.
Indeed, there are C implementations that are incompatible with each other, so they cannot be both compatible with the same C++ implementation.
The only thing the standard says is "Every implementation shall provide for linkage to functions written in the C programming language" ([dcl.link]). What the semantics of such linkage are is left undefined.
Here, as before, the best course of action is to defer to existing practice and common sense. Both of which say that a C++ implementation usually comes bundled with a compatible enough C implementation, with the linkage working as one would expect.
The question is meaningless. Sorry. This is the only "lawyer" answer possible.
It is meaningless because the C++ and C languages ignore each other, as they ignore anything else.
Nothing in either language is described in terms of low level implementation (which is ridiculous for languages often described as "high level assembly"). Both C and C++ are specified (if you can call that a specification) at a very abstract level, and the high and low levels are never reconnected. This generates endless debates about what undefined behaviors mean in practice, how unions work, etc.
Although neither the C Standard nor, so far as I know, the C++ Standard officially recognizes the concept, almost any platform which allows programs produced by different compilers to be linked together will support opaque functions.
When processing a call to an opaque function, a compiler will start by ensuring that the value of all objects that might legitimately be examined by outside code is written to the storage associated with those objects. Once that is done, it will place the function's arguments in places specified by the platform's documentation (the ABI, or Application Binary Interface) and perform the call.
Once the function returns, the compiler will assume that any objects which an outside function could have written may have been written, and will thus reload any such values from the storage associated with those objects the next time they are used.
If the storage associated with an object holds a particular bit pattern when an opaque function returns, and if the object would hold that bit pattern when it has a defined value, then a compiler must behave as though the object has that defined value, without regard for how it came to hold that bit pattern.
The concept of opaque functions is very useful, and I see no reason that the C and C++ Standards shouldn't recognize it, nor provide a standard "do nothing" opaque function. To be sure, needlessly calling opaque functions will greatly impede what might otherwise be useful optimizations, but being able to force a compiler to treat actions as opaque function calls when needed may make it possible to enable more optimizations elsewhere.
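A hand-rolled stand-in for such a "do nothing" opaque function (hypothetical name; as the next paragraph notes, whole-program optimization can defeat it):
// barrier.cpp -- compiled in its own translation unit, never inlined:
extern "C" void opaque_barrier(void*) {}

// user code:
extern "C" void opaque_barrier(void*);

float demo() {
    float f = 2.0f;
    opaque_barrier(&f);  // compiler must assume f was read and possibly rewritten
    return f;            // so this cannot simply be folded to 2.0f
}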
Unfortunately, things seem to be going in the opposite direction, with build systems increasingly trying to apply "whole program" optimization. WPO would be good if there were a way of distinguishing function calls that are opaque because the full "optimization barrier" is needed from those which had been treated as opaque simply because there was no way for optimizers to "see" across inter-module boundaries. Unless or until proper barriers are added, I don't know any way to ensure that optimizers won't get "clever" in ways that break code which would have had defined behavior with the barriers in place.
I believe it is legal now, and retroactively since C++98!
Indeed, the C++ specification wording until C++20 defined an object as (e.g. C++17 wording, [intro.object]):
The constructs in a C++ program create, destroy, refer to, access, and
manipulate objects. An object is created by a definition (6.1), by a
new-expression (8.5.2.4), when implicitly changing the active member
of a union (12.3), or when a temporary object is created (7.4, 15.2).
The possibility of creating an object using malloc allocation was not mentioned, making such code de facto undefined behavior.
It was then viewed as a problem, and this issue was addressed later by https://wg21.link/P0593R6 and accepted as a DR against all C++ versions since C++98 inclusive, then added into the C++20 spec, with the new wording.
The wording of the standard is quite vague and may even seem tautological, defining well-defined implicitly-created objects (6.7.2.11 Object model [intro.object]) as:
implicitly-created objects whose address is the address of the start
of the region of storage, and produce a pointer value that points to
that object, if that value would result in the program having defined
behavior [...]
The example given in C++20 spec is:
#include <cstdlib>
struct X { int a, b; };
X *make_x() {
// The call to std​::​malloc implicitly creates an object of type X
// and its subobjects a and b, and returns a pointer to that X object
// (or an object that is pointer-interconvertible ([basic.compound]) with it),
// in order to give the subsequent class member access operations
// defined behavior.
X *p = (X*)std::malloc(sizeof(struct X));
p->a = 1;
p->b = 2;
return p;
}
It seems that the object created in a C function, as in the OP's question, falls into this category and is a valid object. The same would hold for allocation of C structs with malloc.
No, the int does not exist, as explained in the linked Q/As. An important standard quote reads like this in C++14:
1.8 The C ++ object model [intro.object]
[...] An object is created by a definition (3.1), by a new-expression (5.3.4) or by the implementation (12.2) when needed. [...]
(12.2 is a paragraph about temporary objects)
The C++ standard has no rules for interfacing C and C++ code. A C++ compiler can only analyze objects created by C++ code, but not some bits passed to it form an external source like a C program, or a network interface, etc.
Many rules are tailored to make optimizations possible. Some of them are only possible if the compiler does not have to assume uninitialized memory contains valid objects. For example, the rule that one may not read an uninitialized int would not make sense otherwise, because if ints may exist anywhere, why would it be illegal to read an indeterminate int value?
This would be a standard-compliant way to write the program:
#include "foo.h"
#include <cstring>

int main() {
    void* p = foo();
    int i = 42;
    std::memcpy(p, &i, sizeof(int));
    //std::free(p); // this works only if C and C++ use the same heap
}
