How is seastar::thread better than a stackless coroutine?

I looked at this question: Is seastar::thread a stackful coroutine?
In a comment to the answer, Botond Dénes wrote:
That said, seastar::thread still has its (niche) uses in the post-coroutine world as it supports things like waiting for futures in destructors and catch clauses, allowing for using RAII and cleaner error handling. This is something that coroutines don't and can't support.
Could someone elaborate on this? What are the cases where 'things like waiting for futures in destructors and catch clauses' are impossible with stackless coroutines, but possible with seastar::thread (and the like)?
And more generally, what are the advantages of seastar::thread over C++20 stackless coroutines? Do all stackful coroutine implementations (e.g. those in Boost) have these same advantages?

You have probably heard about the RAII (resource acquisition is initialization) idiom in C++, which is very useful for ensuring that resources get released even in case of exceptions. In the classic, synchronous, C++ world, here is an example:
{
    someobject o = get_an_o();
    // At this point, "o" is holding some resource (e.g., an open file)
    call_some_function(o);
    return;
    // "o" is destroyed if call_some_function() throws, or before the return.
    // When it does, the file it was holding is automatically closed.
}
Now, let's consider asynchronous code using Seastar. Imagine that get_an_o() takes some time to do its work (e.g., open a disk file), and returns a future<someobject>. Well, you might think that you can just use stackless coroutines like this:
someobject o = co_await get_an_o();
// At this point, "o" is holding some resource (e.g., an open file)
call_some_function(o);
return;
But there's a problem here... Although the someobject constructor can be made asynchronous (by using a factory function, get_an_o() in this example), the someobject destructor is still synchronous: it gets called by C++ when unwinding the stack, and it can't return a future! So if the object's destructor does need to wait (e.g., to flush the file), you can't use RAII.
Using a seastar::thread allows you to still use RAII, because the destructor now can wait: in a seastar::thread, any code, including a destructor, may use get() on a future. This doesn't return a future from the destructor (which we can't do); instead it just "pauses" the stackful coroutine (switches to other tasks, and later returns to this stack when the future resolves).
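For illustration, here is a minimal sketch of that pattern, assuming the get_an_o() factory from above. seastar::async runs its lambda inside a seastar::thread; exact header and method names may vary between Seastar versions.
#include <seastar/core/future.hh>
#include <seastar/core/thread.hh>

seastar::future<> use_object() {
    return seastar::async([] {
        someobject o = get_an_o().get(); // get() pauses this stackful thread
        call_some_function(o);
        // ~someobject runs here and may itself call get() on a future,
        // so RAII works even when the cleanup is asynchronous.
    });
}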

Is it necessary to call destroy on a std::coroutine_handle?

The std::coroutine_handle is an important part of the new coroutines of C++20. Generators for example often (always?) use it. The handle is manually destroyed in the destructor of the coroutine in all examples that I have seen:
struct Generator {
    // Other stuff...
    std::coroutine_handle<promise_type> ch;
    ~Generator() {
        if (ch) ch.destroy();
    }
};
Is this really necessary? If yes, why isn't this already done by the coroutine_handle? Is there a RAII version of the coroutine_handle that behaves that way, and what would happen if we omitted the destroy call?
Examples:
https://en.cppreference.com/w/cpp/coroutine/coroutine_handle (Thanks 463035818_is_not_a_number)
The C++20 standard also mentions it in 9.5.4.10 Example 2 (checked on N4892).
(German) https://www.heise.de/developer/artikel/Ein-unendlicher-Datenstrom-dank-Coroutinen-in-C-20-5991142.html
https://www.scs.stanford.edu/~dm/blog/c++-coroutines.html - Mentions that it would leak if it weren't called, but does not cite a passage from the standard or explain why it isn't called in the destructor of std::coroutine_handle.
This is because you want to be able to have a coroutine outlive its handle, a handle should be non-owning. A handle is merely a "view" much like std::string_view -> std::string. You wouldn't want the std::string to destruct itself if the std::string_view goes out of scope.
If you do want this behaviour though, creating your own wrapper around it would be trivial.
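For example, here is a minimal sketch of such an owning wrapper; the name unique_handle is made up here, by analogy with unique_ptr:
#include <coroutine>
#include <utility>

template <typename Promise>
class unique_handle {
    std::coroutine_handle<Promise> h_{};
public:
    unique_handle() = default;
    explicit unique_handle(std::coroutine_handle<Promise> h) : h_(h) {}
    unique_handle(unique_handle&& other) noexcept
        : h_(std::exchange(other.h_, {})) {}
    unique_handle& operator=(unique_handle&& other) noexcept {
        if (this != &other) { reset(); h_ = std::exchange(other.h_, {}); }
        return *this;
    }
    ~unique_handle() { reset(); } // owning: destroys the coroutine state
    void reset() { if (h_) { h_.destroy(); h_ = {}; } }
    std::coroutine_handle<Promise> get() const { return h_; }
};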
That being said, the standard specifies:
The coroutine state is destroyed when control flows off the end of the coroutine or the destroy member function ([coroutine.handle.resumption]) of a coroutine handle ([coroutine.handle]) that refers to the coroutine is invoked.
The coroutine state will clean up after itself after it has finished running and thus it won't leak unless control doesn't flow off the end.
Of course, in the generator case control typically doesn't flow off the end and thus the programmer has to destroy the coroutine manually. Coroutines have multiple uses though and the standard thus can't really unconditionally mandate the handle destructor call destroy().
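To make that concrete, here is a minimal generator sketch (illustrative, not the standard's example): because final_suspend() returns suspend_always, control never "flows off the end" in the destroying sense, so the state is only freed by the explicit destroy() in the destructor.
#include <coroutine>

struct Generator {
    struct promise_type {
        int value{};
        Generator get_return_object() {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; } // keeps state alive
        std::suspend_always yield_value(int v) { value = v; return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> ch;
    explicit Generator(std::coroutine_handle<promise_type> h) : ch(h) {}
    Generator(Generator const&) = delete;
    ~Generator() { if (ch) ch.destroy(); } // without this, the state leaks
};

Generator counter() {
    for (int i = 0;; ++i)
        co_yield i; // suspended here; never reaches the end of the body
}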

Why do stackless coroutines require dynamic allocation?

This question is not about coroutines in C++20 but coroutines in general.
I'm learning C++20 coroutines these days. I've learnt about stackful and stackless coroutines from Coroutines Introduction. I've also searched SO for more information.
Here's my understanding about stackless coroutines:
A stackless coroutine does have a stack on the caller's stack when it's running.
When it suspends itself, as stackless coroutines can only suspend at the top-level function, its stack is predictable and useful data are stored in a certain area.
When it's not running, it doesn't have a stack. It's bound with a handle, by which the client can resume the coroutine.
The Coroutines TS specifies that the non-array operator new is called when allocating storage for coroutine frames. However, I think this is unnecessary, hence my question.
Some explanation/consideration:
Where to put the coroutine's state instead? In the handle, which originally stores the pointer.
Dynamic allocation doesn't mean storing on the heap. But my intent is to elide calls to operator new, no matter how it is implemented.
From cppreference:
The call to operator new can be optimized out (even if custom allocator is used) if
The lifetime of the coroutine state is strictly nested within the lifetime of the caller, and
the size of coroutine frame is known at the call site
For the first requirement, storing the state directly in the handle is still okay even if the coroutine outlives the caller.
For the other, if the caller doesn't know the size, how can it compose the argument to call operator new? Actually, I can't even imagine a situation in which the caller doesn't know the size.
Rust seems to have a different implementation, according to this question.
A stackless coroutine does have a stack on the caller's stack when it's running.
That right there is the source of your misunderstanding.
Continuation-based coroutines (which is what a "stackless coroutine" is) are a coroutine mechanism designed to let you hand a coroutine to some other code, which will resume its execution after some asynchronous process completes. This resumption may take place on some other thread.
As such, the stack cannot be assumed to be "on the caller's stack", since the caller and the process that schedules the coroutine's resumption are not necessarily in the same thread. The coroutine needs to be able to outlive the caller, so the coroutine's stack cannot be on the caller's stack (in general; in certain co_yield-style cases, it can be).
The coroutine handle represents the coroutine's stack. So long as that handle exists, so too does the coroutine's stack.
When it's not running, it doesn't have a stack. It's bound with a handle, by which the client can resume the coroutine.
And how does this "handle" store all of the local variables of the coroutine? Obviously they are preserved (it'd be a bad coroutine mechanism if they weren't), so they have to be stored somewhere. The name for the place where a function's local variables live is the "stack".
Calling it a "handle" doesn't change what it is.
But my intent is to elide calls to operator new, no matter how it is implemented.
Well... you can't. If never calling new is a vital component of writing whatever software you're writing, then you can't use co_await-style coroutine continuations. There is no set of rules you can use that guarantees elision of new in coroutines. If you're using a specific compiler, you can do some tests to see what it elides and what it doesn't, but that's it.
The rules you cite are merely cases that make it possible to elide the call.
For the other, if the caller doesn't know the size, how can it compose the argument to call operator new?
Remember: co_await coroutines in C++ are effectively an implementation detail of a function. The caller has no idea if any function it calls is or is not a coroutine. All coroutines look like regular functions from the outside.
The code for creating a coroutine stack happens within the function call, not outside of it.
The fundamental difference between stackful and stackless coroutines is whether the coroutine owns a full, theoretically unbounded (but practically bounded) stack, like a thread does.
In a stackful coroutine, the local variables of the coroutine are stored on the stack it owns, like anything else, both during execution and when suspended.
In a stackless coroutine, the coroutine's local variables may live on the stack while the coroutine is running, but between suspensions they are stored in a fixed-size buffer that the stackless coroutine owns.
In theory, a stackless coroutine can be stored on someone else's stack. There is, however, no way to guarantee within C++ code that this happens.
Elision of operator new in the creation of a coroutine is sort of about doing that. If your coroutine object is stored on someone's stack, and new was elided because there is enough room in the coroutine object itself for its state, then the stackless coroutine that lives completely on someone else's stack is possible.
There is no way to guarantee this in the current implementation of C++ coroutines. Attempts to get that guarantee in were met with resistance from compiler developers, because the exact minimal capture that a coroutine does is determined "later" than the point at which they need to know how big the coroutine is in their compilers.
This leads to the difference in practice. A stackful coroutine acts more like a thread. You can call normal functions, and those normal functions can interact within their bodies with coroutine operations like suspend.
A stackless coroutine cannot call a function which then interacts with the coroutine machinery. Interacting with the coroutine machinery is only permitted within the stackless coroutine itself.
A stackful coroutine has all of the machinery of a thread without being scheduled on the OS. A stackless coroutine is an augmented function object that has goto labels in it that let it be resumed part way through its body.
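A tiny sketch of that restriction, using a minimal hypothetical task type (just enough to compile):
#include <coroutine>

struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

void helper() {
    // co_await std::suspend_never{}; // ill-formed: helper is an ordinary
    // function, not a coroutine, so it cannot suspend on anyone's behalf
}

task outer() {
    co_await std::suspend_never{}; // fine: only outer's own body may co_await
    helper();                      // helper cannot suspend outer
}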
There are theoretical implementations of stackless coroutines that don't have the "could call new" feature. The C++ standard doesn't require such a type of stackless coroutine.
Some people proposed them. Their proposals lost out to the current one, in part because the current one was far more polished and closer to being shipped than the alternative proposals were. Some of the syntax of the alternative proposals ended up in the successful proposal.
I believe there was a convincing argument that the "stricter" fixed-size, no-new coroutine implementations were not ruled out by the current proposal and could be added on afterwards, and that helped kill the alternative proposals.
Consider this hypothetical case:
void foo(int);

task coroutine() {
    int a[100] {};
    int* p = a;
    while (true) {
        co_await awaitable{};
        foo(*p);
    }
}
p points to the first element of a; if a's memory location changed between two resumptions, p would no longer hold the right address.
Memory for what would be the function's stack must be allocated in such a way that it is preserved between a suspension and the following resumption. But this memory cannot be moved or copied if some objects in it refer to other objects within the same memory (at least not without adding complexity). This is one reason why compilers sometimes need to allocate this memory on the heap.

noexcept swap and move for classes with mutexes

In general it is good practice to declare swap and move operations noexcept, as that allows providing stronger exception guarantees.
At the same time writing a thread-safe class often implies adding a mutex protecting the internal resources from races.
If I want to implement a swap function for such a class, the straightforward solution is to safely lock the resources of both arguments of the swap and then perform the resource swap, as, for example, clearly explained in the answer to this question: Implementing swap for class with std::mutex.
The problem with such an algorithm is that a mutex lock is not noexcept; therefore swap cannot, strictly speaking, be noexcept. Is there a solution to safely swap two objects of a class with a mutex?
The only possibility that comes to my mind is to store the resource as a handle so that the swap becomes a simple pointer swap which can be done atomically.
Otherwise one could consider the lock exceptions as unrecoverable errors which should terminate the program anyway, but this solution feels like just a way to sweep the dust under the carpet.
EDIT:
As came out in the comments, I know that the exceptions thrown by the mutexes are not arbitrary, but the question can then be rephrased as follows:
Are there robust practices to limit the situations in which a mutex can throw to those where it is actually an unrecoverable OS problem?
What comes to my mind is to check, in the swap algorithm, that the two objects to swap are not the same object. Swapping an object with itself is a clear deadlock situation, which will trigger an exception in the best-case scenario, but it can easily be checked for.
Are there other similar triggers which one can safely check for, to make a swap function robust and practically noexcept in all situations that matter?
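For reference, a minimal sketch (with illustrative names) of the locking swap under discussion, including the self-swap check from above; std::scoped_lock acquires both mutexes with a deadlock-avoidance algorithm:
#include <mutex>
#include <utility>

class Guarded {
    mutable std::mutex m_;
    int value_ = 0; // stand-in for the protected resources
public:
    friend void swap(Guarded& a, Guarded& b) {
        if (&a == &b)
            return;                        // self-swap would lock m_ twice
        std::scoped_lock lock(a.m_, b.m_); // locks both without deadlock
        std::swap(a.value_, b.value_);     // the mutexes themselves stay put
    }
};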
On POSIX systems it is common for std::mutex to be a thin wrapper around pthread_mutex_t, whose lock and unlock functions can fail when:
There is an attempt to acquire an already-owned lock
The mutex object is not initialized or has already been destroyed
Both of the above are UB in C++ and are not even guaranteed to be returned by POSIX. On Windows both are UB if std::mutex is a wrapper around SRWLOCK.
So it seems that the main point of allowing lock and unlock functions to throw is to signal errors in the program, not to make the programmer expect and handle them.
This is confirmed by the recommended locking pattern: the destructor ~unique_lock is noexcept(true), but it is supposed to call unlock, which is noexcept(false). That means that if an exception is thrown by unlock, the whole program gets terminated by std::terminate.
The standard also mentions this:
The error conditions for error codes, if any, reported by member functions of the mutex types shall be:
(4.1) — resource_unavailable_try_again — if any native handle type manipulated is not available.
(4.2) — operation_not_permitted — if the thread does not have the privilege to perform the operation.
(4.3) — invalid_argument — if any native handle type manipulated as part of mutex construction is incorrect.
In theory you might encounter the operation_not_permitted error, but the situations in which this happens are not really defined in the standard.
So unless you cause UB in your program related to std::mutex usage, or use the mutex in some OS-specific scenario, quality implementations of lock and unlock should never throw.
Among the common implementations, there is at least one that might be of low quality: std::mutex implemented on top of CRITICAL_SECTION in old versions of Windows (I think Windows XP and earlier) could throw after failing to lazily allocate an internal event under contention. On the other hand, even earlier versions allocated this event during initialization to avoid failing later, so the std::mutex constructor might need to throw there (even though it is noexcept(true) in the standard).

What special purpose does unique_lock have over using a mutex?

I'm not quite sure why std::unique_lock<std::mutex> is useful over just using a normal lock. An example in the code I'm looking at is:
{ // acquire lock
    std::unique_lock<std::mutex> lock(queue_mutex);
    // add task
    tasks.push_back(std::function<void()>(f));
} // release lock
why would this be preferred over
queue_mutex.lock();
//add task
//...
queue_mutex.unlock();
do these snippets of code accomplish the same thing?
[Do] these snippets of code accomplish the same thing?
No.
The first one will release the lock at the end of the block, no matter what the block is. The second will not release the lock at the end if the critical section is exited with a break, continue, return, goto, exception, or any other kind of non-local jump that I'm forgetting about.
The use of unique_lock offers resiliency in the face of changes and errors.
If you change the flow to add intermediate "jumps" (return for example)
If an exception is thrown
...
in any case, the lock is automatically released.
On the other hand, if you attempt to do it manually, you may miss a case. And even if you don't right now, a later edit might.
Note: this is a common idiom in C++, referred to as SBRM (Scope-Bound Resource Management), where you tie a clean-up action to stack unwinding so you are assured that, barring a crash/ungraceful exit, it is executed.
It also shows off RAII (Resource Acquisition Is Initialization), since the very construction of unique_lock acquires the resource (here, the mutex). Despite its name, the acronym is also colloquially used to refer to deterministic release at destruction time, which covers a broader scope than SBRM since it refers to all kinds of deterministic release, not only those based on stack unwinding.
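A short sketch of the point above (queue_mutex and the function body are illustrative): with unique_lock, every exit path, including early returns and exceptions, releases the lock.
#include <mutex>

std::mutex queue_mutex;

bool try_add_task(bool ready) {
    std::unique_lock<std::mutex> lock(queue_mutex);
    if (!ready)
        return false; // early return: the lock is still released
    // ... push the task ...
    return true;      // released here too, and on any exception in between
}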

Do programmers of other languages, besides C++, use, know or understand RAII?

I've noticed RAII has been getting lots of attention on Stackoverflow, but in my circles (mostly C++) RAII is so obvious it's like asking what a class or a destructor is.
So I'm really curious whether that's because I'm surrounded daily by hard-core C++ programmers and RAII just isn't that well known in general (including in C++), or whether all this questioning on Stackoverflow is due to the fact that I'm now in contact with programmers that didn't grow up with C++, and in other languages people just don't use/know about RAII.
There are plenty of reasons why RAII isn't better known. First, the name isn't particularly obvious. If I didn't already know what RAII was, I'd certainly never guess it from the name. (Resource acquisition is initialization? What does that have to do with the destructor or cleanup, which is what really characterizes RAII?)
Another is that it doesn't work as well in languages without deterministic cleanup.
In C++, we know exactly when the destructor is called, we know the order in which destructors are called, and we can define them to do anything we like.
In many modern languages, everything is garbage-collected, which makes RAII trickier to implement. There's no reason why it wouldn't be possible to add RAII extensions to, say, C#, but it's not as natural as it is in C++. But as others have mentioned, Perl and other languages support RAII despite being garbage-collected.
That said, it is still possible to create your own RAII-styled wrapper in C# or other languages. I did it in C# a while ago.
I had to write something to ensure that a database connection was closed immediately after use, a task which any C++ programmer would see as an obvious candidate for RAII.
Of course we could wrap everything in using-statements whenever we used a db connection, but that's just messy and error-prone.
My solution was to write a helper function which took a delegate as argument, and then when called, opened a database connection, and inside a using-statement, passed it to the delegate function, pseudocode:
T RAIIWrapper<T>(Func<DbConnection, T> f) {
    using (var db = new DbConnection()) {
        return f(db);
    }
}
Still not as nice or obvious as C++-RAII, but it achieved roughly the same thing. Whenever we need a DbConnection, we have to call this helper function which guarantees that it'll be closed afterwards.
I use C++ RAII all the time, but I've also developed in Visual Basic 6 for a long time, and RAII has always been a widely-used concept there (although I've never heard anyone call it that).
In fact, many VB6 programs rely on RAII quite heavily. One of the more curious uses that I've repeatedly seen is the following small class:
' WaitCursor.cls '
Private m_OldCursor As MousePointerConstants

Public Sub Class_Initialize()
    m_OldCursor = Screen.MousePointer
    Screen.MousePointer = vbHourGlass
End Sub

Public Sub Class_Terminate()
    Screen.MousePointer = m_OldCursor
End Sub
Usage:
Public Sub MyButton_Click()
    Dim WC As New WaitCursor
    ' … Time-consuming operation. '
End Sub
Once the time-consuming operation terminates, the original cursor gets restored automatically.
RAII stands for Resource Acquisition Is Initialization. This is not language-agnostic at all. This mantra is here because C++ works the way it works. In C++ an object is not constructed until its constructor completes. A destructor will not be invoked if the object has not been successfully constructed.
Translated to practical language, a constructor should make sure it covers for the case it can't complete its job thoroughly. If, for example, an exception occurs during construction then the constructor must handle it gracefully, because the destructor will not be there to help. This is usually done by covering for the exceptions within the constructor or by forwarding this hassle to other objects. For example:
class OhMy {
public:
    OhMy() { p_ = new int[42]; jump(); }
    ~OhMy() { delete[] p_; }

private:
    int* p_;

    void jump();
};
If the jump() call in the constructor throws we're in trouble, because p_ will leak. We can fix this like this:
class Few {
public:
    Few() : v_(42) { jump(); }
    ~Few();

private:
    std::vector<int> v_;

    void jump();
};
If people are not aware of this then it's because of one of two things:
They don't know C++ well. In this case they should open TCPPPL again before they write their next class. Specifically, section 14.4.1 in the third edition of the book talks about this technique.
They don't know C++ at all. That's fine. This idiom is very C++y. Either learn C++ or forget all about this and carry on with your lives. Preferably learn C++. ;)
For people who are commenting in this thread about RAII (resource acquisition is initialisation), here's a motivational example.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <stdexcept>
#include <string>
#include <vector>
#include <unistd.h> // dup, fileno

class StdioFile {
    FILE* file_;
    std::string mode_;

    static FILE* fcheck(FILE* stream) {
        if (!stream)
            throw std::runtime_error("Cannot open file");
        return stream;
    }

    FILE* fdup() const {
        int dupfd(dup(fileno(file_)));
        if (dupfd == -1)
            throw std::runtime_error("Cannot dup file descriptor");
        return fdopen(dupfd, mode_.c_str());
    }

public:
    StdioFile(char const* name, char const* mode)
        : file_(fcheck(fopen(name, mode))), mode_(mode)
    {
    }

    StdioFile(StdioFile const& rhs)
        : file_(fcheck(rhs.fdup())), mode_(rhs.mode_)
    {
    }

    ~StdioFile()
    {
        fclose(file_);
    }

    StdioFile& operator=(StdioFile const& rhs) {
        FILE* dupstr = fcheck(rhs.fdup());
        if (fclose(file_) == EOF) {
            fclose(dupstr); // XXX ignore failed close
            throw std::runtime_error("Cannot close stream");
        }
        file_ = dupstr;
        return *this;
    }

    int read(std::vector<char>& buffer) {
        int result(fread(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }

    int write(std::vector<char> const& buffer) {
        int result(fwrite(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }
};

int main(int argc, char** argv)
{
    StdioFile file(argv[1], "r");
    std::vector<char> buffer(1024);
    while (int hasRead = file.read(buffer)) {
        // process hasRead bytes, then shift them off the buffer
    }
}
Here, when a StdioFile instance is created, the resource (a file stream, in this case) is acquired; when it's destroyed, the resource is released. There is no try or finally block required; if the reading causes an exception, fclose is called automatically, because it's in the destructor.
The destructor is guaranteed to be called when the function leaves main, whether normally or by exception. In this case, the file stream is cleaned up. The world is safe once again. :-D
RAII.
It starts with a constructor and destructor but it is more than that.
It is all about safely controlling resources in the presence of exceptions.
What makes RAII superior to finally and such mechanisms is that it makes code safer to use because it moves responsibility for using an object correctly from the user of the object to the designer of the object.
Example to use StdioFile correctly using RAII.
void someFunc()
{
    StdioFile file("Plop", "r");
    // use file
}
// File closed automatically even if this function exits via an exception.
To get the same functionality with finally:
void someFunc()
{
    // Assuming Java-like syntax;
    StdioFile file = new StdioFile("Plop", "r");
    try
    {
        // use file
    }
    finally
    {
        // close file.
        file.close();
        // Using the finaliser is not enough, as we can not guarantee
        // when it will be called.
    }
}
Because you have to explicitly add the try{} finally{} block, this method of coding is more error prone (i.e. it is the user of the object that needs to think about exceptions). With RAII, exception safety has to be coded only once, when the object is implemented.
As to the question of whether this is C++ specific:
Short Answer: No.
Longer Answer:
It requires Constructors/Destructors/Exceptions and objects that have a defined lifetime.
Well, technically it does not need exceptions. It just becomes much more useful when exceptions could potentially be used, as it makes controlling the resource in the presence of exceptions very easy.
But it is useful in all situations where control can leave a function early without executing all the code (e.g. an early return from a function; this is why multiple return points in C are a bad code smell, while multiple return points in C++ are not, because we can clean up using RAII).
In C++ a controlled lifetime is achieved with stack variables or smart pointers. But this is not the only way to get a tightly controlled lifespan. For example, Perl objects are not stack-based but have a very controlled lifespan because of reference counting.
The problem with RAII is the acronym. It has no obvious correlation to the concept. What does this have to do with stack allocation? That is what it comes down to. C++ gives you the ability to allocate objects on the stack and guarantee that their destructors are called when the stack is unwound. In light of that, does RAII sound like a meaningful way of encapsulating that? No. I never heard of RAII until I came here a few weeks ago, and I even had to laugh hard when I read that someone had posted that they would never hire a C++ programmer who didn't know what RAII was. Surely the concept is well known to almost all competent professional C++ developers. It's just that the acronym is poorly conceived.
A modification of Pierre's answer:
In Python:
with open("foo.txt", "w") as f:
    f.write("abc")
f.close() is called automatically whether an exception were raised or not.
In general it can be done using contextlib.closing; from the documentation:
closing(thing): return a context manager that closes thing upon completion of the block. This is basically equivalent to:
from contextlib import contextmanager

@contextmanager
def closing(thing):
    try:
        yield thing
    finally:
        thing.close()
And lets you write code like this:
from __future__ import with_statement # required for python version < 2.6
from contextlib import closing
import urllib

with closing(urllib.urlopen('http://www.python.org')) as page:
    for line in page:
        print line
without needing to explicitly close page. Even if an error occurs, page.close() will be called when the with block is exited.
Common Lisp has RAII:
(with-open-file (stream "file.ext" :direction :input)
  (do-something-with-stream stream))
See: http://www.psg.com/~dlamkins/sl/chapter09.html
First of all, I'm very surprised it's not better known! I totally thought RAII was, at least, obvious to C++ programmers.
However, now I guess I can understand why people actually ask about it: I'm surrounded by, and must myself be, one of the C++ freaks...
So my secret... I guess it would be that I used to read Meyers, Sutter and Andrei all the time years ago, until I just grokked it.
The thing with RAII is that it requires deterministic finalization, something that is guaranteed for stack-based objects in C++. Languages like C# and Java that rely on garbage collection don't have this guarantee, so it has to be "bolted on" somehow. In C# this is done by implementing IDisposable, and much the same usage pattern then crops up; basically that's one of the motivators for the "using" statement: it ensures disposal and is very well known and used.
So basically the idiom is there, it just doesn't have a fancy name.
RAII is a way in C++ to make sure a cleanup procedure is executed after a block of code, regardless of what happens in the code: whether the code executes to the end properly or raises an exception. An already-cited example is automatically closing a file after processing it; see the answer here.
In other languages you use other mechanisms to achieve that.
In Java you have try { } finally {} constructs:
BufferedReader file = new BufferedReader(new FileReader("infilename"));
try {
    // do something with file
}
finally {
    file.close();
}
In Ruby you have the automatic block argument:
File.open("foo.txt") do |file|
  # do something with file
end
In Lisp you have unwind-protect and the predefined with-XXX macros:
(with-open-file (file "foo.txt")
  ;; do something with file
  )
In Scheme you have dynamic-wind and the predefined with-XXXXX:
(with-input-from-file "foo.txt"
  (lambda ()
    ;; do something
    ))
In Python you have try/finally:
file = open("foo.txt")
try:
    # do something with file
finally:
    file.close()
The C++ solution, RAII, is rather clumsy in that it forces you to create one class for each kind of cleanup you have to do. This may force you to write a lot of small, silly classes.
Other examples of RAII are:
unlocking a mutex after acquisition
closing a database connection after opening
freeing memory after allocation
logging on entry and exit of a block of code
...
It's sort of tied to knowing when your destructor will be called, though, right? So it's not entirely language-agnostic, given that deterministic destruction isn't a given in many GC'd languages.
I think a lot of other languages (ones that don't have delete, for example) don't give the programmer quite the same control over object lifetimes, and so there must be other means to provide for deterministic disposal of resources. In C#, for example, the using statement with IDisposable is common.
RAII is popular in C++ because it's one of the few (only?) languages that can allocate complex scope-local variables but does not have a finally clause. C#, Java, Python, and Ruby all have finally or an equivalent. C has no finally, but it also can't execute code when a variable drops out of scope.
I have colleagues who are hard-core, "read the spec" C++ types. Many of them know RAII but I have never really heard it used outside of that scene.
CPython (the official Python, written in C) supports RAII because of its use of reference-counted objects with immediate, scope-based destruction (rather than destruction whenever garbage is collected). Unfortunately, Jython (Python in Java) and PyPy do not support this very useful RAII idiom, and it breaks a lot of legacy Python code. So for portable Python you have to handle all the exceptions manually, just like in Java.
RAII is specific to C++. C++ has the requisite combination of stack-allocated objects, unmanaged object lifetimes, and exception handling.