I'm reminded of Raymond Chen's many, many blog posts[1][2][3] (there are a lot more) on why TerminateThread is a bad idea. Not surprised at all that the same is true elsewhere. I will say, in my own code this is why I tend to prefer cancellable system calls that are alertable. That way the thread can wake up, check if it needs to die, and then GTFO.
[1] https://devblogs.microsoft.com/oldnewthing/20150814-00/?p=91...
[2] https://devblogs.microsoft.com/oldnewthing/20191101-00/?p=10...
[3] https://devblogs.microsoft.com/oldnewthing/20140808-00/?p=29...
there are a lot more, I'm not linking them all here.
For interrupting long-running syscalls there is another solution:
Install an empty SIGINT signal handler (without SA_RESTART), then run the loop.
When the thread should stop:
* Set stop flag
* Send a SIGINT to the thread, using pthread_kill or tgkill
* Syscalls will fail with EINTR
* Check for EINTR and the stop flag; then we know we have to clean up and stop
Of course a lot of code will just retry on EINTR, so that requires having control over all the code that does syscalls, which isn't really feasible when using any libraries.
EDIT: The post describes exactly this method, and what the problem with it is, I just missed it.
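A minimal sketch of this recipe, under assumed names (worker(), g_stop): the handler is empty and installed without SA_RESTART, so a blocking syscall in the signalled thread fails with EINTR. Note the race the post points out, where the signal can land between the flag check and the blocking call, is still present here.

    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdatomic.h>
    #include <unistd.h>

    static atomic_bool g_stop;

    static void on_sigint(int sig) { (void)sig; /* empty on purpose */ }

    static void *worker(void *arg)
    {
        (void)arg;
        char buf[256];
        for (;;) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf); /* may block */
            if (n < 0 && errno == EINTR && atomic_load(&g_stop))
                break;                       /* clean up and stop */
            /* ... otherwise handle data / other errors ... */
        }
        return NULL;
    }

    int main(void)
    {
        struct sigaction sa = { .sa_handler = on_sigint }; /* no SA_RESTART */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        atomic_store(&g_stop, true);         /* 1. set the stop flag     */
        pthread_kill(t, SIGINT);             /* 2. interrupt the syscall */
        pthread_join(t, NULL);
    }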
This option is described in detail in the blog post, along with its associated problems; see this section: https://mazzo.li/posts/stopping-linux-threads.html#homegrown... .
Ah, fair, I missed it when reading the post because the approach seemed more complicated.
If you can swing it (don't need to block on IO indefinitely), I'd suggest just the simple coordination model.
* Some atomic bool controls whether the thread should stop;
* The thread doesn't make any unbounded wait syscalls;
* And the thread uses pthread_cond_wait (or the equivalent C++ std wrappers) in place of sleeping while idle.
To kill the thread, set the stop flag and cond_signal the condvar. (Under the hood on Linux, this uses a futex.)
Relying heavily on a check for an atomic bool is prone to race conditions. I think it's cleaner to structure the event loop as a message queue and have a queued message that indicates it's time to stop.
Queuing a stop means you have to process the queue before stopping. Which certainly is stopping cleanly, but if you wanted to stop the thread because its queue was too long and the work requests were stale, it doesn't help much.
You could maybe allow a queue skipping feature to be used for stop messages... But if it's only for stop messages, set an atomic bool stop, then send a stop message. If the thread just misses the stop bool and waits for messages, you'll get the stop message; if the queue is large, you'll get the stop bool.
ps, hi
> Relying heavily on a check for an atomic bool is prone to race conditions.
It is not, actually. This extremely simple protocol is race-free.
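For reference, a minimal sketch of that stop-flag-plus-condvar protocol (illustrative names; the flag is read under the same mutex the condvar uses, which is what keeps the handoff race-free):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
    static bool stop;        /* protected by lock */
    static bool have_work;   /* protected by lock */

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!stop) {
            while (!stop && !have_work)
                pthread_cond_wait(&cv, &lock);   /* futex under the hood */
            if (stop)
                break;
            have_work = false;
            pthread_mutex_unlock(&lock);
            /* ... bounded unit of work, no unbounded blocking syscalls ... */
            pthread_mutex_lock(&lock);
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* Caller: request the stop, wake the sleeper, wait for it to exit. */
    static void stop_thread(pthread_t t)
    {
        pthread_mutex_lock(&lock);
        stop = true;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
    }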
disagree. i think then it's too tempting down the line for someone to add a message with blocking processing.
a simple clear loop that looks for a requested stop flag with a confirmed stop flag works pretty well. this can be built into a synchronous "stop" function for the caller that sets the flag and then does a timed wait on the confirmation (using condition variables and pthread_cond_timedwait or waitforxxxobject if you're on windows).
Making your check less stable doesn't prevent this.
The examples in this article IIRC were something like this.
You're still going to be arbitrarily delayed if do_stuff() (or one of its callees, maybe deep inside the stack) delays, or if the sleep call does. If you can't accept this, maybe don't play with threads, they are dangerous.
that's the point. use nonblocking io and an event polling mechanism with a timeout to keep an eye on an exit flag- that's all you need to handle clean shutdowns.
i think on windows you can wait on both the sockets/file descriptors and condition variables with the same waitforxxxobject blocking mechanism. on linux you can do libevent, epoll, select or pthread_cond_timedwait. all of these have "release on event or after timeout" semantics. you can use eventfd to combine them.
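A sketch of the eventfd variant on Linux (assumed names; the real loop would also service its actual IO fds, and the timeout is only a backstop):

    #include <stdint.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    static int stop_fd;   /* made readable to request shutdown */

    static int make_loop_fd(int sock_fd)
    {
        int ep = epoll_create1(EPOLL_CLOEXEC);
        stop_fd = eventfd(0, EFD_CLOEXEC);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);
        ev.data.fd = stop_fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, stop_fd, &ev);
        return ep;
    }

    static void event_loop(int ep, int sock_fd)
    {
        struct epoll_event ev;
        for (;;) {
            int n = epoll_wait(ep, &ev, 1, 1000 /* ms, backstop timeout */);
            if (n > 0 && ev.data.fd == stop_fd)
                break;                          /* asked to exit */
            if (n > 0 && ev.data.fd == sock_fd)
                ;   /* ... service the socket ... */
            /* n == 0: timeout, do periodic housekeeping / flag checks */
        }
    }

    /* From the controlling thread: wakes epoll_wait immediately. */
    static void request_stop(void)
    {
        uint64_t one = 1;
        write(stop_fd, &one, sizeof one);
    }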
i would not ever recommend relying on signals and writing custom cleanup handlers for them (!).
unless they're blocked waiting for an external event, most system calls tend to return in a reasonable amount of time. handle the external event blocking scenario (stuff that select waits for) and you're basically there. moreover, if you're looking to exit cleanly, you probably don't want to take your chances interrupting syscalls with signals (!) anyway.
> If you can't accept this, maybe don't play with threads, they are dangerous.
too late. when i first started playing with threads, linux didn't really support them.
Every event loop is subject to the blocked-due-to-long-running-computation issue. It bites ...
The same is true if you're repeatedly polling an atomic boolean in an event loop.
The tricky part is really point 2 there, which can be harder than it looks (e.g. even simple file I/O can hit network drives). Async IO can really shine here, though it’s not exactly trivial designing async cancellation either.
libcurl dealt with this a few months ago, and the sentiment is about the same: thread cancellation in glibc is hairy. The short summary (which I think is accurate) is that a hostname query via libnss ultimately had to read a config file, and glibc's `open` is a thread cancellation point, so if it's canceled, it won't free memory that was allocated before the `open`.
The write-up on how they're dealing with it starts at https://eissing.org/icing/posts/pthread_cancel/.
Note that the situation with libcurl is very specific: lookup with libnss is only available as a synchronous call. All other syscalls they make can be done with async APIs, which can easily be cancelled without any of the trickery discussed here.
Previously: https://news.ycombinator.com/item?id=38908556
And somehow just a day ago: https://news.ycombinator.com/item?id=45589156
When you are on Linux the easiest way is to use signalfd. No unsafe async signal handling, just handling signals by reading from a fd.
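Roughly, the signalfd pattern looks like this (a sketch with assumed names; the signal has to be blocked in every thread, otherwise it may still be delivered asynchronously elsewhere):

    #include <pthread.h>
    #include <signal.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    /* Create an fd that becomes readable when SIGTERM is delivered. */
    static int make_signal_fd(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGTERM);
        pthread_sigmask(SIG_BLOCK, &mask, NULL);  /* block normal delivery */
        return signalfd(-1, &mask, SFD_CLOEXEC);
    }

    /* Poll/epoll this fd alongside normal IO; reading it yields the details
     * of the pending signal with no async-signal-safety concerns. */
    static void consume_signal(int sfd)
    {
        struct signalfd_siginfo si;
        read(sfd, &si, sizeof si);
    }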
If you just want to stop and/or kill all child threads, you can read the list of thread IDs from /proc/pid/task, and send a signal to them with tgkill().
Yeah, and leave mutexes locked indefinitely.
Sometimes that doesn't matter - maybe you are just trying to get the process to exit without core dumping due to running threads accessing things that are disappearing.
I'm not sure there's any better solution if you are dealing with a library that creates threads and doesn't provide an API to shut them down.
This seems like a lot of work to do when you have signalfd, no? That + async and non blocking I/O should create the basis of a simple thread cancellation mechanism that exits pretty immediately, no?
As I note in various places in the blog post, if one can organize the code so that cancellation is explicit, things are indeed easier. I also cite eventfd as one way of doing so. What I meant to convey is that there's no easy way to cancel arbitrary code safely.
I don't really get it. There are two possibilities here:
* You control all the IO, and then you can use some cooperative mechanism to signal cancellation to the thread.
* You don't control IO code at the syscall level (e.g. you're using some library that uses sockets under the hood, such as a database client library)... But then it's just obvious you're screwed. If you could somehow terminate the thread abruptly then you'd leak resources (possibly leaving mutexes locked, as you said), and if you interrupt syscalls with an error code then the library won't understand it. That's too trivial to warrant a blog post fussing about signals.
The only useful discussion to have on the topic of thread cancellation is what happens when you can do a cooperative cancel, so I don't think it's fair to shoot that discussion down.
I have been using signalfd + epoll where it looks like I could use eventfd instead (or just epoll_pwait). Is there a significant benefit to one approach over another? I suspect eventfd might be more efficient (and doesn't use up a signal handler... when are we going to get SIGUSR3 ?!?).
This was a fun read, I didn't know about rseq until today! And before this I reasonably assumed that the naive busy-wait thing would typically be what you'd do in a thread in most circumstances. Or that at least most threads do loop in that manner. I knew that signals and such were a problem but I didn't think just wanting to stop a thread would be so hard! :)
Hopefully this improves eventually? Who knows?
IIRC rseq was originally proposed by Google to support their pure-userspace read-copy-update (RCU) implementation, which relied on per-CPU not per-thread data.
Off-Topic: I surprised myself by liking the web site design. Especially the font.
Me too. It is pretty rare for anyone to take so much care. The only other person I can think of now is Gwern Branwen.
this stuff always seemed a mess. in practice i've always just used async io (non-blocking) and condition variables with shutdown flags.
trying to preemptively terminate a thread in a reliable fashion under linux always seemed like a fool's errand.
fwiw. it's not all that important, they get cleaned up at exit anyway. (and one should not be relying on operating system thread termination facilities for this sort of thing.)
pthread cancelation ends up not being the greatest, but it's important to represent it accurately. It has two modes: asynchronous and deferred. In asynchronous mode, a thread can be canceled any time, even in the middle of a critical section with a lock held. However, in deferred mode, a thread's cancelation can be delayed to the next cancelation point (a subset of POSIX function calls basically) and so it's possible to make that do-stuff-under-lock flow safe with cancelation after all.
That's not to say people do or that it's a good idea to try.
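A sketch of the deferred-mode pattern being described (hypothetical names): the cleanup handler guarantees the mutex is released if cancellation is acted upon at a cancellation point inside the critical section.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void unlock_on_cancel(void *arg)
    {
        pthread_mutex_unlock(arg);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        /* Deferred is the default: cancellation is only acted upon at
         * cancellation points, never mid-way through arbitrary code. */
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
        for (;;) {
            pthread_mutex_lock(&lock);
            pthread_cleanup_push(unlock_on_cancel, &lock);
            /* ... do stuff under the lock ... */
            sleep(1);   /* a cancellation point: if pthread_cancel() was
                           called, unlock_on_cancel runs before unwinding */
            pthread_cleanup_pop(1);   /* 1: also unlock on the normal path */
        }
        return NULL;
    }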
Cancellation points and cancellability state are discussed in the post. In a C codebase that you fully control pthread cancellation _can_ be made to work, but if you control the whole codebase I'd argue you're better off just structuring your program so that you yield cooperatively frequently enough to ensure prompt termination.
Not arguing there. I'm just pointing out that the post's claim that " Thread cancellation is incompatible with modern C++" needs more nuance.
> How to stop Linux threads cleanly
kill -HUP ?
while (true) { if (stop) { break; } }
If only there was a way to stop a while loop without having to use an extra conditional with break...
Feel free to read the article before commenting.
I’ve read it, and I found nothing to justify that piece of code. Can you please explain?
The while loop surrounds the whole thread, which does multiple tasks. The conditional is there to surround some work completing in a reasonable time. That's how I understood, at least.
Does not seem so clear to me. If so it could be stated with more pseudo code. Also the eventual need for multiple exit points…
Should this code:

while (true) {
  if (stop) { break; }
  // Perform some work completing in a reasonable time
}

Be just:

while (!stop) {
  do_the_thing();
}

Anyway, the last part:

>> It’s quite frustrating that there’s no agreed upon way to interrupt and stack unwind a Linux thread and to protect critical sections from such unwinding. There are no technical obstacles to such facilities existing, but clean teardown is often a neglected part of software.
I think it is a “design feature”. In C everything is low level, so I have no expectation of a high-level feature like “stop this thread and clean up the mess”. IMHO asking for that is similar to asking for GC in C.
yes, maybe, except if you don't have a single tight loop and the stop checks are not just done once in the loop body but manually sprinkled through various places of your code. e.g. think of a long-running compute task split into parts 1, 2 (tight loop), 3 (loop), 4: then you probably want a stop check between each of them and in each inner iteration of 3, but probably not in each inner iteration of 2 (as each check is an atomic load).
Maybe. But it seems to me there should be better ways to organize the code. In the case you mention there will be many places where you have to clean up (that is what the article is about), so the code will be hell to debug: multithreaded, with multiple exit points in each thread… I have done really tons and tons of multithreading and never once needed such a complicated thing. Typically the code which gets run in parallel is either for managing one resource type OR number crunching w/o resource allocation… if you are spawning threads that do lots of resource allocation, maybe you have architecture problems, or you are solving a very niche problem.
If your threads run "cooperative multithreading" tasks (e.g. the Rust tokio runtime, JS in general, etc.) then this is kind of a non-problem.
Because tasks frequently return to the scheduler, the scheduler can do a "should stop" check there (and since it might be possible to squeeze it into other atomic state bitmaps, it might have essentially zero performance overhead: a single is-bit-set check), and then properly shut the tasks down. Now "properly shutting down tasks" isn't as trivial as the "cleaning up local resources" part normally is, because for graceful shutdown you normally also want to allow cleaning up remote resources, e.g. transaction state. But that comes from the difference between a "somewhat forced shutdown" and a "graceful shutdown". And in very many cases you want graceful shutdown and only force it if that doesn't work. Another reason not to use a "naive" forced-only shutdown...
Interpreted languages can do something similar in a fairly transparent manner (if they want to), but they run into similar issues to C w.r.t. locking and forced unwinding/panics from arbitrary places.
Sure, a very broken task might block long-term. But in that case you are often better off killing it as part of process termination instead, and if that doesn't seem like an option for "resilience" reasons, then you are already in "better use multiple processes for resilience" (potentially across different servers) territory IMHO.
So as much as forced thread termination looks tempting, I found that any time I thought I needed it, it was because I had done something very wrong elsewhere.
user-space threads have entirely different semantics from kernel threads. both have their uses, but should generally not be conflated.
Concepts of cooperative multithreading, coroutines, etc. aren't limited to user space.
Actually, they predate the whole "async" movement, or whatever you want to call it.
Also, the article is about user-space threads, i.e. OS threads created with pthread_*, not kernel threads (which use kthread_*). Stopping a kthread works by setting a flag indicating it's supposed to stop, waking the thread, and then waiting for it to exit. I.e. it works much more like the `if (stop) exit` example than like any signal usage.
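For contrast, a sketch of that in-kernel pattern (kernel-module code with illustrative names): kthread_stop() sets the flag, wakes the thread, and waits for it to return.

    #include <linux/delay.h>
    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/module.h>

    static struct task_struct *worker;

    static int worker_fn(void *data)
    {
        while (!kthread_should_stop()) {
            /* ... bounded unit of work ... */
            msleep_interruptible(100);   /* returns early when woken for stop */
        }
        return 0;   /* collected by kthread_stop() */
    }

    static int __init demo_init(void)
    {
        worker = kthread_run(worker_fn, NULL, "demo-worker");
        return IS_ERR(worker) ? PTR_ERR(worker) : 0;
    }

    static void __exit demo_exit(void)
    {
        kthread_stop(worker);   /* flag + wake + wait for exit */
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");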
This is just doubling down on the wrong approach.
The right approach is to avoid simple syscalls like sleep() or recv(), and instead use multiplexing calls like epoll() or io_uring(). These natively support being interrupted by some other thread because you can pass, at minimum, two things for them to wait for: the thing you're actually interested in, and some token that can be signalled from another thread. For example, you could create a unix socket pair which you do a read wait on alongside the real work, then write to it from another thread to signal cancellation. Of course, by the time you're doing that you really could multiplex useful IO too.
You also need to manually check this mechanism from time to time even if you're doing CPU bound work.
If you're using an async framework like asyncio/Trio in Python or ASIO in C++, you can request a callback to be run from another thread (this is the real foothold, because it's effectively interrupting a long sleep/recv/whatever to do other work in the thread), at which point you can call cancellation on whatever IO is still outstanding (e.g. call task.cancel() in asyncio). Then you're effectively allowing this cancellation to happen at every await point.
(In C# you can pass around a CancellationToken, which you can cancel directly from another thread to save that extra bit of indirection.)
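A sketch of the socketpair-as-cancellation-token idea described above (assumed names; a real loop would fold its other IO into the same poll set):

    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int cancel_pair[2];  /* socketpair(AF_UNIX, SOCK_STREAM, 0, cancel_pair) */

    /* Worker: wait for data or cancellation, whichever comes first. */
    static int wait_readable_or_cancelled(int work_fd)
    {
        struct pollfd fds[2] = {
            { .fd = work_fd,        .events = POLLIN },
            { .fd = cancel_pair[0], .events = POLLIN },
        };
        if (poll(fds, 2, -1) < 0)
            return -1;                       /* error (or EINTR) */
        if (fds[1].revents & POLLIN)
            return 1;                        /* cancelled by another thread */
        return 0;                            /* work_fd is readable */
    }

    /* Controller: make the token readable to wake any waiter. */
    static void cancel(void)
    {
        char b = 1;
        write(cancel_pair[1], &b, 1);
    }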
This is noted in the blog post, but the problem is that sometimes you don't have the freedom to do so. See this sidenote and the section next to it: https://mazzo.li/posts/stopping-linux-threads.html#fn3 .
I'll admit: I didn't see that.
But I also disagree with it. Yes, the logical conclusion of starting down that path is that you end up with full on use of coroutines and some IO framework (though I don't see the problem with that). But a simple wrapper for individual calls that is recv+cancel rather than just recv etc is better than any solution mentioned in the blog post.
The fact is, if you want to wait for more than one thing at once at the syscall level (in this case, IO + inter thread cancellation), then the way to do that is to use select or poll or something else actually designed for that.
I had this problem, and I solved it by farming the known-blocking syscalls out to a separate thread pool. Then the calling thread can just abandon the wait. To make it a bit better, you can also use bounded timeouts (~1-2 seconds) with retries for some calls like recvfrom() via SO_RCVTIMEO, so that the termination time becomes bounded.
This is probably the cleanest solution that is portable.
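For the bounded-timeout variant, a rough sketch (assumed names; SO_RCVTIMEO caps how long each recvfrom() can block, so the stop flag gets rechecked at least once a second):

    #include <errno.h>
    #include <stdatomic.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>

    extern atomic_bool g_stop;   /* set by whoever requests termination */

    static ssize_t recv_with_stop_checks(int fd, void *buf, size_t len)
    {
        struct timeval tv = { .tv_sec = 1 };   /* ~1 s upper bound per wait */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        for (;;) {
            ssize_t n = recvfrom(fd, buf, len, 0, NULL, NULL);
            if (n >= 0)
                return n;                                  /* got a datagram */
            if ((errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
                && !atomic_load(&g_stop))
                continue;                                  /* retry the wait */
            return -1;                                     /* stop requested or real error */
        }
    }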