Tweets

Lewis Baker retweeted
Corentin @Cor3ntin · Jan 31
A Universal I/O Abstraction for C++
New blog post about executors, asynchronous I/O, io_uring, coroutines and more!
➡️ cor3ntin.github.io/posts/iouring/ ⬅️ pic.twitter.com/orSvys0cdo

Lewis Baker @lewissbaker · Jan 28
Non-exceptional unwind would be like the set_done() resumption path described in wg21.link/P1662R0.
i.e. exits scopes, running destructors, but without an exception in-flight.
coroutine_handle::destroy() kind of does this, but combines cancellation unwind with destruction.
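
A self-contained sketch (mine, not from the thread) of that last point: destroying a coroutine that is suspended mid-body runs the destructors of its in-scope locals with no exception in flight, but the same call also frees the frame, which is the coupling of cancellation unwind and destruction mentioned above.

```cpp
#include <coroutine>
#include <cstdio>

struct fire_and_suspend {
    struct promise_type {
        fire_and_suspend get_return_object() {
            return {std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> handle;
};

struct scoped_resource {
    ~scoped_resource() { std::puts("destructor ran (no exception in flight)"); }
};

fire_and_suspend example() {
    scoped_resource r;               // lives in the coroutine frame
    co_await std::suspend_always{};  // suspend here; caller decides what happens next
    std::puts("never reached if the caller destroys us");
}

int main() {
    auto c = example();
    c.handle.destroy();  // cancellation-style unwind + frame destruction in one step
}
```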

Lewis Baker @lewissbaker · Jan 25
In libunifex the closest thing to this algorithm is `let()` which takes a sender and a function that is invoked with the result and that returns a sender. The result of the returned sender becomes the result of let.
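
A rough usage sketch of that shape, with spellings as I recall them from libunifex at the time (newer releases spell the algorithm let_value, and exact headers may differ):

```cpp
#include <unifex/just.hpp>
#include <unifex/let.hpp>
#include <unifex/sync_wait.hpp>

#include <cstdio>
#include <utility>

int main() {
    // let(sender, fn): fn is invoked with the result of the first sender and
    // returns a new sender; the result of that sender becomes the result of let.
    auto work = unifex::let(unifex::just(21), [](int x) {
        return unifex::just(x * 2);
    });

    if (auto result = unifex::sync_wait(std::move(work))) {
        std::printf("result = %d\n", *result);  // 42
    }
}
```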

Lewis Baker retweeted
Eric Niebler @ericniebler · Jan 20
This paper by @ned14 should go a long way toward answering the question: Is the sender/receiver pattern from p0443 fast? Answer: Yes, definitely.
wg21.link/p2052
Many thanks to Niall for demonstrating how to max out an OS's low-level IO with sender/receiver.
#cpp

Lewis Baker @lewissbaker · Dec 12
Hmm, that coroutine async RAII feature just started looking a lot more important!
wg21.link/P1662R0

Lewis Baker @lewissbaker · Nov 27
The debugging experience is unfortunately not great. As you have seen, inspecting local variables in a coroutine does not yet work. Stepping is a bit flakey - see bugs.llvm.org/show_bug.cgi?i…
I’ve heard reports of code coverage tools getting confused too.

Lewis Baker retweeted
Corentin @Cor3ntin · Nov 1
New Blog Post
A Universal Async Abstraction for C++
cor3ntin.github.io/posts/executor…

Lewis Baker @lewissbaker · Oct 28
To clarify, there was no EWG vote on ripping coroutines out at Cologne.
EWG voted against adopting the proposed design changes into the WD for C++20.

Lewis Baker retweeted
Corentin @Cor3ntin · Oct 23
Here are a couple of papers aiming to improve customization points in C++
wg21.link/p1292r0 - @CppSage
wg21.link/p1895r0 - @lewissbaker @ericniebler @kirkshoop (also wg21.link/p1665r0)
Ship it.
Ship _something_.
The current state of affairs is unwieldy 😢

Lewis Baker @lewissbaker · Oct 22
A lazy schedule(ex) has many of the same performance advantages as “oneway” execute(ex, f). But it also provides a generic and composable way of chaining work on the end, giving you most of the functionality of “twoway” execution but without the overhead inherent in eager twoway.
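
A sketch of what that lazy chaining looks like, again in libunifex-era spellings (schedule, transform, sync_wait; header names are approximate and transform is spelled then in newer releases). Building the chain is just constructing a value; nothing executes until it is submitted.

```cpp
#include <unifex/scheduler_concepts.hpp>     // unifex::schedule (header name approximate)
#include <unifex/single_thread_context.hpp>
#include <unifex/sync_wait.hpp>
#include <unifex/transform.hpp>              // spelled then() in newer releases

#include <cstdio>
#include <utility>

int main() {
    unifex::single_thread_context ctx;

    // Describe the work: hop onto ctx's thread, then run the lambda there.
    // Building this chain allocates no shared state and runs nothing yet.
    auto work = unifex::transform(unifex::schedule(ctx.get_scheduler()),
                                  [] { return 6 * 7; });

    // Submitting the chain (here via sync_wait) is what actually starts it.
    if (auto answer = unifex::sync_wait(std::move(work))) {
        std::printf("computed on the context thread: %d\n", *answer);
    }
}
```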

Lewis Baker @lewissbaker · Oct 22
I feel that the distinction between “oneway” and “twoway” execution is less meaningful once we have the ability to chain work lazily, without the overhead of allocating shared state, type erasure of the continuation and synchronisation inherent in eager twoway style APIs.

Lewis Baker @lewissbaker · Oct 22
I tend to avoid using oneway or sentinel callbacks as I find they can be a bit of an antipattern for achieving structured concurrency.
They can create detached work which makes it difficult to write code that shuts down cleanly. Similar to issues with std::thread::detach().

Lewis Baker @lewissbaker · Oct 22
“one way” execution is a fire-and-forget execution function, i.e. executor.execute(f).
“two way” execution is a term used for execution of a function on an execution context where you can later retrieve the result and chain more work onto it, e.g. by returning a future-like thing.
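
A toy illustration of the two shapes, using a stand-in inline executor rather than any particular library; the execute/twoway_execute member names here only mirror the terminology above. Note how the eager two-way path goes through promise/future shared state, which is the overhead referred to in the earlier tweets.

```cpp
#include <cstdio>
#include <future>
#include <utility>

// Stand-in executor that runs everything inline, just to show the two shapes.
struct inline_executor {
    // "one way": submit and forget; no handle to the result comes back.
    template <typename F>
    void execute(F&& f) const { std::forward<F>(f)(); }

    // "two way": submission hands back a future-like object for retrieving the
    // result or chaining more work. Note the eager shared state (promise/future).
    template <typename F>
    auto twoway_execute(F&& f) const {
        using R = decltype(f());
        std::promise<R> p;
        std::future<R> result = p.get_future();
        p.set_value(f());
        return result;
    }
};

int main() {
    inline_executor ex;

    ex.execute([] { std::puts("fire-and-forget: nothing to observe afterwards"); });

    std::future<int> f = ex.twoway_execute([] { return 6 * 7; });
    std::printf("twoway result: %d\n", f.get());
}
```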

Lewis Baker @lewissbaker · Oct 21
You need an executor running an event loop on the main thread. Then 'co_await ex.schedule()' can suspend the coroutine and enqueue it to be resumed on the main thread.
If the main thread doesn't have anything else to do you can use sync_wait(task) to block until task completes.
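
A minimal single-threaded sketch of the first part of this (all names such as manual_event_loop and detached_task are illustrative, and it leaves out sync_wait/task): an event loop whose schedule() awaitable suspends the coroutine and enqueues its handle, and whose run() resumes queued coroutines on the main thread.

```cpp
#include <coroutine>
#include <cstdio>
#include <deque>

class manual_event_loop {
public:
    struct schedule_awaitable {
        manual_event_loop& loop;
        bool await_ready() const noexcept { return false; }
        void await_suspend(std::coroutine_handle<> h) { loop.queue_.push_back(h); }
        void await_resume() const noexcept {}
    };

    schedule_awaitable schedule() { return {*this}; }

    // Resume queued coroutines on the calling thread until the queue is empty.
    void run() {
        while (!queue_.empty()) {
            auto h = queue_.front();
            queue_.pop_front();
            h.resume();
        }
    }

private:
    std::deque<std::coroutine_handle<>> queue_;
};

// Fire-and-forget coroutine type, just to keep the sketch short.
struct detached_task {
    struct promise_type {
        detached_task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

detached_task work(manual_event_loop& loop) {
    std::puts("before schedule()");
    co_await loop.schedule();  // suspend; resumed later by loop.run()
    std::puts("resumed on the main thread's event loop");
}

int main() {
    manual_event_loop loop;
    work(loop);  // runs until the co_await, then suspends
    loop.run();  // main thread drives completion
}
```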

Lewis Baker @lewissbaker · Oct 18
Have the io thread start an async read/poll on an eventfd. When enqueuing work from another thread do a lock-free push of the work onto a queue and then if the queue was empty then write to the eventfd.
This will post a completion to the io thread, which will wake and process the work.
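
A Linux-only sketch of that wake-up pattern. The io thread here blocks in a plain read() on the eventfd as a stand-in for a real async read/poll (e.g. via io_uring), and the queue is a minimal push-only lock-free stack; everything is illustrative.

```cpp
#include <sys/eventfd.h>
#include <unistd.h>

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

struct work_item {
    void (*fn)();
    work_item* next = nullptr;
};

std::atomic<work_item*> queue_head{nullptr};

// Push an item; returns true if the queue was empty beforehand,
// i.e. the producer is responsible for waking the io thread.
bool enqueue(work_item* item) {
    work_item* old_head = queue_head.load(std::memory_order_relaxed);
    do {
        item->next = old_head;
    } while (!queue_head.compare_exchange_weak(old_head, item,
                                               std::memory_order_release,
                                               std::memory_order_relaxed));
    return old_head == nullptr;
}

int main() {
    int efd = ::eventfd(0, 0);

    std::thread io_thread([efd] {
        std::uint64_t count = 0;
        [[maybe_unused]] ssize_t r = ::read(efd, &count, sizeof(count));  // stand-in for the async read/poll
        // Drain the queue (items come out in reverse push order here).
        work_item* item = queue_head.exchange(nullptr, std::memory_order_acquire);
        while (item != nullptr) {
            item->fn();
            item = item->next;
        }
    });

    // Producer: push work, and only signal the eventfd if the queue was empty.
    work_item item{[] { std::puts("work ran on the io thread"); }};
    if (enqueue(&item)) {
        std::uint64_t one = 1;
        [[maybe_unused]] ssize_t w = ::write(efd, &one, sizeof(one));  // wakes the reader
    }

    io_thread.join();
    ::close(efd);
}
```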

Lewis Baker @lewissbaker · Oct 18
Usually you'd have an io_uring per I/O thread.
You can either have separate scheduler per I/O thread or you can build an I/O thread pool scheduler and dispatch work to one of the I/O threads, if necessary, which then submits I/O using its local io_uring.

Lewis Baker @lewissbaker · Oct 16
I generally prefer ‘co_await foo()’ where this is unambiguous, but use a prefix in cases where we have different flavours of foo().
e.g. ‘co_await co_foo()’ or ‘co_await task_foo()’ when we also have ‘future_foo().then(...)’ and ‘semifuture_foo().defer(...)’.

Lewis Baker retweeted
boolean[] - 🌻 @vector_of_bool · Oct 15
Among the best and most enlightening presentations I've ever seen. Anyone interested in asynchronous programming should give this a watch. youtube.com/watch?v=tF-Nz4…
Great work @ericniebler and @TheWholeDavid!

Lewis Baker @lewissbaker · Oct 11
Yes, cppcoro::when_all() avoids the heap allocation for the shared state by storing it on the coroutine frame that awaits it.
This is offset by needing to allocate a new coroutine frame to await each child op. The compiler is allowed to elide these allocations but rarely does.
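
For context, a when_all() usage sketch in cppcoro spellings from memory (task, when_all, sync_wait); treat the exact headers and signatures as approximate.

```cpp
#include <cppcoro/sync_wait.hpp>
#include <cppcoro/task.hpp>
#include <cppcoro/when_all.hpp>

#include <cstdio>

cppcoro::task<int> child(int value) { co_return value; }

cppcoro::task<> parent() {
    // The when_all state lives in parent()'s coroutine frame; each child is
    // awaited via its own helper coroutine frame (ideally elided).
    auto [a, b] = co_await cppcoro::when_all(child(1), child(2));
    std::printf("%d %d\n", a, b);
}

int main() {
    cppcoro::sync_wait(parent());
}
```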