Highly concurrent yet natural programming
mefyl
quentin.hocquet@infinit.io
Version 1.2
Better concurrency: futures, ...
std::string transaction_id = reactor::http::put(
  "my-api.production.infinit.io/transactio...
Version 1: Wait meta / Ask files / Wait meta / Wait AWS
Version 2: Ask files
How does it perform for us ?
• Notification server does perform:
  ◦ 10k clients per instance
  ◦ 0.01 load average
  ◦ 1G resid...
Questions ?
Infinit's reactor C++ framework allows developers to program in a natural way without having to deal with complex thread-based flows that decrease maintainability and efficiency.

  1. Highly concurrent yet natural programming (mefyl, quentin.hocquet@infinit.io, Version 1.2)
  2. Infinit & me Me • Quentin "mefyl" Hocquet • Epita CSI (LRDE) 2008. • Ex Gostai • Into language theory • Joined Infinit early, two years ago.
  3. Infinit & me Me • Quentin "mefyl" Hocquet • Epita CSI (LRDE) 2008. • Ex Gostai • Into language theory • Joined Infinit early, two years ago. Infinit • Founded by Julien "mycure" Quintard, Epita SRS 2007 • Based on his thesis at Cambridge • Decentralized filesystem in a byzantine environment • Frontend: a file transfer application based on the technology. • Strong technical culture
  4. Concurrent and parallel programming
  5. Know the difference Parallel programming Aims at running two tasks simultaneously. It is a matter of performance. Concurrent programming Aims at running two tasks without inter-blocking. It is a matter of behavior.
  6. Task 1 Task 2 Know the difference
  7. Task 1 Task 2 Know the difference Sequential
  8. Task 1 Task 2 Know the difference Parallel
  9. Task 1 Task 2 Know the difference Concurrent
  10. Sequential Concurrent Know the difference Parallel
  11. Know the difference. Sequential / Concurrent / Parallel: CPU usage N / N / N; execution time Long / Short / Shorter.
  12. Know the difference. Sequential / Concurrent / Parallel: CPU usage N / N / N; execution time Long / Short / Shorter; need to run in parallel No / No / Yes.
  13. TV Commercials TV Peeling Some real life examples You are the CPU. You want to: • Watch a film on TV. • Peel potatoes.
  14. Sequential TV Commercials TV Peeling Concurrent TV Peeling TV Peeling Some real life examples Parallel TV Peeling Commercials TV
  15. Load Unload Load Unload Some real life examples You are the CPU. You want to: • Do the laundry. • Do the dishes.
  16. Sequential Load Unload Load Unload Concurrent Load Load Unload Unload Some real life examples Parallel Load Load Unload Unload
  17. Some programming examples Video encoding: encode a raw 2 GB file to mp4. • CPU bound. • File chunks can be encoded separately and then merged later.
  18. Parallel Encode first half Encode second half Sequential Concurrent Encode first half Encode second half Some programming examples Video encoding: encode a raw 2 GB file to mp4. • CPU bound. • File chunks can be encoded separately and then merged later. Parallelism is a plus, concurrency doesn't apply.
  19. Some programming examples An IRC server: handle up to 50k IRC users chatting. • IO bound. • A huge number of clients that must be handled concurrently and are mostly waiting.
  20. Concurrent Parallel Some programming examples An IRC server: handle up to 50k IRC users chatting. • IO bound. • A huge number of clients that must be handled concurrently and are mostly waiting. Concurrency is needed, parallelism is superfluous.
  21. Know the difference Parallelism • Is never needed for correctness. • Is about performance, not correct behavior. • Is about exploiting multi-core and multi-CPU architectures. Concurrent programming • Can be needed for correctness. • Is about correct behavior, sometimes about performance too. • Is about multiple threads staying responsive concurrently.
  22. Know the difference Parallelism • Is never needed for correctness. • Is about performance, not correct behavior. • Is about exploiting multi-core and multi-CPU architectures. Concurrent programming • Can be needed for correctness. • Is about correct behavior, sometimes about performance too. • Is about multiple threads staying responsive concurrently. A good video encoding app: • Encodes 4 times faster on a 4-core CPU. That's parallelism.
  23. Know the difference Parallelism • Is never needed for correctness. • Is about performance, not correct behavior. • Is about exploiting multi-core and multi-CPU architectures. Concurrent programming • Can be needed for correctness. • Is about correct behavior, sometimes about performance too. • Is about multiple threads staying responsive concurrently. A good video encoding app: • Encodes 4 times faster on a 4-core CPU. That's parallelism. • Has a responsive GUI while encoding. That's concurrency.
  24. Who's best ? If you are parallel, you are concurrent. So why bother ?
  25. Who's best ? If you are parallel, you are concurrent. So why bother ? • Being parallel is much, much more difficult. That's time, money and programmer misery.
  26. Who's best ? If you are parallel, you are concurrent. So why bother ? • Being parallel is much, much more difficult. That's time, money and programmer misery. • You can't be efficiently parallel past your hardware limit. Those are system calls, captain.
  27. Threads, callbacks So, how do you write an echo server ?
  28. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); }
  29. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); while (true) { std::string line = client.read_until("\n"); client.send(line); } }
  30. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); try { while (true) { std::string line = client.read_until("\n"); client.send(line); } } catch (ConnectionClosed const&) {} }
  31. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); serve_client(client); }
  32. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); serve_client(client); } • Dead simple: you got it instantly. It's natural. • But wrong: we handle only one client at a time. • We need ...
  33. The sequential echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); serve_client(client); } • Dead simple: you got it instantly. It's natural. • But wrong: we handle only one client at a time. • We need ... concurrency !
  34. The parallel echo server TCPServer server; server.listen(4242); while (true) { TCPSocket client = server.accept(); serve_client(client); }
  35. The parallel echo server TCPServer server; server.listen(4242); std::vector<std::thread> threads; while (true) { TCPSocket client = server.accept(); std::thread client_thread( [client = std::move(client)] () mutable { serve_client(client); }); threads.push_back(std::move(client_thread)); }
  36. The parallel echo server TCPServer server; server.listen(4242); std::vector<std::thread> threads; while (true) { TCPSocket client = server.accept(); std::thread client_thread( [client = std::move(client)] () mutable { serve_client(client); }); threads.push_back(std::move(client_thread)); } • Almost as simple and still natural, • To add the concurrency property, we just added a concurrency construct to the existing code.
  37. But parallelism is too much • Not scalable: you can't run 50k threads.
  38. But parallelism is too much • Not scalable: you can't run 50k threads. • Induces unwanted complexity: race conditions.
  39. But parallelism is too much • Not scalable: you can't run 50k threads. • Induces unwanted complexity: race conditions. int line_count = 0; while (true) { TCPSocket client = server.accept(); while (true) { std::string line = client.read_until("\n"); client.send(line); ++line_count; } }
  40. But parallelism is too much • Not scalable: you can't run 50k threads. • Induces unwanted complexity: race conditions. int line_count = 0; while (true) { TCPSocket client = server.accept(); std::thread client_thread( [&] { while (true) { std::string line = client.read_until("\n"); client.send(line); ++line_count; } }); }
  41. We need concurrency without threads We need to accept, read and write to sockets without threads, so without blocking.
  42. We need concurrency without threads We need to accept, read and write to sockets without threads, so without blocking. • Use select to monitor all sockets at once. • Register actions to be done when something is ready. • Wake up only when something needs to be performed.
  43. We need concurrency without threads We need to accept, read and write to sockets without threads, so without blocking. • Use select to monitor all sockets at once. • Register actions to be done when something is ready. • Wake up only when something needs to be performed. This is abstracted by the reactor design pattern: • libevent • Boost ASIO • Python Twisted • ...
  44. The callback-based echo server Reactor reactor; TCPServer server(reactor); server.accept(&handle_connection); reactor.run();
  45. The callback-based echo server Reactor reactor; TCPServer server(reactor); server.accept(&handle_connection); reactor.run(); void handle_connection(TCPSocket& client) { client.read_until("\n", &handle_read); }
  46. The callback-based echo server Reactor reactor; TCPServer server(reactor); server.accept(&handle_connection); reactor.run(); void handle_connection(TCPSocket& client); void handle_read(TCPSocket& c, std::string const& l, Error e) { if (!e) c.send(l, &handle_sent); }
  47. The callback-based echo server Reactor reactor; TCPServer server(reactor); server.accept(&handle_connection); reactor.run(); void handle_connection(TCPSocket& client); void handle_read(TCPSocket& c, std::string const& l, Error e); void handle_sent(TCPSocket& client, Error error) { if (!error) client.read_until("\n", &handle_read); }
  48. How do we feel now ? • This one scales to thousands of clients.
  49. How do we feel now ? • This one scales to thousands of clients. • Yet to add the concurrency property, we had to completely change the way we think.
  50. How do we feel now ? • This one scales to thousands of clients. • Yet to add the concurrency property, we had to completely change the way we think. • A bit more verbose and complex, but nothing too bad ... right ?
  51. Counting lines with threads try { while (true) { std::string line = client.read_until("\n"); client.send(line); } } catch (ConnectionClosed const&) { }
  52. Counting lines with threads int lines_count = 0; try { while (true) { std::string line = client.read_until("\n"); ++lines_count; client.send(line); } } catch (ConnectionClosed const&) { std::cerr << "Client sent " << lines_count << " lines\n"; }
  53. Counting lines with callbacks void handle_connection(TCPSocket& client) { int* count = new int(0); client.read_until( "\n", std::bind(&handle_read, _1, _2, _3, count)); }
  54. Counting lines with callbacks void handle_connection(TCPSocket& client); void handle_read(TCPSocket& c, std::string const& l, Error e, int* count) { if (e) std::cerr << *count << std::endl; else c.send(l, std::bind(&handle_sent, _1, _2, count)); }
  55. Counting lines with callbacks void handle_connection(TCPSocket& client); void handle_read(TCPSocket& c, std::string const& l, Error e, int* count); void handle_sent(TCPSocket& client, Error error, int* count) { if (error) std::cerr << *count << std::endl; else client.read_until( "\n", std::bind(&handle_read, _1, _2, _3, count)); }
  56. Callback-based programming considered harmful • Code is structured with callbacks.
  57. Callback-based programming considered harmful • Code is structured with callbacks. • Asynchronous operations break the flow arbitrarily.
  58. Callback-based programming considered harmful • Code is structured with callbacks. • Asynchronous operations break the flow arbitrarily. • You lose all syntactic scoping expressions (local variables, closures, exceptions, ...).
  59. Callback-based programming considered harmful • Code is structured with callbacks. • Asynchronous operations break the flow arbitrarily. • You lose all syntactic scoping expressions (local variables, closures, exceptions, ...). • This is not natural. Damn, this is pretty much as bad as GOTO.
  60. Are we screwed ? Threads • Respect your beloved semantics and expressiveness. • Don't scale and introduce race conditions.
  61. Are we screwed ? Threads • Respect your beloved semantics and expressiveness. • Don't scale and introduce race conditions. Callbacks • Scale. • Ruin your semantics. Painful to write, close to impossible to maintain.
  62. Are we screwed ? Threads • Respect your beloved semantics and expressiveness. • Don't scale and introduce race conditions. Callbacks • Scale. • Ruin your semantics. Painful to write, close to impossible to maintain. I lied when I said: we need concurrency without threads.
  63. Are we screwed ? Threads • Respect your beloved semantics and expressiveness. • Don't scale and introduce race conditions. Callbacks • Scale. • Ruin your semantics. Painful to write, close to impossible to maintain. I lied when I said: we need concurrency without threads. We need concurrency without system threads.
  64. Coroutines Also known as: • green threads • userland threads • fibers • contexts • ...
  65. Coroutines • Separate execution contexts like system threads. • Userland: no need to ask the kernel. • Non-parallel. • Cooperative instead of preemptive: they yield to each other.
  66. Coroutines • Separate execution contexts like system threads. • Userland: no need to ask the kernel. • Non-parallel. • Cooperative instead of preemptive: they yield to each other. By building on top of that, we have: • Scalability: no system thread involved. • No arbitrary race-conditions: no parallelism. • A stack, a context: the code is natural.
  67. Coroutines-based scheduler • Make a scheduler that holds coroutines. • Embed a reactor in there. • Write a neat Socket class.
  68. Coroutines-based scheduler • Make a scheduler that holds coroutines. • Embed a reactor in there. • Write a neat Socket class. When read from, it: ◦ Unschedules itself. ◦ Asks the reactor to read. ◦ Passes a callback to reschedule itself. ◦ Yields control back.
  69. Coroutines-based echo server TCPServer server; server.listen(4242); std::vector<Thread> threads; int lines_count = 0; while (true) { TCPSocket client = server.accept(); Thread t([&lines_count, client = std::move(client)] () mutable { try { while (true) { ++lines_count; client.send(client.read_until("\n")); } } catch (ConnectionClosed const&) {} }); threads.push_back(std::move(t)); }
  70. What we built at Infinit: the reactor.
  71. What we built at Infinit: the reactor. • Coroutine scheduler: simple round robin • Sleeping, waiting • Timers • Synchronization • Mutexes, semaphores • TCP networking • SSL • UPnP • HTTP client (Curl based)
  72. Coroutine scheduling reactor::Scheduler sched; reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::yield(); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("Bye 2"); }); sched.run();
  73. Coroutine scheduling reactor::Scheduler sched; reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::yield(); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("Bye 2"); }); sched.run(); Hello 1 Hello 2 Bye 1 Bye 2
  74. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::yield(); print("Bye 2"); });
  75. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::yield(); print("Bye 2"); }); Hello 1 Hello 2 World 2 Bye 2
  76. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::yield(); print("Bye 2"); }); Hello 1 Hello 2 World 2 Bye 2 Bye 1
  77. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::wait(t1); // Wait print("Bye 2"); });
  78. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::wait(t1); // Wait print("Bye 2"); }); Hello 1 Hello 2 World 2
  79. Sleeping and waiting reactor::Thread t1(sched, [&] { print("Hello 1"); reactor::sleep(500_ms); print("Bye 1"); }); reactor::Thread t2(sched, [&] { print("Hello 2"); reactor::yield(); print("World 2"); reactor::wait(t1); // Wait print("Bye 2"); }); Hello 1 Hello 2 World 2 Bye 1 Bye 2
  80. Synchronization: signals reactor::Signal task_available; std::vector<Task> tasks; reactor::Thread handler([&] { while (true) { if (!tasks.empty()) { std::vector<Task> mytasks = std::move(tasks); for (auto& task: mytasks) ; // Handle task } else reactor::wait(task_available); } });
  81. Synchronization: signals reactor::Signal task_available; std::vector<Task> tasks; reactor::Thread handler([&] { while (true) { if (!tasks.empty()) { std::vector<Task> mytasks = std::move(tasks); for (auto& task: mytasks) ; // Handle task } else reactor::wait(task_available); } }); tasks.push_back(...); task_available.signal();
  82. Synchronization: signals reactor::Signal task_available; std::vector<Task> tasks; reactor::Thread handler([&] { while (true) { if (!tasks.empty()) // 1 { std::vector<Task> mytasks = std::move(tasks); for (auto& task: mytasks) ; // Handle task } else reactor::wait(task_available); // 4 } }); tasks.push_back(...); // 2 task_available.signal(); // 3
  83. Synchronization: channels reactor::Channel<Task> tasks; reactor::Thread handler([&] { while (true) { Task t = tasks.get(); // Handle task } }); tasks.put(...);
  84. Mutexes But you said no race conditions! You lied again!
  85. Mutexes But you said no race conditions! You lied again! reactor::Thread t([&] { while (true) { for (auto& socket: sockets) socket.send("YO"); } }); { sockets.push_back(...); }
  86. Mutexes But you said no race conditions! You lied again! reactor::Mutex mutex; reactor::Thread t([&] { while (true) { reactor::wait(mutex); for (auto& socket: sockets) socket.send("YO"); mutex.unlock(); } }); { reactor::wait(mutex); sockets.push_back(...); mutex.unlock(); }
  87. Mutexes But you said no race conditions! You lied again! reactor::Mutex mutex; reactor::Thread t([&] { while (true) { reactor::Lock lock(mutex); for (auto& socket: sockets) socket.send("YO"); } }); { reactor::Lock lock(mutex); sockets.push_back(...); }
  88. Networking: TCP We saw a good deal of TCP networking: try { reactor::TCPSocket socket("battle.net", 4242, 10_sec); // ... } catch (reactor::network::ResolutionFailure const&) { // ... } catch (reactor::network::Timeout const&) { // ... }
  89. Networking: TCP We saw a good deal of TCP networking: void serve(TCPSocket& client) { try { std::string auth = client.read_until("\n", 10_sec); if (!check_auth(auth)) // Impossible with callbacks throw InvalidCredentials(); while (true) { ... } } catch (reactor::network::Timeout const&) {} }
  90. 90. Networking: SSL Transparent client handshaking: reactor::network::SSLSocket socket("localhost", 4242); socket.write(...);
91. Networking: SSL Transparent server handshaking: reactor::network::SSLServer server(certificate, key); server.listen(4242); while (true) { auto socket = server.accept(); reactor::Thread([&] { ... }); }
92. Networking: SSL Transparent server handshaking: SSLSocket SSLServer::accept() { auto socket = this->_tcp_server.accept(); // SSL handshake return socket; }
93. Networking: SSL Transparent server handshaking: reactor::Channel<SSLSocket> _sockets; void SSLServer::_handshake_thread() { while (true) { auto socket = this->_tcp_server.accept(); // SSL handshake this->_sockets.put(socket); } } SSLSocket SSLServer::accept() { return this->_sockets.get(); }
94. Networking: SSL Transparent server handshaking: void SSLServer::_handshake_thread() { while (true) { auto socket = this->_tcp_server.accept(); reactor::Thread t( [&] { // SSL handshake this->_sockets.put(socket); }); } }
95. HTTP std::string google = reactor::http::get("google.com");
96. HTTP std::string google = reactor::http::get("google.com"); reactor::http::Request r("kissmetrics.com/api", reactor::http::Method::PUT, "application/json", 5_sec); r.write("{ \"event\": \"login\" }"); reactor::wait(r);
97. HTTP std::string google = reactor::http::get("google.com"); reactor::http::Request r("kissmetrics.com/api", reactor::http::Method::PUT, "application/json", 5_sec); r.write("{ \"event\": \"login\" }"); reactor::wait(r); • Chunking • Cookies • Custom headers • Upload/download progress • ... pretty much anything Curl supports (i.e., everything)
98. HTTP streaming std::string content = reactor::http::get( "my-api.infinit.io/transactions"); auto json = json::parse(content);
99. HTTP streaming std::string content = reactor::http::get( "my-api.infinit.io/transactions"); auto json = json::parse(content); reactor::http::Request r( "my-api.production.infinit.io/transactions"); assert(r.status() == reactor::http::Status::OK); // JSON is parsed on the fly auto json = json::parse(r);
100. HTTP streaming std::string content = reactor::http::get( "my-api.infinit.io/transactions"); auto json = json::parse(content); reactor::http::Request r( "my-api.production.infinit.io/transactions"); assert(r.status() == reactor::http::Status::OK); // JSON is parsed on the fly auto json = json::parse(r); reactor::http::Request r( "youtube.com/upload", reactor::http::Method::PUT); std::ifstream input("~/A new hope - BrRIP.mp4"); std::copy(std::istreambuf_iterator<char>(input), std::istreambuf_iterator<char>(), std::ostreambuf_iterator<char>(r));
101. Better concurrency: futures, ... std::string transaction_id = reactor::http::put( "my-api.production.infinit.io/transactions"); // Ask the user files to share. reactor::http::post("my-api.infinit.io/transaction/", file_list); std::string s3_token = reactor::http::get( "s3.aws.amazon.com/get_token?key=..."); // Upload files to S3
102. Better concurrency: futures, ... std::string transaction_id = reactor::http::put( "my-api.production.infinit.io/transactions"); // Ask the user files to share. reactor::http::post("my-api.infinit.io/transaction/", file_list); std::string s3_token = reactor::http::get( "s3.aws.amazon.com/get_token?key=..."); // Upload files to S3 reactor::http::Request transaction( "my-api.production.infinit.io/transactions"); reactor::http::Request s3( "s3.aws.amazon.com/get_token?key=..."); // Ask the user files to share. auto transaction_id = transaction.content(); reactor::http::Request list( "my-api.infinit.io/transaction/", file_list); auto s3_token = s3.content(); // Upload files to S3
103. Better concurrency: futures, ... [timeline diagram: version 1 waits on meta, asks the user for files, waits on meta again, then waits on AWS, all sequentially; version 2 starts every request up front, so only "Ask files" remains on the critical path]
104. How does it perform for us? • Notification server performs well: ◦ 10k clients per instance ◦ 0.01 load average ◦ 1G resident memory ◦ Cheap single-core 2.5 GHz (EC2)
105. How does it perform for us? • Notification server performs well: ◦ 10k clients per instance ◦ 0.01 load average ◦ 1G resident memory ◦ Cheap single-core 2.5 GHz (EC2) • Life is so much better: ◦ Code is easy and pleasant to write and read ◦ Everything is maintainable ◦ Send metrics on login without slowdown? No biggie. ◦ Try connecting to several interfaces and keep the first to respond? No biggie.
106. Questions?