Historically, multitasking architecture was one of the key properties that define a web server or application. One process per request, as in CGI, hit its limits rather quickly. One thread per request, as in a web server like Apache, was the model that dominated for some time, but it did not scale either. Web services became truly concurrent, and when you need to handle tens of thousands of requests simultaneously, paying the context overhead of something as heavy as a thread is not an option. Thread pools mitigate the cost of context creation, but once you reach those numbers of concurrent requests, they won't help either.
Practice has shown that the most performant approach is to keep the worker thread count close to the processor count and to somehow process many requests at the same time within one context. This is quite trivial in most cases, but blocking I/O becomes an important issue. Any time you read a file, run a database query, or make a network request to another host, the thread just sits there waiting, doing nothing until the data arrives. For a small eternity, requests are not being processed and no real work is done. This wasn't an issue in old multithreaded applications, as there was always some other thread to execute while this one slept.
Asynchronous I/O came to the rescue here. You do not wait until your data arrives; you do something else and periodically check whether it has arrived yet. In web services this approach was pushed to its limits by Node.js, where you simply provide a callback for every I/O operation, to be executed once the data is ready. This scales pretty well and utilizes all available processor time reasonably well, with the cost of handling each new request staying low.
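The callback style can be sketched in D as follows. This is a minimal illustration, not any real library's API; readFileAsync and its callback signature are hypothetical names invented for the example:

import std.stdio;

// Hypothetical callback-based async API (illustrative only).
alias ReadCallback = void delegate(string data);

void readFileAsync(string path, ReadCallback cb)
{
    // A real implementation would register cb with an event loop
    // and return immediately; here we invoke it directly just to
    // show the shape of the control flow the caller writes.
    cb("file contents");
}

void main()
{
    readFileAsync("config.txt", (data) {
        writeln("got: ", data);
    });
    // In a real event loop, code here would typically run
    // before the callback fires.
}

The caller never blocks; instead, the continuation of the work is packed into the callback, which is exactly what makes larger programs in this style hard to read.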
It does look weird, though. Some consider this a feature, but code that consists mostly of callbacks hardly looks right to someone used to the traditional, mostly sequential coding style. Vibe.d goes one step further here and says that being asynchronous is just an implementation detail. Here is a basic vibe.d application, taken directly from their homepage:
import vibe.d;

void handleRequest(HttpServerRequest req,
                   HttpServerResponse res)
{
    res.writeBody("Hello, World!", "text/plain");
}

shared static this()
{
    auto settings = new HttpServerSettings;
    settings.port = 8080;

    listenHttp(settings, &handleRequest);
}

It looks quite straightforward syntax-wise. There is one callback for request handling, but the most interesting part is the module constructor (shared static this). It looks like typical blocking code, except that it actually isn't. In fact, listenHttp only prepares the data needed to start an HTTP server and schedules an event to start one when the program runs. When you import the vibe.d module, it automatically creates a main function for you that in essence comes down to this code:

import vibe.vibe;

int main()
{
    return runEventLoop();
}

Thus, all the module constructor does is event preparation, even though the calls look like typical blocking function calls.
A similar approach is taken with all the I/O-related modules vibe.d provides. It uses fibers to simplify handling multiple requests simultaneously within one thread; for example, database driver modules are written to silently yield when a query is sent and switch to other fibers while waiting. All the event- and fiber-related code is separated cleanly enough that you can actually hack on such a database module without even knowing how the machinery works.
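The underlying idea can be sketched with D's standard core.thread.Fiber. This is a simplified illustration of the yield-and-resume pattern, not vibe.d's actual internals; fakeQuery is an invented stand-in for a driver call:

import core.thread : Fiber;
import std.stdio;

// Code that looks blocking yields its fiber instead of blocking
// the whole thread; a scheduler resumes it when data is "ready".
void fakeQuery()
{
    writeln("query sent, yielding until the result arrives");
    Fiber.yield();           // hand the thread to other fibers
    writeln("result received, continuing");
}

void main()
{
    auto worker = new Fiber({ fakeQuery(); });

    worker.call();           // runs until the yield inside fakeQuery
    writeln("doing other work while the query is in flight");
    worker.call();           // resume: the "result" has arrived
}

In vibe.d the resume is triggered by the event loop when the socket actually becomes readable, but the caller's code stays sequential, just like fakeQuery here.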
It feels like magic the first time you encounter it, but it boils down to a quite simple and elegant implementation.