Hunchentoot: taskmaster code review

09b August 16, 2019 -- (tech tmsr)

This post is part of a series on Common Lisp WWWism, more specifically of ongoing work to saw Hunchentoot apart into pieces, so as to own both the pieces and the whole of this Common Lisp web server. The second fundamental component of Hunchentoot, next to the so-called acceptor, is the so-called taskmaster.

Hunchentoot taskmasters are, as the name suggests, an abstraction for work management, or in other words, a mechanism implementing work distribution among processing units. On the surface they look very similar to Apache's MPM model, which (again, viewed very superficially) suggests Hunchentoot was an attempt at a Lisp-style (re)implementation of Apache. This, as a side note, doesn't really speak in its favour. We do know, however, what the current coad is made of, and it's yet to be seen whether it can even be retrofitted into something that fits in head, as it's otherwise by all looks as usable as a CL-on-Unix-on-top-of-TCP application can be. It's not like there's a better CL item to work with anyway, so that's that.

As presented in the earlier architectural overview, taskmasters expose the following methods: execute-acceptor, handle-incoming-connection and shutdown. In very broad lines, execute-acceptor gets called on start and sets up an execution context in which accept-connections is called; handle-incoming-connection is then called on each new connection, whereupon it calls process-connection; and finally, shutdown is called by stop, in order to suspend request processing on all threads. Additionally, more specialized taskmasters may expose other methods[1] specific to their operation.
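
For orientation, here's the protocol above in sketch form; the lambda lists are my reconstruction from the description, not necessarily verbatim Hunchentoot:

  (defgeneric execute-acceptor (taskmaster)
    (:documentation "Called by START; sets up the context in which
  ACCEPT-CONNECTIONS will run, e.g. the current thread or a fresh one."))

  (defgeneric handle-incoming-connection (taskmaster socket)
    (:documentation "Called by the acceptor for each newly accepted
  SOCKET; sooner or later results in a PROCESS-CONNECTION call."))

  (defgeneric shutdown (taskmaster)
    (:documentation "Called by STOP; winds down request processing."))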

The taskmaster class is specialized into three subclasses: a single-threaded implementation, an abstract "multi-threaded" taskmaster and a concrete "one thread per connection" taskmaster descending from the multi-threaded one -- otherwise the coad under examination is also structured to accommodate the usual ifdefisms, which we'll blissfully ignore. This being said, let us look at each taskmaster and the methods it implements. First, we notice that shutdown is given an implementation for all taskmasters:

[s3] shutdown: This is a generic implementation that can be (and is, in one case, as shown below) overridden by subclasses; it does nothing except return the taskmaster object.
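
Which, in sketch form, amounts to:

  (defmethod shutdown ((taskmaster taskmaster))
    ;; Nothing to tear down by default; just hand the object back.
    taskmaster)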

[[]] single-threaded-taskmaster: The simplest taskmaster implementation; probably not fit for battlefield use, because of poor load handling, blocking operations and so on.

[stt-ea] execute-acceptor: Calls accept-connections.

[stt-hic] handle-incoming-connection: Calls process-connection.

Shutdown is provided by the generic implementation above.
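
Both methods are plain delegations to the acceptor; a sketch, assuming taskmaster-acceptor is the accessor linking the taskmaster back to its acceptor:

  (defmethod execute-acceptor ((taskmaster single-threaded-taskmaster))
    ;; Runs on the caller's thread, so START blocks here.
    (accept-connections (taskmaster-acceptor taskmaster)))

  (defmethod handle-incoming-connection ((taskmaster single-threaded-taskmaster) socket)
    ;; Also blocking: no new connection is accepted until this returns.
    (process-connection (taskmaster-acceptor taskmaster) socket))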

[[]] multi-threaded-taskmaster: Defines a new acceptor-process field denoting a new thread whose sole work is accepting new connections. Thus, this taskmaster only provides an implementation for:

[mtt-ea] execute-acceptor: Starts a new "listener" thread which waits for new connections; essentially the same as the single-threaded version, only now accept-connections runs on a separate thread.

All the other methods are implemented by sub-multi-threaded-taskmasters, i.e. by one-thread-per-connection-taskmaster[2].
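
A sketch of the listener-thread setup, using bordeaux-threads where Hunchentoot actually goes through its own portability layer; the thread name here is made up:

  (defmethod execute-acceptor ((taskmaster multi-threaded-taskmaster))
    ;; Same ACCEPT-CONNECTIONS loop as before, only on its own thread,
    ;; remembered in ACCEPTOR-PROCESS so that SHUTDOWN can find it.
    (setf (acceptor-process taskmaster)
          (bt:make-thread
           (lambda ()
             (accept-connections (taskmaster-acceptor taskmaster)))
           :name "hunchentoot-listener")))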

[[]] one-thread-per-connection-taskmaster: As the name suggests, each new connection spawns a new thread when it is to be handled. Up to max-thread-count connections are handled simultaneously, each on its own thread; further connections, up to max-accept-count in total, are queued waiting for a thread to be allocated to them. If both counts are exceeded (or if the former is exceeded and the latter is not set) then an HTTP 503 is sent to the client.
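
The knobs just mentioned live as slots on the class; a sketch of the state involved -- the slot names follow the accessors used below, the initforms are my assumption:

  (defclass one-thread-per-connection-taskmaster (multi-threaded-taskmaster)
    ((thread-count :initform 0 :accessor taskmaster-thread-count)
     (thread-count-lock :initform (bt:make-lock) :reader taskmaster-thread-count-lock)
     (accept-count :initform 0 :accessor taskmaster-accept-count)
     (accept-count-lock :initform (bt:make-lock) :reader taskmaster-accept-count-lock)
     (max-thread-count :initarg :max-thread-count :initform nil
                       :reader taskmaster-max-thread-count)
     (max-accept-count :initarg :max-accept-count :initform nil
                       :reader taskmaster-max-accept-count)
     (wait-queue :initform (bt:make-condition-variable)
                 :reader taskmaster-wait-queue)
     (wait-lock :initform (bt:make-lock) :reader taskmaster-wait-lock)))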

Below I'll detail (top-down) the implementation of handle-incoming-connection, shutdown and (the additional) initialize-instance -- the execute-acceptor in use is that of the parent class.

[otpct-ii] initialize-instance: This is an ":after" method which, given a new taskmaster instance, does some sanity checking, i.e. it checks that: a. if max-accept-count is supplied, then so is max-thread-count, and b. the former is higher than the latter. The idea here is that the number of Hunchentoot worker threads is either unlimited[3] or limited by max-thread-count -- in the former case, max-accept-count doesn't really make sense, because new connections never get blocked in the wait-queue.
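
A sketch of the check; the actual coad signals its own parameter-error condition, plain error stands in for it here:

  (defmethod initialize-instance :after
      ((taskmaster one-thread-per-connection-taskmaster) &rest initargs)
    (declare (ignore initargs))
    (when (taskmaster-max-accept-count taskmaster)
      ;; a. a queue limit without a thread limit makes no sense...
      (unless (taskmaster-max-thread-count taskmaster)
        (error "MAX-ACCEPT-COUNT requires MAX-THREAD-COUNT"))
      ;; b. ...and the queue limit must sit above the thread limit.
      (unless (> (taskmaster-max-accept-count taskmaster)
                 (taskmaster-max-thread-count taskmaster))
        (error "MAX-ACCEPT-COUNT must be greater than MAX-THREAD-COUNT"))))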

[otpct-hic] handle-incoming-connection: Calls create-request-handler-thread; in other words, it creates a new thread to handle requests associated with the current connection.

[otpct-s3] shutdown: Joins (in the Unix sense of "thread join") the acceptor-process, i.e. the listener thread, and returns the current taskmaster.
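
In sketch form, with bt:join-thread standing in for whatever join primitive the portability layer actually provides:

  (defmethod shutdown ((taskmaster one-thread-per-connection-taskmaster))
    ;; Wait for the listener thread to finish, then hand back the object.
    (bt:join-thread (acceptor-process taskmaster))
    taskmaster)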

As observed, these methods are implemented using the following "support" methods and functions:

[otpct-itac] increment-taskmaster-accept-count: Atomically increments the taskmaster's accept-count.

[otpct-dtac] decrement-taskmaster-accept-count: Atomically decrements the taskmaster's accept-count.

[otpct-ittc] increment-taskmaster-thread-count: Atomically increments the taskmaster's thread-count.

[otpct-dttc] decrement-taskmaster-thread-count: Atomically decrements the taskmaster's thread-count; when thread-count falls under max-accept-count, it notifies the listener via note-free-connection that new connections may be handled.
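
The "atomicity" is plain lock discipline; a sketch of the thread-count pair (the accept-count pair is symmetrical, minus the notification):

  (defmethod increment-taskmaster-thread-count ((taskmaster one-thread-per-connection-taskmaster))
    (bt:with-lock-held ((taskmaster-thread-count-lock taskmaster))
      (incf (taskmaster-thread-count taskmaster))))

  (defmethod decrement-taskmaster-thread-count ((taskmaster one-thread-per-connection-taskmaster))
    (prog1
        (bt:with-lock-held ((taskmaster-thread-count-lock taskmaster))
          (decf (taskmaster-thread-count taskmaster)))
      ;; A worker just exited; if connections may be queued, wake the
      ;; listener side up so one of them can claim the freed slot.
      (when (and (taskmaster-max-accept-count taskmaster)
                 (< (taskmaster-thread-count taskmaster)
                    (taskmaster-max-accept-count taskmaster)))
        (note-free-connection taskmaster))))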

[otpct-nfc] note-free-connection: Signals the taskmaster's wait-queue; as the name suggests, it's used to announce that a "slot" has been freed up for a queued connection to be handled.

[otpct-wffc] wait-for-free-connection: Waits for "free" connection "slots" on the taskmaster's wait-queue; used in handle-incoming-connection% when there aren't (yet) enough resources to process a given connection.
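
Together, these two form the textbook condition-variable handshake; a sketch with bordeaux-threads primitives:

  (defmethod note-free-connection ((taskmaster one-thread-per-connection-taskmaster))
    ;; Wake up one waiter; the lock guards the condition variable.
    (bt:with-lock-held ((taskmaster-wait-lock taskmaster))
      (bt:condition-notify (taskmaster-wait-queue taskmaster))))

  (defmethod wait-for-free-connection ((taskmaster one-thread-per-connection-taskmaster))
    ;; Block until a worker slot frees up, re-checking the count on
    ;; each wakeup, since condition waits may return spuriously.
    (bt:with-lock-held ((taskmaster-wait-lock taskmaster))
      (loop until (< (taskmaster-thread-count taskmaster)
                     (taskmaster-max-thread-count taskmaster))
            do (bt:condition-wait (taskmaster-wait-queue taskmaster)
                                  (taskmaster-wait-lock taskmaster)))))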

[otpct-tmtr] too-many-taskmaster-requests: Calls acceptor-log-message; it logs the fact that the taskmaster's wait-queue is full or, if max-accept-count isn't set, that thread-count has reached its ceiling, i.e. max-thread-count.

[otpct-crht] create-request-handler-thread: Wrapped in a handler-case*: a. starts a new thread, b. which calls handle-incoming-connection%; in case of errors, c1. closes the current connection's socket stream, aborting the connection, and c2. logs the error.
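
A sketch of the same shape, approximating Hunchentoot's handler-case* with plain handler-case; the thread name format is an assumption, client-as-string is the helper described at the end:

  (defmethod create-request-handler-thread
      ((taskmaster one-thread-per-connection-taskmaster) socket)
    (handler-case
        ;; One fresh thread per connection, named after the peer.
        (bt:make-thread
         (lambda ()
           (handle-incoming-connection% taskmaster socket))
         :name (format nil "hunchentoot-worker-~A" (client-as-string socket)))
      (error (condition)
        ;; Could not spawn the worker: abort the connection and log.
        (close (usocket:socket-stream socket) :abort t)
        (acceptor-log-message (taskmaster-acceptor taskmaster) :error
                              "Error creating worker thread: ~A" condition))))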

[otpct-hic2] handle-incoming-connection%: The description contained in the function's definition is pretty good, but nevertheless, let's look at this in more detail: a. it calls increment-taskmaster-accept-count; b. it creates a local binding for process-connection%, which b1. calls process-connection b2. with the thread-count incremented around the call; c. it implements the logic described below.

c1. if max-thread-count is not set (i.e. the number of worker threads is unlimited), then process-connection%; otherwise, c2. if either max-accept-count is set and accept-count has reached that threshold, or max-accept-count isn't set and thread-count has reached the max-thread-count threshold, then call too-many-taskmaster-requests and send-service-unavailable-reply, which ends the current connection; otherwise, c3. if max-accept-count is set and thread-count has reached max-thread-count, then wait-for-free-connection, then, when unblocked, process-connection%; otherwise, c4. process-connection%.
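
Putting a. through c4. side by side, in sketch form (the helpers being the ones described above):

  (defmethod handle-incoming-connection%
      ((taskmaster one-thread-per-connection-taskmaster) socket)
    (increment-taskmaster-accept-count taskmaster)
    (flet ((process-connection% ()
             ;; Bracket the actual work with the thread counter.
             (increment-taskmaster-thread-count taskmaster)
             (unwind-protect
                  (process-connection (taskmaster-acceptor taskmaster) socket)
               (decrement-taskmaster-thread-count taskmaster))))
      (cond ;; c1. no thread limit configured: just go.
            ((null (taskmaster-max-thread-count taskmaster))
             (process-connection%))
            ;; c2. hard ceiling hit: refuse with a 503.
            ((if (taskmaster-max-accept-count taskmaster)
                 (>= (taskmaster-accept-count taskmaster)
                     (taskmaster-max-accept-count taskmaster))
                 (>= (taskmaster-thread-count taskmaster)
                     (taskmaster-max-thread-count taskmaster)))
             (too-many-taskmaster-requests taskmaster socket)
             (send-service-unavailable-reply taskmaster socket))
            ;; c3. workers all busy, but queueing allowed: wait, then go.
            ((and (taskmaster-max-accept-count taskmaster)
                  (>= (taskmaster-thread-count taskmaster)
                      (taskmaster-max-thread-count taskmaster)))
             (wait-for-free-connection taskmaster)
             (process-connection%))
            ;; c4. room available: go.
            (t (process-connection%)))))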

As can be observed, handle-incoming-connection% implements the bulk of the decision-making process for one-thread-per-connection taskmasters. This isn't very difficult to wrap one's head around, despite the apparent gnarl; simplifications, at the very least of an aesthetic nature, are possible, but I'll leave them as a potential exploration exercise for a later date -- or, if the reader desires to chime in...

[otpct-ssur] send-service-unavailable-reply: Yet another pile of gnarl. It wraps everything in an unwind-protect and catches all potential conditions. In this context, it sends an http-service-unavailable reply with the content set to the text returned by acceptor-status-message.

At the end, it calls decrement-taskmaster-accept-count and flushes and closes the connection stream.
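
A very rough sketch of the shape, with the stream construction and response sending heavily simplified -- make-socket-stream, send-response and +http-service-unavailable+ are acceptor-side pieces whose exact signatures should be taken as assumptions here:

  (defun send-service-unavailable-reply (taskmaster socket)
    (let ((stream (make-socket-stream socket (taskmaster-acceptor taskmaster))))
      (unwind-protect
           (handler-case
               ;; Minimal 503; the body comes from ACCEPTOR-STATUS-MESSAGE.
               (send-response (taskmaster-acceptor taskmaster) stream
                              +http-service-unavailable+)
             (error (condition)
               (acceptor-log-message (taskmaster-acceptor taskmaster) :error
                                     "Error sending 503: ~A" condition)))
        ;; Success or not: undo the accept-count bump, flush, close.
        (decrement-taskmaster-accept-count taskmaster)
        (finish-output stream)
        (close stream :abort t))))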

[otpct-cas] client-as-string: Convenience function used by create-request-handler-thread to give a name to the thread to be created, of the form "address:port".
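
A sketch, on top of usocket's peer accessors:

  (defun client-as-string (socket)
    ;; E.g. "10.0.0.1:51234"; used for naming the worker thread.
    (let ((address (usocket:get-peer-address socket))
          (port (usocket:get-peer-port socket)))
      (when (and address port)
        (format nil "~A:~A"
                (usocket:vector-quad-to-dotted-quad address)
                port))))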

To conclude: as can be seen, this is still a monster, albeit more organized than its acceptor brother. At this point, the big remaining pieces are requests, replies and handler dispatchers, which should provide us with almost[4] everything we need to actually have a Hunchentoot.

Meta: blog comments are for now still missing; thus readers are invited to leave them in #spyked or (assuming they know where they're heading) #trilema. Existing comments are preserved here, in [5].


  1. Though all methods are piled together in the same place and, say, default "thread count" implementations are provided for some taskmasters that have nothing to do with multithreading. This is IMHO a very early sign that whoever wrote the code must have been fucked in the head.

  2. Which isn't to say there couldn't be more than one multi-threaded-taskmaster implementation; we just haven't seen any others. The very same, however, could be said about e.g. shit: sure, it could come from a human, a cow or a horse, but for all intents and purposes it's still shit, so why the additional layer of indirection?

    Oh, it could be extended? Well, show me one of these extensions. And if there are any, why aren't they in Hunchentoot?

  3. Which is in principle a very stupid idea, since computing practice shows that resources are always limited, no matter how many extra compute nodes or how much disk space, bandwidth etc. you're willing to get. Then again, who's to say that the webserver implementation oughta tell the operator how he should run his program?

  4. There's tons of glue poured around this set of fundamentally TCPistic web-server-pieces. Some of this glue I've already reviewed in passing, while proceeding through the components that use it; some of it I haven't, and sooner or later I will, if only to establish what's to stay and what's to go once I start cleaning the whole thing up.

  5. No comments thus far.