[libdispatch-dev] libdispatch's thread model

Stacey Son sson at FreeBSD.org
Thu Jan 28 10:43:22 PST 2010


On Jan 28, 2010, at 11:36 AM, Michael Ash wrote:

> On Thu, Jan 28, 2010 at 12:20 PM, Daniel A. Steffen <dsteffen at apple.com> wrote:
>> 
>> On Jan 28, 2010, at 7:07 AM, Mario Schwalbe wrote:
>>> 1. Let's assume someone manages to submit n jobs that block each other, causing a
>>> deadlock on a machine with n processors. The thread pool implementation isn't able
>>> to execute any other jobs anymore, so the application can be considered erroneous/dead.
>>> The work queue implementation is still able to execute jobs waiting in the queue, so
>>> the application (as a whole) can still make some progress, but it cannot finish either,
>>> because - assuming the results of those blocked jobs are important - at some point it
>>> has to wait for their results (dispatch_group_wait()), which will block forever as well.
>>> 
>>> 2. A deadlock requires a cycle in the dependency graph. But submitted jobs that are
>>> still waiting in the queue haven't been executed yet and hence can't have acquired
>>> any resource that would prevent other jobs from executing.
>> 
>> Not necessarily a cycle; a dependency chain longer than the size of the thread pool is sufficient:
>>        sem1 = dispatch_semaphore_create(0);
>>        sem2 = dispatch_semaphore_create(0);
>>        dispatch_async(q, ^{dispatch_semaphore_wait(sem1, DISPATCH_TIME_FOREVER);});
>>        dispatch_async(q, ^{dispatch_semaphore_wait(sem2, DISPATCH_TIME_FOREVER); dispatch_semaphore_signal(sem1);});
>>        dispatch_async(q, ^{dispatch_semaphore_signal(sem2);});
>> 
>> this will deadlock if the pool size is 2 but work correctly with a larger pool (or with the workqueue implementation)
> 
> Or to put it another way, every dispatch block has an implicit
> dependency on the thread pool itself, so when you add that to the
> analysis, this situation *does* have a cycle: the first two blocks
> acquire thread pool resources while they wait on the semaphores, and
> the last one waits on thread pool resources before it can run.
> 
> This is something to watch out for even on the Mac, because the thread
> pool maxes out at, I believe, 512 threads. It's tough, but not
> impossible, to hit that number inadvertently and have everything seize
> up.

Yes, there is a finite limit to threads even on a Mac.  :)  The thread workqueue code in the kernel hooks in before and after thread context switches (besides hooking thread yields) so it can monitor how long threads have been blocked on I/O.  If threads have been blocked for a while, it adds more threads to the work pool.  If threads have been "parked" (not doing anything) for a while, it shrinks the pool again.
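
Very roughly, the sizing policy amounts to something like the following.  This is just an illustrative user-space sketch, not the actual kernel code; the struct, the function name, and all of the thresholds are made up:

    /*
     * Illustrative sketch only -- not the actual kernel code.  The struct,
     * the function, and the numbers here are all made up.
     */
    #include <stdbool.h>

    #define MAX_POOL_THREADS 512        /* the cap Michael mentioned above */

    struct pool_stats {
        int  nthreads;       /* threads currently in the pool                */
        int  nparked;        /* idle ("parked") threads waiting for work     */
        long blocked_ms;     /* how long the longest-blocked thread has been */
                             /* stuck on I/O (observed at context switches)  */
        long parked_ms;      /* how long the oldest parked thread has idled  */
        bool work_pending;   /* are there still items waiting in the queue?  */
    };

    /* Called from the context-switch/yield hooks to decide whether the
     * pool should grow, shrink, or stay the same size. */
    static int pool_adjustment(const struct pool_stats *s)
    {
        /* Threads have been blocked on I/O for a while and work is still
         * queued: add a thread (up to the cap) so the queue keeps moving. */
        if (s->work_pending && s->blocked_ms > 100 &&
            s->nthreads < MAX_POOL_THREADS)
            return +1;

        /* Threads have been parked doing nothing for a while: retire one. */
        if (s->nparked > 0 && s->parked_ms > 5000)
            return -1;

        return 0;
    }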

Before doing an implementation similar to Apple's, I partially implemented some kernel code that would send "hints" (via kevents) to a user-level thread manager about creating new threads.  Unfortunately, that didn't seem to work so well: it would stall for a relatively long time when a bunch of work items were submitted to the work queue.  Having the kernel create the new threads seems to do much better.
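
For what it's worth, the user-level side of that experiment was conceptually something like this.  Again, just a simplified sketch, not the code I actually had; worker_main() and the way the kernel posts the event are placeholders:

    /*
     * Simplified sketch of the user-level thread manager idea -- not the
     * code I actually had.  Assume the experimental kernel side posts an
     * event on this kqueue whenever it thinks the pool needs more threads,
     * with ev.data carrying the suggested count; worker_main() stands in
     * for whatever pulls items off the work queue.
     */
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <pthread.h>

    extern void *worker_main(void *arg);    /* placeholder work-queue worker */

    static void thread_manager(int kq)
    {
        struct kevent ev;

        for (;;) {
            /* Block until the kernel posts a "need more threads" hint. */
            if (kevent(kq, NULL, 0, &ev, 1, NULL) <= 0)
                continue;

            /* Spin up as many workers as the hint suggests. */
            for (long i = 0; i < (long)ev.data; i++) {
                pthread_t tid;
                if (pthread_create(&tid, NULL, worker_main, NULL) == 0)
                    pthread_detach(tid);
            }
        }
    }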

-stacey.

