[libdispatch-dev] Queue size

David Leimbach leimy2k at gmail.com
Thu Oct 7 08:54:31 PDT 2010


On Thu, Oct 7, 2010 at 8:14 AM, Dave Zarzycki <zarzycki at apple.com> wrote:

>
> On Oct 7, 2010, at 10:57 AM, Thomas Clement wrote:
>
> On Oct 7, 2010, at 3:10 PM, Dave Zarzycki wrote:
>
> On Oct 7, 2010, at 8:35 AM, Thomas Clement wrote:
>
>
> Queues cannot have variable widths - it's either 1 or "infinite".  That
> idea was considered but ultimately dropped given that it only made
> submitting things to queues even less deterministic in terms of where/how
> they would run, and there is already a way (again, semaphores) to get that
> behavior, so this would have only added complexity to all queues for little
> overall gain.
>
>
> I'm confused. What about the dispatch_queue_set_width() private function?
>
> Isn't this already implemented and functional?
>
>
> "Implemented and functional" is not the bar we use for making an API
> publicly available. It also has to be sustainable and supportable.
>
>
> Unfortunately, dispatch_queue_set_width() fails at the latter goals. That
> API encourages bad design, and practically speaking, it was only created so
> that developers can work around underlying bugs (latent serialization). We'd
> much rather see developers fix (or file bugs against) the underlying
> problems than see a hierarchy of long-term bandaids and workarounds be
> created.
>
>
> I understand.
> The problem I'm facing is that some of my dispatched code at some point
> pauses its execution (basically locking on pthread_cond_wait() or similar)
> and libdispatch spawns way too many threads as a result (hundreds of them).
>
>
> Thomas,
>
> That would be a "latent serialization" problem that was hinted at in my
> response. The dirty secret of our industry is that many thread safe
> libraries are not actually designed to achieve any concurrency when invoked
> concurrently. This creates huge problems for developers like yourself that
> are trying to use them concurrently.
>
> It is difficult, if not impossible, to remove these locks. I guess the
> solution is to limit the number of dispatched blocks on my queues using
> dispatch semaphores.
>
>
> The solution is to desynchronize the subsystem in question and switch to
> completion callbacks. In other words, do this kind of transform:
>
> Result *foo(Input *x);
>
> …becomes:
>
> void foo(Input *x, dispatch_queue_t completion_queue, void
> (^completion_callback)(Result *));
>

A concurrent producer/consumer pattern of sorts?  That's pretty nice, as
long as you know how to match the inputs submitted to the processing queue
with the outputs in the results queue. I guess the queuing of the work must
happen in FIFO order, but if there is another level of scheduling of that
work (by placing the input requests on different work queues), you'd still
need a way to match an output with its original input.

One could fairly easily come up with a monotonically increasing tag pool or
something similar to allow results to come back to the completion queue in a
different order than the one in which the worker queue received them.

I use this pattern in other concurrent systems now, actually, and some file
system protocols can work this way (9P2000, for example).

Dave

>
>
> davez
>
>
> _______________________________________________
> libdispatch-dev mailing list
> libdispatch-dev at lists.macosforge.org
> http://lists.macosforge.org/mailman/listinfo.cgi/libdispatch-dev
>
>
