In July I published GCD – Beta, my first thoughts about the GCD changes announced at WWDC 2016.

The GCD team has been hard at work since then and has fixed some of the issues I found while testing the newly released API at the time. I am going to revisit in full some of the changes that interest me most, and will leave the GCD – Beta post up as a reference.

Instantiating a serial queue:
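The original snippet did not survive, so here is a minimal sketch (the queue label is a placeholder of my own):

```swift
import Dispatch

// With no attributes specified, DispatchQueue is serial by default.
let serialQueue = DispatchQueue(label: "com.example.serial")
```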

The default concurrency configuration when creating a queue is serial.

Let’s create a concurrent queue:
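A minimal sketch (again, the label is a placeholder of mine):

```swift
import Dispatch

// Passing .concurrent in attributes makes the queue concurrent.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)
```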

You used to be able to pass .serial in attributes, but that is no longer the case; for a serial queue, use the previous example, since serial is the default behavior.
autoreleaseFrequency is another very interesting addition. No documentation is available yet, so stay tuned for more information. I find this addition very useful for import routines where memory can be tight and more control over draining pools is needed. For now I am going to use .workItem, assuming the pool is drained for each work item. I make this assumption because of the documentation found in the libdispatch source:
- .inherit – Dispatch queues with this autorelease frequency inherit the behavior from their target queue. This is the default behavior for manually created queues.
- .workItem – Dispatch queues with this autorelease frequency push and pop an autorelease pool around the execution of every block that was submitted to it asynchronously.
- .never – Dispatch queues with this autorelease frequency never set up an individual autorelease pool around the execution of a block that is submitted to it asynchronously. This is the behavior of the global concurrent queues.
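Putting that together, a queue configured for .workItem might look like this (label and QoS are my own illustrative choices):

```swift
import Dispatch

// A pool is pushed/popped around each asynchronously submitted block,
// which helps keep memory flat during long import routines.
let importQueue = DispatchQueue(label: "com.example.import",
                                qos: .utility,
                                autoreleaseFrequency: .workItem)
```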

So we now have a serial queue and a concurrent queue, and we understand how the pool behavior works. Let's look at DispatchWorkItem.

Given a concurrent queue, we dispatch work items with a low QoS (.utility) onto the queue; right after, we dispatch a work item with a very high QoS (.userInteractive), passing .enforceQoS as one of its flags. Why would this be useful? Let's say the concurrent queue is used in our app to process some user objects, and at some point a certain user object (userA) becomes very important due to UI interactions. We don't want to disrupt what has already been dispatched, but we do want to accelerate the work to be done on userA relative to other future dispatched items. With this new API, we can have work run at a much higher priority on a queue that otherwise runs at a much lower one. It is very important to note that GCD does not guarantee any specific order simply because we give userA's work a much higher QoS; but the combination of the QoS specified on the work item and the .enforceQoS flag makes it possible for GCD to prioritize userA's work over other work items with a lower QoS on the queue.
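The code listing is gone, but a sketch of the idea might look like this (the queue label and user names are hypothetical):

```swift
import Dispatch

let concurrentQueue = DispatchQueue(label: "com.example.users",
                                    attributes: .concurrent)

// Dispatch several low-priority (.utility) work items.
for name in ["userB", "userC", "userD"] {
    let item = DispatchWorkItem(qos: .utility) {
        print("processing \(name)")
    }
    concurrentQueue.async(execute: item)
}

// userA just became important: dispatch with a high QoS
// and .enforceQoS so the item's QoS wins over the queue's.
let highPriorityItem = DispatchWorkItem(qos: .userInteractive,
                                        flags: .enforceQoS) {
    print("processing userA")
}
concurrentQueue.async(execute: highPriorityItem)
```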

Running this code shows the highPriorityItem running before the other, lower-priority items, even though those items were dispatched asynchronously before the highPriorityItem.

Now let's take a look at dispatch groups. Given the function:
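The original function did not survive extraction; here is a hedged reconstruction in the spirit of dispatch_group_async (the label, loop count, and the fact that the group is returned so a caller can wait on it are my own choices):

```swift
import Dispatch

func performGroupWork() -> DispatchGroup {
    let group = DispatchGroup()
    let queue = DispatchQueue(label: "com.example.group",
                              attributes: .concurrent)

    for i in 1...3 {
        // async(group:) is the successor of dispatch_group_async.
        queue.async(group: group) {
            print("work item \(i) finished")
        }
    }

    // Fires once every block submitted with this group has completed.
    group.notify(queue: queue) {
        print("all group work done")
    }

    return group
}
```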

The result: each block submitted with the group runs, and only once all of them have completed does the notify block fire.

So, pretty much the same overall concept as the well-known dispatch_group_async, again with a cleaner API. One thing to note is that perform() can be called multiple times on the same DispatchWorkItem if needed.
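A quick sketch of that: perform() executes the work item synchronously on the current thread, and nothing stops you from invoking it again on the same item (the counter is my own illustrative device).

```swift
import Dispatch

var runCount = 0
let item = DispatchWorkItem {
    runCount += 1
}

// perform() runs the block synchronously; it can be called repeatedly.
item.perform()
item.perform()
// runCount is now 2
```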


We could cover many more GCD examples; GCD can be used in a wide variety of applications to solve concurrent programming needs. Here we've just taken a look at some of the final API released by Apple and some of its concepts. A few more examples and new APIs can be found here (these examples were written before some of the recent final changes to the API; overall the concepts are still pretty much the same, but you'll notice some syntax changes, for example in the way queues are instantiated, and more).
