
mikeash.com: Friday Q&A 2017-10-27: Locks, Thread Safety, and Swift: 2017 Edition

In the dark ages of Swift 1, I wrote an article about locks and thread safety in Swift. The march of time has made it quite outdated, and reader Seth Willits suggested I update it for the modern age, so here it is!

This article will repeat some material from the old one, with changes to bring it up to date, plus some discussion of how things have changed. It is not necessary to read the previous article before reading this one.

A quick recap on locks
A lock, or mutex, is a construct that ensures only one thread is active in a particular region of code at any time. They are typically used to ensure that multiple threads accessing a mutable data structure all see a consistent view of it. There are several kinds of locks:

  1. Blocking locks sleep a thread while it waits for another thread to release the lock. This is the usual behavior.
  2. Spinlocks use a busy loop to repeatedly check whether a lock has been released. This is more efficient if waiting is rare, but wastes CPU time if waiting is common.
  3. Reader/writer locks allow multiple reader threads to enter a region at the same time, but exclude all other threads (including readers) when a writer thread acquires the lock. This can be useful, as many data structures are safe to read from multiple threads simultaneously, but unsafe to write to while other threads are either reading or writing.
  4. Recursive locks allow a single thread to acquire the same lock multiple times. Non-recursive locks can deadlock, crash, or otherwise misbehave when re-acquired from the same thread (a small sketch follows this list).
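To make the recursive case concrete, here is a minimal sketch (the Counter class and its methods are made up for illustration) of a nested acquisition that works with NSRecursiveLock but would deadlock with a plain NSLock:

    import Foundation

    class Counter {
        private let lock = NSRecursiveLock()
        private var value = 0

        func increment(by amount: Int) {
            lock.lock()
            defer { lock.unlock() }
            value += amount
        }

        func incrementTwice() {
            lock.lock()
            defer { lock.unlock() }
            // Re-entering the lock here is fine, because NSRecursiveLock allows
            // the same thread to acquire it multiple times. With a plain NSLock,
            // these nested calls would deadlock.
            increment(by: 1)
            increment(by: 1)
        }
    }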

APIs
Apple's APIs provide a variety of mutex facilities. This is a long, but not exhaustive, list:

  1. pthread_mutex_t.
  2. pthread_rwlock_t.
  3. DispatchQueue.
  4. OperationQueue, when configured to be serial.
  5. NSLock.
  6. os_unfair_lock.

In addition to these, Objective-C provides the @synchronized language construct, currently implemented on top of pthread_mutex_t. Unlike the others, @synchronized doesn't use an explicit lock object, but rather treats any Objective-C object as if it were a lock. A @synchronized(someObject) section will block any other @synchronized sections that use the same object pointer.
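Swift itself has no @synchronized, but the runtime functions behind it, objc_sync_enter and objc_sync_exit, can be called directly. A rough sketch, with a made-up lock object and counter, of treating an arbitrary object as the lock:

    import Foundation

    let lockObject = NSObject()
    var protectedValue = 0

    func incrementSafely() {
        // objc_sync_enter/objc_sync_exit are the runtime calls behind @synchronized;
        // any Objective-C object can serve as the lock token.
        objc_sync_enter(lockObject)
        defer { objc_sync_exit(lockObject) }
        protectedValue += 1
    }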

  1. pthread_mutex_t is a blocking lock that can optionally be configured as a recursive lock.
  2. pthread_rwlock_t is a blocking reader/writer lock.
  3. DispatchQueue can be used as a blocking lock. It can be used as a reader/writer lock by configuring it as a concurrent queue and using barrier blocks (see the sketch after this list). It also supports asynchronous execution of the locked region.
  4. OperationQueue can be used as a blocking lock. Like DispatchQueue, it supports asynchronous execution of the locked region.
  5. NSLock is a blocking lock as an Objective-C class.
  6. NSRecursiveLock is a recursive lock, as the name suggests.
  7. os_unfair_lock is a lower-level, less sophisticated blocking lock. @synchronized is a blocking recursive lock.
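Here is the reader/writer pattern from item 3 sketched out; the queue label, dictionary, and accessor functions are made up for illustration:

    import Dispatch

    // A concurrent queue used as a reader/writer lock. Reads run concurrently;
    // writes use a barrier block so they exclude everything else on the queue.
    let queue = DispatchQueue(label: "com.example.protected", attributes: .concurrent)
    var protectedDictionary: [String: Int] = [:]

    func readValue(forKey key: String) -> Int? {
        // Plain sync: many readers may be in flight at once.
        return queue.sync { protectedDictionary[key] }
    }

    func setValue(_ value: Int, forKey key: String) {
        // Barrier: waits for in-flight readers, runs alone, then lets readers resume.
        queue.async(flags: .barrier) {
            protectedDictionary[key] = value
        }
    }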

Spinlocks, or the Lack Thereof
I mentioned spinlocks as a kind of lock, but none of the APIs listed here are spinlocks. This is a big change from the previous article, and it's the main reason I'm writing this update.

Spinlocks are very simple and efficient under the right conditions. Unfortunately, they're a little too simple for the complexities of the modern world.

The problem is thread priorities. When there are more runnable threads than CPU cores, higher-priority threads get preference. This is a useful feature, because CPU cores are always a limited resource and you don't want some background network operation stealing time from your user interface while the user is trying to use it.

When a high-priority thread gets stuck waiting for a low-priority thread to complete some work, but the high-priority thread prevents the low-priority thread from actually doing that work, the result can be long delays or even a permanent deadlock.

The deadlock scenario looks like this, where H is a high-priority thread and L is a low-priority thread:

  1. L acquires the spinlock.
  2. L starts doing some work.
  3. H becomes ready to run and preempts L.
  4. H tries to acquire the spinlock, but fails because L still holds it.
  5. H begins to spin, repeatedly trying to acquire the lock, and monopolizes the CPU.
  6. H can't proceed until L finishes its work. L can't finish its work unless H stops spinning.
  7. Sadness.

There are ways to solve this problem. For example, H can donate its priority to L in step 4 so that L can finish its work in a timely fashion. It's possible to build a spinlock that solves this problem, but Apple's old spinlock API, OSSpinLock, does not.

This was fine for a long time, since thread priorities weren't widely used on Apple's platforms, and the priority system used dynamic priorities that kept the deadlock from persisting for too long. More recently, quality-of-service classes have made differing priorities much more common, which makes this deadlock scenario much harder to avoid.

OSSpinLock, which did a fine job for a long time, stopped being a good idea with the release of iOS 8 and macOS 10.10. It has now been formally deprecated. The replacement is os_unfair_lock, which fills the same overall purpose as a low-level, unsophisticated, cheap lock, but is sophisticated enough to avoid problems with priorities.

Value Types
Note that pthread_mutex_t, pthread_rwlock_t, and os_unfair_lock are value types, not reference types. That means that if you use = on them, you make a copy. This is important, because these types can't be copied! If you copy one of the pthread types, the copy will be unusable and may crash when you try to use it. The pthread functions that work with these types assume the values are at the same memory addresses as where they were initialized, so moving them somewhere else afterwards is a bad idea. os_unfair_lock won't crash, but you get a completely separate lock out of it, which is never what you want.

If you use these types, you must be careful never to copy them, whether explicitly with the = operator, or implicitly by, for example, embedding them in a struct or capturing them in a closure.

In addition, since locks are inherently mutable objects, this means you need to declare them with var rather than let.

The others are reference types, meaning they can be passed around at will, and can be declared with let.

Initialization
You have to be careful with the pthread locks, because you can create a value using the empty () initializer, but that value won't be a valid lock. These locks must be separately initialized using pthread_mutex_init or pthread_rwlock_init:

    var mutex = pthread_mutex_t()
    pthread_mutex_init(&mutex, nil)

It is tempting to write an extension on these types that wraps up the initialization. However, there's no guarantee that an initializer works on the variable directly rather than on a copy. Since these types can't be copied safely, such an extension can't be written safely, unless you have it return a pointer or a wrapper class.

If you use these APIs, don't forget to call the corresponding destroy function when it's time to dispose of the lock.
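One way to keep the init/destroy pairing (and the no-copying rule) under control is to hide the pthread type behind a small wrapper class that allocates it once and never moves it. A minimal sketch, not part of the APIs themselves:

    import Darwin

    // Allocates the mutex once, initializes it in place, and destroys/deallocates
    // it when the wrapper goes away. The pointer never moves, so the
    // "don't copy pthread types" rule is respected.
    final class Mutex {
        private let mutexPtr: UnsafeMutablePointer<pthread_mutex_t>

        init() {
            mutexPtr = UnsafeMutablePointer<pthread_mutex_t>.allocate(capacity: 1)
            pthread_mutex_init(mutexPtr, nil)
        }

        deinit {
            pthread_mutex_destroy(mutexPtr)
            mutexPtr.deallocate()
        }

        func lock() { pthread_mutex_lock(mutexPtr) }
        func unlock() { pthread_mutex_unlock(mutexPtr) }
    }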

Use
DispatchQueue has a callback-based API, which makes it natural to use safely. Depending on whether you want the protected code to run synchronously or asynchronously, call sync or async and pass in the code to run:

    queue.sync(execute: { ... })
    queue.async(execute: { ... })

The sync API is nice enough to capture the return value of the protected code and provide it as the return value of the sync method:

    let value = queue.sync(execute: { return self.protectedProperty })

You can even throw errors inside the protected block and they will propagate out, as sketched below. OperationQueue is similar, although it has no built-in way to propagate return values or errors. You'd have to build that yourself, or use DispatchQueue instead.
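To illustrate the error propagation through DispatchQueue, here is a small sketch using a made-up throwing parse function; sync rethrows whatever the block throws:

    import Dispatch

    let queue = DispatchQueue(label: "com.example.lock")

    enum ParseError: Error { case malformed }

    // A hypothetical throwing operation, just for illustration.
    func parse(_ string: String) throws -> Int {
        guard let value = Int(string) else { throw ParseError.malformed }
        return value
    }

    func parsedValue(from string: String) throws -> Int {
        // sync rethrows: an error thrown inside the block propagates to the caller.
        return try queue.sync {
            try parse(string)
        }
    }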

The other APIs require separate locking and unlocking calls, which can get exciting when you forget one of them. The calls look like this:

    pthread_mutex_lock(&mutex)
    ...
    pthread_mutex_unlock(&mutex)

    nslock.lock()
    ...
    nslock.unlock()

    os_unfair_lock_lock(&lock)
    ...
    os_unfair_lock_unlock(&lock)

Since these APIs are almost identical, I'll use NSLock for the remaining examples. The others work the same way, just with different names.

When the protected code is simple, this works well. But what if it's more complicated? For example:

    nslock.lock()
    if earlyExitCondition {
        return nil
    }
    let value = compute()
    nslock.unlock()
    return value

Oops, sometimes you don't unlock the lock! This is a great way to create hard-to-find bugs. Maybe you're always disciplined with your return statements and never do this. What if you throw an error?

    nslock.lock()
    let value = try compute()
    nslock.unlock()
    return value

Same problem! Maybe you're really disciplined and would never do that either. Then you're safe, but even then the code is a little ugly:

    nslock.lock()
    let value = compute()
    nslock.unlock()
    return value

The obvious solution to this is to use Swift's defer mechanism. Immediately after locking, defer the unlock call. That way, no matter how you exit the code, the lock will be released:

    nslock.lock()
    defer { nslock.unlock() }
    return compute()

This works with early returns, thrown errors, or just plain falling off the end of the code.

It's still annoying to write two lines every time, so we can wrap it all up in a callback-based function like the one DispatchQueue has:

    func withLocked<T>(_ lock: NSLock, _ f: () throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try f()
    }

    let value = withLocked(lock, { return self.protectedProperty })

When implementing this for the value types, you have to be sure to take a pointer to the lock rather than the lock itself. Remember, you don't want to copy these things! The pthread version looks like this:

    func withLocked<T>(_ mutexPtr: UnsafeMutablePointer<pthread_mutex_t>, _ f: () throws -> T) rethrows -> T {
        pthread_mutex_lock(mutexPtr)
        defer { pthread_mutex_unlock(mutexPtr) }
        return try f()
    }

    let value = withLocked(&mutex, { return self.protectedProperty })
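Since NSLock is a reference type, you could equally hang this helper off the class itself as an extension; a small sketch of that variant, so call sites read as lock.withLocked { ... }:

    extension NSLock {
        // Method-style variant of withLocked for reference-type locks.
        func withLocked<T>(_ f: () throws -> T) rethrows -> T {
            lock()
            defer { unlock() }
            return try f()
        }
    }

    let value = lock.withLocked { return self.protectedProperty }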

Choosing Your Lock API
DispatchQueue is the obvious favorite. It has a nice Swifty API and is pleasant to use. The dispatch library gets a lot of attention from Apple, which means it can be expected to perform well, work reliably, and get lots of nice new features.

DispatchQueue enables a lot of nifty advanced uses, such as scheduling timers or event sources to fire directly on the queue you're using as a lock, so that the handlers are automatically synchronized with everything else using the queue (see the sketch below). The ability to set target queues allows expressing complex lock hierarchies. Custom concurrent queues can easily be used as reader/writer locks. You only have to change a single letter to run protected code asynchronously on a background thread instead of synchronously. And the API is easy to use and hard to misuse. It's wins all around. That's why GCD quickly became one of my favorite APIs, and it's still one to this day.
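As an example of the timer trick, here is a sketch (the queue label and counter are made up) of a timer source whose handler is automatically serialized with any other code using the same queue as its lock:

    import Dispatch

    let queue = DispatchQueue(label: "com.example.state")
    var protectedCount = 0

    // The timer's handler runs on the same queue used as the lock, so it is
    // automatically serialized with any queue.sync { ... } access elsewhere.
    let timer = DispatchSource.makeTimerSource(queue: queue)
    timer.schedule(deadline: .now() + 1, repeating: 1)
    timer.setEventHandler {
        protectedCount += 1
    }
    timer.resume()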

For all of that, it's not perfect. A dispatch queue is represented by an object in memory, so there's a bit of overhead. Queues lack some niche features, such as condition variables or recursion. Occasionally it's useful to make separate lock and unlock calls rather than being forced into a callback-based API. DispatchQueue is usually the right choice, and it's a good default when you don't know what to pick, but sometimes there's reason to use the others.

os_unfair_lock can be a good choice when per-lock overhead matters (because, for some reason, you have a huge number of them) and you don't need fancy features. It's implemented as a single 32-bit integer that you can place wherever you need it, so the overhead is small.

As the name hints, one of the features os_unfair_lock is missing is fairness. Lock fairness means there is at least some attempt to ensure that all the different threads waiting on a lock get a chance to acquire it. Without fairness, it's possible for a thread that rapidly releases and re-acquires the lock to monopolize it while other threads wait.

Whether this is a problem depends on what you're doing. There are some cases where fairness is needed, and some where it doesn't matter at all. The lack of fairness lets os_unfair_lock have better performance, so it can provide an edge in cases where fairness isn't necessary.

pthread_mutex sits somewhere in the middle. It's considerably larger than os_unfair_lock, at 64 bytes, but you can still control where it's stored. It implements fairness, although this is a detail of Apple's implementation rather than part of the API specification. It also provides various other advanced features, such as the ability to make the mutex recursive, and fancy thread scheduling options.
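For instance, making a pthread mutex recursive is done through its attributes; a minimal sketch:

    import Darwin

    var attr = pthread_mutexattr_t()
    pthread_mutexattr_init(&attr)
    // PTHREAD_MUTEX_RECURSIVE lets the owning thread re-acquire the mutex.
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)

    var recursiveMutex = pthread_mutex_t()
    pthread_mutex_init(&recursiveMutex, &attr)
    pthread_mutexattr_destroy(&attr)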

pthread_rwlock provides a reader/writer lock. It takes up a whopping 200 bytes and doesn't offer much in the way of interesting features, so there seems to be little reason to use it over a concurrent DispatchQueue.
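For completeness, a sketch of its basic calls:

    import Darwin

    var rwlock = pthread_rwlock_t()
    pthread_rwlock_init(&rwlock, nil)

    // Multiple threads may hold the read lock simultaneously.
    pthread_rwlock_rdlock(&rwlock)
    // ... read the protected data ...
    pthread_rwlock_unlock(&rwlock)

    // The write lock excludes readers and other writers.
    pthread_rwlock_wrlock(&rwlock)
    // ... modify the protected data ...
    pthread_rwlock_unlock(&rwlock)

    pthread_rwlock_destroy(&rwlock)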

NSLock is a thin wrapper around pthread_mutex. It's hard to come up with a compelling use case for it, but it could be useful if you need explicit lock/unlock calls and don't want the hassle of manually initializing and destroying a pthread_mutex.

OperationQueue offers a callback-based API like DispatchQueue, with some advanced features for things like dependencies between operations, but without many of the other features DispatchQueue offers. There's little reason to use OperationQueue as a locking API, although it can be useful for other things.

In short: DispatchQueue is probably the right choice. Under certain circumstances, os_unfair_lock may be better. The others are usually not the ones to use.

Conclusion
Swift has no language facilities for thread synchronization, but the APIs make up for it. GCD is one of Apple's crown jewels, and the Swift API for it is great. For the rare cases where it doesn't fit, there are many other options to choose from. We don't have @synchronized or atomic properties, but we have things that are better.

That wraps it up for this time. Check back again for more exciting stuff. If you get bored in the meantime, buy one of my books! Friday Q&A is driven by reader ideas, so if you have a topic you'd like to see covered here, please send it in!

Do you like this article? I sell whole books full of them! Volumes II and III are now out! They're available as ePub, PDF, print, and on iBooks and Kindle. Click here for more information.

