Tidbits

Semaphore implementation with adjustable concurrency limit

A Semaphore can be used to block threads from continuing execution while another thread uses specific code or resources. But unlike a lock or Monitor, it can grant more than one thread access at a time.
For that purpose the existing implementation takes a limit on initialization, representing the maximum number of threads that may “enter” before any further thread has to wait. That limit cannot be changed afterwards, and changing it was exactly the feature I recently needed.
My solution was my own Semaphore-like class that mimics the behavior, without me having to worry about the details of safely and efficiently blocking a thread until the counter ticks down.

/// <summary>
/// A simple alternative to <see cref="Semaphore"/> that allows for changes to the thread limit
/// </summary>
public class VariableLimitSemaphore : IDisposable
{
    private readonly EventWaitHandle _waitHandle;
    private readonly object _entryLock;
    private readonly object _counterLock;
    private int _limit;
    private int _counter;
 
    /// <summary>
    /// The current number of threads that have been granted entry
    /// </summary>
    public int CurrentCounter
    {
        get
        {
            lock (_counterLock)
                return _counter;
        }
    }
 
    /// <summary>
    /// The maximum number of threads allowed entry
    /// </summary>
    public int Limit
    {
        get
        {
            lock (_counterLock)
                return _limit;
        }
        set
        {
            if (value < 1)
                throw new ArgumentOutOfRangeException(nameof(value));
 
            lock (_counterLock)
            {
                _limit = value;
                if (_counter < _limit)
                    _waitHandle.Set();   // raising the limit can free a slot for a waiting thread
                else
                    _waitHandle.Reset(); // the limit dropped to or below the current count: close the gate
            }
        }
    }
 
    /// <summary>
    /// Creates a new <see cref="VariableLimitSemaphore"/>
    /// </summary>
    /// <param name="initialLimit">The initial limit for concurrent threads</param>
    /// <exception cref="ArgumentOutOfRangeException"><paramref name="initialLimit"/> is less than 1</exception>
    public VariableLimitSemaphore(int initialLimit)
    {
        if (initialLimit < 1)
            throw new ArgumentOutOfRangeException(nameof(initialLimit));
 
        _limit = initialLimit;
        _counter = 0;
 
        _waitHandle = new EventWaitHandle(true, EventResetMode.AutoReset);
        _entryLock = new object();
        _counterLock = new object();
    }
 
    /// <summary>
    /// Blocks the current thread until entry is permitted
    /// </summary>
    public void Wait()
    {
        lock (_entryLock)
        {
            _waitHandle.WaitOne();
            lock (_counterLock)
            {
                if (++_counter < _limit)
                    _waitHandle.Set();
            }
        }
    }
 
    /// <summary>
    /// Frees up a single entry for use by another (waiting) thread
    /// </summary>
    public void Release()
    {
        lock (_counterLock)
        {
            if (--_counter < _limit)
                _waitHandle.Set();
        }
    }
 
    /// <inheritdoc />
    public void Dispose()
    {
        _waitHandle?.Dispose();
    }
}

At the core of this is an EventWaitHandle, which is used to let threads pass in a controlled manner, one by one. Each time a thread “enters”, it increments the counter, compares it to the limit, and lets another thread in if the limit allows it.
Similarly, when a thread “leaves”, the counter is decremented, compared with the limit, and another thread might be granted access.
The only tricky part was eliminating race conditions. For example, without _entryLock queueing up threads even before the WaitHandle, a thread could pass the handle, pause before incrementing the counter, and thereby make a leaving thread believe there is room for yet another thread.

Things you might want to add are overloads for Wait that include cancellation or timeouts.
Also, this implementation starts accepting threads as soon as it is initialized, but it could easily be extended with more options.
Furthermore, if you have a look at the System.Threading.Semaphore source code, you’ll notice a lot of work goes into making that thing run reliably. A lot of consideration I didn’t put into this. It is enough for my simple use cases, but I wouldn’t trust it with critical code!
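For illustration, usage could look something like this. The download scenario and its names are invented for the example; note the try/finally so Release is always called, even when the work throws:

```csharp
// Hypothetical scenario: throttle concurrent downloads, with a limit
// that can be raised or lowered while the semaphore is in use.
var semaphore = new VariableLimitSemaphore(2);

void Download(string url)
{
    semaphore.Wait();
    try
    {
        // ... perform the actual download ...
    }
    finally
    {
        semaphore.Release();
    }
}

// Later, when more bandwidth is available, allow four concurrent downloads:
semaphore.Limit = 4;
```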

Tidbits

Single thread worker queue

This is a simple class that allows you to queue up Actions for processing on a single, dedicated thread.
Once the queue is empty, the thread terminates.
When new items are queued after the thread has terminated, a new one is created.

Code please!


What is this for?
Not sure if you have that situation actually. So maybe nothing?

Basically, I had some logic that needed to do some work at irregular intervals.
Now you’re thinking “go for Task or ThreadPool”.
Both good solutions.
But the things I needed them to perform were resource hogs (well, still are), mostly in terms of processing time.
That’s why I wanted a dedicated thread, ideally one for which I could set a higher priority.

The downside of that is, of course, either having a thread constantly idling or paying the overhead of creating new threads all the time.
Luckily, the stuff the thread was meant for usually came in bulk: a long time of nothing followed by a quick burst of a handful of actions. Which in turn meant I could keep the thread running for a batch of actions, saving some overhead, but terminate it until the next batch came in, avoiding idle threads.

In the end this system is a compromise between the two options. Not perfect, not optimal, but good enough for me.
And maybe for you? The basic premise isn’t exactly uncommon, and it’s one of those things you spend less than ten minutes and a few lines of code on when you actually need it. This is just a more formal implementation for reuse.


The code should be pretty self-explanatory.
Admittedly, this is a bit over-engineered for what it is…
Again, it’s one of those things you just write yourself when needed, with only what you actually need. And usually it is mixed into something else rather than having its own class. At least that’s how I see it.

/// <summary>
/// Processes a queue of actions on a dedicated thread that stays alive until the queue is finished
/// </summary>
public class QueuedThreadInvoker
{
    private readonly object _lock;
    private readonly Queue<Action> _queue;
 
    private readonly string _name;
    private readonly ThreadPriority _priority;
    private readonly ApartmentState _apartmentState;
    private readonly bool _isBackground;
    private Thread _thread;
 
    /// <summary>
    /// Creates a new <see cref="QueuedThreadInvoker"/> for queueing actions on a custom thread
    /// </summary>
    /// <param name="name">The name of the thread</param>
    /// <param name="priority">The priority of the thread</param>
    /// <param name="apartmentState">The apartment state of the thread</param>
    /// <param name="isBackground">Whether to run the thread in the back- or foreground</param>
    public QueuedThreadInvoker(string name = null, ThreadPriority priority = ThreadPriority.Normal, ApartmentState apartmentState = ApartmentState.MTA, bool isBackground = true)
    {
        _name = name ?? nameof(QueuedThreadInvoker);
        _priority = priority;
        _apartmentState = apartmentState;
        _isBackground = isBackground;
 
        _lock = new object();
        _queue = new Queue<Action>();
        _thread = null;
    }
 
    /// <summary>
    /// Triggered from the worker thread when an action throws an exception
    /// </summary>
    public event Action<Exception> UnhandledException;
 
    /// <summary>
    /// Adds an action to the queue. If no thread is active, a new thread is created for processing it.
    /// </summary>
    /// <param name="action">The action to queue up</param>
    /// <returns>True if a new thread was created, false if the action was added to an active queue</returns>
    /// <exception cref="ArgumentNullException"><paramref name="action"/> is null</exception>
    public bool Invoke(Action action)
    {
        if (action is null)
            throw new ArgumentNullException(nameof(action));
 
        lock (_lock)
        {
            _queue.Enqueue(action);
 
            if (!(_thread is null))
                return false;
 
            _thread = new Thread(ProcessQueue)
            {
                Name = _name,
                Priority = _priority
            };
            _thread.SetApartmentState(_apartmentState);
            _thread.IsBackground = _isBackground;
            _thread.Start();
        }
 
        return true;
    }
 
    /// <summary>
    /// Blocks the current thread until the active queue is completed
    /// </summary>
    public void WaitForQueueToFinish()
    {
        Thread thread;
        lock (_lock)
            thread = _thread;
 
        thread?.Join();
    }
 
    private void ProcessQueue()
    {
        bool itemsInQueue;
        Action action = null;
 
        lock (_lock)
        {
            itemsInQueue = _queue.Count > 0;
            if (itemsInQueue)
                action = _queue.Dequeue();
            else
                _thread = null;
        }
 
        while (itemsInQueue)
        {
            try
            {
                action.Invoke();
            }
            catch (Exception e)
            {
                try
                {
                    UnhandledException?.Invoke(e);
                }
                catch
                {
                    // ignored
                }
            }
 
            lock (_lock)
            {
                itemsInQueue = _queue.Count > 0;
                if (itemsInQueue)
                    action = _queue.Dequeue();
                else
                    _thread = null;
            }
        }
    }
}
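A quick usage sketch (DoHeavyWork stands in for whatever your actions actually do):

```csharp
var invoker = new QueuedThreadInvoker("MyWorker", ThreadPriority.AboveNormal);
invoker.UnhandledException += e => Console.WriteLine($"Action failed: {e}");

// The first call typically spins up the thread; while the queue is still
// being processed, further calls just join it and return false.
invoker.Invoke(() => DoHeavyWork(1));
invoker.Invoke(() => DoHeavyWork(2));

// Optionally block until the batch is done; the thread then terminates
// and the next Invoke will create a fresh one.
invoker.WaitForQueueToFinish();
```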

Tidbits

Wrapper for locking IList

Something simple and quick. Might not fit your needs, so check that first.

/// <summary>
/// A wrapper for <see cref="IList{T}"/> that blocks simultaneous access from separate threads.
/// </summary>
/// <typeparam name="T">The type of elements in the list</typeparam>
public class LockedListWrapper<T> : IList<T>
{
    private readonly IList<T> _list;
    private readonly object _lockObject;
 
    /// <summary>
    /// Creates a new <see cref="LockedListWrapper{T}"/> with a private lock
    /// </summary>
    /// <param name="list">The list to wrap around</param>
    public LockedListWrapper(IList<T> list) : this(list, new object())
    {
    }
 
    /// <summary>
    /// Creates a new <see cref="LockedListWrapper{T}"/> using a specific object to lock access
    /// </summary>
    /// <param name="list">The list to wrap around</param>
    /// <param name="lockObject">The object to lock access with</param>
    public LockedListWrapper(IList<T> list, object lockObject)
    {
        _list = list ?? throw new ArgumentNullException(nameof(list));
        _lockObject = lockObject ?? throw new ArgumentNullException(nameof(lockObject));
    }
 
    /// <inheritdoc />
    public void Add(T item)
    {
        lock (_lockObject)
            _list.Add(item);
    }
 
    /// <inheritdoc />
    public void Clear()
    {
        lock (_lockObject)
            _list.Clear();
    }
 
    /// <inheritdoc />
    public bool Contains(T item)
    {
        lock (_lockObject)
            return _list.Contains(item);
    }
 
    /// <inheritdoc />
    public void CopyTo(T[] array, int arrayIndex)
    {
        lock (_lockObject)
            _list.CopyTo(array, arrayIndex);
    }
 
    /// <inheritdoc />
    public bool Remove(T item)
    {
        lock (_lockObject)
            return _list.Remove(item);
    }
 
    /// <inheritdoc />
    public int Count
    {
        get
        {
            lock (_lockObject)
                return _list.Count;
        }
    }
 
    /// <inheritdoc />
    public bool IsReadOnly
    {
        get
        {
            lock (_lockObject)
                return _list.IsReadOnly;
        }
    }
 
    /// <inheritdoc />
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
 
    /// <inheritdoc />
    public IEnumerator<T> GetEnumerator()
    {
        lock (_lockObject)
            return new List<T>(_list).GetEnumerator();
    }
 
    /// <inheritdoc />
    public int IndexOf(T item)
    {
        lock (_lockObject)
            return _list.IndexOf(item);
    }
 
    /// <inheritdoc />
    public void Insert(int index, T item)
    {
        lock (_lockObject)
            _list.Insert(index, item);
    }
 
    /// <inheritdoc />
    public void RemoveAt(int index)
    {
        lock (_lockObject)
            _list.RemoveAt(index);
    }
 
    /// <inheritdoc />
    public T this[int index]
    {
        get
        {
            lock (_lockObject)
                return _list[index];
        }
        set
        {
            lock (_lockObject)
                _list[index] = value;
        }
    }
}

What is this good for though?


Imagine a class like this:

public class Example
{
    public int SimpleValue { get; set; }
 
    public List<int> NotSoSimpleValue { get; }
 
    public Example()
    {
        NotSoSimpleValue = new List<int>();
    }
}

What if you need to share this between threads? You might get away with the integer, but a complex object like List will run into issues at some point.
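To make that concrete, here is a sketch of the kind of unsynchronized access that goes wrong. List<T> makes no thread-safety guarantees, so the exact failure varies: lost items, exceptions, or a corrupted internal state.

```csharp
var example = new Example();

void AddMany()
{
    for (int i = 0; i < 100000; ++i)
        example.NotSoSimpleValue.Add(i);
}

var t1 = new Thread(AddMany);
var t2 = new Thread(AddMany);
t1.Start(); t2.Start();
t1.Join(); t2.Join();

// Without synchronization, Count usually ends up below 200000,
// assuming none of the Add calls threw along the way.
Console.WriteLine(example.NotSoSimpleValue.Count);
```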

So you try to add locks:

public class Example
{
    private readonly object _lock;
    private readonly List<int> _notSoSimpleValue;
    private int _simpleValue;
 
    public int SimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _simpleValue;
            }
        }
        set
        {
            lock (_lock)
            {
                _simpleValue = value;
            }
        }
    }
 
    public List<int> NotSoSimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _notSoSimpleValue;
            }
        }
    }
 
    public Example()
    {
        _lock = new object();
        _notSoSimpleValue = new List<int>();
    }
}

And sure, you don’t have to worry about the integer being half-written while another thread reads it.
But the List property only returns a reference to the object. The lock prevents two threads from getting that reference at the same time, but not from interacting with what it references.

The wrapper above now allows us to add locks to that reference as well:

public class Example
{
    private readonly object _lock;
    private readonly List<int> _notSoSimpleValue;
    private int _simpleValue;
 
    public int SimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _simpleValue;
            }
        }
        set
        {
            lock (_lock)
            {
                _simpleValue = value;
            }
        }
    }
 
    public LockedListWrapper<int> NotSoSimpleValue { get; }
 
    public Example()
    {
        _lock = new object();
        _notSoSimpleValue = new List<int>();
        NotSoSimpleValue = new LockedListWrapper<int>(_notSoSimpleValue);
    }
}

Next, since we have a reference to the original List, and the object used for the lock can be passed in the constructor, we can sync operations on the list with the original object’s locking mechanism:

public Example()
{
    _lock = new object();
    _notSoSimpleValue = new List<int>();
    NotSoSimpleValue = new LockedListWrapper<int>(_notSoSimpleValue, _lock);
}
 
public int Sum()
{
    int sum = 0;
 
    lock (_lock)
    {
        for (int i = 0; i < _notSoSimpleValue.Count; ++i)
            sum += _notSoSimpleValue[i];
    }
 
    return sum;
}

If we didn’t add up all the values in the list within a single lock block, we might mix two different states of the list. Not very useful.

Lastly, GetEnumerator. While most operations in IList can be performed quickly and without much worry, the enumerator gives us the same problem we had originally: returning a reference to something we have no direct control over.
And even if we could prevent access to that in some way, we would effectively block any changes to the list while someone uses the enumerator.

To prevent that, the class I shared with you copies the list into a buffer, which is then used to iterate. This forces iterations to happen on a snapshot of the list rather than the original.
This of course has the downside of extra memory consumption, plus the overhead of copying the values over.

As an alternative you could implement a custom enumerator that allows for changes to the list in between reading individual indices.
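Such an enumerator is not part of the class above, but a sketch could look like this (the name EnumerateLive is made up): take the lock only while reading a single index, so the list may change between steps, at the price of possibly skipping or repeating items.

```csharp
/// <summary>
/// Enumerates the live list, locking only for each individual element access.
/// Items inserted, removed or moved during enumeration may be skipped or seen twice.
/// </summary>
public IEnumerable<T> EnumerateLive()
{
    int i = 0;
    while (true)
    {
        T item;
        lock (_lockObject)
        {
            if (i >= _list.Count)
                yield break; // reached the current end of the list
            item = _list[i];
        }
        yield return item; // yielded outside the lock, so consumers never hold it
        ++i;
    }
}
```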


A little addition to the original class that can be misused to block access permanently, but is rather useful sometimes:

/// <summary>
/// Acquires and keeps a lock for the duration of an action
/// </summary>
/// <param name="action">An action to perform with exclusive access to the list</param>
public void RunLocked(Action<LockedListWrapper<T>> action)
{
    if (action is null)
        return;
 
    lock (_lockObject)
        action.Invoke(new LockedListWrapper<T>(_list));
}

This method allows outside code to join multiple operations on the list inside a single lock, without needing direct access to the internal lock or list.

NotSoSimpleValue.RunLocked(l =>
{
    for (int i = 0; i < l.Count; ++i)
        ++l[i];
});