Tidbits

Using WPF to create templates for paper documents

How to screenshot a WPF window: the original SO post is linked below, along with my adaptation of the code.

I recently needed to create a template for some documents (character sheets for a TRPG).
There are of course hundreds of ways to do this, using countless different programs.
I didn’t want to invest the time to learn one of the more suitable programs, or tinker for hours with one I know that isn’t intended for the job.
Then I remembered: I’m already building this kind of layout all the time. Only in those cases the template can be clicked on or typed in. A screenshot of such a window would make a perfectly fine template.


The biggest question was of course: how do you create a screenshot of a window? I could take one the classic way, but having the program take it itself is much less work in the long run. And as I realized later, it’s the only way to create templates from windows that don’t fit on your screen (assuming you want to keep the quality high).

Taking the screenshot was done based on code from user “aloisdg” on SO (where else): https://stackoverflow.com/questions/30095895/take-a-screenshot-of-a-wpf-window-with-the-predefined-image-size-without-losing
You’ll find my adaptation of the code below. For now, let’s talk about some aspects of setting up the window.


Some initial values you should decide on are the background color and size of the window. Take advantage of the built-in, automatic conversions available in WPF. I had an exact size in pixels I wanted, but you can also use things like cm or in (the example below matches A4 paper).
And while not required, changing the style to None is helpful for designing.

MinHeight="29.7cm" MinWidth="21cm" MaxHeight="29.7cm" MaxWidth="21cm" Background="White" WindowStyle="None"

You might be wondering why I didn’t just set width and height directly. That might work for you, but in my case the rendering stopped at the edge of my screen or distorted the image. Limiting width and height in this manner was simply the easiest way to force a size.

The actual design of the UI is of course up to you. Don’t bother making “good UI”. The window you create doesn’t need to work, or be easy to maintain, just look a certain way. Grid and Rectangle are probably your best friends, but get creative.

<TextBox Text="Name" FontSize="0.35cm" Foreground="Gray"
         BorderThickness="2" BorderBrush="Black"
         Width="10cm" Height="1.5cm"/>

As you can see, a TextBox, for example, can be made to look like a GroupBox with the label inside the box rather than on its border.


Now for the fun part: Making screenshots.
To do that we first have to change the build action of App.xaml to Page and remove the StartupUri. Now we can add a Main method in App.xaml.cs from which to trigger the screenshot.
Here’s what the code looks like for me:

[STAThread]
public static void Main(string[] args)
{
    MainWindow mainWindow = new MainWindow();
    mainWindow.Show();
    using (FileStream fileStream = new FileStream("Screenshot.png", FileMode.Create, FileAccess.Write, FileShare.None))
        DrawWindow(mainWindow, fileStream);
}
 
static void DrawWindow(Window window, Stream targetStream)
{
    DrawingVisual drawingVisual = new DrawingVisual();
    using (DrawingContext drawingContext = drawingVisual.RenderOpen())
        drawingContext.DrawRectangle(new VisualBrush(window), null, new Rect(new Point(), new Size(window.Width, window.Height)));
    
    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap((int)window.Width, (int)window.Height, 96, 96, PixelFormats.Pbgra32);
    renderTargetBitmap.Render(drawingVisual);
    
    PngBitmapEncoder pngBitmapEncoder = new PngBitmapEncoder();
    pngBitmapEncoder.Frames.Add(BitmapFrame.Create(renderTargetBitmap));
    pngBitmapEncoder.Save(targetStream);
}

Instead of using a Main method, you could probably just add the screenshot code to the window’s Loaded event.
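If you prefer that route, a sketch (untested) of wiring it up in the window’s constructor, assuming the DrawWindow method from above is made accessible:

```csharp
public MainWindow()
{
    InitializeComponent();

    // Take the screenshot once the window has loaded its content
    Loaded += (sender, e) =>
    {
        using (FileStream fileStream = new FileStream("Screenshot.png", FileMode.Create, FileAccess.Write, FileShare.None))
            DrawWindow(this, fileStream);
    };
}
```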

Tidbits

Wrapper to keep streams open

Recently I had to work with a CryptoStream in .NET Standard 2.0. Unfortunately, Standard 2.0 offers no constructor overload for keeping the passed Stream open when the CryptoStream is disposed.
Alternative solutions I have seen include creating a custom CryptoStream, manipulating private fields via reflection, or simply never disposing the CryptoStream directly. What I’m sure several people have thought of, but which I didn’t find at the time, was a custom Stream implementation wrapping the stream we pass along:

/// <summary>
/// A simple wrapper to prevent streams from being closed/disposed by other code
/// </summary>
public class KeepOpenStreamWrapper : Stream
{
    /// <summary>
    /// The original stream this instance wraps around
    /// </summary>
    public Stream WrappedStream { get; }
 
    /// <inheritdoc />
    public override bool CanRead => WrappedStream.CanRead;
 
    /// <inheritdoc />
    public override bool CanSeek => WrappedStream.CanSeek;
 
    /// <inheritdoc />
    public override bool CanWrite => WrappedStream.CanWrite;
 
    /// <inheritdoc />
    public override long Length => WrappedStream.Length;
 
    /// <inheritdoc />
    public override long Position
    {
        get => WrappedStream.Position;
        set => WrappedStream.Position = value;
    }
 
    private KeepOpenStreamWrapper(Stream stream) => WrappedStream = stream;
 
    /// <summary>
    /// Creates a new instance of <see cref="KeepOpenStreamWrapper"/> for the given <see cref="Stream"/>
    /// </summary>
    /// <param name="stream"></param>
    /// <returns>A new instance wrapping around <paramref name="stream"/>, or null if <paramref name="stream"/> is null</returns>
    public static KeepOpenStreamWrapper Wrap(Stream stream) => stream is null ? null : new KeepOpenStreamWrapper(stream);
 
    /// <inheritdoc />
    public override void Close()
    {
        //Do nothing
    }
 
    /// <inheritdoc />
    public override void Flush() => WrappedStream.Flush();
 
    /// <inheritdoc />
    public override long Seek(long offset, SeekOrigin origin) => WrappedStream.Seek(offset, origin);
 
    /// <inheritdoc />
    public override void SetLength(long value) => WrappedStream.SetLength(value);
 
    /// <inheritdoc />
    public override int Read(byte[] buffer, int offset, int count) => WrappedStream.Read(buffer, offset, count);
 
    /// <inheritdoc />
    public override void Write(byte[] buffer, int offset, int count) => WrappedStream.Write(buffer, offset, count);
 
    /// <inheritdoc />
    protected override void Dispose(bool disposing)
    {
        //Do nothing
    }
}

A very simple implementation: essentially all Stream methods and properties are redirected to the wrapped stream. Only Close and Dispose are special, in that they do nothing at all.
You can still close or dispose the original stream, as long as you use its reference directly. CryptoStream only ever sees the wrapper, so the underlying stream stays open after the CryptoStream is disposed.
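For example, wrapping the target stream before handing it to CryptoStream might look like this (a sketch using the class above):

```csharp
// Sketch: the MemoryStream survives the CryptoStream's disposal because
// CryptoStream only ever sees the KeepOpenStreamWrapper.
using (Aes aes = Aes.Create())
using (MemoryStream output = new MemoryStream())
{
    using (CryptoStream crypto = new CryptoStream(
        KeepOpenStreamWrapper.Wrap(output), aes.CreateEncryptor(), CryptoStreamMode.Write))
    {
        byte[] data = Encoding.UTF8.GetBytes("secret");
        crypto.Write(data, 0, data.Length);
    } // disposing the CryptoStream would normally close "output" too

    output.Position = 0; // still works; without the wrapper this would throw ObjectDisposedException
}
```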

Tidbits

Things I didn’t know until checking the IL

I have developed the habit of doing a couple of things when writing code that I initially thought would improve performance, even if only by a teensy tiny bit.
Well, turns out most of that was pretty much useless. Not hurting anything, but not helping either.
How did I learn that? By having a look at the IL. Here’s a collection of things I found out that I didn’t know before.

The code was compiled in Visual Studio for the .NET Framework 4.8 with MSBuild 16.2.0 and the Optimize flag set to true.
The IL was read with the IL Viewer of JetBrains ReSharper.


i++ or ++i

My assumption was always that the latter is slightly faster, since the old value doesn’t need to be “remembered” for use in whatever context it appears in. Turns out: not really.
These two lines result in the exact same IL:

++foo;
foo++;

If there is a bit more going on, the code actually has a difference:

foo = ++foo;
    IL_0002: ldloc.0      // foo
    IL_0003: ldc.i4.1
    IL_0004: add
    IL_0005: dup
    IL_0006: stloc.0      // foo
    IL_0007: stloc.0      // foo
foo = foo++;
    IL_000c: ldloc.0      // foo
    IL_000d: dup
    IL_000e: ldc.i4.1
    IL_000f: add
    IL_0010: stloc.0      // foo
    IL_0011: stloc.0      // foo

The “remembering” part is simply the duplication, moved up before the increment.
Same operations, different order.
The only real difference is that the first option has at most 2 elements on the stack, the second 3. A slight win for my preference? Generally not important though, especially since whatever else you’re doing will probably need a bigger stack anyway.


Variable declarations inside or outside of loops

Not sure where I got this from, but I used to always declare variables outside of a loop, even if I only needed them inside. I think the idea in my head was that declaring a variable would require resources at that point in the program, and declaring it inside a loop would cost those resources every iteration.
Nope.

while (true)
{
    string line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}

string line;
while (true)
{
    line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}

Both result in the same IL … until you add a second loop.

string line;
while (true)
{
    line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}
while (true)
{
    line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}

while (true)
{
    string line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}
while (true)
{
    string line = Console.ReadLine();
    Console.WriteLine($"Input was: {line}");
 
    if (line == "exit")
        break;
}

Declaring line outside the loops means the exact same variable, or rather the exact same memory location, is used both times. In the second example the compiler ignores that option and uses two separate variables, one per loop.
I suspect that at runtime the JIT compiler, which does far more optimization, sees the obvious potential for reuse and renders the IL difference meaningless.
Nowadays I tend to declare variables inside the loop, unless I explicitly want them kept “alive” beyond its scope.


for or foreach

Let’s assume you have an array and want to iterate through it. Do you use a for or a foreach loop?
I used to think foreach adds overhead, because it treats the array as an IEnumerable and creates an enumerator.
It doesn’t.
These two loops

for (int i = 0; i < bar.Length; ++i)
    sum += bar[i];
foreach (int i in bar)
    sum += i;

do look different in the IL, but they do the same basic thing.
The foreach loop keeps a variable it increments each iteration and uses as an index into the array.
Once that index reaches the array’s length, the loop ends. Pretty much like the for loop. This only applies to arrays, though. List<T>, for example, has an indexer, but foreach will use its enumerator!
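Written out in C#, the lowered form is roughly the following (a sketch of what the compiler emits for arrays, not the exact IL; bar and sum are the variables from the loops above):

```csharp
// Roughly what "foreach (int i in bar)" becomes for an array:
int[] array = bar;          // the compiler caches the array reference
for (int index = 0; index < array.Length; ++index)
{
    int i = array[index];   // current element; no enumerator involved
    sum += i;
}
```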


! is null or is object

Newer C# versions have a neat language addition in is not null, but if that isn’t an option for you, there is either

if (!(something is null))

or

if (something is object)

Both have the same effect. But does the latter do more work? No. Same IL.

    IL_0006: ldloc.0      // something
    IL_0007: brfalse.s    IL_000e

Same with these two.

var = !(something is null);
var = something is object;
    IL_0006: ldloc.0      // something
    IL_0007: ldnull
    IL_0008: cgt.un
    IL_000a: stloc.1      // var

switch or else if

If you have several different branches depending on a single value, you often use a switch. But what if there are only two options? Would an if/else if perform better? Short answer: maybe, but don’t bother, because the compiler makes that choice for you already.
Actually, switch is quite often compiled into something closer to if/else if; you only get special switch IL when you have neatly arranged integer values. And even then there are some interesting constellations, like the compiler splitting your switch into several individual ones.
So when a switch makes your code more readable, or simply easier to expand in the future, just use a switch.
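As an illustration (DoA and DoB are hypothetical methods, not from this post), these two forms typically compile to near-identical IL when there are only a couple of cases:

```csharp
// Two-branch switch...
switch (value)
{
    case 1: DoA(); break;
    case 2: DoB(); break;
}

// ...and the if/else version the compiler often produces anyway
if (value == 1) DoA();
else if (value == 2) DoB();
```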


Repeat typeof or store Type

nameof is evaluated during compilation, and all that remains in the compiled code is a string constant. That means I can use it as often as I want, without worrying about performance.
How about typeof though? What does it do in the IL? If I reference the type several times, can I still reuse typeof or is it better to cache the value?

string typeFullName = typeof(A).FullName;
bool typeIsEnum = typeof(A).IsEnum;
bool typeIsSerializable = typeof(A).IsSerializable;
    // [14 13 - 14 54]
    IL_0000: ldtoken      IlCode.A
    IL_0005: call         class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
    IL_000a: callvirt     instance string [mscorlib]System.Type::get_FullName()

    // [15 13 - 15 48]
    IL_000f: ldtoken      IlCode.A
    IL_0014: call         class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
    IL_0019: callvirt     instance bool [mscorlib]System.Type::get_IsEnum()
    IL_001e: stloc.0      // typeIsEnum

    // [16 13 - 16 64]
    IL_001f: ldtoken      IlCode.A
    IL_0024: call         class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
    IL_0029: callvirt     instance bool [mscorlib]System.Type::get_IsSerializable()
    IL_002e: stloc.1      // typeIsSerializable

As you can see, each typeof results in a method call to Type.GetTypeFromHandle. Storing the result seems the better option to me.

Type type = typeof(A);
string typeFullName = type.FullName;
bool typeIsEnum = type.IsEnum;
bool typeIsSerializable = type.IsSerializable;
    // [14 13 - 14 35]
    IL_0000: ldtoken      IlCode.A
    IL_0005: call         class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)

    // [15 13 - 15 49]
    IL_000a: dup
    IL_000b: callvirt     instance string [mscorlib]System.Type::get_FullName()
    IL_0010: stloc.0      // typeFullName

    // [16 13 - 16 43]
    IL_0011: dup
    IL_0012: callvirt     instance bool [mscorlib]System.Type::get_IsEnum()
    IL_0017: stloc.1      // typeIsEnum

    // [17 13 - 17 59]
    IL_0018: callvirt     instance bool [mscorlib]System.Type::get_IsSerializable()
    IL_001d: stloc.2      // typeIsSerializable

Good thing I did it this way already!

Tidbits

Annoyingly hard to find WPF Binding “Error”

TL;DR: Did you really implement INotifyPropertyChanged? Are you sure?

Happened to me recently. Took me an embarrassingly long time to find.
Have a look at this WPF Window:

Here’s the (relevant) XAML:

<StackPanel Margin="5">
        <CheckBox IsChecked="{Binding Bar}" Content="A is selected"/>
        <ComboBox Text="{Binding Foo}">
            <ComboBox.Items>
                <ComboBoxItem>A</ComboBoxItem>
                <ComboBoxItem>B</ComboBoxItem>
                <ComboBoxItem>C</ComboBoxItem>
            </ComboBox.Items>
        </ComboBox>
        <TextBlock Text="{Binding Foo}"/>
    </StackPanel>

And the underlying model:

private string _foo;
 
public string Foo
{
    get => _foo;
    set
    {
        if (value == _foo)
            return;
        _foo = value;
        OnPropertyChanged();
        OnPropertyChanged(nameof(Bar));
    }
}
 
public bool Bar
{
    get => _foo == "A";
    set
    {
        if (value)
            Foo = "A";
    }
}

Essentially: In the ComboBox I can select A, B, or C, and whatever is selected will be shown in the TextBlock below it.
Both are bound to the same property in the model.
The CheckBox is bound to a computed property, based on the selection, and can change it to A.

A while back I was working on a WPF project that involved a scenario like this. For some reason, though, it suddenly stopped working properly. Applied to this demo, the “error” looked like this: the TextBlock would still reflect whatever you select in the ComboBox. The CheckBox, however, would not react. And checking the CheckBox would no longer set the ComboBox or TextBlock to A.
My first thought was that I had messed up an event call somewhere. But the events raised by the model carried exactly the right data, and the values stored in the model also changed as they were supposed to.
Still, the UI refused to update.

I found the solution only after a good night’s sleep. While refactoring the model, I had accidentally removed the reference to INotifyPropertyChanged:

public class Model : INotifyPropertyChanged
public class Model

Which unfortunately was really hard to notice, because:

  1. The events themselves were still fired, none of that infrastructure was changed
  2. The link between the ComboBox and the TextBlock works without the event as well, just by having them both bind to a plain, simple property

All in all, easy fix, annoying issue.
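For reference, this is roughly the boilerplate the model needs to keep (a minimal sketch; the CallerMemberName attribute lets property setters call OnPropertyChanged without arguments):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class Model : INotifyPropertyChanged
{
    // The interface declaration is what WPF checks for; without it,
    // firing the events changes nothing in the UI.
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```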

Tidbits

Single thread worker queue

This is a simple class, that allows you to queue up Actions for processing them in a single, dedicated thread.
Once the queue is empty, the thread terminates.
When new items are queued up after the thread terminated, a new one is created.

Code please!


What is this for?
Not sure if you have that situation actually. So maybe nothing?

Basically, I had some logic that needed to do some work at irregular intervals.
Now you’re thinking “go for Task or ThreadPool”.
Both good solutions.
But the things I needed them to perform were resource hogs (well, still are), mostly in terms of processing time.
That’s why I wanted a dedicated thread, ideally one for which I could set a higher priority.

The downside is of course either having a thread constantly idling or paying the overhead of creating new threads all the time.
Luckily the work usually came in bulk: a long stretch of nothing, followed by a quick burst of a handful of actions. That meant I could keep the thread running for a batch of actions, saving some overhead, but let it terminate until the next batch came in, preventing idle threads.

In the end this system is a compromise between the two options. Not perfect, not optimal, but good enough for me.
And maybe for you? The basic premise isn’t exactly uncommon, and it’s one of those things you spend less than ten minutes and a few lines of code on when you actually need it. This is just a more formal implementation for reuse.


The code should be pretty self explanatory.
Admittedly, this is a bit overdesigned for what it is…
Again, it’s one of those things you just write yourself when needed, with only what you actually need. And usually this thing is mixed into something else, rather than having its own class. At least that’s how I see it.

/// <summary>
/// Processes a queue of actions on a dedicated thread that stays alive until the queue is finished
/// </summary>
public class QueuedThreadInvoker
{
    private readonly object _lock;
    private readonly Queue<Action> _queue;
 
    private readonly string _name;
    private readonly ThreadPriority _priority;
    private readonly ApartmentState _apartmentState;
    private readonly bool _isBackground;
    private Thread _thread;
 
    /// <summary>
    /// Creates a new <see cref="QueuedThreadInvoker"/> for queueing actions on a custom thread
    /// </summary>
    /// <param name="name">The name of the thread</param>
    /// <param name="priority">The priority of the thread</param>
    /// <param name="apartmentState">The apartment state of the thread</param>
    /// <param name="isBackground">Whether to run the thread in the back- or foreground</param>
    public QueuedThreadInvoker(string name = null, ThreadPriority priority = ThreadPriority.Normal, ApartmentState apartmentState = ApartmentState.MTA, bool isBackground = true)
    {
        _name = name ?? nameof(QueuedThreadInvoker);
        _priority = priority;
        _apartmentState = apartmentState;
        _isBackground = isBackground;
 
        _lock = new object();
        _queue = new Queue<Action>();
        _thread = null;
    }
 
    /// <summary>
    /// Triggered from the worker thread when an action throws an exception
    /// </summary>
    public event Action<Exception> UnhandledException;
 
    /// <summary>
    /// Adds an action to the queue. If no thread is active, a new thread is created for processing it.
    /// </summary>
    /// <param name="action">The action to queue up</param>
    /// <returns>True if a new thread was created, false if the action was added to an active queue</returns>
    /// <exception cref="ArgumentNullException"><paramref name="action"/> is null</exception>
    public bool Invoke(Action action)
    {
        if (action is null)
            throw new ArgumentNullException(nameof(action));
 
        lock (_lock)
        {
            _queue.Enqueue(action);
 
            if (!(_thread is null))
                return false;
 
            _thread = new Thread(ProcessQueue)
            {
                Name = _name,
                Priority = _priority
            };
            _thread.SetApartmentState(_apartmentState);
            _thread.IsBackground = _isBackground;
            _thread.Start();
        }
 
        return true;
    }
 
    /// <summary>
    /// Blocks the current thread until the active queue is completed
    /// </summary>
    public void WaitForQueueToFinish()
    {
        Thread thread;
        lock (_lock)
            thread = _thread;
 
        thread?.Join();
    }
 
    private void ProcessQueue()
    {
        bool itemsInQueue;
        Action action = null;
 
        lock (_lock)
        {
            itemsInQueue = _queue.Count > 0;
            if (itemsInQueue)
                action = _queue.Dequeue();
            else
                _thread = null;
        }
 
        while (itemsInQueue)
        {
            try
            {
                action.Invoke();
            }
            catch (Exception e)
            {
                try
                {
                    UnhandledException?.Invoke(e);
                }
                catch
                {
                    // ignored
                }
            }
 
            lock (_lock)
            {
                itemsInQueue = _queue.Count > 0;
                if (itemsInQueue)
                    action = _queue.Dequeue();
                else
                    _thread = null;
            }
        }
    }
}
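Hypothetical usage (names taken from the class above; a sketch, not part of the original post):

```csharp
// Queue a burst of actions on one dedicated, higher-priority thread.
QueuedThreadInvoker invoker = new QueuedThreadInvoker("Worker", ThreadPriority.AboveNormal);
invoker.UnhandledException += e => Console.WriteLine($"Action failed: {e.Message}");

bool startedNewThread = invoker.Invoke(() => Console.WriteLine("First action"));  // true: a thread was created
invoker.Invoke(() => Console.WriteLine("Second action"));                         // false: queued on the same thread

invoker.WaitForQueueToFinish(); // blocks until the queue drains and the thread exits
```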
Tidbits

Wrapper for locking IList

Something simple and quick. Might not fit your needs, so check that first.

/// <summary>
/// A wrapper for <see cref="IList{T}"/> that blocks simultaneous access from separate threads.
/// </summary>
/// <typeparam name="T"></typeparam>
public class LockedListWrapper<T> : IList<T>
{
    private readonly IList<T> _list;
    private readonly object _lockObject;
 
    /// <summary>
    /// Creates a new <see cref="LockedListWrapper{T}"/> with a private lock
    /// </summary>
    /// <param name="list">The list to wrap around</param>
    public LockedListWrapper(IList<T> list) : this(list, new object())
    {
    }
 
    /// <summary>
    /// Creates a new <see cref="LockedListWrapper{T}"/> using a specific object to lock access
    /// </summary>
    /// <param name="list">The list to wrap around</param>
    /// <param name="lockObject">The object to lock access with</param>
    public LockedListWrapper(IList<T> list, object lockObject)
    {
        _list = list ?? throw new ArgumentNullException(nameof(list));
        _lockObject = lockObject ?? throw new ArgumentNullException(nameof(lockObject));
    }
 
    /// <inheritdoc />
    public void Add(T item)
    {
        lock (_lockObject)
            _list.Add(item);
    }
 
    /// <inheritdoc />
    public void Clear()
    {
        lock (_lockObject)
            _list.Clear();
    }
 
    /// <inheritdoc />
    public bool Contains(T item)
    {
        lock (_lockObject)
            return _list.Contains(item);
    }
 
    /// <inheritdoc />
    public void CopyTo(T[] array, int arrayIndex)
    {
        lock (_lockObject)
            _list.CopyTo(array, arrayIndex);
    }
 
    /// <inheritdoc />
    public bool Remove(T item)
    {
        lock (_lockObject)
            return _list.Remove(item);
    }
 
    /// <inheritdoc />
    public int Count
    {
        get
        {
            lock (_lockObject)
                return _list.Count;
        }
    }
 
    /// <inheritdoc />
    public bool IsReadOnly
    {
        get
        {
            lock (_lockObject)
                return _list.IsReadOnly;
        }
    }
 
    /// <inheritdoc />
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
 
    /// <inheritdoc />
    public IEnumerator<T> GetEnumerator()
    {
        lock (_lockObject)
            return new List<T>(_list).GetEnumerator();
    }
 
    /// <inheritdoc />
    public int IndexOf(T item)
    {
        lock (_lockObject)
            return _list.IndexOf(item);
    }
 
    /// <inheritdoc />
    public void Insert(int index, T item)
    {
        lock (_lockObject)
            _list.Insert(index, item);
    }
 
    /// <inheritdoc />
    public void RemoveAt(int index)
    {
        lock (_lockObject)
            _list.RemoveAt(index);
    }
 
    /// <inheritdoc />
    public T this[int index]
    {
        get
        {
            lock (_lockObject)
                return _list[index];
        }
        set
        {
            lock (_lockObject)
                _list[index] = value;
        }
    }
}

What is this good for though?


Imagine a class like this:

public class Example
{
    public int SimpleValue { get; set; }
 
    public List<int> NotSoSimpleValue { get; }
 
    public Example()
    {
        NotSoSimpleValue = new List<int>();
    }
}

What if you need to share this between threads? You might get away with the integer, but a complex object like List will run into issues at some point.

So you try to add locks:

public class Example
{
    private readonly object _lock;
    private readonly List<int> _notSoSimpleValue;
    private int _simpleValue;
 
    public int SimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _simpleValue;
            }
        }
        set
        {
            lock (_lock)
            {
                _simpleValue = value;
            }
        }
    }
 
    public List<int> NotSoSimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _notSoSimpleValue;
            }
        }
    }
 
    public Example()
    {
        _lock = new object();
        _notSoSimpleValue = new List<int>();
    }
}

And sure, you don’t have to worry about the integer being half written while another thread reads it.
But List is only a reference to the value. The lock prevents two threads from getting that reference at the same time, but not from interacting with what it references.

The wrapper above now allows us to add locks to that reference as well:

public class Example
{
    private readonly object _lock;
    private readonly List<int> _notSoSimpleValue;
    private int _simpleValue;
 
    public int SimpleValue
    {
        get
        {
            lock (_lock)
            {
                return _simpleValue;
            }
        }
        set
        {
            lock (_lock)
            {
                _simpleValue = value;
            }
        }
    }
 
    public LockedListWrapper<int> NotSoSimpleValue { get; }
 
    public Example()
    {
        _lock = new object();
        _notSoSimpleValue = new List<int>();
        NotSoSimpleValue = new LockedListWrapper<int>(_notSoSimpleValue);
    }
}

Next, since we have a reference to the original List, and the object used for the lock can be passed in the constructor, we can sync operations in the list with the original objects locking mechanism:

public Example()
{
    _lock = new object();
    _notSoSimpleValue = new List<int>();
    NotSoSimpleValue = new LockedListWrapper<int>(_notSoSimpleValue, _lock);
}
 
public int Sum()
{
    int sum = 0;
 
    lock (_lock)
    {
        for (int i = 0; i < _notSoSimpleValue.Count; ++i)
            sum += _notSoSimpleValue[i];
    }
 
    return sum;
}

If we didn’t read all the values in the list within a single lock block, we might mix two different states of the list. Not very useful.

Lastly, GetEnumerator. While most operations in IList can be performed quickly without much worry, the enumerator gives us the same problem we had originally: returning a reference to something we have no direct control over.
And even if we could prevent access to it in some way, we would effectively block any changes to the list while someone uses the enumerator.

To prevent that, the class above copies the list into a buffer, which is then used for iterating. This forces iteration to happen over a snapshot of the list rather than the original.
That of course has the downside of extra memory consumption, plus the overhead of copying the values over.

As an alternative you could implement a custom enumerator that allows for changes to the list in between reading individual indices.


A little addition to the original class that can be misused to block access permanently, but is rather useful sometimes:

/// <summary>
/// Acquires and keeps a lock for the duration of an action
/// </summary>
/// <param name="action">An action to perform with exclusive access to the list</param>
public void RunLocked(Action<LockedListWrapper<T>> action)
{
    if (action is null)
        return;
 
    lock (_lockObject)
        action(new LockedListWrapper<T>(_list));
}

This method allows outside code to join multiple operations on the list inside a single lock, without needing direct access to the internal lock or list.

NotSoSimpleValue.RunLocked(l =>
{
    for (int i = 0; i < l.Count; ++i)
        ++l[i];
});