Tidbits

Add a hammer to Build and Clean

Just quick snippets of two MSBuild targets I use variations of to make my life a little easier. They solve two problems:

  1. Output files of dependent projects aren’t copied into the current output.
    Project dependencies are handled really well in most cases, but I do occasionally run into this issue, especially in scenarios like
    “A depends on B depends on C”,
    where A has only an indirect dependency on C.
  2. Clean doesn’t remove supplementary files.
    Basically, Clean only removes files that were generated by the project. Which is nice, don’t get me wrong. But I did spend many frustrating hours taking my code apart, only to find out the issue was an outdated resource file in my bin folder.

Here’s a target for the first problem. I stress the “variations” part, because you’ll likely want to adapt it to your needs and preferences.

<Target Name="CopyDependentOutput" AfterTargets="AfterBuild" Condition="@(_ResolvedProjectReferencePaths)!=''">
  <Message Importance="high" Text="Copying output from: @(_ResolvedProjectReferencePaths->'%(Filename)')"/>
  <Exec Command="ROBOCOPY &quot;%(_ResolvedProjectReferencePaths.RelativeDir) &quot; &quot;$(MSBuildProjectDirectory)\$(OutputPath) &quot; /e > NUL" IgnoreExitCode="true"/>
</Target>

The big chunk of work is already done by one of the many tasks that come with MSBuild and run as part of your standard build process.
The item _ResolvedProjectReferencePaths contains references to all the projects that needed to be resolved before the current one could be built. It’s all in the name, really. The neat part is that it contains all projects, regardless of whether they are referenced directly or indirectly.

The easiest way to transfer all their data, for me, was to shell out to a Windows command, ROBOCOPY, since it can copy files and folders recursively.
Should you also go for this solution, think about evaluating the exit code to abort a running build when the copy didn’t work. The snippet above ignores any error.
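
As a sketch of what that could look like (relying on ROBOCOPY’s documented convention that exit codes of 8 and above indicate failure):

<Exec Command="ROBOCOPY &quot;%(_ResolvedProjectReferencePaths.RelativeDir) &quot; &quot;$(MSBuildProjectDirectory)\$(OutputPath) &quot; /e > NUL" IgnoreExitCode="true">
  <!-- Capture the exit code into a property so we can inspect it. -->
  <Output TaskParameter="ExitCode" PropertyName="RobocopyExitCode"/>
</Exec>
<!-- ROBOCOPY reports 0-7 for various flavors of success; 8 and above mean failure. -->
<Error Text="ROBOCOPY failed with exit code $(RobocopyExitCode)." Condition="$(RobocopyExitCode) &gt;= 8"/>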


And this is an example of how to tackle the second issue:

<Target Name="ClearOutput" AfterTargets="AfterClean">
  <ItemGroup>
    <FilesToDelete Include="$(OutputPath)*.*"/>
    <FoldersToDelete Include="$([System.IO.Directory]::GetDirectories('$(OutputPath)'))"/>
  </ItemGroup>
 
  <Message Importance="high" Text="Cleaning @(FilesToDelete->Count()) file(s) and @(FoldersToDelete->Count()) folder(s)"/>
  <Delete Files="@(FilesToDelete)" Condition="@(FilesToDelete)!=''"/>
  <RemoveDir Directories="@(FoldersToDelete)" Condition="@(FoldersToDelete)!=''"/>
</Target>

This one is a bit more involved. The easiest solution would have been to just delete the entire output folder, of course. In my case, however, I wanted to keep the folder itself intact and only remove its contents.

The task is split into two parts: clear all top-level files, then recursively remove all folders.
The files are easy enough to gather with a simple wildcard.
The folders are best listed with a static method call. Note the Exists condition on the target: GetDirectories throws if the folder is already gone, which would otherwise fail the Clean.
The actual removal is done with the predefined MSBuild tasks Delete and RemoveDir, for files and directories respectively.


Both those targets are hooked to the end of the Build and Clean targets through their AfterTargets attributes.
That means, as long as they are evaluated at all when our projects are processed, we’re done.
The most direct approach would be to add them “as is” to your project files.

The best solution for me though was to make use of Directory.Build.targets.
In case you’re not familiar with it:
Imagine a project in C:\Dev\Foo\Bar.csproj.
When you process that project, MSBuild will iterate upwards through your path (first C:\Dev\Foo\, then C:\Dev\ and finally C:\) until it finds two specific files or has nowhere left to go. Those two files are called Directory.Build.props and Directory.Build.targets.
Should they be found, their content is imported as if your project referenced them directly (which it kind of does, if you follow the breadcrumbs through the forest of references and imports).

So what we can do now, is to put a file called Directory.Build.targets at the root of our solution, next to the .sln file, containing something similar to this:
<Project>
  <Target Name="CopyDependentOutput" AfterTargets="AfterBuild" Condition="'@(_ResolvedProjectReferencePaths)' != ''">
    <Message Importance="high" Text="Copying output from: @(_ResolvedProjectReferencePaths->'%(Filename)')"/>
    <Exec Command="ROBOCOPY &quot;%(_ResolvedProjectReferencePaths.RelativeDir) &quot; &quot;$(MSBuildProjectDirectory)\$(OutputPath) &quot; /e > NUL" IgnoreExitCode="true"/>
  </Target>

  <Target Name="ClearOutput" AfterTargets="AfterClean" Condition="Exists('$(OutputPath)')">
    <ItemGroup>
      <FilesToDelete Include="$(OutputPath)*.*"/>
      <FoldersToDelete Include="$([System.IO.Directory]::GetDirectories('$(OutputPath)'))"/>
    </ItemGroup>

    <Message Importance="high" Text="Cleaning @(FilesToDelete->Count()) file(s) and @(FoldersToDelete->Count()) folder(s)"/>
    <Delete Files="@(FilesToDelete)" Condition="'@(FilesToDelete)' != ''"/>
    <RemoveDir Directories="@(FoldersToDelete)" Condition="'@(FoldersToDelete)' != ''"/>
  </Target>
</Project>

Now all projects within the solution will automatically trigger one of our targets on every Build, Clean or Rebuild.


A grain of salt to add to this:
These targets can add a lot of possibly unneeded overhead to your builds.
Make sure you actually need them, and try different filters to limit what they work on.
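
One way to do that, as a sketch (EnableBuildTidbits is a property name I made up; any opt-in property works the same way), is to guard the targets behind an explicit opt-in:

<Target Name="CopyDependentOutput" AfterTargets="AfterBuild"
        Condition="'$(EnableBuildTidbits)' == 'true' And '@(_ResolvedProjectReferencePaths)' != ''">
  <!-- same Message and Exec as above -->
</Target>

A project that wants the extra copying then sets <EnableBuildTidbits>true</EnableBuildTidbits> in one of its PropertyGroups; everyone else skips the target entirely.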

Tidbits

Finally is no catch-all

This is just a short “nice to know” about C# exception handling that I only learned after half an hour of frustrated debugging.


Imagine some code which changes persistent data. Might be files, a database, or some OS settings; just something that stays around even after your program closes.
Now imagine you have to change this data to perform an important operation, but have to change it back once you’re done, so you won’t mess up something else.

SetSystemValue();
DoWork();
ResetSystemValue();

What if something unexpected happens in DoWork?
The changes we made won’t be changed back. Can’t have that.
So, we do what we learned is the proper way of dealing with the unexpected:

try
{
    SetSystemValue();
    DoWork();
}
finally
{
    ResetSystemValue();
}

Now, no matter what happens in the try block, the changes we’ve made are going to be reversed.


Only, there is a condition to finally you usually don’t need to think about:
it isn’t executed until a matching catch has been found.
And when there is no catch anywhere, the exception is unhandled and the runtime may terminate the program without ever running the finally block.

See the problem?
Usually there will be a catch somewhere up the chain to handle the exception.
And if there isn’t and the program terminates, we don’t have to worry about some invalid data in our program anyway.

In this case however, the data exists outside the program, meaning any issues persist with it.
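
If you want to see this for yourself, here’s a minimal sketch (the exact behavior can vary by runtime, attached debuggers and OS error handling):

using System;

class Program
{
    static void Main()
    {
        try
        {
            Console.WriteLine("System value set.");
            throw new InvalidOperationException("boom"); // stand-in for DoWork() failing
        }
        finally
        {
            // With no catch anywhere up the stack, most .NET runtimes tear the
            // process down without ever reaching this line:
            Console.WriteLine("System value reset.");
        }
    }
}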


Solution:
Add a catch somewhere. Specifically, a catch that doesn’t (re-)throw an exception. Having a try/catch at your program’s root for logging might be useful anyway.
Of course, if you already have a try/finally, why not simply add a catch in there as well?
In my case I duplicated the reset call instead, with an early return in the catch so it doesn’t run twice. It now looks somewhat like this:

try
{
    SetSystemValue();
    DoWork();
}
catch
{
    // Something went wrong: revert, then bail out so the
    // reset below doesn't run a second time.
    ResetSystemValue();
    return;
}

ResetSystemValue();
Tidbits

Semantic Versioning is amazing!

This post will be nothing but me repeating what’s already being said on semver.org, so have a look over there. If you want me to tell you how incredibly useful it is and why you definitely (maybe) should use it, you can still come back later to get a quick reminder.


Semantic Versioning is a set of simple rules that tell you how to change the version of whatever library you’re developing, depending on the kind of changes you’re making. And while libraries are where it shines, I did find myself using it for fully fledged programs before, in slightly adapted forms (but for similar reasons).
To explain these rules, we’re going to use an example: a calculator.
Our initial code might look like this:

public static class Calc
{
    public static int Add(int a, int b)
    {
        return a + b;
    }

    public static int Sub(int a, int b)
    {
        return a + b;
    }
}

Nothing but a static class with two static methods, for addition and subtraction.
Now, with Semantic Versioning, this might be our version 1.0.0 (we’re going to ignore suffixes for now). Assume this to be the state you’re releasing your library in, which will now get used by people and integrated into their own projects.

Even if “people” includes only you yourself, proper versioning can still prevent a lot of frustration. Having more people involved makes it only more useful.


But what’s this? Your subtraction performs an addition instead? No time to figure out why there was no automated test to catch this obvious error before. We need to fix this and release a newer version!

public static class Calc
{
    public static int Add(int a, int b)
    {
        return a + b;
    }

    public static int Sub(int a, int b)
    {
        return a - b;
    }
}

There, much better. Now we need to make this public in a way that shows others that there was a change – we need to update our version.
While our change had a huge impact on what our library can do, it doesn’t require anyone to change how they use it. It’s a fix that restores behaviour to what was expected already, meaning anybody using our library already consumes it in a way that matches our update. Therefore we make the least significant change to our version number possible: 1.0.1.
Simple, right?


After some time has passed, and more people play around with your library, you might get requests for additional features. And you’re happy to add them, being unsatisfied with your meager two methods yourself.

public static class Calc
{
    public static int Add(int a, int b)
    {
        return a + b;
    }

    public static int Sub(int a, int b)
    {
        return a - b;
    }

    public static int Mul(int a, int b)
    {
        return a * b;
    }

    public static int Div(int a, int b)
    {
        return a / b;
    }
}

Wow, two whole new features. People are gonna love this! Of course we need to update the version, too. This one’s a bit different though:
To make use of the new features, someone using your library needs to make changes to their own code as well. So we need to mark this update as something bigger than just a fix: 1.1.0

And why stop here? There are plenty of operations waiting to be added! Modulo? 1.2.0. Sine? 1.3.0. Messed up the sine and made a fix for it? 1.3.1.


At some point however, you’ll look back at your library and notice it blowing up in the wrong way. You can simplify it, maybe add options for custom operations, overall clean up your code. Time to refactor!
And you start by unifying all those methods from before.

public static class Calc
{
    public static int Calculate(IOperation operation, int a, int b)
    {
        return operation.GetResult(a, b);
    }
}

public interface IOperation
{
    int GetResult(params int[] values);
}

Looks much better than all those static methods. Now people can write their own operations!
Unfortunately, unlike the changes we made before, this one can actually break whatever someone using your library has built. That Add method they called before? It doesn’t exist anymore. Even if they get around the compiler by loading your files at runtime, it will fail eventually.
To make sure they know to rewrite their things, even if they don’t plan to use any new features, we need to make a big change when updating our version: 2.0.0.
With this, people can see they absolutely need to rewrite parts of their projects (there are exceptions of course, but those still require them to at least verify compatibility).
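
As a quick illustration of what the new extension point enables, a user-defined operation might look like this (the Power class is purely made up for this example):

public class Power : IOperation
{
    public int GetResult(params int[] values)
    {
        // Treats values[0] as the base and values[1] as the exponent.
        int result = 1;
        for (int i = 0; i < values[1]; i++)
            result *= values[0];
        return result;
    }
}

// Usage: Calc.Calculate(new Power(), 2, 10) returns 1024.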

By the way, in a situation like this, where you retire parts of your code and add replacements, it’s a good idea to release a hybrid version first (if possible). This way people can quickly grab the newest update to your library without it causing issues, and it also gives them the new code they can safely migrate to in the time it takes for your next release. Just make sure to mark the old code as deprecated!

public static class Calc
{
    [Obsolete]
    public static int Add(int a, int b)
    {
        return a + b;
    }

    [Obsolete]
    public static int Sub(int a, int b)
    {
        return a - b;
    }

    [Obsolete]
    public static int Mul(int a, int b)
    {
        return a * b;
    }

    [Obsolete]
    public static int Div(int a, int b)
    {
        return a / b;
    }

    public static int Calculate(IOperation operation, int a, int b)
    {
        return operation.GetResult(a, b);
    }
}

public interface IOperation
{
    int GetResult(params int[] values);
}

All in all, with the most basic interpretation of Semantic Versioning, changes in the version (in the format Major.Minor.Patch) reflect these changes to the code:

Patch – Bugfixes, optimization, etc.
None of what you did requires action by someone using your library.

Minor – New features
Your old code does the same as always, but new things are available. Users only need to make changes to their code if they want to use any of it.

Major – Removed or changed functionality
You removed things that people used or had direct access to before, or changed the way something behaves.
Existing code (that depends on your library) requires rewrites to work with these changes.


Something I didn’t mention at all before:
Additional markers to differentiate the various stages of your development cycle may be appended to the version number. For example, 1.0.0-beta might sound like a full release going by the numbers alone, but the suffix tells us it’s not; the actual first release would be the plain 1.0.0 it leads up to.
Basically, since the numbers tell us how the code changed, but not the more abstract state of your library as a whole, we can add some better qualifiers at the end. Suffixes can also contain build numbers, should you prefer to include them.
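
For reference, the spec orders pre-release versions before the release they lead up to:

1.0.0-alpha < 1.0.0-beta < 1.0.0-rc.1 < 1.0.0 < 1.0.1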


To close this off, let us have a look at Semantic Versioning from the other side:

Imagine yourself integrating a library into your own project. Everything worked, you’re happy with it as is, and you stop thinking about it. At some point you encounter a bug and want to update to a newer version. But several releases have happened since you last checked. Which one do you use, assuming you want to release an update of your own as soon as possible?
You could read through the changelogs (which is a good idea anyway) to find out which version is still compatible with your code. Time-consuming, and not necessarily reliable.
But if they use Semantic Versioning it’s easy – just take the newest version that still falls into the Major group you’re currently using. Done.
You can still look at the changelogs up to that version, and maybe you’ll find interesting features you want to try. If there have been Major releases, you can also start planning the migration while already releasing an updated version of your own project.

Semantic Versioning becomes even more important if you want to automate updates (to a certain degree). What if the library isn’t added at compile time, while you’re still in control, but some time later, when your work is used by someone else? This might be the case if it is loaded dynamically at runtime. Or we might be talking about NuGet.

When you create a NuGet package which itself depends on other NuGet packages, you need to make sure whoever installs it has the right version of your dependencies.
Of course you could require them to use one very specific version only. In that case, if there is an update of the package you depend upon, you’d need to release a new version of your own package as well just to reflect the newer dependency, even if you didn’t change anything in your code.
The better solution is to allow a range of versions, including ones that haven’t been released yet. The simplest case would be allowing versions starting at the one you built against, up to any higher version in the same Major group.
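
With NuGet, such a range can be expressed in interval notation (the package name and versions here are made up):

<!-- In a .csproj: accepts 1.3.1 and anything newer, up to (but excluding) 2.0.0. -->
<ItemGroup>
  <PackageReference Include="Example.Calc" Version="[1.3.1,2.0.0)" />
</ItemGroup>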


So that’s Semantic Versioning. Pretty useful, isn’t it? And you can apply the idea to other things as well.
What if your program allows plugins? You can leave the program’s own version in whatever format you like, but give the interface a version that plugins can read and evaluate to see if they still work (see the sketch below).
Or maybe add a semantic version to your custom file format, so different versions of a program know whether or not they can read from or write to a file.
You could even apply it to a fully fledged program, where it informs the end user about the scope of changes: Bugfix? Any new features? Any menus gone?
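
A minimal sketch of that plugin idea (PluginHost, InterfaceVersion and IsCompatible are all names made up for this example):

using System;

public static class PluginHost
{
    // The version of the plugin interface, independent of the program's own version.
    public static readonly Version InterfaceVersion = new Version(2, 1, 0);

    // A plugin passes in the interface version it was built against.
    public static bool IsCompatible(Version builtAgainst)
    {
        // A different Major means breaking changes on one side or the other.
        if (builtAgainst.Major != InterfaceVersion.Major)
            return false;

        // Same Major: the host must offer at least the features (Minor)
        // the plugin was compiled against.
        return builtAgainst.Minor <= InterfaceVersion.Minor;
    }
}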

Just be creative!