You've Been Haacked
You’ve been Haacked is a blog about Technology, Software, Management, and Open Source. It’s full of good stuff.
PostHog helps you build better products. It tracks what users do. It controls features in production. And now it works with .NET!

I joined PostHog at the beginning of the year as a Product Engineer on the Feature Flags team. Feature flags are just one of the many tools PostHog offers to help product engineers build better products. Much of my job will consist of writing Python and React with TypeScript. But when I started, I noticed they didn’t have a .NET SDK. It turns out, I know a thing or two about .NET!

So if you’ve been wanting to use PostHog in your ASP.NET Core applications, yesterday is your lucky day! The 1.0 version of the PostHog .NET SDK for ASP.NET Core is available on NuGet.

```bash
dotnet add package PostHog.AspNetCore
```

You can find documentation for the library on the PostHog docs site, but I’ll cover some of the basics here. I’ll also cover non-ASP.NET Core usage later in this post.

Configuration

To configure the client SDK, you’ll need:

- Project API Key - from the PostHog dashboard
- Personal API Key - for local evaluation (Optional, but recommended)

Note: For better performance, enable local feature flag evaluation by adding a personal API key (found in Settings). This avoids making API calls for each flag check.

By default, the PostHog client looks for settings in the PostHog section of the configuration system, such as in the appSettings.json file:

```json
{
  "PostHog": {
    "ProjectApiKey": "phc_..."
  }
}
```

Treat your personal API key as a secret by using a secrets manager to store it. For example, for local development, use the dotnet user-secrets command to store your personal API key:

```bash
dotnet user-secrets init
dotnet user-secrets set "PostHog:PersonalApiKey" "phx_..."
```

In production, you might use Azure Key Vault or a similar service to provide the personal API key.

Register the client

Once you set up configuration, register the client with the dependency injection container. In your Program.cs file, call the AddPostHog extension method on the WebApplicationBuilder instance. It’ll look something like this:

```csharp
using PostHog;

var builder = WebApplication.CreateBuilder(args);
builder.AddPostHog();
```

Calling builder.AddPostHog() adds a singleton implementation of IPostHogClient to the dependency injection container. Inject it into your controllers or pages like so:

```csharp
public class MyController(IPostHogClient posthog) : Controller
{
}

public class MyPage(IPostHogClient posthog) : PageModel
{
}
```

Usage

Use the IPostHogClient service to identify users, capture analytics, and evaluate feature flags. Use the IdentifyAsync method to identify users:

```csharp
// This stores information about the user in PostHog.
await posthog.IdentifyAsync(
    distinctId,
    user.Email,
    user.UserName,
    // Properties to set on the person. If they're already
    // set, they will be overwritten.
    personPropertiesToSet: new()
    {
        ["phone"] = user.PhoneNumber ?? "unknown",
        ["email_confirmed"] = user.EmailConfirmed,
    },
    // Properties to set once. If they're already set
    // on the person, they won't be overwritten.
    personPropertiesToSetOnce: new()
    {
        ["joined"] = DateTime.UtcNow
    });
```

Some things to note about the IdentifyAsync method:

- The distinctId is the identifier for the user. This could be an email, a username, or some other identifier such as the database Id. The important thing is that it’s a consistent and unique identifier for the user. If you use PostHog on the client, use the same distinctId here as you do on the client.
- The personPropertiesToSet and personPropertiesToSetOnce arguments are optional. You can use them to set properties about the user.
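For example, here’s a rough sketch of what an identify call might look like right after a successful sign-in with ASP.NET Core Identity, using the user’s database Id as the distinctId. The SignInManager and UserManager plumbing, the assumption that the email doubles as the username, and the property choices are all illustrative; only IdentifyAsync and IPostHogClient come from the SDK:

```csharp
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using PostHog;

public class LoginModel(
    SignInManager<IdentityUser> signInManager,
    UserManager<IdentityUser> userManager,
    IPostHogClient posthog) : PageModel
{
    public async Task<IActionResult> OnPostAsync(string email, string password)
    {
        var result = await signInManager.PasswordSignInAsync(
            email, password, isPersistent: false, lockoutOnFailure: false);
        if (!result.Succeeded)
        {
            return Page();
        }

        var user = await userManager.FindByEmailAsync(email);
        if (user is null)
        {
            return Page();
        }

        // The database Id is stable and unique, which makes it a good distinctId.
        await posthog.IdentifyAsync(
            user.Id,
            user.Email,
            user.UserName,
            personPropertiesToSet: new() { ["email_confirmed"] = user.EmailConfirmed });

        return RedirectToPage("/Index");
    }
}
```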
If you choose a distinctId that can change (such as username or email), you can use the AliasAsync method to alias the old distinctId with the new one so that the user can be tracked across different distinctIds.

To capture an event, call the Capture method:

```csharp
posthog.Capture("some-distinct-id", "my-event");
```

This will capture an event with the distinct id, the event name, and the current timestamp. You can also include properties:

```csharp
posthog.Capture(
    "some-distinct-id",
    "user signed up",
    new() { ["plan"] = "pro" });
```

The Capture method is synchronous and returns immediately. The actual batching and sending of events is done in the background.

Feature flags

To evaluate a feature flag, call the IsFeatureEnabledAsync method:

```csharp
if (await posthog.IsFeatureEnabledAsync(
    "new_user_feature",
    "some-distinct-id"))
{
    // The feature flag is enabled.
}
```

This will evaluate the feature flag and return true if the feature flag is enabled. If the feature flag is not enabled or not found, it will return false.

Feature flags can contain filter conditions that might depend on properties of the user. For example, you might have a feature flag that is enabled for users on the pro plan. If you’ve previously identified the user and are NOT using local evaluation, the feature flag is evaluated on the server against the user properties set on the person via the IdentifyAsync method. But if you’re using local evaluation, the feature flag is evaluated on the client, so you have to pass in the properties of the user:

```csharp
await posthog.IsFeatureEnabledAsync(
    featureKey: "person-flag",
    distinctId: "some-distinct-id",
    personProperties: new() { ["plan"] = "pro" });
```

This will evaluate the feature flag and return true if the feature flag is enabled and the user’s plan is “pro”.

.NET Feature Management

.NET Feature Management is an abstraction over feature flags that is supported by ASP.NET Core. With it enabled, you can use the feature tag helper to conditionally render UI based on the state of a feature flag:

```html
<feature name="my-feature">
    This is a feature flag.
</feature>
```

You can also use the FeatureGateAttribute in your controllers and pages to conditionally execute code based on the state of a feature flag.

```csharp
[FeatureGate("my-feature")]
public class MyController : Controller
{
}
```

If your app already uses .NET Feature Management, you can switch to using PostHog with very little effort. To use PostHog feature flags with the .NET Feature Management library, implement the IPostHogFeatureFlagContextProvider interface. The simplest way to do that is to inherit from the PostHogFeatureFlagContextProvider class and override the GetDistinctId and GetFeatureFlagOptionsAsync methods. This is required so that .NET Feature Management can evaluate feature flags locally with the correct distinctId and personProperties.

```csharp
public class MyFeatureFlagContextProvider(
    IHttpContextAccessor httpContextAccessor)
    : PostHogFeatureFlagContextProvider
{
    protected override string? GetDistinctId()
        => httpContextAccessor.HttpContext?.User.Identity?.Name;

    protected override ValueTask<FeatureFlagOptions> GetFeatureFlagOptionsAsync()
    {
        // In a real app, you might get this information from a database
        // or other source for the current user.
        return ValueTask.FromResult(
            new FeatureFlagOptions
            {
                PersonProperties = new()
                {
                    ["email"] = "some-test@example.com",
                    ["plan"] = "pro"
                },
                OnlyEvaluateLocally = true
            });
    }
}
```

Then, register your implementation in Program.cs (or Startup.cs):

```csharp
using PostHog;

var builder = WebApplication.CreateBuilder(args);
builder.AddPostHog(options =>
{
    options.UseFeatureManagement<MyFeatureFlagContextProvider>();
});
```

This registers a feature flag provider that uses your implementation of IPostHogFeatureFlagContextProvider to evaluate feature flags against PostHog.

Non-ASP.NET Core usage

The PostHog.AspNetCore package adds ASP.NET Core specific functionality on top of the core PostHog package. But if you’re not using ASP.NET Core, you can use the core PostHog package directly:

```bash
dotnet add package PostHog
```

And then register it with your dependency injection container:

```csharp
builder.Services.AddPostHog();
```

Even if you’re not using a host builder, you can still use the registration method with your own ServiceCollection:

```csharp
using PostHog;

var services = new ServiceCollection();
services.AddPostHog();
var serviceProvider = services.BuildServiceProvider();
var posthog = serviceProvider.GetRequiredService<IPostHogClient>();
```

For a console app (or apps not using dependency injection), you can also use the PostHogClient directly, just make sure it’s a singleton:

```csharp
using System;
using PostHog;

var posthog = new PostHogClient(
    Environment.GetEnvironmentVariable("PostHog__PersonalApiKey"));
```

Examples

To see all this in action, the posthog-dotnet GitHub repository has a samples directory with a growing number of example projects. For example, the HogTied.Web project is an ASP.NET Core web app that uses PostHog for analytics and feature flags and shows some advanced configuration.

What’s next?

With this release done, I’ll be focusing my attention on the Feature Flags product. Even so, I’ll continue to maintain the SDK and fix any reported bugs, but I won’t be adding new features for the moment.

Down the road, I’m hoping to add a PostHog.Unity package. I just don’t have a lot of experience with Unity yet. My game development experience mostly consists of getting shot in the face by squeaky voiced kids playing Fortnite. I’m hoping someone will contribute a Unity sample project to the repo which I can use as a starting point.

If you have any feedback, questions, or issues with the PostHog .NET SDK, please file an issue at https://github.com/PostHog/posthog-dotnet.
Last year I wrote a post, career chutes and ladders, where I proposed that a linear climb to the C-suite is not the only approach to a satisfying career. At the end of the post, I mentioned I was stepping off the ladder to take on an IC role. After over a year of being on a personally funded sabbatical, I started a new job at PostHog as a Senior Product Engineer. This week is my orientation where I get to drink from the firehose once again.

What is PostHog?

Apart from being a company that seems to really love cute hedgehogs, PostHog is an open-source product analytics platform. They have a set of tools to help product engineers build better products. Each product can be used as a standalone tool, but they’re designed to level-up when you put them together.

In particular, I’ve started on the Feature Flags team. Yesterday was my first day of onboarding and so far I really like my team. Today is day two and I’ve already submitted a small fix for my first pull request!

Why PostHog?

When I was looking around at companies, an old buddy from GitHub who worked at PostHog reached out to me and suggested I take a look at this company. He said it reminded him of the good parts of working at GitHub. Their company handbook really impressed me. What it communicates to me is that this is a remote-friendly company that values transparency, autonomy, and trust. It’s a company that treats its employees like adults and tries to minimize overhead.

Not only that, they’ve embraced a lot of employee-friendly practices. For example, a while back my friend Zach wrote about his distaste for the 90 day exercise window. PostHog provides a 10-year window. On top of that, they offer employees double trigger acceleration!

Note: Double trigger acceleration means that if you are let go or forced to leave due to the company being acquired, you receive all of your options at that time. This is a perk usually only offered to executives.

I should mention we’re hiring! Please mention me if you apply. If we’ve worked together, let me know so I can provide feedback internally.

I’m excited to be part of a company that’s small, but growing. The company is at a stage similar to the stage GitHub was at when I joined. This is a team with a strong product engineering culture and I’m excited to contribute what I can and learn from them.

The Challenge

The other part that’s exciting for me is that I’ll be working in a stack that I don’t have a huge amount of experience with. The front-end is React with TypeScript and the back-end is Django with Python. I’ve done a bit of work in all these technologies except Django. However, I believe my experience with ASP.NET MVC will help me pick up Django quickly.

Not to mention, I’ve always taken the stance that I’m a software engineer, not just a .NET developer. Don’t get me wrong, I love working in .NET. But at the same time, I think it’s healthy for me to get production experience in other stacks. It’ll be an area of personal growth.

Not to mention, they don’t quite have a .NET Client SDK yet, so once I get settled in, that’s something I’m interested in getting started on.

The Future

I’ll share more about my experience here as I get settled in. In the meanwhile, wish me luck!
I love using Refit to call web APIs in a nice type-safe manner. Sometimes though, APIs don’t want to cooperate with your strongly-typed hopes. For example, you might run into an API written by a hipster in a beanie, aka a dynamic-type enthusiast. I don’t say that pejoratively. Some of my closest friends write Python and Ruby.

For example, I came across an API that returned a value like this:

```json
{
  "important": true
}
```

No problem, I defined a class like this to deserialize it to:

```csharp
public class ImportantResponse
{
    public bool Important { get; set; }
}
```

And life was good. Until that awful day that the API returned this:

```json
{
  "important": "What is important is subjective to the viewer."
}
```

Damn! This philosophy lesson broke my client. One workaround is to do this:

```csharp
public class ImportantResponse
{
    public JsonElement Important { get; set; }
}
```

It works, but it’s not great. It doesn’t communicate to the consumer that this value can only be a string or a bool. That’s when I remembered an old blog post from my past.

April Fool’s Joke to the Rescue

When I was the Program Manager (PM) for ASP.NET MVC, my colleague and lead developer, Eilon, wrote a blog post entitled “The String or the Cat: A New .NET Framework Library” where he introduced the class StringOr<T>. This class could represent a dual-state value that’s either a string or another type.

The concepts presented here are based on a thought experiment proposed by scientist Erwin Schrödinger. While an understanding of quantum physics will help to understand the new types and APIs, it is not required.

It turned out his blog post was an April Fool’s joke. But the idea stuck with me. And now, here’s a case where I need a real implementation of it. But I’m going to name mine StringOrValue<T>.

A modern StringOrValue

One nice thing about implementing this today is we can leverage modern C# features. Here’s the starting implementation:

```csharp
[JsonConverter(typeof(StringOrValueConverter))]
public readonly struct StringOrValue<T> : IStringOrObject
{
    public StringOrValue(string stringValue)
    {
        StringValue = stringValue;
        IsString = true;
    }

    public StringOrValue(T value)
    {
        Value = value;
        IsValue = true;
    }

    public T? Value { get; }

    public string? StringValue { get; }

    [MemberNotNullWhen(true, nameof(StringValue))]
    public bool IsString { get; }

    [MemberNotNullWhen(true, nameof(Value))]
    public bool IsValue { get; }
}

/// <summary>
/// Internal interface for <see cref="StringOrValue{T}"/>.
/// </summary>
/// <remarks>
/// This is here to make serialization and deserialization easy.
/// </remarks>
[JsonConverter(typeof(StringOrValueConverter))]
internal interface IStringOrObject
{
    bool IsString { get; }

    bool IsValue { get; }

    string? StringValue { get; }

    object? ObjectValue { get; }
}
```

We can use the MemberNotNullWhen attribute to tell the compiler that when IsString is true, StringValue is not null. And when IsValue is true, Value is not null. That way, code like this compiles just fine without raising null warnings:

```csharp
var value = new StringOrValue<int>("Hello");

if (value.IsString)
{
    Console.WriteLine(value.StringValue.Length);
}
```

and

```csharp
var value = new StringOrValue<int>(42);

if (value.IsValue)
{
    Console.WriteLine(value.Value.ToString());
}
```

It also is decorated with the JsonConverter attribute to tell the JSON serializer to use the StringOrValueConverter class to serialize and deserialize this type. I wanted this type to Just Work™. I didn’t want consumers of this class to have to bother with registering a JsonConverterFactory for this type. This also explains why I introduced the internal IStringOrObject interface.
We can’t apply the JsonConverter attribute to an open generic type, so we need a non-generic interface to apply the attribute to. It also makes it easier to write the converter, as you’ll see.

```csharp
/// <summary>
/// Value converter for <see cref="StringOrValue{T}"/>.
/// </summary>
internal class StringOrValueConverter : JsonConverter<IStringOrObject>
{
    public override bool CanConvert(Type typeToConvert)
        => typeToConvert.IsGenericType
           && typeToConvert.GetGenericTypeDefinition() == typeof(StringOrValue<>);

    public override IStringOrObject Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        var targetType = typeToConvert.GetGenericArguments()[0];
        if (reader.TokenType == JsonTokenType.String)
        {
            var stringValue = reader.GetString();
            return stringValue is null
                ? CreateEmptyInstance(targetType)
                : CreateStringInstance(targetType, stringValue);
        }

        var value = JsonSerializer.Deserialize(ref reader, targetType, options);
        return value is null
            ? CreateEmptyInstance(targetType)
            : CreateValueInstance(targetType, value);
    }

    static ConstructorInfo GetEmptyConstructor(Type targetType)
    {
        return typeof(StringOrValue<>)
            .MakeGenericType(targetType)
            .GetConstructor([])
            ?? throw new InvalidOperationException($"No constructor found for StringOrValue<{targetType}>.");
    }

    static ConstructorInfo GetConstructor(Type targetType, Type argumentType)
    {
        return typeof(StringOrValue<>)
            .MakeGenericType(targetType)
            .GetConstructor([argumentType])
            ?? throw new InvalidOperationException($"No constructor found for StringOrValue<{targetType}>.");
    }

    static IStringOrObject CreateEmptyInstance(Type targetType)
    {
        var ctor = GetEmptyConstructor(targetType);
        return (IStringOrObject)ctor.Invoke([]);
    }

    static IStringOrObject CreateStringInstance(Type targetType, string value)
    {
        var ctor = GetConstructor(targetType, typeof(string));
        return (IStringOrObject)ctor.Invoke([value]);
    }

    static IStringOrObject CreateValueInstance(Type targetType, object value)
    {
        var ctor = GetConstructor(targetType, targetType);
        return (IStringOrObject)ctor.Invoke([value]);
    }

    public override void Write(Utf8JsonWriter writer, IStringOrObject value, JsonSerializerOptions options)
    {
        if (value.IsString)
        {
            writer.WriteStringValue(value.StringValue);
        }
        else if (value.IsValue)
        {
            JsonSerializer.Serialize(writer, value.ObjectValue, options);
        }
        else
        {
            writer.WriteNullValue();
        }
    }
}
```

In the actual implementation of StringOrValue<T>, I implemented IEquatable<T> and IEquatable<StringOrValue<T>>, and added implicit operators:

```csharp
public static implicit operator StringOrValue<T>(string stringValue) => new(stringValue);
public static implicit operator StringOrValue<T>(T value) => new(value);
```

This allows you to write code like this:

```csharp
StringOrValue<int> valueAsString = "Hello";
StringOrValue<int> valueAsNumber = 42;

Assert.Equals("Hello", valueAsString);
Assert.Equals(42, valueAsNumber);
```

So with this implementation in place, I can go back to the original example and write this:

```csharp
public class ImportantResponse
{
    public StringOrValue<bool> Important { get; set; }
}
```

And now I can handle both cases:

```csharp
var response = JsonSerializer.Deserialize<ImportantResponse>(json)
    ?? throw new InvalidOperationException("Deserialization failed.");

if (response.Important.IsValue)
{
    if (response.Important.Value)
    {
        Console.WriteLine("It's important!");
    }
    else
    {
        Console.WriteLine("It's not important.");
    }
}
else
{
    Console.WriteLine(response.Important.StringValue);
}
```

It’s time to go shopping for a beanie! Here’s the full implementation for those interested in using this in your own projects!
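If you want to sanity-check the converter, a quick round trip through System.Text.Json with both payload shapes is enough. This sketch assumes the StringOrValue<bool> version of ImportantResponse shown above and uses web defaults so the lowercase "important" property binds; the output comments are what I’d expect, not captured output:

```csharp
using System;
using System.Text.Json;

var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);

var boolResponse = JsonSerializer.Deserialize<ImportantResponse>(
    """{ "important": true }""", options)!;
Console.WriteLine(boolResponse.Important.IsValue);    // True
Console.WriteLine(boolResponse.Important.Value);      // True

var stringResponse = JsonSerializer.Deserialize<ImportantResponse>(
    """{ "important": "What is important is subjective to the viewer." }""", options)!;
Console.WriteLine(stringResponse.Important.IsString); // True
Console.WriteLine(stringResponse.Important.StringValue);

// Each value serializes back out in the same shape it came in.
Console.WriteLine(JsonSerializer.Serialize(boolResponse, options));   // {"important":true}
Console.WriteLine(JsonSerializer.Serialize(stringResponse, options));
```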
The career ladder is a comforting fiction we’re sold as we embark on our careers: Start as Junior, climb to Senior, then Principal, Director, and VP. One day, you defeat the final boss, receive a key to the executive bathroom, and join the C-suite. You’ve made it!

If you’re lucky, your company supports a tall Individual Contributor (IC) ladder. But often, the next rung swaps your hoodie for a power shirt, suspenders, and management duties. My first job followed this script: developer to team lead to Senior Manager. I was hustling up that ladder.

But life isn’t linear. Careers are more like Chutes and Ladders (which the Brits know as Snakes and Ladders, but that messes up my metaphor). Lucky breaks shoot you up; surprises send you sliding down. Those chutes? Not failures. Opportunities.

Take my journey. I took a chute from my first job back to an IC role at my second. Climbed back up into management. Took a chute to Microsoft as a Senior Program Manager. Then came another chute: GitHub. I started as a developer, became a manager, then Director of Engineering. After that, I co-founded a startup, took the CTO title, but spent most of my time writing code. It looked like the classic ladder climb, but in startups, titles are smoke and mirrors. So it only counted as a step up if we succeeded. Spoiler: we didn’t.

Ladders Are Overrated

Ladders are narrow and rigid. Titles—while shiny—hide what matters: the work. They’re bumper stickers for your career. Nice for signaling, but they don’t tell the whole story. As Director, I mentored teams and focused on broad initiatives. But over time, I started to miss the tech. The longer I stayed out of writing production code, the more disconnected I felt. As a CTO, I got back to coding and loved it. When the startup failed, I promised myself at least a year off to reflect before jumping into something new.

Growth on the Slide

Over a year later, I’ve been thinking about my next move. It turns out nobody will pay me to be a man of leisure. Conventional wisdom (and ego) says aim higher: VP of Software Development or CTO at an established company. The irony of big titles is they mean more power at work, but less power over your time. For me, my recent life circumstances make time autonomy among my top priorities. Sure, big titles pay well, but maximizing income isn’t my goal.

Here’s what I realized: By my own definition, I’ve succeeded. I’ve been part of great teams, built great products, and helped grow companies. I have nothing left to prove. I don’t need a lofty title or a giant paycheck (but I wouldn’t turn one down either).

So, once again, I’m setting aside my pride and stepping off the ladder. Next year on January 6th I start a new IC role. Details to come, but I’m thrilled to be back in the trenches, building and learning. It’s a place that treats its employees like adults and gives me the autonomy to structure my day as I see fit.

This isn’t about rejecting leadership. It’s about recalibrating. Leadership is broad. It’s guiding organizations or leading by example. I think it’s healthy—even advantageous—to bounce between IC and management roles over the course of a career. Life changes. I might return to management someday. Or not. The point is to stay open to what fits now.

Your Move

If you’re staring at a chute, wondering if stepping off the ladder will hurt you, consider this: maybe it’s not a setback. Maybe it’s a shortcut to what you really want. Careers aren’t about perfect titles.
They’re about collecting experiences, relationships, and skills that shape you. Sometimes, the most important moves don’t look like progress—until you’re somewhere unexpected, doing work that matters. Spin the dial. Take the slide. Even in Chutes and Ladders, the winner isn’t who climbs highest. It’s who enjoys the game. At least, that’s what I told my friends when they beat me.
Ever look for a recipe online only to scroll through a self-important rambling 10-page essay about a trip to Tuscany that inspired the author to create the recipe? Finally, after wearing out your mouse, trackpad, or Page Down key to scroll to the end, you get to the actual recipe.

I hate those. So I’ll spare you the long scroll and start this post with a git bisect cheat sheet, and then I’ll tell you about my trip to hell that led me to write this post.

```bash
$ git bisect start
$ git bisect bad                # Current version is bad
$ git bisect good v2.6.13-rc2   # v2.6.13-rc2 is known to be good
# Repeat git bisect [good|bad] until you find the commit that introduced the bug
$ git bisect reset
```

The One Where Poor Phil is Stumped

Like Groot, I was stumped. I’m learning Blazor by building a simple app. After a while of working in a feature branch, I decided to test a logged out scenario. When I tried to log back in, the login page was stuck in an infinite redirect loop.

I tried everything I could think of. I found every line of code that did a redirect and put a breakpoint in it; none of them were hit. I allowed anonymous on the login page. I tried playing with authorization policies. No dice. I asked Copilot for help. It offered some support and good advice, but it led nowhere. I even sacrificed two chickens and a goat, but not even the denizens of the seven hells could help me.

I switched back to my main branch to see if the bug was there, and lo and behold it was! That meant this bug had been in the code for a while and I hadn’t noticed because I was always logged in.

Often, when faced with such a bug, you might go on a divide and conquer mission: start removing code you think might be related and see if the bug goes away. But in my case, that would encompass a large search area because I had no idea what the cause was or where to start cutting. It was clear to me that some cross-cutting concern was causing this bug. I needed to find the commit that introduced it to reduce the scope of my search. Enter git bisect.

Git Bisect to the Rescue

From the docs:

This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a “bad” commit that is known to contain the bug, and a “good” commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is “good” or “bad”. It continues narrowing down the range until it finds the exact commit that introduced the change.

The key thing to note here is that it’s a binary search. So even if the span of commits you’re searching is 128 commits, it’ll take at most 7 steps to find the commit that introduced the bug (2^7 = 128).

Here’s how I used it:

```bash
$ git bisect start   # Get the ball rolling.
$ git bisect bad     # The current commit is bad.
```

Now I need to supply the last known good commit. That could be a search of its own, but usually you have a good idea. For example, you might know the last release was good, so you use the tag for that release. In my case, I found the commit, 543ada5, where I first implemented the login page because I know it worked then. Yes, I do test my own code.

```bash
$ git bisect good 543ada5
Bisecting: 7 revisions left to test after this (roughly 3 steps)
[9736e3f90b571bebf512c2acb1f7ef14f3a77df4] Update all the NPMs
```

After calling git bisect good with the known good commit, git bisect picked a commit between the bad and good commit, 9736e3f. I tested that commit and it turns out the bug wasn’t there!
So I told git bisect that commit was good.

```bash
$ git bisect good
Bisecting: 3 revisions left to test after this (roughly 2 steps)
[b9db65316a7f569c3ef9ed1eb4caa2072a6ba5d8] Show guests on Details page
```

After a few more iterations of this, git bisect found the commit that introduced the bug.

```
4e08eb48956b80a7a33987df272d30acb5bd6ee2 is the first bad commit
commit 4e08eb48956b80a7a33987df272d30acb5bd6ee2
Author: Phil Haack
Date:   Thu Oct 10 15:38:21 2024 -0700
```

That commit seemed pretty innocuous, but I did notice something odd. I had made a change in the App.razor file because I was tired of adding the InteractiveServer render mode to nearly every page. It turns out, this change wasn’t exactly wrong, just incomplete. I can save the proper fix for a follow-up post. I’m annoyed that Copilot wasn’t able to offer up the eventual solution, because I found it by googling around.

Now that I found the culprit, I can get back to my original state before running git bisect by calling git bisect reset.

Challenges with Git Bisect

I encourage you to read the docs on git bisect as there are other sub-commands that are important. For example, sometimes a commit cannot be tested, such as a broken build. In that case, you can call git bisect skip to skip that commit.

In practice, I found cases where you have to do a bit of tweaking to get the commit to run. For example, one commit had the following build error:

error NU1903: Warning As Error: Package ‘System.Text.Json’ 8.0.4 has a known high severity vulnerability

At the time that I wrote that commit, everything built fine. Since I only want to build and test locally, I ignored that warning in order to test the commit.

Automating Git Bisect

The reason I bring up these challenges is git bisect has the potential to be automated. You could write a script that builds and tests each commit. If a commit fails to build or test, the script could call git bisect skip for you. For example, it’d be nice to do something like this:

```bash
git bisect run dotnet test
```

This would run dotnet test on each commit and mark it good or bad based on the exit code (a command can also return the special exit code 125 to tell git bisect to skip a commit that can’t be tested).

However, in practice, it doesn’t work as well as you’d like. Commits that were fine when you wrote them might not build any longer. Also, what you really probably want to do is inject a new test to be run during the git bisect process. Not to mention if you have integration tests that hit the database, you’d have to have migrations run up and down during the git bisect process.

I’ve considered building tooling that would solve these problems, but in my experience, so few .NET developers I know make regular use of git bisect that it’s hard to justify the effort. Maybe this post will convince you to add this tool to your repertoire.
C# 11 introduced a new feature - static virtual members in interfaces. The primary motivation for this feature is to support generic math algorithms. The mention of math might make some ignore this feature, but it turns out it can be useful in other scenarios. For example, I was able to leverage this feature to clean up how I register and consume custom config section types.

Custom Config Section

As a refresher, let’s look at custom config sections. Suppose you want to configure an API client in your appSettings.json. You can map the config section to a type. For example, here is an appSettings.json file in one of my projects.

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "OpenAI": {
    "ApiKey": "Set this in User Secrets",
    "OrganizationId": "{Set this to your org id}",
    "Model": "gpt-4",
    "EmbeddingModel": "text-embedding-3-large"
  }
}
```

Rather than going through the IConfiguration API to read each of the “OpenAI” settings one at a time, I prefer to map this to a type.

```csharp
public class OpenAIOptions
{
    public string? ApiKey { get; init; }
    public string? OrganizationId { get; init; }
    public string Model { get; init; } = "gpt-3.5-turbo";
    public string EmbeddingModel { get; init; } = "text-embedding-ada-002";
}
```

In Program.cs, I can configure this mapping.

```csharp
builder.Services.Configure<OpenAIOptions>(builder.Configuration.GetSection("OpenAI"));
```

With this configured, I can inject an IOptions<OpenAIOptions> into any class that’s resolved via Dependency Injection and access the config section properties in a strongly typed manner.

```csharp
using Microsoft.Extensions.Options;

public class OpenAIClient(IOptions<OpenAIOptions> options)
{
    string? ApiKey => options.Value.ApiKey;
    string? Model => options.Value.Model;

    // ...
}
```

Sometimes, you’re in a situation where you can’t inject IOptions<OpenAIOptions> for whatever reason. You can grab it from IConfiguration like so.

```csharp
Configuration.GetSection("OpenAI").Get<OpenAIOptions>()
```

Static Virtual Interfaces Come To Clean Up

This is all fine, but a little repetitive when you have multiple configuration classes. I’d like to build a more convention based approach. This is where static virtual members in interfaces come in handy. First, let’s define an interface for all my configuration sections.

```csharp
public interface IConfigOptions
{
    static abstract string SectionName { get; }
}
```

Notice there’s a static abstract string property named SectionName. This is the static virtual member. Any type that implements this interface has to implement a static SectionName property. Now I’m going to implement that interface in my configuration class.

```csharp
public class OpenAIOptions : IConfigOptions
{
    public static string SectionName => "OpenAI";

    public string? ApiKey { get; init; }
    public string? OrganizationId { get; init; }
    public string Model { get; init; } = "gpt-3.5-turbo";
    public string EmbeddingModel { get; init; } = "text-embedding-ada-002";
}
```

With that in place, I can implement an extension method to access the SectionName when registering a configuration section type.

```csharp
public static class OptionsExtensions
{
    public static IHostApplicationBuilder Configure<TOptions>(this IHostApplicationBuilder builder)
        where TOptions : class, IConfigOptions
    {
        var section = builder.Configuration.GetSection(TOptions.SectionName);
        builder.Services.Configure<TOptions>(section);
        return builder;
    }
    public static TOptions? GetConfigurationSection<TOptions>(this IHostApplicationBuilder builder)
        where TOptions : class, IConfigOptions
    {
        return builder.Configuration
            .GetSection(TOptions.SectionName)
            .Get<TOptions>();
    }
}
```

Now, with this method, I can register a configuration section like so:

```csharp
builder.Configure<OpenAIOptions>();
```

When you have several configuration sections to configure, the registration code looks nice and clean. For example, in one project I have a section like this:

```csharp
builder.Configure()
    .Configure()
    .Configure()
    .Configure()
```

Conclusion

The astute reader will notice I didn’t need to use static virtual members here. I could have built a convention-based approach by using reflection to extract the configuration section name from the type name. It’s true, but the code isn’t as tight as this approach. Also, there may be times where you want the type name to be different from the section name.
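To make the convention concrete, here’s what a second section might look like end to end. The GitHubOptions type and its “GitHub” section are hypothetical; they just follow the same IConfigOptions pattern and the Configure extension method from above:

```csharp
using Microsoft.Extensions.Options;

// Hypothetical second config section type following the same convention.
public class GitHubOptions : IConfigOptions
{
    public static string SectionName => "GitHub";

    public string? AppId { get; init; }
    public string? WebhookSecret { get; init; }
}

// Consumers inject the strongly typed options as usual.
public class GitHubClient(IOptions<GitHubOptions> options)
{
    string? AppId => options.Value.AppId;
}
```

Registration in Program.cs then stays a one-liner per section: builder.Configure<OpenAIOptions>().Configure<GitHubOptions>();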
This is a follow-up to my previous post where I compared .NET Aspire to NuGet. In that post, I promised I would follow up with a comparison of using .NET Aspire to add a service dependency to a project versus using Docker. And looky here, I’m following through for once!

The goal of these examples is to look at how much “ceremony” there is to add a service dependency to a .NET project using .NET Aspire versus using Docker. Even though it may not be the “best” example, I chose PostgreSql because it’s often the first service dependency I add to a new project. The example would be stronger if I chose another service dependency in addition to Postgres, but I think you can extrapolate that as well. And I have another project I’m working on that will have more dependencies.

I won’t include installing the pre-requisite tooling as part of the “ceremony” because that’s a one-time thing. I’ll focus on the steps to add the service dependency to a project.

Tooling

I wrote this so that you can follow along and create the projects yourself on your own computer. If you want to follow along, you’ll need the following tools installed.

- .NET 8 SDK
- Docker Desktop
- .NET Aspire tooling

Once you have these installed, you’ll also need to install the Aspire .NET workloads.

```bash
dotnet workload update
dotnet workload install aspire
```

Examples

This section contains two step-by-step walkthroughs to create the example project, once with Docker and once with .NET Aspire. The example project is a simple Blazor web app with a PostgreSQL database. I’ll use Entity Framework Core to interact with the database. I’ll also use the dotnet-ef command line tool to create migrations.

Since we’re creating the same project twice, I’ll put the common code we’ll need right here since both walkthroughs will refer to it. Both projects will make use of a custom DbContext derived class and a simple User entity with an Id and a Name.

```csharp
using Microsoft.EntityFrameworkCore;

namespace HaackDemo.Web;

public class DemoDbContext(DbContextOptions<DemoDbContext> options) : DbContext(options)
{
    public DbSet<User> Users { get; set; }
}

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Also, both projects will have a couple of background services that run on startup:

- DemoDbInitializer - runs migrations and seeds the database on startup (a minimal sketch of what this might look like appears at the end of this post).
- DemoDbInitializerHealthCheck - sets up a health check to report on the status of the database initializer.

I used to run my migrations in Program.cs on startup, but I saw this example in the Aspire samples and thought I’d try it out. I also copied their health check initializer. Both of these need to be registered in Program.cs.

```csharp
builder.Services.AddSingleton<DemoDbInitializer>();
builder.Services.AddHostedService(sp => sp.GetRequiredService<DemoDbInitializer>());

builder.Services.AddHealthChecks()
    .AddCheck<DemoDbInitializerHealthCheck>("DbInitializer", null);
```

With that in place, let’s begin.

Docker

From your root development directory, the following commands will create a new Blazor project and solution.

```bash
md docker-efcore-postgres-demo && cd docker-efcore-postgres-demo
dotnet new blazor -n DockerDemo -o DockerDemo.Web
dotnet new sln --name DockerDemo
dotnet sln add DockerDemo.Web
```

Npgsql is the PostgreSql provider for Entity Framework Core. We need to add it to the web project.

```bash
cd DockerDemo.Web
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
```

We also need the EF Core Design package to support the dotnet-ef command line tool we’re going to use to create migrations.

```bash
dotnet add package Microsoft.EntityFrameworkCore.Design
```

Now we add the docker file to the root directory.
```bash
cd ..
touch docker-compose.yml
```

And paste the following in:

```yaml
version: '3.7'

services:
  postgres:
    container_name: 'postgres'
    image: postgres
    environment:
      # change this for a "real" app!
      POSTGRES_PASSWORD: password
```

Note that the container_name could conflict with other containers on your system. You may need to change it to something unique.

Add the postgres connection string to your appsettings.json.

```json
"ConnectionStrings": {
  "postgresdb": "User ID=postgres;Password=password;Server=postgres;Port=5432;Database=POSTGRES_USER;Integrated Security=true;Pooling=true;"
}
```

Now we can add our custom DbContext derived class and User entity mentioned earlier. We also need to register DemoDbInitializer and DemoDbInitializerHealthCheck in Program.cs as mentioned before.

Next create the initial migration.

```bash
cd ../DockerDemo.Web
dotnet ef migrations add InitialMigration
```

We’re ready to run the app. First, we need to start the Postgres container.

```bash
docker-compose build
docker-compose up
```

Finally, we can hit F5 in Visual Studio/Rider or run dotnet run in the terminal and run our app locally.

Aspire

Once again, from your root development directory, the following commands will create a new Blazor project and solution. But this time, we’ll use the Aspire starter template.

```bash
md aspire-efcore-postgres-demo && cd aspire-efcore-postgres-demo
dotnet new aspire-starter -n AspireDemo -o .
```

This creates three projects:

- AspireDemo.AppHost - The host project that configures the application.
- AspireDemo.Web - The web application project.
- AspireDemo.ApiService - An example web service to get the weather.

We don’t need AspireDemo.ApiService for this example, so we can remove it.

The first thing we want to do is configure the PostgreSql service in the AspireDemo.AppHost project. In a way, this is analogous to how we configured Postgres in the docker-compose.yml file in the Docker example. Switch to the App Host project and install the Aspire.Hosting.PostgreSQL package.

```bash
cd ../AspireDemo.AppHost
dotnet add package Aspire.Hosting.PostgreSQL
```

Add this snippet after the builder is created in Program.cs.

```csharp
var postgres = builder.AddPostgres("postgres");
var postgresdb = postgres.AddDatabase("postgresdb");
```

This creates a Postgres service named postgres and a database named postgresdb. We’ll use the postgresdb reference when we want to connect to the database in the consuming project.

Finally, we update the existing line to include the reference to the database.

```diff
  builder.AddProject<Projects.AspireDemo_Web>("webfrontend")
-     .WithExternalHttpEndpoints();
+     .WithExternalHttpEndpoints()
+     .WithReference(postgresdb);
```

That completes the configuration of the PostgreSql service in the App Host project. Now we can consume this from our web project. Add the PostgreSQL component to the consuming project, aka the web application.

```bash
cd ../AspireDemo.Web
dotnet add package Aspire.Npgsql.EntityFrameworkCore.PostgreSQL
```

We also need the EF Core Design package to support the ef command line tool we’re going to use to create migrations.

```bash
dotnet add package Microsoft.EntityFrameworkCore.Design
```

Once again, we add our custom DbContext derived class, DemoDbContext, along with the User entity to the project. Once we do that, we configure the DemoDbContext in the Program.cs file. Note that we use the postgresdb reference we created in the App Host project.

```csharp
builder.AddNpgsqlDbContext<DemoDbContext>("postgresdb");
```

Then we can create the migrations using the dotnet-ef cli tool.
```bash
cd ../AspireDemo.Web
dotnet ef migrations add InitialMigration
```

Don’t forget to add the DemoDbInitializer and DemoDbInitializerHealthCheck to the project and register them in Program.cs as mentioned before.

Now to run the app, I can hit F5 in Visual Studio/Rider or run dotnet run in the terminal. If you use F5, make sure the AppHost project is selected as the run project.

Conclusions

At the end of both walkthroughs we end up with a simple Blazor web app that uses a PostgreSQL database. Personally, I like the .NET Aspire approach because I didn’t have to mess with connection strings and the F5 to run experience is preserved.

As I mentioned before, I have another project I’m working on that has more dependencies. When I’m done with that port, I think it’ll be a better example of the ceremony surrounding cloud dependencies when using .NET Aspire. In any case, you can see both of these projects I created on GitHub.

- haacked/docker-efcore-postgres-demo
- haacked/aspire-efcore-postgres-demo
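For reference, since both walkthroughs depend on it, here’s a minimal sketch of what the DemoDbInitializer described earlier might look like. The real one in the sample repos comes from the Aspire samples and does more (seeding, reporting status to the health check), so treat this as an approximation rather than the actual implementation:

```csharp
using Microsoft.EntityFrameworkCore;

namespace HaackDemo.Web;

// Runs EF Core migrations once on startup. Registered via AddSingleton +
// AddHostedService as shown earlier in this post.
public class DemoDbInitializer(IServiceScopeFactory scopeFactory) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        using var scope = scopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<DemoDbContext>();

        // Applies pending migrations, creating the database if it doesn't exist yet.
        await db.Database.MigrateAsync(stoppingToken);

        // Seed data would go here.
    }
}
```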
Recently I tweeted,

It’s not a perfect analogy, but .Net Aspire is like NuGet for cloud services. We created NuGet to make it easy to pull in libraries. Before, it took a lot of steps. Nowadays, to use a service like Postgres or Rabbit MQ, takes a lot of steps.

And I’m not just saying that because David Fowler, one of the creators of .NET Aspire, was also most definitely one of the creators of NuGet. But it is his MO to focus on developer productivity.

To understand why I said that, it helps to look at my initial blog post that introduced NuGet. Specifically the section “What does NuGet solve?”.

The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available easy to use manner of discovering and incorporating these libraries into a project.

Back in the dark ages before NuGet, adding a .NET library to your project took more steps than a marching band on speed. NuGet drastically reduced the number of steps to find and depend on a library, no stimulants necessary. In addition to that, NuGet helped support the clone and F5 workflow of local development. The goal with this workflow is that a new developer can clone a repository and then hit F5 in their editor or IDE to run the project locally. Or at least have as few steps between clone and run as possible. .NET Aspire helps with this too.

The State of Cloud Service Dependencies

We’re in a similar situation when it comes to cloud service dependencies. It takes a lot of steps to incorporate a service into a project. In that way, .NET Aspire is similar to NuGet (and in fact leverages NuGet) as it reduces the number of steps to incorporate a cloud service into a project, and helps support the git clone and F5 development model.

As I mentioned, my analogy isn’t perfect because .NET Aspire doesn’t stop at the local development story. Unlike a library dependency, a cloud service dependency has additional requirements. It needs to be provisioned, configured, and deployed. Connection strings need to be securely managed. Sacrifices to the old gods need to be made. Aspire helps with all that except the sacrifices.

Why Not Just Docker?

My tweet led to a debate where someone pointed out that Postgres was a bad example because Aspire adds a lot of ceremony compared to just using Docker. This is a fair point. Docker is a great way to package up a service and run it locally. But it doesn’t help with the other steps of provisioning, configuring, and deploying a service. And even for local development, I found .NET Aspire to have the advantage that it supports the clone and run workflow better than Docker alone.

In a follow-up post to this one, I’ll walk through setting up two simple ASP.NET Core applications that leverage Postgres via EF Core. One will use Docker alone, the other will use Aspire. This provides a point of comparison so you can judge for yourself. I know I don’t have a great track record with timely follow-up posts, but I usually do follow through! This time, I won’t wait 8 years for the follow-up.
When you fail, many people will tell you how failure is a great teacher. And they’re not wrong. But you know what else is a great teacher? Success! And success is a lot less expensive than failure.

About a month ago, my co-founder and I decided to shut down our startup, A Serious Business, Inc., the makers of Abbot. He wrote some beautiful words about it on LinkedIn. Now it’s my turn to write some less than beautiful words about the experience.

Before I get all maudlin about failure, let me say that the experience of building a company from scratch with a close friend and amazing team was one of the most rewarding experiences of my career. We built a great company, team, and product. The only thing we failed to do was the only thing that mattered for a startup — obtain product market fit.

I’ve been very fortunate in my career. I’ve encountered so little failure. Not because I’m so great, but because I haven’t taken huge risks until now. The biggest risk I remember taking was leaving my cush high-paying job at Microsoft in order to join a scrappy little startup for much less pay. It so happens that startup was GitHub. In retrospect, not that much of a risk, though it felt like it back then. So yeah, I’ve been lucky. Very lucky.

Back to the main topic, why didn’t we achieve product market fit? I’ve been reflecting on that question a lot, but I keep running into a stumbling block. By now, most of us are familiar with the idea of survivorship bias as exemplified by the famous airplane image.

For those who don’t know, survivorship bias is the logical error of looking at the survivors (or successes) of a process and drawing conclusions without also considering the failures. During World War II, military researchers studied the distribution of bullet holes from returning aircraft and wanted to add armor to the areas where bullet holes were concentrated. A Hungarian mathematician (Abraham Wald) suggested differently. He noted that the planes that did not return were not being considered. He suggested adding armor to the areas without bullet holes as it’s likely the reason those areas were sparse in bullet holes was because the planes that were hit there did not return.

I think the same bias occurs when examining failures. Perhaps we should call it Failureship Bias. If that term catches on, you’ve heard it here first.

For example, one question I’ve pondered is whether our tech stack held us back. I’ve said many times in the past that the tech stack is the least interesting part of a company. The product market fit is all that matters in the beginning and later on, the company culture, the ability to sell, etc. But to reach product market fit, you have to be able to shotgun features at the wall and see what sticks. Fast experimentation is really key. Chris Wanstrath (aka defunkt) tweeted the following today:

I started learning Rails in 2005 and doing it professionally in 2006. By 2007, when we started GitHub, I had already worked on or made dozens of sites. The velocity was a huge part of the appeal - we could create new features fast!

At A Serious Business, Inc. I chose ASP.NET Core and C# because I knew I would be faster with it than any other stack. I helped build that stack. Even so, there is still much ceremony and paper cuts when it comes to the inner loop of development. It may not seem like much, but that shit adds up. For example, compilation and startup time when making changes compounds. I would love to have ASP.NET Core interpreted while in local development.
Or interpreted while it’s background-compiled. So did the stack hold us back? Again, going back to Failureship Bias, I can’t run a double-blind experiment where another team with the same exact circumstances builds the same exact product using Rails and see if they survive. Maybe some day we can peek into parallel universes and I can see how Bill Maack, the Rubyist, fares. Having said that, there was another team who built a product very similar to ours and seems to be doing well. They also went through the YCombinator program like we did. Is it their stack that helped them? Or did they benefit from the second-mover advantage? Or is it the fact that all three of the co-founders live and work in the same apartment? In their own words, this is all they do. Perhaps all of those are reasons why they succeeded and we did not. Perhaps not. I hope that’s not what it takes because I’m not willing to move into an apartment with my co-founder. I love him, but not that much. And his family and my family probably would object. So what is the lesson I’ve learned from this failure? Well, as I said in the title: it really suuuuuucks. But don’t cry for me Argentina. The experience of building a product with wonderful people was its own reward. And I did gain some ideas that I want to experiment with the next time I start a company. I’m just sober enough to understand that if my next company succeeds, it’s just as likely that it was luck in-the-moment as it was the lessons I learned from this failure. But hey, I’ll take it.
One of my pet peeves is when I’m using a .NET client library that uses internal constructors for its return type. For example, let’s take a look at the Azure.AI.OpenAI nuget package. Now, I don’t mean to single out this package, as this is a common practice. It just happens to be the one I’m using at the moment. It’s an otherwise lovely package. I’m sure the authors are lovely people. Here’s a method that calls the Azure Open AI service to get completions. Note that this is a simplified version of the actual method for demonstration purposes: public async Task GetCompletionsAsync() { var endpoint = new Uri("https://wouldn't-you-like-to-know.openai.azure.com/"); var client = new Azure.AI.OpenAI.OpenAIClient(endpoint, new DefaultAzureCredential()); var response = await client.GetCompletionsAsync("text-davinci-003", new CompletionsOptions { Temperature = (float)1.0, Prompts = { "Some prompt" }, MaxTokens = 2048, }); return response?.Value ?? throw new Exception("We'll handle this situation later"); } This code works fine. But I have existing code that calls Open AI directly using the OpenAI library. While I work to transition over to Azure, I need to be able to easily switch between the two libraries. So what I really want to do is change this method to return a CompletionResult from the OpenAI library. This is easy enough to do with an extension method to convert a Completions into a CompletionResult. public static CompletionResult ToCompletionResult(this Completions completions) { return new CompletionResult { Completions = completions.Choices.Select(c => new Choice { Text = c.Text, Index = c.Index.GetValueOrDefault(), }).ToList(), Usage = new CompletionUsage { PromptTokens = completions.Usage.PromptTokens, CompletionTokens = (short)completions.Usage.CompletionTokens, TotalTokens = completions.Usage.TotalTokens, }, Model = completions.Model, Id = completions.Id, CreatedUnixTime = completions.Created, }; } But how do I test this? Well, it’d be nice to just “new” up a Completions, call this method on it, and make sure all the properties match up. But you see where this is going. As the beginning of this post foreshadowed, the Completions type only has internal constructors for no good reason I can see. So I can’t easily create a Completions object in my unit tests. Instead, I have to use one of my handy-dandy helper methods for dealing with this sort of paper cut. public static T Instantiate(params object[] args) { var type = typeof(T); Type[] parameterTypes = args.Select(p => p.GetType()).ToArray(); var constructor = type.GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, parameterTypes, null); if (constructor is null) { throw new ArgumentException("The args don't match any ctor"); } return (T)constructor.Invoke(args); } With this method, I can now write a unit test for my extension method. 
[Fact] public void CreatesCompletionResultFromCompletions() { var choices = new[] { Instantiate( "the resulting text", (int?)0.7, Instantiate(), "stop") }; var usage = Instantiate(200, 123, 323); var completion = Instantiate( "some-id", (int?)123245, "text-davinci-003", choices, usage); var result = completion.ToCompletionResult(); Assert.Equal("the resulting text", result.Completions[0].Text); Assert.Equal("text-davinci-003", result.Model); Assert.Equal("some-id", result.Id); Assert.Equal(200, result.Usage.CompletionTokens); Assert.Equal(123, result.Usage.PromptTokens); Assert.Equal(323, result.Usage.TotalTokens); } If you’re wondering how I call the method without having to declare the type the method belongs to, recall that you can import methods with the using static declaration. So this method is part of my ReflectionExtensions class (so original, I know), so I have a using static Serious.ReflectionExtensions; at the top of my unit tests. With this all in place, I can update my original method now: public async Task GetCompletionsAsync() { var endpoint = new Uri("https://wouldn't-you-like-to-know.openai.azure.com/"); var client = new Azure.AI.OpenAI.OpenAIClient(endpoint, new DefaultAzureCredential()); var response = await client.GetCompletionsAsync("text-davinci-003", new CompletionsOptions { Temperature = (float)1.0, Prompts = { "Some prompt" }, MaxTokens = 2048, }); return response?.Value.ToCompletionResult() ?? throw new Exception("We'll handle this situation later"); } So yeah, I can work around the internal constructor pretty easily, but in my mind it’s unnecessary friction. Also, I know a lot of folks are going to tell me I should wrap the entire API with my own data types. Sure, but that doesn’t change the fact that I’m going to want to test the translation from the API’s types to my own types. Not to mention, I wouldn’t have to do this if the data types returned by the API were simple constructable DTOs. For my needs, this is also unnecessary friction. I hope this code helps you work around it the next time you run into this situation.
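As an aside, if you’d rather not hunt for the matching constructor yourself, Activator.CreateInstance can do the non-public constructor lookup for you. This is just an alternative sketch of the same idea (the class name here is made up so it doesn’t clash with my ReflectionExtensions), not anything the Azure SDK requires:

    using System;
    using System.Reflection;

    public static class ReflectionHelpers
    {
        // Same idea as Instantiate<T>, but lets Activator find the non-public constructor.
        public static T InstantiateViaActivator<T>(params object[] args) =>
            (T)(Activator.CreateInstance(
                    typeof(T),
                    BindingFlags.NonPublic | BindingFlags.Instance,
                    binder: null,
                    args,
                    culture: null)
                ?? throw new ArgumentException($"The args don't match any ctor of {typeof(T)}."));
    }

It shares the same limitation as the helper above: a null argument makes the constructor matching unreliable, so pass concrete values in your tests.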
This is the final installment of the adventures of Bill Maack the Hapless Developer (any similarity to me is purely coincidental and a result of pure random chance in an infinite universe). Follow along as Bill continues to improve the reliability of his ASP.NET Core and Entity Framework Core code. If you haven’t read the previous installments, you can find them here: How to Recover from a DbUpdateException With EF Core Why Did That Database Throw That Exception? In the first post, we looked at a background Hangfire job that processed incoming Slack events, and it raised some questions such as: DbContext is not supposed to be thread safe. Why are you allowing your repository method to be executed concurrently from multiple threads? This post addresses that question and more! Part of the confusion lies in the fact that the original example didn’t provide enough context. Let’s take a deeper look at the scenario. Bill works on the team that builds Abbot, a Slack app that helps customer success/support teams keep track of conversations within Slack and support more customers with less effort. The app is built on ASP.NET Core and Entity Framework Core. As a Slack App, it receives events from Slack in the form of HTTP POST requests. A simple ASP.NET MVC controller can handle that. Note that the following code is a paraphrase of the actual code as it leaves out some details such as verifying the Slack request signature. Bill would never skimp on security and definitely validates those Slack signatures. public class SlackController : Controller { readonly AbbotContext _db; readonly ISlackEventParser _slackEventParser; readonly IBackgroundJobClient _backgroundJobClient; // Hangfire public SlackController(AbbotContext db, ISlackEventParser slackEventParser, IBackgroundJobClient backgroundJobClient) { _db = db; _slackEventParser = slackEventParser; _backgroundJobClient = backgroundJobClient; } [HttpPost] public async Task PostAsync() { var slackEvent = await _slackEventParser.ParseAsync(Request); _db.SlackEvents.Add(slackEvent); await _db.SaveChangesAsync(); _backgroundJobClient.Enqueue<SlackEventProcessor>(x => x.ProcessEventAsync(slackEvent.Id)); } } This code is pretty straightforward. Bill parses the incoming Slack event, saves it to the database, and then enqueues it for background processing using Hangfire. When Hangfire is ready to process that event, it uses the ASP.NET Core dependency injection container to create an instance of SlackEventProcessor and calls the ProcessEventAsync method. What’s nice about this generic method approach is that SlackEventProcessor itself doesn’t even need to be registered in the container, only all of its dependencies need to be registered. Here’s the SlackEventProcessor class that handles the background processing. public class SlackEventProcessor { readonly AbbotContext _db; public SlackEventProcessor(AbbotContext db) { _db = db; // AbbotContext derives from DbContext } // This code runs in a background Hangfire job. public async Task ProcessEventAsync(int id) { var nextEvent = (await _db.SlackEvents.FindAsync(id)) ?? throw new InvalidOperationException($"Event not found: {id}"); try { // This does the actual processing of the Slack event. await RunPipelineAsync(nextEvent); } catch (Exception e) { nextEvent.Error = e.ToString(); } finally { nextEvent.Completed = DateTime.UtcNow; await _db.SaveChangesAsync(); } } } The key thing to note here is that in the case of Hangfire, every time Hangfire processes a job, it creates a unit of work (aka a scope) for that job.
The end result is that as long as your DbContext derived instance (in this case AbbotContext) is registered with a lifetime of ServiceLifetime.Scoped, Hangfire will inject a new instance of your DbContext when invoking a job. So the code here doesn’t call any DbContext methods on multiple threads concurrently. We’re OK here in that regard. However, there is an issue with Bill’s code here. I glossed over it before, but the RunPipelineAsync method internally uses dependency injection to resolve a service to handle the Slack event processing. That service depends on AbbotContext. Since this is all running as part of a Hangfire job, it’s all in the same Lifetime scope. What that means is that the AbbotContext instance that is used to retrieve the SlackEvent instance is the same instance that is used to process the event. That’s not good. The AbbotContext instance in SlackEventProcessor should only be responsible for retrieving and updating the SlackEvent instance that it needs to process. It should not be the same instance that is used when running the Slack event processing pipeline. The solution is to create a separate AbbotContext instance for the outer scope. To do that, Bill needs to inject an IDbContextFactory<AbbotContext> into SlackEventProcessor and use that to create a new AbbotContext instance for the outer scope, resulting in: public class SlackEventProcessor { readonly IDbContextFactory<AbbotContext> _dbContextFactory; public SlackEventProcessor(IDbContextFactory<AbbotContext> dbContextFactory) { _dbContextFactory = dbContextFactory; } // This code runs in a background Hangfire job. public async Task ProcessEventAsync(int id) { await using var db = await _dbContextFactory.CreateDbContextAsync(); var nextEvent = (await db.SlackEvents.FindAsync(id)) ?? throw new InvalidOperationException($"Event not found: {id}"); try { // This does the actual processing of the Slack event. // The AbbotContext is injected into the pipeline and is not shared with `SlackEventProcessor`. await RunPipelineAsync(nextEvent); } catch (Exception e) { nextEvent.Error = e.ToString(); } finally { nextEvent.Completed = DateTime.UtcNow; await db.SaveChangesAsync(); } } } The instance of AbbotContext created by the factory will always be a new instance. It won’t be the same instance injected into any dependencies that are resolved by the DI container. This is a pretty straightforward fix, except the first time Bill tried it, it didn’t work. Registering the DbContextFactory Correctly Let’s take a step back and look at how Bill registered the DbContext instance with the DI container. Since Bill is working on an ASP.NET Core application, the recommended way to register the DbContext is to use the AddDbContext extension method on IServiceCollection. services.AddDbContext<AbbotContext>(options => {...}); This sets the ServiceLifetime for the DbContext to ServiceLifetime.Scoped. This means that the DbContext instance is scoped to the current HTTP request. This is the default and recommended behavior for ASP.NET Core applications. We wouldn’t want this to be a ServiceLifetime.Singleton as that would cause issues with concurrent calls to the DbContext which is a big no no. You’ll never guess the name of the method to register a DbContextFactory with the DI container. Yep, it’s AddDbContextFactory. services.AddDbContextFactory<AbbotContext>(options => {...}); Now here’s where it gets tricky.
When Bill ran this code, he ran into an exception that looked something like: Cannot consume scoped service 'Microsoft.EntityFrameworkCore.DbContextOptions`1[AbbotContext]' from singleton 'Microsoft.EntityFrameworkCore.IDbContextFactory`1[AbbotContext]'. What’s happening here is that AddDbContext is not just registering our DbContext instance, it’s also registering the DbContextOptions instance used to create the DbContext instance. The lifetime of DbContextOptions is the same as DbContext, aka ServiceLifetime.Scoped. However, DbContextFactory also needs to consume the DbContextOptions instance, but DbContextFactory has a lifetime of ServiceLifetime.Singleton. As a Singleton, it can’t consume a Scoped service because the Scoped service has a shorter lifetime than the Singleton service. To summarize, DbContext is Scoped while DbContextFactory is Singleton, and they both need a DbContextOptions, which is Scoped by default. Fortunately, there’s a simple solution. Well, it’s simple when you know it; otherwise it’s the kind of thing that makes a Bill want to pull his hair out. The solution is to make DbContextOptions a Singleton as well. Then both DbContext and DbContextFactory can use it. There’s an overload of AddDbContext that accepts a ServiceLifetime specifically for the DbContextOptions, and you can set that to Singleton. So Bill’s final registration code looks like: services.AddDbContextFactory<AbbotContext>(options => {...}); services.AddDbContext<AbbotContext>(options => {...}, optionsLifetime: ServiceLifetime.Singleton); Bill used a named parameter to make it clear what the lifetime is for. So to summarize, DbContext still has a lifetime of Scoped while DbContextFactory and DbContextOptions have a Singleton lifetime. And EF Core is happy and Bill’s code works and is more robust. The End!
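To put the whole registration in one place, here is a sketch of what the relevant part of Program.cs could look like. The UseNpgsql call and the "Abbot" connection string name are assumptions for the example, not the actual Abbot code:

    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);
    var connectionString = builder.Configuration.GetConnectionString("Abbot"); // hypothetical name

    // Singleton factory: lets code like SlackEventProcessor create its own short-lived AbbotContext.
    builder.Services.AddDbContextFactory<AbbotContext>(options => options.UseNpgsql(connectionString));

    // Scoped DbContext for normal request handling, with Singleton options so the factory can consume them too.
    builder.Services.AddDbContext<AbbotContext>(
        options => options.UseNpgsql(connectionString),
        optionsLifetime: ServiceLifetime.Singleton);

The point of the named optionsLifetime argument is exactly what the post describes: the context stays Scoped while its options become Singleton, which keeps both the scoped context and the singleton factory happy.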
Last year I wrote a post, career chutes and ladders, where I proposed that a linear climb to the C-suite is not the only approach to a satisfying career. At the end of the post, I mentioned I was stepping off the ladder to take on an IC role. After over a year of being on a personally funded sabbatical, I started a new job at PostHog as a Senior Product Engineer. This week is my orientation where I get to drink from the firehose once again. What is PostHog? Apart from being a company that seems to really love cute hedgehogs, PostHog is an open-source product analytics platform. They have a set of tools to help product engineers build better products. Each product can be used as a standalone tool, but they’re designed to level-up when you put them together. In particular, I’ve started on the Feature Flags team. Yesterday was my first day of onboarding and so far I really like my team. Today is day two and I’ve already submitted a small fix for my first pull request! Why PostHog? When I was looking around at companies, an old buddy from GitHub who worked at PostHog reached out to me and suggested I take a look at this company. He said it reminded him of the good parts of working at GitHub. Their company handbook really impressed me. What it communicates to me is that this is a remote-friendly company that values transparency, autonomy, and trust. It’s a company that treats its employees like adults and tries to minimize overhead. Not only that, they’ve embraced a lot of employee-friendly practices. For example, a while back my friend Zach wrote about his distaste for the 90 day exercise window. PostHog provides a 10-year window. Not only that, they offer employees double trigger acceleration! Double trigger acceleration, which means if you are let go or forced to leave due to the company being acquired, you receive all of your options at that time. This is a perk usually only offered to executives. I should mention we’re hiring! Please mention me if you apply. If we’ve worked together, let me know so I can provide feedback internally. I’m excited to be part of a company that’s small, but growing. The company is at a stage similar to the stage GitHub was at when I joined. This is a team with a strong product engineering culture and I’m excited to contribute what I can and learn from them. The Challenge The other part that’s exciting for me is that I’ll be working in a stack that I don’t have a huge amount of experience with. The front-end is React with TypeScript and the back-end is Django with Python. I’ve done a bit of work in all these technologies except Django. However, I believe my experience with ASP.NET MVC will help me pick up Django quickly. Not to mention, I’ve always taken the stance that I’m a software engineer, not just a .NET developer. Don’t get me wrong, I love working in .NET. But at the same time, I think it’s healthy for me to get production experience in other stacks. It’ll be an area of personal growth. Not to mention, they don’t quite have a .NET Client SDK yet, so once I get settled in, that’s something I’m interested in getting started on. The Future I’ll share more about my experience here as I get settled in. In the meanwhile, wish me luck!
I love using Refit to call web APIs in a nice type-safe manner. Sometimes though, APIs don’t want to cooperate with your strongly-typed hopes. For example, you might run into an API written by a hipster in a beanie, aka a dynamic-type enthusiast. I don’t say that pejoratively. Some of my closest friends write Python and Ruby. For example, I came across an API that returned a value like this: { "important": true } No problem, I defined a class like this to deserialize it to: public class ImportantResponse { public bool Important { get; set; } } And life was good. Until that awful day that the API returned this: { "important": "What is important is subjective to the viewer." } Damn! This philosophy lesson broke my client. One workaround is to do this: public class ImportantResponse { public JsonElement Important { get; set; } } It works, but it’s not great. It doesn’t communicate to the consumer that this value can only be a string or a bool. That’s when I remembered an old blog post from my past. April Fool’s Joke to the Rescue When I was the Program Manager (PM) for ASP.NET MVC, my colleague and lead developer, Eilon, wrote a blog post entitled “The String or the Cat: A New .NET Framework Library where he introduced the class StringOr. This class could represent a dual-state value that’s either a string or another type. The concepts presented here are based on a thought experiment proposed by scientist Erwin Schrödinger. While an understanding of quantum physics will help to understand the new types and APIs, it is not required. It turned out his blog post was an April Fool’s joke. But the idea stuck with me. And now, here’s a case where I need a real implementation of it. But I’m going to name mine, StringOrValue. A modern StringOrValue One nice thing about implementing this today is we can leverage modern C# features. Here’s the starting implementation: [JsonConverter(typeof(StringOrValueConverter))] public readonly struct StringOrValue : IStringOrObject { public StringOrValue(string stringValue) { StringValue = stringValue; IsString = true; } public StringOrValue(T value) { Value = value; IsValue = true; } public T? Value { get; } public string? StringValue { get; } [MemberNotNullWhen(true, nameof(StringValue))] public bool IsString { get; } [MemberNotNullWhen(true, nameof(Value))] public bool IsValue { get; } } /// /// Internal interface for . /// /// /// This is here to make serialization and deserialization easy. /// [JsonConverter(typeof(StringOrValueConverter))] internal interface IStringOrObject { bool IsString { get; } bool IsValue { get; } string? StringValue { get; } object? ObjectValue { get; } } We can use the MemberNotNullWhen attribute to tell the compiler that when IsString is true, StringValue is not null. And when IsValue is true, Value is not null. That way, code like this compiles just fine without raising null warnings: var value = new StringOrValue("Hello"); if (value.IsString) { Console.WriteLine(value.StringValue.Length); } and var value = new StringOrValue(42); if (value.IsValue) { Console.WriteLine(value.ToString()); } It also is decorated with the JsonConverter attribute to tell the JSON serializer to use the StringOrValueConverter class to serialize and deserialize this type. I wanted this type to Just Work™. I didn’t want consumers of this class have to bother with registering a JsonConverterFactory for this type. This also explains why I introduced the internal IStringOrObject interface. 
We can’t implement the JsonConverter attribute on a open generic type, so we need a non-generic interface to apply the attribute to. It also makes it easier to write the converter as you’ll see. /// /// Value converter for . /// internal class StringOrValueConverter : JsonConverter { public override bool CanConvert(Type typeToConvert) => typeToConvert.IsGenericType && typeToConvert.GetGenericTypeDefinition() == typeof(StringOrValue); public override IStringOrObject Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) { var targetType = typeToConvert.GetGenericArguments()[0]; if (reader.TokenType == JsonTokenType.String) { var stringValue = reader.GetString(); return stringValue is null ? CreateEmptyInstance(targetType) : CreateStringInstance(targetType, stringValue); } var value = JsonSerializer.Deserialize(ref reader, targetType, options); return value is null ? CreateEmptyInstance(targetType) : CreateValueInstance(targetType, value); } static ConstructorInfo GetEmptyConstructor(Type targetType) { return typeof(StringOrValue) .MakeGenericType(targetType). GetConstructor([]) ?? throw new InvalidOperationException($"No constructor found for StringOrValue."); } static ConstructorInfo GetConstructor(Type targetType, Type argumentType) { return typeof(StringOrValue) .MakeGenericType(targetType). GetConstructor([argumentType]) ?? throw new InvalidOperationException($"No constructor found for StringOrValue."); } static IStringOrObject CreateEmptyInstance(Type targetType) { var ctor = GetEmptyConstructor(targetType); return (IStringOrObject)ctor.Invoke([]); } static IStringOrObject CreateStringInstance(Type targetType, string value) { var ctor = GetConstructor(targetType, typeof(string)); return (IStringOrObject)ctor.Invoke([value]); } static IStringOrObject CreateValueInstance(Type targetType, object value) { var ctor = GetConstructor(targetType, targetType); return (IStringOrObject)ctor.Invoke([value]); } public override void Write(Utf8JsonWriter writer, IStringOrObject value, JsonSerializerOptions options) { if (value.IsString) { writer.WriteStringValue(value.StringValue); } else if (value.IsValue) { JsonSerializer.Serialize(writer, value.ObjectValue, options); } else { writer.WriteNullValue(); } } } In the actual implementation of StringOrValue, I implemented IEquatable, IEquatable> and overrode the implicit operators: public static implicit operator StringOrValue(string stringValue) => new(stringValue); public static implicit operator StringOrValue(T value) => new(value); This allows you to write code like this: StringOrValue valueAsString = "Hello"; StringOrValue valueAsNumber = 42; Assert.Equals("Hello", valueAsString); Assert.Equals(42, valueAsNumber); So with this implementation in place, I can go back to the original example and write this: public class ImportantResponse { public StringOrValue Important { get; set; } } And now I can handle both cases: var response = JsonSerializer.Deserialize(json) ?? throw new InvalidOperationException("Deserialization failed."); if (response.Important.IsValue) { if (response.Important.Value) { Console.WriteLine("It's important!"); } else { Console.WriteLine("It's not important."); } } else { Console.WriteLine(response.Important.StringValue); } It’s time to go shopping for a beanie! Here’s the full implementation for those interested in using this in your own projects!
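To see it end to end, here is a small round-trip sketch. It assumes the StringOrValue<T> struct and its converter from the post are in scope, and uses web serializer defaults so the camelCase "important" property name matches up; the console output in the comments is what you would expect under those assumptions:

    using System.Text.Json;

    // Web defaults: camelCase property names and case-insensitive reading.
    var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);

    var fromBool = JsonSerializer.Deserialize<ImportantResponse>("""{"important": true}""", options)!;
    var fromString = JsonSerializer.Deserialize<ImportantResponse>(
        """{"important": "What is important is subjective to the viewer."}""", options)!;

    Console.WriteLine(fromBool.Important.IsValue);    // True
    Console.WriteLine(fromString.Important.IsString); // True

    // Serializing writes back the same shape that was read in.
    Console.WriteLine(JsonSerializer.Serialize(fromBool, options));   // {"important":true}
    Console.WriteLine(JsonSerializer.Serialize(fromString, options)); // {"important":"What is important is subjective to the viewer."}

    // Assumes the StringOrValue<T> struct and converter shown above are available.
    public class ImportantResponse
    {
        public StringOrValue<bool> Important { get; set; }
    }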
The career ladder is a comforting fiction we’re sold as we embark on our careers: Start as Junior, climb to Senior, then Principal, Director, and VP. One day, you defeat the final boss, receive a key to the executive bathroom, and join the C-suite. You’ve made it! If you’re lucky, your company supports a tall Individual Contributor (IC) ladder. But often, the next rung swaps your hoodie for a power shirt, suspenders, and management duties. My first job followed this script: developer to team lead to Senior Manager. I was hustling up that ladder. But life isn’t linear. Careers are more like Chutes and Ladders (which the Brits know as Snakes and Ladders, but that messes up my metaphor). Lucky breaks shoot you up; surprises send you sliding down. Those chutes? Not failures. Opportunities. Take my journey. I took a chute from my first job back to an IC role at my second. Climbed back up into management. Took a chute to Microsoft as a Senior Program Manager. Then came another chute: GitHub. I started as a developer, became a manager, then Director of Engineering. After that, I co-founded a startup, took the CTO title, but spent most of my time writing code. It looked like the classic ladder climb, but in startups, titles are smoke and mirrors. So it only counted as a step up if we succeeded. Spoiler: we didn’t. Ladders Are Overrated Ladders are narrow and rigid. Titles—while shiny—hide what matters: the work. They’re bumper stickers for your career. Nice for signaling, but they don’t tell the whole story. As Director, I mentored teams and focused on broad initiatives. But over time, I started to miss the tech. The longer I stayed out of writing production code, the more disconnected I felt. As a CTO, I got back to coding and loved it. When the startup failed, I promised myself at least a year off to reflect before jumping into something new. Growth on the Slide Over a year later, I’ve been thinking about my next move. It turns out nobody will pay me to be a man of leisure. Conventional wisdom (and ego) says aim higher: VP of Software Development or CTO at an established company. The irony of big titles is they mean more power at work, but less power over your time. For me, my recent life circumstances make time autonomy among my top priorities. Sure, big titles pay well, but maximizing income isn’t my goal. Here’s what I realized: By my own definition, I’ve succeeded. I’ve been part of great teams, built great products, and helped grow companies. I have nothing left to prove. I don’t need a lofty title or a giant paycheck (but I wouldn’t turn one down either). So, once again, I’m setting aside my pride and stepping off the ladder. Next year, on January 6th, I start a new IC role. Details to come, but I’m thrilled to be back in the trenches, building and learning. It’s a place that treats its employees like adults and gives me the autonomy to structure my day as I see fit. This isn’t about rejecting leadership. It’s about recalibrating. Leadership is broad. It’s guiding organizations or leading by example. I think it’s healthy—even advantageous—to bounce between IC and management roles over the course of a career. Life changes. I might return to management someday. Or not. The point is to stay open to what fits now. Your Move If you’re staring at a chute, wondering if stepping off the ladder will hurt you, consider this: maybe it’s not a setback. Maybe it’s a shortcut to what you really want. Careers aren’t about perfect titles.
They’re about collecting experiences, relationships, and skills that shape you. Sometimes, the most important moves don’t look like progress—until you’re somewhere unexpected, doing work that matters. Spin the dial. Take the slide. Even in Chutes and Ladders, the winner isn’t who climbs highest. It’s who enjoys the game. At least, that’s what I told my friends when they beat me.
Ever look for a recipe online only to scroll through a self-important rambling 10-page essay about a trip to Tuscany that inspired the author to create the recipe? Finally, after wearing out your mouse, trackpad, or Page Down key to scroll to the end, you get to the actual recipe. I hate those. So I’ll spare you the long scroll and start this post with a git bisect cheat sheet and then I’ll tell you about my trip to hell that led me to write this post. $ git bisect start $ git bisect bad # Current version is bad $ git bisect good v2.6.13-rc2 # v2.6.13-rc2 is known to be good ... # Repeat git bisect [good|bad] until you find the commit that introduced the bug $ git bisect reset The One Where Poor Phil is Stumped Like Groot, I was stumped. I’m learning Blazor by building a simple app. After a while of working in a feature branch, I decided to test a logged out scenario. When I tried to log back in, the login page was stuck in an infinite redirect loop. I tried everything I could think of. I found every line of code that did a redirect and put a breakpoint in it, but none of them were hit. I allowed anonymous on the login page. I tried playing with authorization policies. No dice. I asked Copilot for help. It offered some support and good advice, but it led nowhere. I even sacrificed two chickens and a goat, but not even the denizens of the seven hells could help me. I switched back to my main branch to see if the bug was there, and lo and behold it was! That meant this bug had been in the code for a while and I hadn’t noticed because I was always logged in. Often, when faced with such a bug, you might go on a divide and conquer mission. Start removing code you think might be related and see if the bug goes away. But in my case, that would encompass a large search area because I had no idea what the cause was or where to start cutting. It was clear to me that some cross-cutting concern was causing this bug. I needed to find the commit that introduced it to reduce the scope of my search. Enter git bisect. Git Bisect to the Rescue From the docs: This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a “bad” commit that is known to contain the bug, and a “good” commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is “good” or “bad”. It continues narrowing down the range until it finds the exact commit that introduced the change. The key thing to note here is that it’s a binary search. So even if the span of commits you’re searching is 128 commits, it’ll take at most 7 steps to find the commit that introduced the bug (2^7 = 128). Here’s how I used it: $ git bisect start # Get the ball rolling. $ git bisect bad # The current commit is bad. Now I need to supply the last known good commit. That could be a search of its own, but usually you have a good idea. For example, you might know the last release was good so you use the tag for that release. In my case, I found the commit, 543ada5, where I first implemented the login page because I know it worked then. Yes, I do test my own code. $ git bisect good 543ada5 Bisecting: 7 revisions left to test after this (roughly 3 steps) [9736e3f90b571bebf512c2acb1f7ef14f3a77df4] Update all the NPMs After calling git bisect good with the known good commit, git bisect picked a commit between the bad and good commit, 9736e3f. I tested that commit and it turns out the bug wasn’t there!
So I told git bisect that commit was good. $ git bisect good Bisecting: 3 revisions left to test after this (roughly 2 steps) [b9db65316a7f569c3ef9ed1eb4caa2072a6ba5d8] Show guests on Details page After a few more iterations of this, git bisect found the commit that introduced the bug. 4e08eb48956b80a7a33987df272d30acb5bd6ee2 is the first bad commit commit 4e08eb48956b80a7a33987df272d30acb5bd6ee2 Author: Phil Haack Date: Thu Oct 10 15:38:21 2024 -0700 That commit seemed pretty innocuous, but I did notice something odd. I made this change in the App.razor file because I was tired of adding render mode InteractiveServer to nearly every page. It turns out, this change wasn’t exactly wrong, just incomplete. I can save the proper fix for a follow-up post. I’m annoyed that Copilot wasn’t able to offer up the eventual solution because I found it by googling around. Now that I found the culprit, I can get back to my original state before running git bisect by calling git bisect reset. Challenges with Git Bisect I encourage you to read the docs on git bisect as there are other sub-commands that are important. For example, sometimes a commit cannot be tested, such as a broken build. In that case, you can call git bisect skip to skip that commit. In practice, I found cases where you have to do a bit of tweaking to get the commit to run. For example, one commit had the following build error: error NU1903: Warning As Error: Package ‘System.Text.Json’ 8.0.4 has a known high severity vulnerability At the time that I wrote that commit, everything built fine. Since I only want to build and test locally, I ignored that warning in order to test the commit. Automating Git Bisect The reason I bring up these challenges is git bisect has the potential to be automated. You could write a script that builds and tests each commit. If a commit fails to build or test, the script could call git bisect skip for you. For example, it’d be nice to do something like this: git bisect run dotnet test This would run dotnet test on each commit and automatically mark the commit good or bad based on the result. However, in practice, it doesn’t work as well as you’d like. Commits that were fine when you wrote them might not build any longer. Also, what you really probably want to do is inject a new test to be run during the git bisect process. Not to mention if you have integration tests that hit the database. You’d have to have migrations run up and down during the process of git bisect. I’ve considered building tooling that would solve these problems, but in my experience, so few .NET developers I know make regular use of git bisect that it’s hard to justify the effort. Maybe this post will convince you to add this tool to your repertoire.
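One detail that helps with the automation story: git bisect run treats an exit code of 125 as “skip this commit,” any other non-zero exit code as “bad,” and zero as “good.” So a small wrapper script can paper over commits that no longer build. Here is a sketch; the test filter name is made up for the example:

$ git bisect start
$ git bisect bad
$ git bisect good 543ada5
$ git bisect run ./bisect-check.sh

Where bisect-check.sh looks something like:

#!/bin/sh
# Exit 125 tells git bisect to skip commits that can't be built.
dotnet build --nologo || exit 125
# Any other non-zero exit marks the commit as bad; zero marks it good.
dotnet test --nologo --filter "FullyQualifiedName~LoginRedirect" || exit 1
exit 0

It doesn’t solve the database migration problem, but it does handle the “this commit doesn’t even compile anymore” case without manual intervention.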
C# 11 introduced a new feature - static virtual members in interfaces. The primary motivation for this feature is to support generic math algorithms. The mention of math might make some ignore this feature, but it turns out it can be useful in other scenarios. For example, I was able to leverage this feature to clean up how I register and consume custom config section types. Custom Config Section As a refresher, let’s look at custom config sections. Suppose you want to configure an API client in your appSettings.json. You can map the config section to a type. For example, here is an appSettings.json file in one of my projects. { "Logging": { "LogLevel": { "Default": "Information", "Microsoft.AspNetCore": "Warning" } }, "AllowedHosts": "*", "OpenAI": { "ApiKey": "Set this in User Secrets", "OrganizationId": "{Set this to your org id}", "Model": "gpt-4", "EmbeddingModel": "text-embedding-3-large" } } Rather than going through the IConfiguration API to read each of the “OpenAI” settings one at a time, I prefer to map this to a type. public class OpenAIOptions { public string? ApiKey { get; init; } public string? OrganizationId { get; init; } public string Model { get; init; } = "gpt-3.5-turbo"; public string EmbeddingModel { get; init; } = "text-embedding-ada-002"; } In Program.cs, I can configure this mapping. builder.Configuration.Configure(builder.Configuration.GetSection("OpenAI")); With this configured, I can inject an IOptions into any class that’s resolved via Dependency Injection and access the config section properties in a strongly typed manner. using Microsoft.Extensions.Options; public class OpenAIClient(IOptions options) { string? ApiKey => options.Value.ApiKey; string? Model => options.Value.Model; // ... } Sometimes, you’re in a situation where you can’t inject IOptions for whatever reason. You can grab it from IConfiguration like so. Configuration.GetSection("OpenAI").Get() Static Virtual Interfaces Come To Clean Up This is all fine, but a little repetitive when you have multiple configuration classes. I’d like to build a more convention based approach. This is where static virtual members in interfaces come in handy. First, let’s define an interface for all my configuration sections. public interface IConfigOptions { static abstract string SectionName { get; } } Notice there’s a static abstract string property named SectionName. This is the static virtual member. Any type that implements this interface has to implement a static SectionName property. Now I’m going to implement that interface in my configuration class. public class OpenAIOptions : IConfigOptions { public static string SectionName => "OpenAI"; public string? ApiKey { get; init; } public string? OrganizationId { get; init; } public string Model { get; init; } = "gpt-3.5-turbo"; public string EmbeddingModel { get; init; } = "text-embedding-ada-002"; } With that in place, I can implement an extension method to access the SectionName when registering a configuration section type. public static class OptionsExtensions { public static IHostApplicationBuilder Configure(this IHostApplicationBuilder builder) where TOptions : class, IConfigOptions { var section = builder.Configuration.GetSection(TOptions.SectionName); builder.Services.Configure(section); return builder; } public static TOptions? 
GetConfigurationSection(this IHostApplicationBuilder builder) where TOptions : class, IConfigOptions { return builder.Configuration .GetSection(TOptions.SectionName) .Get(); } } Now, with this method, I can register a configuration section like so: builder.Configure(); When you have several configuration sections to configure, the registration code looks nice and clean. For example, in one project I have a section like this: builder.Configure() .Configure() .Configure() .Configure() Conclusion The astute reader will notice I didn’t need to use static virtual members here. I could have built a convention-based approach by using reflection to extract the configuration section name from the type name. It’s true, but the code isn’t as tight as this approach. Also, there may be times where you want the type name to be different from the section name.
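To see the moving parts side by side, here is a compact, self-contained sketch of the pattern described above. The names match the post, and the OpenAI section values are the post’s own example; treat it as an illustration rather than the exact source:

    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    public interface IConfigOptions
    {
        // The static virtual member: every options type names its own config section.
        static abstract string SectionName { get; }
    }

    public class OpenAIOptions : IConfigOptions
    {
        public static string SectionName => "OpenAI";
        public string? ApiKey { get; init; }
        public string? OrganizationId { get; init; }
        public string Model { get; init; } = "gpt-3.5-turbo";
        public string EmbeddingModel { get; init; } = "text-embedding-ada-002";
    }

    public static class OptionsExtensions
    {
        public static IHostApplicationBuilder Configure<TOptions>(this IHostApplicationBuilder builder)
            where TOptions : class, IConfigOptions
        {
            // TOptions.SectionName resolves at compile time to the implementing type's property.
            var section = builder.Configuration.GetSection(TOptions.SectionName);
            builder.Services.Configure<TOptions>(section);
            return builder;
        }

        public static TOptions? GetConfigurationSection<TOptions>(this IHostApplicationBuilder builder)
            where TOptions : class, IConfigOptions
            => builder.Configuration.GetSection(TOptions.SectionName).Get<TOptions>();
    }

    // Usage in Program.cs:
    // builder.Configure<OpenAIOptions>();

Because Configure returns the builder, the registrations chain naturally, which is what makes the convention read so cleanly when you have several config section types.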
This is a follow-up to my previous post where I compared .NET Aspire to NuGet. In that post, I promised I would follow up with a comparison of using .NET Aspire to add a service dependency to a project versus using Docker. And looky here, I’m following through for once! The goal of these examples is to look at how much “ceremony” there is to add a service dependency to a .NET project using .NET Aspire versus using Docker. Even though it may not be the “best” example, I chose PostgreSQL because it’s often the first service dependency I add to a new project. The example would be stronger if I chose another service dependency in addition to Postgres, but I think you can extrapolate that as well. And I have another project I’m working on that will have more dependencies. I won’t include installing the prerequisite tooling as part of the “ceremony” because that’s a one-time thing. I’ll focus on the steps to add the service dependency to a project. Tooling I wrote this so that you can follow along and create the projects yourself on your own computer. If you want to follow along, you’ll need the following tools installed. .NET 8 SDK Docker Desktop .NET Aspire tooling Once you have these installed, you’ll also need to install the Aspire .NET workloads. dotnet workload update dotnet workload install aspire Examples This section contains two step-by-step walkthroughs to create the example project, once with Docker and once with .NET Aspire. The example project is a simple Blazor web app with a PostgreSQL database. I’ll use Entity Framework Core to interact with the database. I’ll also use the dotnet-ef command line tool to create migrations. Since we’re creating the same project twice, I’ll put the common code we’ll need right here; both walkthroughs will refer to it. Both projects will make use of a custom DbContext derived class and a simple User entity with an Id and a Name. using Microsoft.EntityFrameworkCore; namespace HaackDemo.Web; public class DemoDbContext(DbContextOptions<DemoDbContext> options) : DbContext(options) { public DbSet<User> Users { get; set; } } public class User { public int Id { get; set; } public string Name { get; set; } } Also, both projects will have a couple of background services that run on startup: DemoDbInitializer - runs migrations and seeds the database on startup. DemoDbInitializerHealthCheck - sets up a health check to report on the status of the database initializer. I used to run my migrations in Program.cs on startup, but I saw this example in the Aspire samples and thought I’d try it out. I also copied their health check initializer. Both of these need to be registered in Program.cs. builder.Services.AddSingleton<DemoDbInitializer>(); builder.Services.AddHostedService(sp => sp.GetRequiredService<DemoDbInitializer>()); builder.Services.AddHealthChecks() .AddCheck<DemoDbInitializerHealthCheck>("DbInitializer", null); With that in place, let’s begin. Docker From your root development directory, the following commands will create a new Blazor project and solution. md docker-efcore-postgres-demo && cd docker-efcore-postgres-demo dotnet new blazor -n DockerDemo -o DockerDemo.Web dotnet new sln --name DockerDemo dotnet sln add DockerDemo.Web Npgsql is the PostgreSQL provider for Entity Framework Core. We need to add it to the web project. cd DockerDemo.Web dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL We also need the EF Core Design package to support the dotnet-ef command line tool we’re going to use to create migrations. dotnet add package Microsoft.EntityFrameworkCore.Design Now we add the Docker Compose file to the root directory.
cd .. touch docker-compose.yml And paste the following in:

version: '3.7'
services:
  postgres:
    container_name: 'postgres'
    image: postgres
    environment:
      # change this for a "real" app!
      POSTGRES_PASSWORD: password

Note that the container_name could conflict with other containers on your system. You may need to change it to something unique. Add the postgres connection string to your appsettings.json. "ConnectionStrings": { "postgresdb": "User ID=postgres;Password=password;Server=postgres;Port=5432;Database=POSTGRES_USER;Integrated Security=true;Pooling=true;" } Now we can add our custom DbContext derived class and User entity mentioned earlier. We also need to register DemoDbInitializer and DemoDbInitializerHealthCheck in Program.cs as mentioned before. Next create the initial migration. cd ../DockerDemo.Web dotnet ef migrations add InitialMigration We’re ready to run the app. First, we need to start the Postgres container. docker-compose build docker-compose up Finally, we can hit F5 in Visual Studio/Rider or run dotnet run in the terminal and run our app locally. Aspire Once again, from your root development directory, the following commands will create a new Blazor project and solution. But this time, we’ll use the Aspire starter template. md aspire-efcore-postgres-demo && cd aspire-efcore-postgres-demo dotnet new aspire-starter -n AspireDemo -o . This creates three projects: AspireDemo.AppHost - The host project that configures the application. AspireDemo.Web - The web application project. AspireDemo.ApiService - An example web service to get the weather. We don’t need AspireDemo.ApiService for this example, so we can remove it. The first thing we want to do is configure the PostgreSQL service in the AspireDemo.AppHost project. In a way, this is analogous to how we configured Postgres in the docker-compose.yml file in the Docker example. Switch to the App Host project and install the Aspire.Hosting.PostgreSQL package. cd ../AspireDemo.AppHost dotnet add package Aspire.Hosting.PostgreSQL Add this snippet after the builder is created in Program.cs. var postgres = builder.AddPostgres("postgres"); var postgresdb = postgres.AddDatabase("postgresdb"); This creates a Postgres service named postgres and a database named postgresdb. We’ll use the postgresdb reference when we want to connect to the database in the consuming project. Finally, we update the existing line to include the reference to the database.

  builder.AddProject("webfrontend")
-     .WithExternalHttpEndpoints();
+     .WithExternalHttpEndpoints()
+     .WithReference(postgresdb);

That completes the configuration of the PostgreSQL service in the App Host project. Now we can consume this from our web project. Add the PostgreSQL component to the consuming project, aka the web application. cd ../AspireDemo.Web dotnet add package Aspire.Npgsql.EntityFrameworkCore.PostgreSQL We also need the EF Core Design package to support the dotnet-ef command line tool we’re going to use to create migrations. dotnet add package Microsoft.EntityFrameworkCore.Design Once again, we add our custom DbContext derived class, DemoDbContext, along with the User entity to the project. Once we do that, we configure the DemoDbContext in the Program.cs file. Note that we use the postgresdb reference we created in the App Host project. builder.AddNpgsqlDbContext<DemoDbContext>("postgresdb"); Then we can create the migrations using the dotnet-ef cli tool.
cd ../AspireDemo.Web dotnet ef migrations add InitialMigration Don’t forget to add the DemoDbInitializer and DemoDbInitializerHealthCheck to the project and register them in Program.cs as mentioned before. Now to run the app, I can hit F5 in Visual Studio/Rider or run dotnet run in the terminal. If you use F5, make sure the AppHost project is selected as the run project. Conclusions At the end of both walkthroughs we end up with a simple Blazor web app that uses a PostgreSQL database. Personally, I like the .NET Aspire approach because I didn’t have to mess with connection strings and the F5 to run experience is preserved. As I mentioned before, I have another project I’m working on that has more dependencies. When I’m done with that port, I think it’ll be a better example of the ceremony surrounding cloud dependencies when using .NET Aspire. In any case, you can see both of these projects I created on GitHub. haacked/docker-efcore-postgres-demo haacked/aspire-efcore-postgres-demo
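Both walkthroughs lean on DemoDbInitializer without showing it. For completeness, here is a minimal sketch of what a hosted initializer along those lines can look like. It is modeled on the registration shown earlier, not the actual Aspire sample code; the seed data is made up and the health-check plumbing is omitted:

    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    namespace HaackDemo.Web;

    public class DemoDbInitializer(IServiceProvider serviceProvider) : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // The initializer is registered as a singleton hosted service, so create a
            // scope to resolve the scoped DemoDbContext.
            using var scope = serviceProvider.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<DemoDbContext>();

            // Apply any pending EF Core migrations, then seed a row if the table is empty.
            await db.Database.MigrateAsync(stoppingToken);
            if (!await db.Users.AnyAsync(stoppingToken))
            {
                db.Users.Add(new User { Name = "Ada" }); // made-up seed data
                await db.SaveChangesAsync(stoppingToken);
            }
        }
    }

Registered via AddSingleton plus AddHostedService as shown above, the same instance can also be queried by a health check to report whether initialization has finished.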
Recently I tweeted, It’s not a perfect analogy, but .Net Aspire is like NuGet for cloud services. We created NuGet to make it easy to pull in libraries. Before, it took a lot of steps. Nowadays, to use a service like Postgres or Rabbit MQ, takes a lot of steps. And I’m not just saying that because David Fowler, one of the creators of .NET Aspire, was also most definitely one of the creators of NuGet. But it is his MO to focus on developer productivity. To understand why I said that, it helps to look at my initial blog post that introduced NuGet. Specifically the section “What does NuGet solve?”. The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available easy to use manner of discovering and incorporating these libraries into a project. Back in the dark ages before NuGet, adding a .NET library to your project took more steps than a marching band on speed. NuGet drastically reduced the number of steps to find and depend on a library, no stimulants necessary. In addition to that, NuGet helped support the clone and F5 workflow of local development. The goal with this workflow is that a new developer can clone a repository and then hit F5 in their editor or IDE to run the project locally. Or at least have as few steps between clone and run as possible. .NET Aspire helps with this too. The State of Cloud Service Dependencies We’re in a similar situation when it comes to cloud service dependencies. It takes a lot of steps to incorporate the service into a project. In that way, .NET Aspire is similar to NuGet (and in fact leverages NuGet) as it reduces the number of steps to incorporate a cloud service into a project, and helps support the git clone and F5 development model. As I mentioned, my analogy isn’t perfect because .NET Aspire doesn’t stop at the local development story. Unlike a library dependency, a cloud service dependency has additional requirements. It needs to be provisioned, configured, and deployed. Connection strings need to be securely managed. Sacrifices to the old gods need to be made. Aspire helps with all that except the sacrifices. Why Not Just Docker? My tweet lead to a debate where someone pointed out that Postgres was a bad example because Aspire adds a lot of ceremony compared to just using Docker. This is a fair point. Docker is a great way to package up a service and run it locally. But it doesn’t help with the other steps of provisioning, configuring, and deploying a service. And even for local development, I found .NET Aspire to have the advantage that it supports the clone and run workflow better than Docker alone. In a follow-up post to this one, I’ll walk through setting up two simple asp.net core applications that leverage Postgres via EF Core. One will use Docker alone, the other will use Aspire. This provides a point of comparison so you can judge for yourself. I know I don’t have a great track record with timely follow-up posts, but I usually do follow through! This time, I won’t wait 8 years for the follow-up.
When you fail, many people will tell you how failure is a great teacher. And they’re not wrong. But you know what else is a great teacher? Success! And success is a lot less expensive than failure. About a month ago, my co-founder and I decided to shut down our startup, A Serious Business, Inc., the makers of Abbot. He wrote some beautiful words about it on LinkedIn. Now it’s my turn to write some less than beautiful words about the experience. Before I get all maudlin about failure, let me say that the experience of building a company from scratch with a close friend and amazing team was one of the most rewarding experiences of my career. We built a great company, team, and product. The only thing we failed to do was the only thing that mattered for a startup — obtain product market fit. I’ve been very fortunate in my career. I’ve encountered so little failure. Not because I’m so great, but because I haven’t taken huge risks until now. The biggest risk I remember taking was leaving my cush high-paying job at Microsoft in order to join a scrappy little startup for much less pay. It so happens that startup was GitHub. In retrospect, not that much of a risk, though it felt like it back then. So yeah, I’ve been lucky. Very lucky. Back to the main topic, why didn’t we achieve product market fit? I’ve been reflecting on that question a lot, but I keep running into a stumbling block. By now, most of us are familiar with the idea of survivorship bias as exemplified by the famous airplane image: For those who don’t know, survivorship bias is the logical error of looking at the survivors (or successes) of a process and drawing conclusions without also considering the failures. During World War II, military researchers studied the distribution of bullet holes from returning aircraft and wanted to add armor to the areas where bullet holes were concentrated. A Hungarian mathematician (Abraham Wald) suggested differently. He noted that the planes that did not return were not being considered. He suggested adding armor to the areas without bullet holes as it’s likely the reason those areas were sparse in bullet holes was because the planes that were hit there did not return. I think the same bias occurs when examining failures. Perhaps we should call it Failureship Bias. If that term catches on, you’ve heard it here first. For example, one question I’ve pondered is whether our tech stack held us back. I’ve said many times in the past that the tech stack is the least interesting part of a company. The product market fit is all that matters in the beginning and later on, the company culture, the ability to sell, etc. But to reach product market fit, you have to be able to shotgun features at the wall and see what sticks. Fast experimentation is really key. Chris Wanstrath (aka defunkt) tweeted the following today: I started learning Rails in 2005 and doing it professionally in 2006. By 2007, when we started GitHub, I had already worked on or made dozens of sites. The velocity was a huge part of the appeal - we could create new features fast! At A Serious Business, Inc. I chose ASP.NET Core and C# because I knew I would be faster with it than any other stack. I helped build that stack. Even so, there is still much ceremony and paper cuts when it comes to the inner loop of development. It may not seem like much, but that shit adds up. For example, compilation and startup time when making changes compounds. I would love to have ASP.NET Core interpreted while in local development. 
Or interpreted while it’s background compiled. So did the stack hold us back? Again, going back to Failurship Bias, I can’t run a double blind experiment where another team with the same exact circumstances builds the same exact product using Rails and see if they survive. Maybe some day we can peek into parallel universes and I can see how Bill Maack, the Rubyist, fares. Having said that, there was another team who built a product very similar to ours and seems to be doing well. They also went through the YCombinator program like we did. Is it their stack that helped them? Or did they benefit from the second-mover advantage? Or is it the fact that all three of the co-founders live and work in the same apartment. In their own words, this is all they do. Perhaps all of those are reasons why they succeeded and we did not. Perhaps not. I hope that’s not what it takes because I’m not willing to move into an apartment with my co-founder. I love him, but not that much. And his family and my family probably would object. So what is the lesson I’ve learned from this failure? Well as I said in the title. It really suuuuuucks. But don’t cry for me Argentina. The experience of building a product with wonderful people was its own reward. And I did gain some ideas that I want to experiment with the next time I start a company. I’m just sober enough to understand that if my next company succeeds, it’s just as likely that it was luck in-the-moment as it was the lessons I learned from this failure. But hey, I’ll take it.
One of my pet peeves is when I’m using a .NET client library that uses internal constructors for its return type. For example, let’s take a look at the Azure.AI.OpenAI nuget package. Now, I don’t mean to single out this package, as this is a common practice. It just happens to be the one I’m using at the moment. It’s an otherwise lovely package. I’m sure the authors are lovely people. Here’s a method that calls the Azure Open AI service to get completions. Note that this is a simplified version of the actual method for demonstration purposes: public async Task GetCompletionsAsync() { var endpoint = new Uri("https://wouldn't-you-like-to-know.openai.azure.com/"); var client = new Azure.AI.OpenAI.OpenAIClient(endpoint, new DefaultAzureCredential()); var response = await client.GetCompletionsAsync("text-davinci-003", new CompletionsOptions { Temperature = (float)1.0, Prompts = { "Some prompt" }, MaxTokens = 2048, }); return response?.Value ?? throw new Exception("We'll handle this situation later"); } This code works fine. But I have existing code that calls Open AI directly using the OpenAI library. While I work to transition over to Azure, I need to be able to easily switch between the two libraries. So what I really want to do is change this method to return a CompletionResult from the OpenAI library. This is easy enough to do with an extension method to convert a Completions into a CompletionResult. public static CompletionResult ToCompletionResult(this Completions completions) { return new CompletionResult { Completions = completions.Choices.Select(c => new Choice { Text = c.Text, Index = c.Index.GetValueOrDefault(), }).ToList(), Usage = new CompletionUsage { PromptTokens = completions.Usage.PromptTokens, CompletionTokens = (short)completions.Usage.CompletionTokens, TotalTokens = completions.Usage.TotalTokens, }, Model = completions.Model, Id = completions.Id, CreatedUnixTime = completions.Created, }; } But how do I test this? Well, it’d be nice to just “new” up a Completions, call this method on it, and make sure all the properties match up. But you see where this is going. As the beginning of this post foreshadowed, the Completions type only has internal constructors for no good reason I can see. So I can’t easily create a Completions object in my unit tests. Instead, I have to use one of my handy-dandy helper methods for dealing with this sort of paper cut. public static T Instantiate(params object[] args) { var type = typeof(T); Type[] parameterTypes = args.Select(p => p.GetType()).ToArray(); var constructor = type.GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, parameterTypes, null); if (constructor is null) { throw new ArgumentException("The args don't match any ctor"); } return (T)constructor.Invoke(args); } With this method, I can now write a unit test for my extension method. 
[Fact]
public void CreatesCompletionResultFromCompletions()
{
    var choices = new[]
    {
        Instantiate<Choice>(
            "the resulting text",
            (int?)0.7,
            Instantiate<CompletionsLogProbabilityModel>(),
            "stop")
    };
    var usage = Instantiate<CompletionsUsage>(200, 123, 323);
    var completion = Instantiate<Completions>(
        "some-id",
        (int?)123245,
        "text-davinci-003",
        choices,
        usage);

    var result = completion.ToCompletionResult();

    Assert.Equal("the resulting text", result.Completions[0].Text);
    Assert.Equal("text-davinci-003", result.Model);
    Assert.Equal("some-id", result.Id);
    Assert.Equal(200, result.Usage.CompletionTokens);
    Assert.Equal(123, result.Usage.PromptTokens);
    Assert.Equal(323, result.Usage.TotalTokens);
}

If you’re wondering how I call the method without having to specify the type it belongs to, recall that you can import methods with a using static declaration. This method is part of my ReflectionExtensions class (so original, I know), so I have a using static Serious.ReflectionExtensions; at the top of my unit tests.

With this all in place, I can update my original method:

public async Task<CompletionResult> GetCompletionsAsync()
{
    var endpoint = new Uri("https://wouldn't-you-like-to-know.openai.azure.com/");
    var client = new Azure.AI.OpenAI.OpenAIClient(endpoint, new DefaultAzureCredential());

    var response = await client.GetCompletionsAsync(
        "text-davinci-003",
        new CompletionsOptions
        {
            Temperature = (float)1.0,
            Prompts = { "Some prompt" },
            MaxTokens = 2048,
        });

    return response?.Value.ToCompletionResult()
        ?? throw new Exception("We'll handle this situation later");
}

So yeah, I can work around the internal constructor pretty easily, but in my mind it’s unnecessary friction. Also, I know a lot of folks are going to tell me I should wrap the entire API with my own data types. Sure, but that doesn’t change the fact that I’m going to want to test the translation from the API’s types to my own types. Not to mention, I wouldn’t have to do this if the data types returned by the API were simple constructable DTOs. For my needs, this is also unnecessary friction. I hope this code helps you work around it the next time you run into this situation.
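As an aside, here’s a minimal, self-contained sketch of how that using static import works in a test file. The Widget type is made up purely for illustration; it stands in for any library type that hides its constructors, and the only real pieces are xUnit and the Instantiate<T> helper from my Serious.ReflectionExtensions class above.

// WidgetTests.cs — a sketch. Widget is a hypothetical type, not from any library above.
using Xunit;
using static Serious.ReflectionExtensions; // brings Instantiate<T> into scope

public class Widget
{
    // Deliberately internal, to mimic the annoying libraries.
    internal Widget(string name) => Name = name;
    public string Name { get; }
}

public class WidgetTests
{
    [Fact]
    public void CanInstantiateTypeWithInternalConstructor()
    {
        // No class prefix needed on Instantiate thanks to the using static declaration.
        var widget = Instantiate<Widget>("hello");

        Assert.Equal("hello", widget.Name);
    }
}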
This is the final installment of the adventures of Bill Maack the Hapless Developer (any similarity to me is purely coincidental and a result of pure random chance in an infinite universe). Follow along as Bill continues to improve the reliability of his ASP.NET Core and Entity Framework Core code. If you haven’t read the previous installments, you can find them here: How to Recover from a DbUpdateException With EF Core and Why Did That Database Throw That Exception?

In the first post, we looked at a background Hangfire job that processed incoming Slack events, and it raised some questions such as: “DbContext is not supposed to be thread safe. Why are you allowing your repository method to be executed concurrently from multiple threads?”

This post addresses that question and more! Part of the confusion lies in the fact that the original example didn’t provide enough context. Let’s take a deeper look at the scenario. Bill works on the team that builds Abbot, a Slack app that helps customer success/support teams keep track of conversations within Slack and support more customers with less effort. The app is built on ASP.NET Core and Entity Framework Core. As a Slack App, it receives events from Slack in the form of HTTP POST requests. A simple ASP.NET MVC controller can handle that. Note that the following code is a paraphrase of the actual code, as it leaves out some details such as verifying the Slack request signature. Bill would never skimp on security and definitely validates those Slack signatures.

public class SlackController : Controller
{
    readonly AbbotContext _db;
    readonly ISlackEventParser _slackEventParser;
    readonly IBackgroundJobClient _backgroundJobClient; // Hangfire

    public SlackController(
        AbbotContext db,
        ISlackEventParser slackEventParser,
        IBackgroundJobClient backgroundJobClient)
    {
        _db = db;
        _slackEventParser = slackEventParser;
        _backgroundJobClient = backgroundJobClient;
    }

    [HttpPost]
    public async Task PostAsync()
    {
        var slackEvent = await _slackEventParser.ParseAsync(Request);
        _db.SlackEvents.Add(slackEvent);
        await _db.SaveChangesAsync();

        _backgroundJobClient.Enqueue<SlackEventProcessor>(x => x.ProcessEventAsync(slackEvent.Id));
    }
}

This code is pretty straightforward. Bill parses the incoming Slack event, saves it to the database, and then enqueues it for background processing using Hangfire. When Hangfire is ready to process that event, it uses the ASP.NET Core dependency injection container to create an instance of SlackEventProcessor and calls the ProcessEventAsync method. What’s nice about this generic method approach is that SlackEventProcessor itself doesn’t even need to be registered in the container; only its dependencies need to be registered.

Here’s the SlackEventProcessor class that handles the background processing.

public class SlackEventProcessor
{
    readonly AbbotContext _db;

    public SlackEventProcessor(AbbotContext db)
    {
        _db = db; // AbbotContext derives from DbContext
    }

    // This code runs in a background Hangfire job.
    public async Task ProcessEventAsync(int id)
    {
        var nextEvent = (await _db.SlackEvents.FindAsync(id))
            ?? throw new InvalidOperationException($"Event not found: {id}");
        try
        {
            // This does the actual processing of the Slack event.
            await RunPipelineAsync(nextEvent);
        }
        catch (Exception e)
        {
            nextEvent.Error = e.ToString();
        }
        finally
        {
            nextEvent.Completed = DateTime.UtcNow;
            await _db.SaveChangesAsync();
        }
    }
}

The key thing to note here is that in the case of Hangfire, every time Hangfire processes a job, it creates a unit of work (aka a scope) for that job.
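For context, here’s roughly what the wiring in Program.cs might look like. This is a sketch, not Abbot’s actual startup code: the connection string names, the SQL Server storage choice, and the SlackEventParser registration are assumptions for illustration. Notice that SlackEventProcessor doesn’t appear anywhere; Hangfire activates it through the container, so only its constructor dependencies need to be registered.

// Program.cs — a sketch with placeholder names, not the real configuration.
using Hangfire;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Scoped by default: one AbbotContext per HTTP request (or per Hangfire job scope).
builder.Services.AddDbContext<AbbotContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Abbot"))); // placeholder provider and name

// Hypothetical registration for the Slack event parser.
builder.Services.AddScoped<ISlackEventParser, SlackEventParser>();

// Hangfire: the client (IBackgroundJobClient) plus a server to process enqueued jobs.
builder.Services.AddHangfire(config =>
    config.UseSqlServerStorage(builder.Configuration.GetConnectionString("Hangfire"))); // placeholder storage
builder.Services.AddHangfireServer();

var app = builder.Build();
app.MapControllers();
app.Run();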
The end result is that as long as your DbContext-derived instance (in this case AbbotContext) is registered with a lifetime of ServiceLifetime.Scoped, Hangfire will inject a new instance of your DbContext when invoking a job. So the code here doesn’t call any DbContext methods on multiple threads concurrently. We’re OK here in that regard.

However, there is an issue with Bill’s code here. I glossed over it before, but the RunPipelineAsync method internally uses dependency injection to resolve a service to handle the Slack event processing, and that service depends on AbbotContext. Since this is all running as part of a Hangfire job, it’s all in the same lifetime scope. What that means is that the AbbotContext instance used to retrieve the SlackEvent instance is the same instance used to process the event. That’s not good. The AbbotContext instance in SlackEventProcessor should only be responsible for retrieving and updating the SlackEvent instance that it needs to process. It should not be the same instance that is used when running the Slack event processing pipeline.

The solution is to create a separate AbbotContext instance for the outer scope. To do that, Bill needs to inject an IDbContextFactory<AbbotContext> into SlackEventProcessor and use that to create a new AbbotContext instance for the outer scope, resulting in:

public class SlackEventProcessor
{
    readonly IDbContextFactory<AbbotContext> _dbContextFactory;

    public SlackEventProcessor(IDbContextFactory<AbbotContext> dbContextFactory)
    {
        _dbContextFactory = dbContextFactory;
    }

    // This code runs in a background Hangfire job.
    public async Task ProcessEventAsync(int id)
    {
        await using var db = await _dbContextFactory.CreateDbContextAsync();

        var nextEvent = (await db.SlackEvents.FindAsync(id))
            ?? throw new InvalidOperationException($"Event not found: {id}");
        try
        {
            // This does the actual processing of the Slack event.
            // The AbbotContext is injected into the pipeline and is not shared with SlackEventProcessor.
            await RunPipelineAsync(nextEvent);
        }
        catch (Exception e)
        {
            nextEvent.Error = e.ToString();
        }
        finally
        {
            nextEvent.Completed = DateTime.UtcNow;
            await db.SaveChangesAsync();
        }
    }
}

The instance of AbbotContext created by the factory will always be a new instance. It won’t be the same instance injected into any dependencies that are resolved by the DI container. This is a pretty straightforward fix, except the first time Bill tried it, it didn’t work.

Registering the DbContextFactory Correctly

Let’s take a step back and look at how Bill registered the DbContext with the DI container. Since Bill is working on an ASP.NET Core application, the recommended way to register the DbContext is to use the AddDbContext extension method on IServiceCollection.

services.AddDbContext<AbbotContext>(options => {...});

This sets the ServiceLifetime for the DbContext to ServiceLifetime.Scoped, which means the DbContext instance is scoped to the current HTTP request. This is the default and recommended behavior for ASP.NET Core applications. We wouldn’t want this to be ServiceLifetime.Singleton, as that would cause issues with concurrent calls to the DbContext, which is a big no-no.

You’ll never guess the name of the method to register a DbContextFactory with the DI container. Yep, it’s AddDbContextFactory.

services.AddDbContextFactory<AbbotContext>(options => {...});

Now here’s where it gets tricky.
When Bill ran this code, he ran into an exception that looked something like:

Cannot consume scoped service 'Microsoft.EntityFrameworkCore.DbContextOptions`1[AbbotContext]' from singleton 'Microsoft.EntityFrameworkCore.IDbContextFactory`1[AbbotContext]'.

What’s happening here is that AddDbContext is not just registering our DbContext instance; it’s also registering the DbContextOptions instance used to create the DbContext instance. The lifetime of DbContextOptions is the same as the DbContext, aka ServiceLifetime.Scoped. However, DbContextFactory also needs to consume that DbContextOptions instance, but DbContextFactory has a lifetime of ServiceLifetime.Singleton. As a Singleton, it can’t consume a Scoped service because the Scoped service has a shorter lifetime than the Singleton service. To summarize, DbContext is Scoped while DbContextFactory is Singleton, and they both need a DbContextOptions, which is Scoped by default.

Fortunately, there’s a simple solution. Well, it’s simple when you know it; otherwise, it’s the kind of thing that makes a Bill want to pull his hair out. The solution is to make DbContextOptions a Singleton as well. Then both DbContext and DbContextFactory can use it. There’s an overload to AddDbContext that accepts a ServiceLifetime specifically for the DbContextOptions, and you can set that to Singleton. So Bill’s final registration code looks like:

services.AddDbContextFactory<AbbotContext>(options => {...});
services.AddDbContext<AbbotContext>(options => {...}, optionsLifetime: ServiceLifetime.Singleton);

Bill used a named parameter to make it clear what the lifetime is for. So to summarize, DbContext still has a lifetime of Scoped, while DbContextFactory and DbContextOptions have a Singleton lifetime. EF Core is happy, Bill’s code works, and it’s more robust. The End!
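One last sketch to round out the story. The reason the factory matters is that anything resolved inside the Hangfire job’s scope still receives the scoped AbbotContext, while SlackEventProcessor keeps its factory-created instance to itself. The handler below is hypothetical (the real pipeline’s types aren’t shown in this post), but it illustrates that split.

// Hypothetical pipeline handler resolved by RunPipelineAsync via the container.
// Because it's resolved from the Hangfire job's scope, the AbbotContext injected
// here is the scoped instance, not the one SlackEventProcessor created from the factory.
using System.Threading.Tasks;

public class SlackEventHandler
{
    readonly AbbotContext _db;

    public SlackEventHandler(AbbotContext db)
    {
        _db = db;
    }

    public async Task HandleAsync(SlackEvent slackEvent)
    {
        // Work against the scoped context. If saving here fails, it doesn't
        // disturb SlackEventProcessor's bookkeeping context, which still records
        // the error and completion time on the SlackEvent row. (Illustrative only.)
        await _db.SaveChangesAsync();
    }
}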