Eye-share at NDC London 2023!

In January 2023 Eye-share sent a cross-departmental delegation of seven to participate in NDC London!

Gaute Løken | 02.02.2023


We made our way past Westminster Abbey to the Queen Elizabeth II Centre, where a labyrinth of a floorplan awaited us. We quickly learned that the elevators were for going from top to bottom and the stairs for moving a floor or two, unless you enjoyed waiting in line. As per usual, there was a multitude of stands inviting you to scan your conference pass, granting the privilege of receiving spam in exchange for a chance of winning some swag. Stations for food and drink were open on multiple floors throughout, and we felt welcome and well taken care of.

Keynote: Day 1 – Accessibility & AI

We were welcomed and reminded to be inclusive and non-judgmental. And I’ve got to say, even though developer events still draw more men than women, it does seem to be getting a little more diverse. I wish we were able to attract more women developers to Eye-share so that we could be part of the solution!

In the keynote, Alex Dunn showed us how we might build better accessibility tools through the use of AI. I was a little surprised at how many intermediate steps he was willing to take.

It felt a bit obvious that performance was never going to be sufficient using the cloud services he was cobbling together. But I suppose it did showcase some of the myriad tools available to us, and some of his techniques would certainly work better for other applications. Regardless, he settled on a locally run ONNX machine learning model to control a game using facial expressions and head movements. Pretty cool, and certainly a game changer for some!


eye-share Workflow has an involved permission-based system: users gain access to various resources and documents based on which access groups they are part of, and on which companies, in a hierarchy of companies, they are associated with, either directly or through their access groups’ association with the same hierarchy.

We wrote this system imperatively in .NET Framework many years ago. Nowadays the .NET authorization primitives seem easier to extend. Jason Taylor showed us a variation of what we have with an excellent talk on “Developing Flexible Authorization Capabilities in ASP.NET Core”.

I feel like this is pretty much confirmation that our authorization system is conceptually well made and perhaps was even a little ahead of its time.

He did provide alternative naming for our constructs, which might make it easier to communicate our concepts to others: talking about authorization policies and permissions might be better than talking about access groups and functions.
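As a rough sketch of how that vocabulary maps onto the ASP.NET Core authorization primitives (all names here, like the `"Invoices.Approve"` permission and the policy name, are hypothetical illustrations, not our actual constructs):

```csharp
using Microsoft.AspNetCore.Authorization;

// Hypothetical: a named permission modeled as an authorization requirement.
public record PermissionRequirement(string Permission) : IAuthorizationRequirement;

public class PermissionHandler : AuthorizationHandler<PermissionRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, PermissionRequirement requirement)
    {
        // Succeed only when the user carries the required permission claim.
        if (context.User.HasClaim("permission", requirement.Permission))
            context.Succeed(requirement);
        return Task.CompletedTask;
    }
}

// In Program.cs: a policy bundles one or more permission requirements.
builder.Services.AddSingleton<IAuthorizationHandler, PermissionHandler>();
builder.Services.AddAuthorization(options =>
    options.AddPolicy("CanApproveInvoices", policy =>
        policy.AddRequirements(new PermissionRequirement("Invoices.Approve"))));
```

An endpoint would then opt in with `[Authorize(Policy = "CanApproveInvoices")]`, which is arguably easier to communicate than a bespoke access-group vocabulary.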


Philippe De Ryck picked up where Jason left off by talking about “Getting API security right”. I was introduced to the concept of Broken Object Level Authorization (BOLA), and I can rest easy, since our permission-based authorization system ensures that only users who should have access to read and/or write specific objects, or parts of objects, are allowed to do so.

However, he did make the point that if it’s hard to understand what our security policy is due to the policy being fragmented in too many places, we should probably refactor that. I don’t think ours is easy to read, and we should probably take some steps to make this better, but it’s not terrible right now.

Our integration tests require a somewhat wordy Moq setup, which makes them a bit hard to maintain. We should look at simplifying this with AutoMocker, which I was not familiar with. Furthermore, we should definitely add tests for the OWASP Top Ten.

This is, of course, covered by penetration testing, but I’d feel better if we verified it automatically on every release. In ASP.NET Core there are helpers for creating an HttpClient against a Web-API controller using WebApplicationFactory, which should make this so much easier!
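A minimal sketch of that style of test (the endpoint path and the `Program` entry point are assumptions for illustration, not our actual API):

```csharp
using System.Net;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Hypothetical smoke test: unauthenticated requests must be rejected,
// guarding against BOLA-style regressions on every release.
public class ApiSecurityTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public ApiSecurityTests(WebApplicationFactory<Program> factory)
        => _factory = factory;

    [Fact]
    public async Task Get_Invoice_Without_Token_Returns_401()
    {
        // CreateClient spins up the app in memory; no deployed server needed.
        var client = _factory.CreateClient();
        var response = await client.GetAsync("/api/invoices/1");
        Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
    }
}
```

Tests like this run in-process on every build, complementing rather than replacing the periodic penetration tests.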


Microservices are all the rage these days, and what role they should play in our architecture as we push towards a more cloud native architecture is under consideration. For me, perhaps the most aha-inducing talk of the conference was “Don’t build a Distributed Monolith: How to Avoid Doing Microservices Completely Wrong” by Jonathan “J.” Tower.

Insights into the pros and cons of a modular monolith versus a microservice architecture are valuable. Our developers are seniors capable of working on any part of our system, so siloing off functionality to create team boundaries doesn’t really appeal to us, and denormalizing our data for different features seems both inefficient and quite costly to develop, considering our feature set and plugin architecture.

We’ve recently split out a few compute-intensive services for working with e-invoices and PDFs, and we’ve realized that doing so isn’t really pulling us towards microservices; it’s just making our monolith more modular, enabling us to maintain and deploy updates to those modules separately from the rest of our application.

Plugins, deployment & hosting

We have a multilayered plugin-based architecture, and making it easier to scale different aspects of eye-share Workflow appears to require a different solution than microservices. Since each of our customers has a tailor-made data model, extended from one of our many integrations and customized specifically for them, scaling out is non-trivial.

My current thinking is that we need to rework our hosting system by adding a metadata database which allows the hosts to pull and load the latest plugins for a given integration and customer on demand. That requires some sort of sandboxing in order to avoid customers affecting each other.

Niels Tanis had a session on sandboxing, diving into how one might create a sandbox now that AppDomains are no longer a thing in .NET. It was more concerned with sandboxing in an input/output (IO) sense than with segmenting multiple parts of a process. We’ll have to explore other means, like sub-processes, or a very robust layer of global exception handling and runtime constraints against stack overflows and the like. It will be a challenge, but I believe it’s what’s necessary if we want to scale down, with multiple customers on one server; scale up, with multiple servers for one customer; and scale different modules orthogonally, all at the same time.
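One likely building block, sketched here under the assumption of one plugin directory per customer, is a collectible AssemblyLoadContext. It gives dependency isolation and unloadability per plugin, though notably not the security or fault boundary that AppDomains once hinted at; that still requires sub-processes:

```csharp
using System.Reflection;
using System.Runtime.Loader;

// Hypothetical per-customer plugin loading: each plugin gets its own
// collectible AssemblyLoadContext so its dependencies cannot collide
// with another customer's, and it can be unloaded when superseded.
public class PluginLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    public PluginLoadContext(string pluginPath) : base(isCollectible: true)
        => _resolver = new AssemblyDependencyResolver(pluginPath);

    protected override Assembly? Load(AssemblyName assemblyName)
    {
        // Resolve against the plugin's own .deps.json; fall back to the host.
        var path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path is null ? null : LoadFromAssemblyPath(path);
    }
}
```

Loading would then be `new PluginLoadContext(path).LoadFromAssemblyPath(path)`, with `Unload()` called once a newer plugin version is pulled from the metadata database.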

Minimal APIs

There were a few sessions on .NET Minimal APIs, and it’s clear that even though we’re looking forward to using them for rapid prototyping, we should keep using our Web-API controllers for eye-share Workflow rather than migrating to Minimal APIs.
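For reference, this is the kind of low-ceremony endpoint that makes Minimal APIs attractive for prototyping, a generic sketch rather than anything from our codebase:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A working HTTP endpoint in one line; no controller class, no attributes.
app.MapGet("/ping", () => Results.Ok("pong"));

app.Run();
```

Great for spikes and prototypes, but it offers no obvious seam for the generic-controller middleware our architecture relies on.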


I try to take the time to refactor my code as part of every task, so having speakers make it explicit that we as developers do have permission to write good code, that it’s part of our job, is a great thing, and it should be repeated at every conference, like Kevlin Henney did in his excellent talk “Refactoring Is Not Just Clickbait”!

We have permission to refactor before moving to the next task. Refactoring is not only about extracting methods and renaming variables, but also looking for patterns in our code and adjusting to make those patterns easier to pick up on by the next reader. We write code for humans to read – the fact that computers can execute that code is just a bonus!


At the end of day 2 there was an open bar, mingling, a mirth-inducing game show mocking JavaScript, and an amazing concert by Dylan Beattie & the Linebreakers. Great fun, with a lot of funny geeky lyrics, impressively supported by video.

Keynote: Day 3 – Web History

Steve Sanderson kicked off day 3 with a history lesson, “Why web tech is like this”, going into how the choices were made that affect all of us working with web technologies today, and leading into what I took to be a call to action: many important decisions were made by corporations, but many things were decided by individual software engineers, and today, that’s us. It’s now up to individual engineers to come up with the next big thing. Go make it, be great! What a wholesome message!


There were a couple of sessions on Roslyn – the .NET compiler services. Stefan Pölz delved into incremental source generators, showing what’s important if you don’t want your IDE to slow down as a result of adding a source generator. A bit of a technical deep dive, but I think it’s great that some of the sessions really look at the details.

Steve Gordon talked about “Writing Code with Code: Getting Started with the Roslyn APIs”. I was waiting for the opportunity to ask a question, but Steve volunteered the answer towards the end, unprompted: there is support for working with a workspace in the Roslyn API, which is something I need in order to trace types and referenced types across a solution for some tooling I want to make. The parts of the Roslyn API typically shown only deal with one file at a time, as is the case for the source generators, I believe.
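As a sketch of what that workspace support looks like (assuming the Microsoft.CodeAnalysis.Workspaces.MSBuild package; the solution path is a placeholder):

```csharp
using Microsoft.CodeAnalysis.MSBuild;

// Open an entire solution rather than a single syntax tree.
using var workspace = MSBuildWorkspace.Create();
var solution = await workspace.OpenSolutionAsync("path/to/MySolution.sln");

foreach (var project in solution.Projects)
{
    // Each compilation exposes the full symbol graph, which is what
    // tracing types and their references across projects requires.
    var compilation = await project.GetCompilationAsync();
    Console.WriteLine($"{project.Name}: {compilation?.SyntaxTrees.Count()} syntax trees");
}
```

In practice you also need `MSBuildLocator.RegisterDefaults()` from the Microsoft.Build.Locator package before creating the workspace, so Roslyn can find a matching MSBuild installation.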


As I mentioned earlier, we rely on a plugin architecture, and because of this we have written custom middleware for generic controllers, resolving and closing those generics based on a couple of query parameters. This greatly reduces boilerplate, since we don’t have to create a lot of explicit copies of Web-API controllers that differ only in their model, data transfer objects (DTOs) and datasource. However, it does mean that we cannot currently generate an OpenAPI spec or swagger endpoint.
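To illustrate the idea, with hypothetical type names and query parameters standing in for our actual code, closing a generic controller at runtime might look roughly like this:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical: one open generic controller reused for every model/DTO pair,
// instead of one hand-written controller per pair.
public class GenericController<TModel, TDto> : ControllerBase
    where TModel : class
    where TDto : class
{
    // CRUD actions written once against TModel and TDto...
}

// Hypothetical middleware step: map query parameters such as
// ?model=Invoice&dto=InvoiceDto to Type instances, close the generic,
// and let the service provider construct it.
Type controllerType = typeof(GenericController<,>)
    .MakeGenericType(modelType, dtoType);
var controller =
    (ControllerBase)ActivatorUtilities.CreateInstance(services, controllerType);
```

Because the closed type only exists at runtime, attribute-scanning tools like Swashbuckle never see it, which is exactly why the OpenAPI generation breaks down.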

At the very end of the conference, Mark Rendle had a talk titled “OpenAPI & .NET: You’re Doing It Wrong”. According to Mark, we should declare our API explicitly, apart from the implementation, as is done in most non-.NET ecosystems, instead of generating the contract from our attributed controllers.

Since we close our generic controllers in many ways, I was hoping that even though ASP.NET doesn’t let us generate an OpenAPI spec where only the model or DTO type varies between endpoints, perhaps OpenAPI itself does?! Unfortunately, Mark assured me after the talk, it does not. However, I got a tip to investigate Microsoft CADL and see if that would help with generating the OpenAPI spec. Alternatively, if we don’t want to leave the .NET code-first paradigm, we could dig into the Swagger package and see if we can reuse some pieces to build out support for our generic controllers.

No matter how we approach it, we would at best get a multitude of similar Swagger endpoints, but duplication is better than nothing. I’m not sure I’m sold on separating the spec from the implementation, though he did make many good supporting points; generating one from the other does have its advantages. Regardless, adding Swagger support in some form would be a good thing, and now we have more avenues to explore in order to reach that goal. The talk certainly gave me a lot to consider.


Of course, there were other interesting topics and sessions as well, but the ones I’ve mentioned were the ones that gave me the most actionable information. And I have probably missed many great sessions; after all, you can’t really watch multiple sessions at a time, and the conference had 6 parallel tracks at any given time. Fortunately, my colleagues covered many of those, so I feel like we’ve returned both happy for the experience and richer in knowledge.

Thanks, NDC Conferences, and until next time!