
Rockford Lhotka’s Blog

@blog.lhotka.net.web.brid.gy

VP, Open Source Creator, Author, Speaker [bridged from https://blog.lhotka.net/ on the web: https://fed.brid.gy/web/blog.lhotka.net ]

25 Followers  |  0 Following  |  2,430 Posts  |  Joined: 27.08.2024

Latest posts by blog.lhotka.net.web.brid.gy on Bluesky

.NET Terminology

I was recently part of a conversation thread online, which reinforced the naming confusion that exists around the .NET (dotnet) ecosystem. I thought I’d summarize my responses to that thread, as it surely can be confusing to a newcomer, or even someone who blinked and missed a bit of time, as things change fast.

## .NET Framework

There is the Microsoft .NET Framework, which is tied to Windows and has been around since 2002 (give or take). It is now considered “mature” and is at version 4.8. We all expect that’s the last version, as it is in maintenance mode. I consider .NET Framework (netfx) to be legacy.

## Modern .NET

There is modern .NET (dotnet), which is cross-platform and isn’t generally tied to any specific operating system. I suppose the term “.NET” encompasses both, but most of us that write and speak in this space tend to use “.NET Framework” for legacy, and “.NET” for modern .NET.

The .NET Framework and modern .NET both have a bunch of sub-components that have their own names too. Subsystems for talking to databases, creating various types of user experience, and much more. Some are tied to Windows, others are cross-platform. Some are legacy, others are modern.

It is important to remember that modern .NET is cross-platform and you can develop and deploy to Linux, Mac, Android, iOS, Windows, and other operating systems. It also supports various CPU architectures, and isn’t tied to x64.

## Modern Terminology

The following table tries to capture most of the major terminology around .NET today.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET (dotnet) 5+ | modern | No | Platform
ASP.NET Core | modern | No | Web Framework
Blazor | modern | No | Web SPA framework
ASP.NET Core MVC | modern | No | Web UI framework
ASP.NET Core Razor Pages | modern | No | Web UI framework
.NET MAUI | modern | No | Mobile/Desktop UI framework
MAUI Blazor Hybrid | modern | No | Mobile/Desktop UI framework
ADO.NET | modern | No | Data access framework
Entity Framework | modern | No | Data access framework
WPF | modern | Yes | Windows UI Framework
Windows Forms | modern | Yes | Windows UI Framework

## Legacy Terminology

And here is the legacy terminology.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET Framework (netfx) 4.8 | legacy | Yes | Platform
ASP.NET | legacy | Yes | Web Framework
ASP.NET Web Forms | legacy | Yes | Web UI Framework
ASP.NET MVC | legacy | Yes | Web UI Framework
Xamarin | legacy (deprecated) | No | Mobile UI Framework
ADO.NET | legacy | Yes | Data access framework
Entity Framework | legacy | Yes | Data access framework
UWP | legacy | Yes | Windows UI Framework
WPF | legacy | Yes | Windows UI Framework
Windows Forms | legacy | Yes | Windows UI Framework

## Messy History

Did I leave out some history? Sure, there’s the whole “.NET Core” thing, and the .NET Core 1.0-3.1 timespan, and .NET Standard (2 versions). Are those relevant in the world right now, today? Hopefully not really! They are cool bits of history, but just add confusion to anyone trying to approach modern .NET today.

## What I Typically Use

What do _I personally_ tend to use these days? I mostly:

* Develop modern dotnet on Windows using mostly Visual Studio, but also VS Code and Rider
* Build my user experiences using Blazor and/or MAUI Blazor Hybrid
* Build my web API services using ASP.NET Core
* Use ADO.NET (often with the open source Dapper) for data access
* Use the open source CSLA .NET for maintainable business logic
* Test on Linux using Ubuntu on WSL
* Deploy to Linux containers on the server (Azure, Kubernetes, etc.)

## Other .NET UI Frameworks

Finally, I would be remiss if I didn’t mention some other fantastic cross-platform UI frameworks based on modern .NET:

* Uno Platform
* Avalonia
* OpenSilver
28.10.2025 18:38 — 👍 0    🔁 0    💬 0    📌 0
Blazor EditForm OnSubmit behavior

I am working on the open-source KidsIdKit app and have encountered some “interesting” behavior with the `EditForm` component and how buttons trigger the `OnSubmit` event.

An `EditForm` is declared similar to this:

```razor
<EditForm Model="CurrentChild" OnSubmit="SaveData">
```

I would expect that any `button` component with `type="submit"` would trigger the `OnSubmit` handler.

```razor
<button class="btn btn-primary" type="submit">Save</button>
```

I would also expect that any `button` component _without_ `type="submit"` would _not_ trigger the `OnSubmit` handler.

```razor
<button class="btn btn-secondary" @onclick="CancelChoice">Cancel</button>
```

I’d think this was true _especially_ if that second button was in a nested component, so it isn’t even in the `EditForm` directly, but is actually in its own component, and it uses an `EventCallback` to tell the parent component that the button was clicked.

### Actual Results

In Blazor 8 I see different behaviors between MAUI Hybrid and Blazor WebAssembly hosts.

In a Blazor WebAssembly (web) scenario, my expectations are met. The secondary button in the sub-component does _not_ cause `EditForm` to submit.

In a MAUI Hybrid scenario however, the secondary button in the sub-component _does_ cause `EditForm` to submit.

I also tried this using the new Blazor 9 MAUI Hybrid plus Web template - though in this case the web version is Blazor server. In my Blazor 9 scenarios, in _both_ hosting cases the secondary button triggers the submit of the `EditForm` - even though the secondary button is in a sub-component (its own `.razor` file)!

What I’m getting out of this is that we must assume that _any button_, even if it is in a nested component, will trigger the `OnSubmit` event of an `EditForm`. Nasty!

### Solution

The solution (thanks to @jeffhandley) is to add `type="button"` to all non-submit `button` components. It turns out that the default HTML for `<button />` is `type="submit"`, so if you don’t override that value, then all buttons trigger a submit.

What this means is that I _could_ shorten my actual submit button:

```razor
<button class="btn btn-primary">Save</button>
```

I probably won’t do this though, as being explicit probably increases readability.

And I _absolutely must_ be explicit with all my other buttons:

```razor
<button type="button" class="btn btn-secondary" @onclick="CancelChoice">Cancel</button>
```

This prevents the other buttons (even in nested Razor components) from accidentally triggering the submit behavior in the `EditForm` component.
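To make the nested-component case concrete, here is a minimal sketch of the pattern, assuming a hypothetical `ChildActions` component and the usual Blazor `_Imports`; `CurrentChild`, `SaveData`, and `CancelChoice` are the members referenced above, and the rest of the wiring is illustrative rather than the actual KidsIdKit code.

```razor
@* ChildActions.razor - hypothetical nested component placed inside the EditForm.
   The explicit type="button" keeps this button from triggering the parent
   form's OnSubmit handler. *@
<button type="button" class="btn btn-secondary" @onclick="HandleCancel">Cancel</button>

@code {
    [Parameter]
    public EventCallback OnCancel { get; set; }

    private Task HandleCancel() => OnCancel.InvokeAsync();
}
```

```razor
@* Parent component - only the explicit submit button should run SaveData. *@
<EditForm Model="CurrentChild" OnSubmit="SaveData">
    <button class="btn btn-primary" type="submit">Save</button>
    <ChildActions OnCancel="CancelChoice" />
</EditForm>
```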
28.10.2025 18:38 — 👍 0    🔁 0    💬 0    📌 0
Do not throw away your old PCs As many people know, Windows 10 is coming to its end of life (or at least end of support) in 2025. Because Windows 11 requires specialized hardware that isn’t built into a lot of existing PCs running Windows 10, there is no _Microsoft-based_ upgrade path for those devices. The thing is, a lot of those “old” Windows 10 devices are serving their users perfectly well, and there is often no compelling reason for a person to replace their PC just because they can’t upgrade to Windows 11. > ℹ️ If you can afford to replace your PC with a new one, that’s excellent, and I’m not trying to discourage that! However, you can still avoid throwing away your old PC, and you should consider alternatives. Throwing away a PC or laptop - like in the trash - is a _horrible_ thing to do, because PCs contain toxic elements that are bad for the environment. In many places it might actually be illegal. Besides which, whether you want to keep and continue to use your old PC or not, _someone_ can probably make good use of it. > ️⚠️ If you do need to “throw away” your old PC, please make sure to turn it in to an e-waste recycling center or a hazardous waste facility. I’d like to discuss some possible alternatives to throwing away or recycling your old PC. Things that provide much better outcomes for people and the environment! It might be that you can continue to use your PC or laptop, or someone else may be able to give it new life. Here are some options. ## Continue Using the PC Although you may be unable to upgrade to Windows 11, there are alternative operating systems that will breathe new life into your existing PC. The question you should ask first is: what do you do on your PC? The following may require Windows: * Windows-only software (like CAD drawing or other software) * Hard-core gaming On the other hand, if you use your PC entirely for things like: * Browsing the web * Writing documents * Simple spreadsheets * Web-based games in a browser Then you can probably replace Windows with an alternative and continue to be very happy with your PC. What are these “alternative operating systems”? They are all variations of Linux. If you’ve never heard of Linux, or have heard it is complicated and only for geeks, rest assured that there are some variations of Linux that are no more complex than Windows 10. ### “Friendly” Variations of Linux Some of the friendliest variations of Linux include: * Linux Mint (Cinnamon) - Linux with a desktop that is very similar to Windows * Ubuntu Desktop - Linux with its own style of graphical desktop that isn’t too hard to learn if you are used to Windows There are many others; these are just a couple that I’ve used and found to be easy to install and learn. > 🛑 Before installing Linux on your PC make sure to copy all the files you want to keep onto a thumb drive or something! Installing Linux will _entirely delete your existing hard drive_ and none of your existing files will be on the PC when you are done. Once you’ve installed Linux, you’ll need software to do the things you do today. ### Browsers on Linux Linux often comes with the Firefox browser pre-installed. Other browsers that you can install include: * Chrome * Edge I am sure other browsers are available as well. Keep in mind that most modern browsers provide comparable features and let you use nearly every web site, so you may be happy with Firefox or whatever comes pre-installed with Linux.
### Software similar to Office on Linux Finally, most people use their PC to write documents, create spreadsheets and do other things that are often done using Microsoft Office. Some alternatives to Office available on Linux include: * OneDrive - Microsoft on-line file storage and web-based versions of Word, Excel, and more * Google Docs - Google on-line file storage and web-based word processor, spreadsheet, and more * LibreOffice - Software you install on your PC that provides word processing, spreadsheets, and more. File formats are compatible with Word, Excel, and other Office tools. Other options exist, these are the ones I’ve used and find to be most common. ## Donate your PC Even if your needs can’t be met by running Linux on your old PC, or perhaps installing a new operating system just isn’t for you - please consider that there are people all over the world, including near you, that would _love_ to have access to a free computer. This might include kids, adults, or seniors in your area who can’t afford a PC (or to have their own PC). In the US, rural and urban areas are _filled_ with young people who could benefit from having a PC to do school work, learn about computers, and more. > 🛑 Before donating your PC, make sure to use the Windows 10 feature to reset the PC to factory settings. This will delete all _your_ files from the PC, ensuring that the new owner can’t access any of your information. Check with your church and community organizations to find people who may benefit from having access to a computer. ## Build a Server If you know people, or are someone, who likes to tinker with computers, there are a lot of alternative uses for an old PC or laptop. You can install Linux _server_ software on an old PC and then use that server for all sorts of fun things: * Create a file server for your photos and other media - can be done with a low-end PC that has a large hard drive * Build a Kubernetes cluster out of discarded devices - requires PCs with at least 2 CPU cores and 8 gigs of memory, though more is better Here are a couple articles with other good ideas: * Avoid the Trash Heap: 17 Creative Uses for an Old Computer * 10 Creative Things to Do With an Old Computer If you aren’t the type to tinker with computers, just ask around your family and community. It is amazing how many people do enjoy this sort of thing, and would love to have access to a free device that can be used for something other than being hazardous waste. ## Conclusion I worry that 2025 will be a bad year for e-waste and hazardous waste buildup in landfills and elsewhere around the world, as people realize that their Windows 10 PC or laptop can’t be upgraded and “needs to be replaced”. My intent in writing this post is to provide some options to consider that may breathe new life into your “old” PC. For yourself, or someone else, that computer may have many more years of productivity ahead of it.
28.10.2025 18:38 — 👍 0    🔁 0    💬 0    📌 0
Running Linux on My Surface Go I have a first-generation Surface Go, the little 10” tablet Microsoft created to try and compete with the iPad. I’ll confess that I never used it a lot. I _tried_, I really did! But it is underpowered, and I found that my Surface Pro devices were better for nearly everything. My reasoning for having a smaller tablet was that I travel quite a lot, more back then than now, and I thought having a tablet might be nicer for watching movies and that sort of thing, especially on the plane. It turns out that the Surface Pro does that too, without having to carry a second device. Even when I switched to my Surface Studio Laptop, I _still_ didn’t see the need to carry a second device - though the Surface Pro is absolutely better for traveling in my view. I’ve been saying for quite some time that I think people need to look at Linux as a way to avoid the e-waste involved in discarding their Windows 10 PCs - the ones that can’t run Windows 11. I use Linux regularly, though usually via the command line for software development, and so I thought I’d put it on my Surface Go to gain real-world experience. > I have quite a few friends and family who have Windows 10 devices that are perfectly good. Some of those folks don’t want to buy a new PC, due to financial constraints, or just because their current PC works fine. End of support for Windows 10 is a problem for them! The Surface Go is a bit trickier than most mainstream Windows 10 laptops or desktops, because it is a convertible tablet with a touch screen and specialized (rare) hardware - as compared to most of the devices in the market. So I did some reading, and used Copilot, and found a decent (if old) article on installing Linux on a Surface Go. > ⚠️ One quick warning: Surface Go was designed around Windows, and while it does work reasonably well with Linux, it isn’t as good. Scrolling is a bit laggy, and the cameras don’t have the same quality (by far). If you want to use the Surface Go as a small, lightweight laptop I think it is pretty good; if you are looking for a good _tablet_ experience you should probably just buy a new device - and donate the old one to someone who needs a basic PC. Fortunately, Linux hasn’t evolved all that much or all that rapidly, and so this article remains pretty valid even today. ## Using Ubuntu Desktop I chose to install Ubuntu, identified in the article as a Linux distro (distribution, or variant, or version) that has decent support for the Surface Go. I also chose Ubuntu because this is normally what I use for my other purposes, and so I’m familiar with it in general. However, I installed the latest Ubuntu Desktop (version 25.04), not the older version mentioned in the article. This was a good choice, because support for the Surface hardware has improved over time - though the other steps in the article remain valid. ## Download and Set Up Media The steps to get ready are: 1. Download Ubuntu Desktop - this downloads a file with a `.iso` extension 2. Download software to create a bootable flash drive based on the `.iso` file. I used software called Rufus - just be careful to avoid the flashy (spammy) download buttons, and find the actual download link text in the page 3. Get a flash drive (either new, or one you can erase) and insert it into your PC 4. Run Rufus, and identify the `.iso` file and your flash drive 5. Rufus will write the data to the flash drive, and make the flash drive bootable so you can use it to install Linux on any PC 6.
🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 ## Install Ubuntu on the Surface At this point you have a bootable flash drive and a Surface Go device, and you can do the installation. This is where the zdnet article is a bit dated - the process is smoother and simpler than it was back then, so just do the install like this: 1. 🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 2. Insert the flash drive into the Surface USB port (for the Surface Go I had to use an adapter from type C to type A) 3. Press the Windows key and type “reset” and choose the settings option to reset your PC 4. That will bring up the settings page where you can choose Advanced and reset the PC for booting from a USB device 5. What I found is that the first time I did this, my Linux boot device didn’t appear, so I rebooted to Windows and did step 4 again 6. The second time, an option was there for Linux. It had an odd name: Linpus (as described in the zdnet article) 7. Boot from “Linpus” and your PC will sit and spin for quite some time (the Surface Go is quite old and slow by modern standards), and eventually will come up with Ubuntu 8. The thing is, it is _running_ Ubuntu, but it hasn’t _installed_ Ubuntu. So go through the wizard and answer the questions - especially the wifi setup 9. Once you are on the Ubuntu (really Gnome) desktop, you’ll see an icon for _installing_ Ubuntu. Double-click that and the actual installation process will begin 10. I chose to have the installer totally reformat my hard drive, and I recommend doing that, because the Surface Go doesn’t have a huge drive to start with, and I want all of it available for my new operating system 11. Follow the rest of the installer steps and let the PC reboot 12. Once it has rebooted, you can remove the flash drive ## Installing Updates At this point you should be sitting at your new desktop. The first thing Linux will want to do is install updates, and you should let it do so. I laugh a bit, because people make fun of Windows updates, and Patch Tuesday. Yet all modern and secure operating systems need regular updates to remain functional and secure, and Linux is no exception. Whether automated or not, you should do regular (at least monthly) updates to keep Linux secure and happy. ## Installing Missing Features Immediately upon installation, Ubuntu 25.04 seems to have very good support for the Surface Go, including multi-touch on the screen and trackpad, use of the Surface Pen, speakers, and the external (physical) keyboard. What doesn’t work right away, at least what I found, are the cameras or any sort of onscreen/soft keyboard. You need to take extra steps for these. The zdnet article is helpful here. ### Getting the Cameras Working The zdnet article walks through the process to get the cameras working. I actually think the camera drivers are now just part of Ubuntu, but I did have to take steps to get them working, and even then they don’t have great quality - this is clearly an area where moving to Linux is a step backward. At times I found the process a bit confusing, but just plowed ahead figuring I could always reinstall Linux again if necessary. It did work fine in the end, no reinstall needed. 1. 
Install the Linux Surface kernel - which sounds intimidating, but is really just following some steps as documented in their GitHub repo; other stuff in the document is quite intimidating, but isn’t really relevant if all you want to do is get things running 2. That GitHub repo also has information about the various camera drivers for different Surface devices, and I found that to be a bit overwhelming; fortunately, it really amounts to just running one command 3. Make sure you also run these commands to give your Linux account permissions to use the camera 4. At this point I was able to follow instructions to run `cam` and see the cameras - including some other odd entries I ignored 5. And I was able to run `qcam`, which is a command that brings up a graphical app so you can see through each camera > ⚠️ Although the cameras technically work, I am finding that a lot of apps still don’t see the cameras, and in all cases the camera quality is quite poor. ### Getting a Soft or Onscreen Keyboard Because the Surface Go is _technically_ a tablet, I expected there to be a soft or onscreen keyboard. It turns out that there is a primitive one built into Ubuntu, but it really doesn’t work very well. It is pretty, but I was unable to figure out how to get it to appear via touch, which kind of defeats the purpose (I needed my physical keyboard to get the virtual one to appear). I found an article that has some good suggestions for Linux onscreen keyboard (OSK) improvements. I used what the article calls “Method 2” to install an Extension Manager, which allowed me to install extensions for the keyboard. 1. Install the Extension Manager `sudo apt install gnome-shell-extension-manager` 2. Open the Extension Manager app 3. This is where the article fell down, because the extension they suggested doesn’t seem to exist any longer, and there are numerous other options to explore 4. I installed an extension called “Touch X” which has the ability to add an icon to the upper-right corner of the screen by which you can open the virtual keyboard at any time (it can also do a cool ripple animation when you touch the screen if you’d like) 5. I also installed “GJS OSK”, which is a replacement soft keyboard that has a lot more configurability than the built-in default; you can try both and see which you prefer ## Installing Important Apps This section is mostly editorial, because I use certain apps on a regular basis, and you might use other apps. Still, you should be aware that there are a couple ways to install apps on Ubuntu: snap and apt. The “snap” concept is specific to Ubuntu, and can be quite nice, as it installs each app into a sort of sandbox that is managed by Ubuntu. The “app store” in Ubuntu lists and installs apps via snap. The “apt” concept actually comes from Ubuntu’s parent, Debian. Since Debian and Ubuntu make up a very large percentage of the Linux install base, the `apt` command is extremely common. This is something you do from a terminal command line. Using snap is very convenient, and when it works I love it. Sometimes I find that apps installed via snap don’t have access to things like speakers, cameras, or other things. I think that’s because they run in a sandbox. I’m pretty sure there are ways to address these issues - my normal way of addressing them is to uninstall the snap and use `apt`. ### My “Important” Apps I installed apps via snap, apt, and as PWAs. #### Snap and Apt Apps Here are the apps I installed right away: 1.
Microsoft Edge browser - because I use Edge on my Windows devices and Android phone, I want to use the same browser here to sync all my history, settings, etc. - I installed this using the default Firefox browser, then switched the default to Edge 2. Visual Studio Code - I’m a developer, and find it hard to imagine having a device without some way to write code - and I use vscode on Windows, so I’m used to it, and it works the same on Linux - I installed this as a snap via App Center 3. git - again, I’m a developer and all my stuff is on GitHub, which means using git as a primary tool - I installed this using `apt` 4. Discord - I use discord for many reasons - talking to friends, gaming, hosting the CSLA .NET Discord server - so it is something I use all the time - I installed this as a snap via App Center 5. Thunderbird Email - I’m not sold on this yet - it seems to be the “default” email app for Linux, but feels like Outlook from 10-15 years ago, and I do hope to find something a lot more modern - I installed this as a snap via App Center 6. Copilot Desktop - I’ve been increasingly using Copilot on Windows 11, and was delighted to find that Ken VanDine wrote a Copilot shell for Linux; it is in the App Center and installs as a snap, providing the same basic experience as Copilot on Windows or Android - I installed this as a snap via App Center 7. .NET SDK - I mostly develop using .NET and Blazor, and so installing the .NET software developer kit seemed obvious; Ubuntu has a snap to install version 8, but I used apt to install version 9 #### PWA Apps Once I got Edge installed, I used it to install a number of progressive web apps (PWAs) that I use on nearly every device. A PWA is an app that is installed and updated via your browser, and is a great way to get cross-platform apps. Exactly how you install a PWA will vary from browser to browser. Some have a little icon when you are on the web page, others have an “install app” option or “install on desktop” or similar. The end result is that you get what appears to be an app icon on your phone, PC, whatever - and when you click the icon the PWA app runs in a window like any other app. 1. Elk - I use Mastodon (social media) a lot, and my preferred client is Elk - fast, clean, works great 2. Bluesky - I use Bluesky (social media) a lot, and Bluesky can be installed as a PWA 3. LinkedIn - I use LinkedIn quite a bit, and it can be installed as a PWA 4. Facebook - I still use Facebook a little, and it can be installed as a PWA #### Using Microsoft 365 Office Most people want to edit documents and maybe spreadsheets on their PC. A lot of people, including me, use Word and Excel for this purpose. Those apps aren’t available on Linux - at least not directly. Fortunately there are good alternatives, including: 1. Use https://onedrive.com to create and edit documents and spreadsheets in the browser 2. Use https://office.com to access Office online if you have a subscription 3. Install LibreOffice, an open-source office productivity suite sort of like Office I use OneDrive for a lot of personal documents, photos, etc. And I use actual Office for work. The LibreOffice idea is something I might explore at some point, but the online versions of the Office apps are usually enough for casual work - which is all I’m going to do on the little Surface Go device anyway. One feature of Edge is the ability to have multiple profiles. I use this all the time on Windows, having a personal and two work profiles.
This feature works on Linux as well, though I found it had some glitches. My default Edge profile is my personal one, so all those PWAs I installed are connected to that profile. I set up another Edge profile for my CSLA work, and it is connected to my marimer.llc email address. This is where I log into the M365 office.com apps, and I have that page installed as a PWA. When I run “Office” it opens in my work profile and I have access to all my work documents. On my personal profile I don’t use the Office apps as much, but when I do open something from my personal OneDrive, it opens in that profile. The limitation is that I can only edit documents while online, but for my purposes with this device, that’s fine. I can edit my documents and spreadsheets as necessary. ## Conclusion At this point I’m pretty happy. I don’t expect to use this little device to do any major software development, but it actually does run vscode and .NET just fine (and also Jetbrains Rider if you prefer a more powerful option). I mostly use it for browsing the web, discord, Mastodon, and Bluesky. Will I bring this with when I travel? No, because my normal Windows 11 PC does everything I want. Could I live with this as my “one device”? Well, no, but that’s because it is underpowered and physically too small. But could I live with a modern laptop running Ubuntu? Yes, I certainly could. I wouldn’t _prefer_ it, because I like full-blown Visual Studio and way too many high end Steam games. The thing is, I am finding myself leaving the Surface Go in the living room, and reaching for it to scan the socials while watching TV. Something I could have done just as well with Windows, and can now do with Linux.
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
Why MAUI Blazor Hybrid It can be challenging to choose a UI technology in today’s world. Even if you narrow it down to wanting to build “native” apps for phones, tablets, and PCs there are so many options. In the Microsoft .NET space, there are _still_ many options, including .NET MAUI, Uno, Avalonia, and others. The good news is that these are good options - Uno and Avalonia are excellent, and MAUI is coming along nicely. At this point in time, my _default_ choice is usually something called a MAUI Hybrid app, where you build your app using Blazor, and the app is hosted in MAUI so it is built as a native app for iOS, Android, Windows, and Mac. Before I get into why this is my default, I want to point out that I (personally) rarely build mobile apps that “represent the brand” of a company. Take the Marriott or Delta apps as examples - the quality of these apps and the way they work differently on iOS vs Android can literally cost these companies customers. They are a primary contact point that can irritate a customer or please them. This is not the space for MAUI Blazor Hybrid in my view. ## Common Code MAUI Blazor Hybrid is (in my opinion) for apps that need to have rich functionality, good design, and be _common across platforms_, often including phones, tablets, and PCs. Most of my personal work is building business apps - apps that a business creates to enable their employees, vendors, partners, and sometimes even customers, to interact with important business systems and functionality. Blazor (the .NET web UI framework) turns out to be an excellent choice for building business apps. Though this is a bit of a tangent, Blazor is my go-to for modernizing (aka replacing) Windows Forms, WPF, Web Forms, MVC, Silverlight, and other “legacy” .NET app user experiences. The one thing Blazor doesn’t do by itself is create native apps that can run on devices. It creates web sites (server hosted) or web apps (browser hosted) or a combination of the two. Which is wonderful for a lot of scenarios, but sometimes you really need things like offline functionality or access to per-platform APIs and capabilities. This is where MAUI Hybrid comes into the picture, because in this model you build your Blazor app, and that app is _hosted_ by MAUI, and therefore is a native app on each platform: iOS, Android, Windows, Mac. That means that your Blazor app is installed as a native app (therefore can run offline), and it can tap into per-platform APIs like any other native app. ## Per-Platform In most business apps there is little (or no) per-platform difference, and so the vast majority of your app is just Blazor - C#, html, css. It is common across all the native platforms, and optionally (but importantly) also the browser. When you do have per-platform differences, like needing to interact with serial or USB port devices, or arbitrary interactions with local storage/hard drives, you can do that. And if you do that with a little care, you still end up with the vast majority of your app in Blazor, with small bits of C# that are per-platform. ## End User Testing I mentioned that a MAUI Hybrid app can not only create native apps but that it can also target the browser. This is fantastic for end user testing, because it can be challenging to do testing via the Apple, Google, and Microsoft stores. Each requires app validation, on their schedule not yours, and some have limits on the numbers of test users. > In .NET 9, the ability to create a MAUI Hybrid that also targets the browser is a pre-built template.
Previously you had to set it up yourself. What this means is that you can build your Blazor app, have your users do a lot of testing of your app via the browser, and once you are sure it is ready to go, then you can do some final testing on a per-platform basis via the stores (or whatever scheme you use to install native apps). ## Rich User Experience Blazor, with its use of html and css backed by C#, directly enables rich user experiences and high levels of interactivity. The de facto UI language is html/css after all, and we all know how effective it can be at building great experiences in browsers - as well as native apps. There is a rapidly growing and already excellent ecosystem around Blazor, with open-source and commercial UI toolkits and frameworks available that make it easy to create many different types of user experience, including Material design and others. From a developer perspective, it is nice to know that learning any of these Blazor toolsets is a skill that spans native and web development, as does Blazor itself. In some cases you’ll want to tap into per-platform capabilities as well. The MAUI Community Toolkit is available and often provides pre-existing abstractions for many per-platform needs. Some highlights include: * File system interaction * Badge/notification systems * Images * Speech to text Between the basic features of Blazor, advanced html/css, and things like the toolkit, it is pretty easy to build some amazing experiences for phones, tablets, and PCs - as well as the browser. ## Offline Usage Blazor itself can provide a level of offline app support if you build a Progressive Web App (PWA). To do this, you create a standalone Blazor WebAssembly app that includes the PWA manifest and service worker code (in JavaScript). PWAs are quite powerful and are absolutely something to consider as an option for some offline app requirements. The challenge with a PWA is that it is running in a browser (even though it _looks_ like a native app) and therefore is limited by the browser sandbox and the local operating system. For example, iOS devices place substantial limitations on what a PWA can do and how much data it can store locally. There are commercial reasons why Apple doesn’t like PWAs competing with “real apps” in its store, and the end result is that PWAs _might_ work for you, as long as you don’t need too much local storage or too many native features. MAUI Hybrid apps are actual native apps installed on the end user’s device, and so they can do anything a native app can do. Usually this means asking the end user for permission to access things like storage, location, and other services. As a smartphone user you are certainly aware of that type of request as an app is installed. The benefit then, is that if the user gives your app permission, your app can do things it couldn’t do in a PWA from inside the browser sandbox. In my experience, the most important of these things is effectively unlimited access to local storage for offline data that is required by the app. ## Conclusion This has been a high level overview of my rationale for why MAUI Blazor Hybrid is my “default start point” when thinking about building native apps for iOS, Android, Windows, and/or Mac. Can I be convinced that some other option is better for a specific set of business and technical requirements? Of course!!
However, having a well-known and very capable option as a starting point provides a short-cut for discussing the business and technical requirements - to determine if each requirement is or isn’t already met. And in many cases, MAUI Hybrid apps offer very high developer productivity, the functionality needed by end users, and long-term maintainability.
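To make the per-platform point above concrete, here is a minimal sketch of the usual pattern: the Blazor UI depends on a small interface, and each platform registers its own implementation via dependency injection. All type names here are hypothetical, and the MAUI-specific pieces (the conditional compilation symbols and `FileSystem.AppDataDirectory`) assume a standard MAUI Blazor Hybrid project.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Maui.Storage;

// Hypothetical abstraction the Blazor UI depends on; only this interface
// is referenced from .razor components.
public interface IDeviceStorageService
{
    Task SaveAsync(string fileName, byte[] content);
}

// Cross-platform default. A platform-specific implementation (for example,
// one that talks to a serial or USB device) would live under Platforms/
// and be registered instead.
public class DefaultStorageService : IDeviceStorageService
{
    public Task SaveAsync(string fileName, byte[] content) =>
        File.WriteAllBytesAsync(Path.Combine(FileSystem.AppDataDirectory, fileName), content);
}

public static class PlatformServices
{
    // Call from MauiProgram.CreateMauiApp(): PlatformServices.Add(builder.Services);
    public static void Add(IServiceCollection services)
    {
#if ANDROID
        // services.AddSingleton<IDeviceStorageService, AndroidStorageService>();
        services.AddSingleton<IDeviceStorageService, DefaultStorageService>();
#elif WINDOWS
        // services.AddSingleton<IDeviceStorageService, WindowsStorageService>();
        services.AddSingleton<IDeviceStorageService, DefaultStorageService>();
#else
        services.AddSingleton<IDeviceStorageService, DefaultStorageService>();
#endif
    }
}
```

The point is that the per-platform code stays in one small class per platform, while everything else remains shared Blazor code.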
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
CSLA 2-tier Data Portal Behavior History The CSLA data portal originally treated 2- and 3-tier differently, primarily for performance reasons. Back in the early 2000s, the data portal did not serialize the business object graph in 2-tier scenarios. That behavior still exists and can be enabled via configuration, but is not the default for the reasons discussed in this post. Passing the object graph by reference (instead of serializing it) does provide much better performance, but at the cost of being behaviorally/semantically different from 3-tier. In a 3-tier (or generally n-tier) deployment, there is at least one network hop between the client and any server, and the object graph _must be serialized_ to cross that network boundary. When different 2-tier and 3-tier behaviors existed, a lot of people did their dev work in 2-tier and then tried to switch to 3-tier. Usually they’d discover all sorts of issues in their code, because they were counting on the logical client and server using the same reference to the object graph. A variety of issues are solved by serializing the graph even in 2-tier scenarios, including: 1. Consistency with 3-tier deployment (enabling location transparency in code) 2. Preventing data binding from reacting to changes to the object graph on the logical server (nasty performance and other issues would occur) 3. Ensuring that a failure on the logical server (especially part-way through the graph) leaves the graph on the logical client in a stable/known state There are other issues as well - and ultimately those issues drove the decision (I want to say around 2006 or 2007?) to default to serializing the object graph even in 2-tier scenarios. There is a performance cost to that serialization, but having _all_ n-tier scenarios enjoy the same semantic behaviors has eliminated so many issues and support questions on the forums that I regret nothing.
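To illustrate the third point - and this is only an illustration, not CSLA data portal code - consider the difference between mutating the caller’s graph by reference and working on a copy that stands in for serialization:

```csharp
using System;

// Illustration only. It shows why passing an object graph by reference gives
// different semantics than serializing it across a logical tier boundary.
public class Order
{
    public string Status { get; set; } = "New";
}

public static class TierSemanticsExample
{
    // "2-tier by reference": the server mutates the same instance the client
    // holds, so a failure part-way through leaves the client's graph in a
    // partially-updated state.
    public static void UpdateByReference(Order clientGraph)
    {
        clientGraph.Status = "Processing";
        throw new InvalidOperationException("Server failure mid-update");
    }

    // "Serialized" semantics: the server works on a copy, so the client's
    // graph is untouched unless the operation completes and the result is
    // copied back.
    public static Order UpdateSerialized(Order clientGraph)
    {
        var serverCopy = new Order { Status = clientGraph.Status }; // stand-in for serialization
        serverCopy.Status = "Processing";
        // If an exception were thrown here, clientGraph would be unchanged.
        return serverCopy;
    }
}
```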
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
A Simple CSLA MCP Server In a recent CSLA discussion thread, a user asked about setting up a simple CSLA Model Context Protocol (MCP) server.

https://github.com/MarimerLLC/csla/discussions/4685

I’ve written a few MCP servers over the past several months with varying degrees of success. Getting the MCP protocol right is tricky (or was), and using semantic matching with vectors isn’t always the best approach, because I find it often misses the most obvious results.

Recently however, Anthropic published a C# SDK (and NuGet package) that makes it easier to create and host an MCP server. The SDK handles the MCP protocol details, so you can focus on implementing your business logic.

https://github.com/modelcontextprotocol/csharp-sdk

Also, I’ve been reading up on the idea of hybrid search, which combines traditional search techniques with vector-based semantic search. This approach can help improve the relevance of search results by leveraging the strengths of both methods.

The code I’m going to walk through in this post can be easily adapted to any scenario, not just CSLA. In fact, the MCP server just searches and returns markdown files from a folder. To use it for any scenario, you just need to change the source files and update the descriptions of the server, tools, and parameters that are in the attributes in code. Perhaps a future enhancement for this project will be to make those dynamic so you can change them without recompiling the code.

The code for this article can be found in this GitHub repository.

> ℹ️ Most of the code was actually written by Claude Sonnet 4 with my collaboration. Or maybe I wrote it with the collaboration of the AI? The point is, I didn’t do much of the typing myself.

Before getting into the code, I want to point out that this MCP server really is useful. Yes, the LLMs already know all about CSLA because CSLA is open source. However, the LLMs often return outdated or incorrect information. By providing a custom MCP server that searches the actual CSLA code samples and snippets, the LLM can return accurate and up-to-date information.

## The MCP Server Host

The MCP server itself is a console app that uses Spectre.Console to provide a nice command-line interface. The project also references the Anthropic C# SDK and some other packages. It targets .NET 10.0, though I believe the code should work with .NET 8.0 or later.

I am not going to walk through every line of code, but I will highlight the key parts.

> ⚠️ The modelcontextprotocol/csharp-sdk package is evolving rapidly, so you may need to adapt to use whatever is latest when you try to build your own. Also, all the samples in their GitHub repository use static tool methods, and I do as well. At some point I hope to figure out how to use instance methods instead, because that will allow the use of dependency injection. Right now the code has a lot of `Console.WriteLine` statements that would be better handled by a logging framework.

Although the project is a console app, it does use ASP.NET Core to host the MCP server.

```csharp
var builder = WebApplication.CreateBuilder();
builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<CslaCodeTool>();
```

The `AddMcpServer` method adds the MCP server services to the ASP.NET Core dependency injection container. The `WithHttpTransport` method configures the server to use HTTP as the transport protocol. The `WithTools<CslaCodeTool>` method registers the `CslaCodeTool` class as a tool that can be used by the MCP server.
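For context, a Program.cs like this is typically completed by building the app and mapping the MCP endpoint. This is a sketch based on the SDK’s ASP.NET Core samples; the `MapMcp` extension name may change as the package evolves.

```csharp
// Continues the registration shown above (a sketch, not the project's exact code).
var app = builder.Build();

// Expose the MCP endpoint over the HTTP transport configured above.
app.MapMcp();

app.Run();
```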
There is also a `WithStdioTransport` method that can be used to configure the server to use standard input and output as the transport protocol. This is useful if you want to run the server locally when using a locally hosted LLM client.

The nice thing about using the modelcontextprotocol/csharp-sdk package is that it handles all the details of the MCP protocol for you. You just need to implement your tools and their methods. All the subtleties of the MCP protocol are handled by the SDK.

## Implementing the Tools

The `CslaCodeTool` class is where the main logic of the MCP server resides. This class is decorated with the `McpServerToolType` attribute, which indicates that this class will contain MCP tool methods.

```csharp
[McpServerToolType]
public class CslaCodeTool
```

### The Search Method

The first tool is Search, defined by the `Search` method. This method is decorated with the `McpServerTool` attribute, which indicates that this method is an MCP tool method. The attribute also provides a description of the tool and what it will return. This description is used by the LLM to determine when to use this tool. My description here is probably a bit too short, but it seems to work okay.

Any parameters for the tool method are decorated with the `Description` attribute, which provides a description of the parameter. This description is used by the LLM to understand what the parameter is for, and what kind of value to provide.

```csharp
[McpServerTool, Description("Searches CSLA .NET code samples and snippets for examples of how to implement code that makes use of #cslanet. Returns a JSON object with two sections: SemanticMatches (vector-based semantic similarity) and WordMatches (traditional keyword matching). Both sections are ordered by their respective scores.")]
public static string Search([Description("Keywords used to match against CSLA code samples and snippets. For example, read-write property, editable root, read-only list.")]string message)
```

#### Word Matching

The original implementation (which works very well) uses only word matching. To do this, it gets a list of all the files in the target directory, and searches them for any words from the LLM’s `message` parameter that are 4 characters or longer. It counts the number of matches in each file to generate a score for that file.

Here’s the code that gets the list of search terms from `message`:

```csharp
// Extract words longer than 4 characters from the message
var searchWords = message
    .Split(new char[] { ' ', '\t', '\n', '\r', '.', ',', ';', ':', '!', '?', '(', ')', '[', ']', '{', '}', '"', '\'', '-', '_' }, StringSplitOptions.RemoveEmptyEntries)
    .Where(word => word.Length > 3)
    .Select(word => word.ToLowerInvariant())
    .Distinct()
    .ToList();

Console.WriteLine($"[CslaCodeTool.Search] Extracted search words: [{string.Join(", ", searchWords)}]");
```

It then loops through each file and counts the number of matching words. The final result is sorted by score and then file name:

```csharp
var sortedResults = results.OrderByDescending(r => r.Score).ThenBy(r => r.FileName).ToList();
```

#### Semantic Matching

More recently I added semantic matching as well, resulting in a hybrid search approach. The search tool now returns two sets of results: one based on traditional word matching, and one based on vector-based semantic similarity.

The semantic search behavior comes in two parts: indexing the source files, and then matching against the message parameter from the LLM.

##### Indexing the Source Files

Indexing source files takes time and effort. To minimize startup time, the MCP server actually starts and will work without the vector data. In that case it relies on the word matching only. After a few minutes, the vector indexing will be complete and the semantic search results will be available.

The indexing is done by calling a text embedding model to generate a vector representation of the text in each file. The vectors are then stored in memory along with the file name and content. Or the vectors could be stored in a database to avoid having to re-index the files each time the server is started.

I’m relying on a `vectorStore` object to index each document:

```csharp
await vectorStore.IndexDocumentAsync(fileName, content);
```

This `VectorStoreService` class is a simple in-memory vector store that uses Ollama to generate the embeddings:

```csharp
public VectorStoreService(string ollamaEndpoint = "http://localhost:11434", string modelName = "nomic-embed-text:latest")
{
    _httpClient = new HttpClient();
    _vectorStore = new Dictionary<string, DocumentEmbedding>();
    _ollamaEndpoint = ollamaEndpoint;
    _modelName = modelName;
}
```

This could be (and probably will be) adapted to use a cloud-based embedding model instead of a local Ollama instance. Ollama is free and easy to use, but it does require a local installation.

The actual embedding is created by a call to the Ollama endpoint:

```csharp
var response = await _httpClient.PostAsync($"{_ollamaEndpoint}/api/embeddings", content);
```

The embedding is just a list of floating-point numbers that represent the semantic meaning of the text. This needs to be extracted from the JSON response returned by the Ollama endpoint.

```csharp
var responseJson = await response.Content.ReadAsStringAsync();
var result = JsonSerializer.Deserialize<JsonElement>(responseJson);
if (result.TryGetProperty("embedding", out var embeddingElement))
{
    var embedding = embeddingElement.EnumerateArray()
        .Select(e => (float)e.GetDouble())
        .ToArray();
    return embedding;
}
```

> 👩‍🔬 All those floating-point numbers are the magic of this whole thing. I don’t understand any of the math, but it obviously represents the semantic “meaning” of the file in a way that a query can be compared later to see if it is a good match.

All those embeddings are stored in memory for later use.

##### Matching Against the Message

When the `Search` method is called, it first generates an embedding for the `message` parameter using the same embedding model. It then compares that embedding to each of the document embeddings in the vector store to calculate a similarity score. All that work is delegated to the `VectorStoreService`:

```csharp
var semanticResults = VectorStore.SearchAsync(message, topK: 10).GetAwaiter().GetResult();
```

In the `VectorStoreService` class, the `SearchAsync` method generates the embedding for the query message:

```csharp
var queryEmbedding = await GetTextEmbeddingAsync(query);
```

It then calculates the cosine similarity between the query embedding and each document embedding in the vector store:

```csharp
foreach (var doc in _vectorStore.Values)
{
    var similarity = CosineSimilarity(queryEmbedding, doc.Embedding);
    results.Add(new SemanticSearchResult
    {
        FileName = doc.FileName,
        SimilarityScore = similarity
    });
}
```

The results are then sorted by similarity score and the top K results are returned.

```csharp
var topResults = results
    .OrderByDescending(r => r.SimilarityScore)
    .Take(topK)
    .Where(r => r.SimilarityScore > 0.5f) // Filter out low similarity scores
    .ToList();
```

##### The Final Result

The final result of the `Search` method is a JSON object that contains two sections: `SemanticMatches` and `WordMatches`. Each section contains a list of results ordered by their respective scores.

```csharp
var combinedResult = new CombinedSearchResult
{
    SemanticMatches = semanticMatches,
    WordMatches = sortedResults
};
```

It is up to the calling LLM to decide which set of results to use. In the end, the LLM will use the fetch tool to retrieve the content of one or more of the files returned by the search tool.

### The Fetch Method

The second tool is Fetch, defined by the `Fetch` method. This method is also decorated with the `McpServerTool` attribute, which provides a description of the tool and what it will return.

```csharp
[McpServerTool, Description("Fetches a specific CSLA .NET code sample or snippet by name. Returns the content of the file that can be used to properly implement code that uses #cslanet.")]
public static string Fetch([Description("FileName from the search tool.")]string fileName)
```

This method has some defensive code to prevent path traversal attacks and other things, but ultimately it just reads the content of the specified file and returns it as a string.

```csharp
var content = File.ReadAllText(filePath);
return content;
```

## Hosting the MCP Server

The MCP server can be hosted in a variety of ways. The simplest is to run it as a console app on your local machine. This is useful for development and testing. You can also host it in a cloud environment, such as Azure App Service or AWS Elastic Beanstalk. This allows you to make the MCP server available to other applications and services.

Like most things, I am running it in a Docker container so I can choose to host it anywhere, including on my local Kubernetes cluster.

For real use in your organization, you will want to ensure that the MCP server endpoint is available to all your developers from their vscode or Visual Studio environments. This might be a public IP, or one behind a VPN, or some other secure way to access it. I often use tools like Tailscale or ngrok to make local services available to remote clients.

## Testing the MCP Server

Testing an MCP server isn’t as straightforward as testing a regular web API. You need an LLM client that can communicate with the MCP server using the MCP protocol.

Anthropic has an npm package that can be used to test the MCP server. You can find it here: https://github.com/modelcontextprotocol/inspector

This package provides a GUI or CLI tool that can be used to interact with the MCP server. You can use it to send messages to the server and see the responses. It is a great way to test and debug your MCP server.

Another option is to use the MCP support built into recent vscode versions. Once you add your MCP server endpoint to your vscode settings, you can use the normal AI chat interface to ask the chat bot to interact with the MCP server. For example:

```
call the csla-mcp-server tools to see if they work
```

This will cause the chat bot to invoke the `Search` tool, and then the `Fetch` tool to get the content of one of the files returned by the search.

Once you have the MCP server working and returning the types of results you want, add it to your vscode or Visual Studio settings so all your developers can use it. In my experience the LLM chat clients are pretty good about invoking the MCP server to determine the best way to author code that uses CSLA .NET.

## Conclusion

Setting up a simple CSLA MCP server is not too difficult, especially with the help of the Anthropic C# SDK. By implementing a couple of tools to search and fetch code samples, you can provide a powerful resource for developers using CSLA .NET.
The hybrid search approach, combining traditional word matching with vector-based semantic similarity, helps improve the relevance of search results. This makes it easier for developers to find the code samples they need. I hope this article has been helpful in understanding how to set up a simple CSLA MCP server. If you have any questions or need further assistance, feel free to reach out on the CSLA discussion forums or GitHub repository for the csla-mcp project.
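The `CosineSimilarity` helper referenced in the search code is not shown above; for reference, the standard calculation looks roughly like this (a sketch that matches the names used in the snippets, not necessarily the project’s exact code):

```csharp
// Cosine similarity between two embedding vectors: the dot product divided
// by the product of the vector magnitudes. Returns 0 for mismatched or
// zero-length input rather than throwing.
private static float CosineSimilarity(float[] a, float[] b)
{
    if (a.Length != b.Length || a.Length == 0)
        return 0f;

    float dot = 0f, magA = 0f, magB = 0f;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }

    if (magA == 0f || magB == 0f)
        return 0f;

    return dot / (float)(Math.Sqrt(magA) * Math.Sqrt(magB));
}
```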
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
MCP and A2A Basics I have been spending a lot of time lately learning about the Model Context Protocol (MCP) and Agent to Agent (A2A) protocols. And a little about a slightly older technology called the activity protocol that comes from the Microsoft bot framework. I’m writing this blog post mostly for myself, because writing content helps me organize my thoughts and solidify my understanding of concepts. As they say with AIs, mistakes are possible, because my understanding of all this technology is still evolving. (disclaimer: unless otherwise noted, I wrote this post myself, with my own fingers on a keyboard) ## Client-Server is Alive and Well First off, I think it is important to recognize that the activity protocol basically sits on top of REST, and so is client-server. The MCP protocol is also client-server, sitting on top of JSON-RPC. A2A _can be_ client-server, or peer-to-peer, depending on how you use it. The simplest form is client-server, with peer-to-peer providing a lot more capability, but also more complexity. ## Overall Architecture These protocols (in particular MCP and A2A) exist to enable communication between LLM “AI” agents and their environments, or other tools, or other agents. ### Activity Protocol The activity protocol is a client-server protocol that sits on top of REST. It is primarily used for communication between a user and a bot, or between bots. The protocol defines a set of RESTful APIs for sending and receiving activities, which are JSON objects that represent a message, event, or command. The activity protocol is widely used in the Microsoft Bot Framework and is supported by many bot channels, such as Microsoft Teams, Slack, and Facebook Messenger. (that previous paragraph was written by AI - but it is pretty good) ### MCP The Model Context Protocol is really a standard and flexible way to expand the older concept of LLM tool or function calling. The primary intent is to allow an LLM AI to call tools that interact with the environment, call other apps, get data from services, or do other client-server style interactions. The rate of change here is pretty staggering. The idea of an LLM being able to call functions or “tools” isn’t that old. The limitation of that approach was that these functions had to be registered with the LLM in a way that wasn’t standard across LLM tools or platforms. MCP provides a standard for registration and interaction, allowing an MCP-enabled LLM to call in-process tools (via standard IO) or remotely (via HTTP). If you dig a little into the MCP protocol, it is eerily reminiscent of COM from the 1990s (and I suspect CORBA as well). We provide the LLM “client” with an endpoint for the MCP server. The client can ask the MCP server what it does, and also for a list of tools it provides. Much like `IUnknown` in COM. Once the LLM client has the description of the server and all the tools, it can then decide when and if it should call those tools to solve problems. You might create a tool that deletes a file, or creates a file, or blinks a light on a device, or returns some data, or sends a message, or creates a record in a database. Really, the sky is the limit in terms of what you can build with MCP. ### A2A Agent to Agent (A2A) communication is a newer and more flexible protocol that (I think) has the potential to do a couple things: 1. I could see it replacing MCP, because you can use A2A for client-server calls from an LLM client to an A2A “tool” or agent. This is often done over HTTP. 2.
It also can be used to implement bi-directional, peer-to-peer communication between agents, enabling more complex and dynamic interactions. This is often done over WebSockets or (better yet) queuing systems like RabbitMQ. ## Metadata Rules In any case, the LLM that is going to call a tool or send a message to another agent needs a way to understand the capabilities and requirements of that tool or agent. This is where metadata comes into play. Metadata provides essential information about the tool or agent, such as its name, description, input and output parameters, and more. “Metadata” in this context is human language descriptions. Remember that the calling LLM is an AI model that is generally good with language. However, some of the metadata might also describe JSON schemas or other structured data formats to precisely define the inputs and outputs. But even that is usually surrounded by human-readable text that describes the purpose of the schema or data formats. This is where the older activity protocol falls down, because it doesn’t provide metadata like MCP or A2A. The newer protocols include the ability to provide descriptions of the service/agent, and of tool methods or messages that are exchanged. ## Authentication and Identity In all cases, these protocols aren’t terribly complex. Even the A2A peer-to-peer isn’t that difficult if you have an understanding of async messaging concepts and protocols. What does seem to _always_ be complex is managing authentication and identity across these interactions. There seem to be multiple layers at work here: 1. The client needs to authenticate to call the service - often with some sort of service identity represented by a token. 2. The service needs to authenticate the client, so that service token is important 3. HOWEVER, the service also usually needs to “impersonate” or act on behalf of a user or another identity, which can be a separate token or credential Getting these tokens, and validating them correctly, is often the hardest part of implementing these protocols. This is especially true when you are using abstract AI/LLM hosting environments. It is hard enough in code like C#, where you can see the token handling explicitly, but in many AI hosting platforms, these details are abstracted away, making it challenging to implement robust security. ## Summary The whole concept of an LLM AI calling tools, then services, and then having peer-to-peer interactions has evolved very rapidly over the past couple of years, and it is _still_ evolving very rapidly. Just this week, for example, Microsoft announced the Microsoft Agent Framework that replaces Semantic Kernel and Autogen. And that’s just one example! What makes me feel better though, is that at their heart, these protocols are just client-server protocols with some added layers for metadata. Or a peer-to-peer communication protocol that relies on asynchronous messaging patterns. While these frameworks (to a greater or lesser degree) have some support for authentication and token passing, that seems to be the weakest part of the tooling, and the hardest to solve in real-life implementations.
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
Unit Testing CSLA Rules With Rocks One of the most powerful features of CSLA .NET is its business rules engine. It allows you to encapsulate validation, authorization, and other business logic in a way that is easy to manage and maintain. In CSLA, a rule is a class that implements `IBusinessRule`, `IBusinessRuleAsync`, `IAuthorizationRule`, or `IAuthorizationRuleAsync`. These interfaces define the contract for a rule, including methods for executing the rule and properties for defining the rule’s behavior. Normally a rule inherits from an existing base class that implements one of these interfaces. When you create a rule, you typically associate it with a specific property or set of properties on a business object (a registration sketch appears at the end of this post). The rule is then executed automatically by the CSLA framework whenever the associated property or properties change. The advantage of a CSLA rule being a class is that you can unit test it in isolation. This is where the Rocks mocking framework comes in. Rocks allows you to create mock objects for your unit tests, making it easier to isolate the behavior of the rule you are testing. You can create a mock business object and set up expectations for how the rule should interact with that object. This allows you to test the rule’s behavior without having to worry about the complexities of the entire business object. In summary, the combination of CSLA’s business rules engine and the Rocks mocking framework provides a powerful way to create and test business rules in isolation, ensuring that your business logic is both robust and maintainable. All code for this article can be found in this GitHub repository in Lab 02. ## Creating a Business Rule As an example, consider a business rule that sets an `IsActive` property based on the value of a `LastOrderDate` property. If the `LastOrderDate` is within the last year, then `IsActive` should be true; otherwise, it should be false. using Csla.Core; using Csla.Rules; namespace BusinessLibrary.Rules; public class LastOrderDateRule : BusinessRule { public LastOrderDateRule(IPropertyInfo lastOrderDateProperty, IPropertyInfo isActiveProperty) : base(lastOrderDateProperty) { InputProperties.Add(lastOrderDateProperty); AffectedProperties.Add(isActiveProperty); } protected override void Execute(IRuleContext context) { var lastOrderDate = (DateTime)context.InputPropertyValues[PrimaryProperty]; var isActive = lastOrderDate > DateTime.Now.AddYears(-1); context.AddOutValue(AffectedProperties[1], isActive); } } This rule inherits from `BusinessRule`, which is a base class provided by CSLA that implements the `IBusinessRule` interface. The constructor takes two `IPropertyInfo` parameters: one for the `LastOrderDate` property and one for the `IsActive` property. The `InputProperties` collection is used to specify which properties the rule depends on, and the `AffectedProperties` collection is used to specify which properties the rule affects. The `Execute` method is where the rule’s logic is implemented. It retrieves the value of the `LastOrderDate` property from the `InputPropertyValues` dictionary, checks if it is within the last year, and then sets the value of the `IsActive` property using the `AddOutValue` method. ## Unit Testing the Business Rule Now that we have our business rule, we can create a unit test for it using the Rocks mocking framework.
First, we need to bring in a few namespaces: using BusinessLibrary.Rules; using Csla; using Csla.Configuration; using Csla.Core; using Csla.Rules; using Microsoft.Extensions.DependencyInjection; using Rocks; using System.Security.Claims; Next, we can use Rocks attributes to define the mock types we need for our test: [assembly: Rock(typeof(IPropertyInfo), BuildType.Create | BuildType.Make)] [assembly: Rock(typeof(IRuleContext), BuildType.Create | BuildType.Make)] These lines of code only need to be included once in your test project, because they are assembly-level attributes. They tell Rocks to create mock implementations of the `IPropertyInfo` and `IRuleContext` interfaces, which we will use in our unit test. Now we can create our unit test method to test the `LastOrderDateRule`. To do this, we need to arrange the necessary mock objects and set up their expectations. Then we can execute the rule and verify that it behaves as expected. The rule has a constructor that takes two `IPropertyInfo` parameters, so we need to create mock implementations of that interface. We also need to create a mock implementation of the `IRuleContext` interface, which is used to pass information to the rule when it is executed. [TestMethod] public void LastOrderDateRule_SetsIsActiveBasedOnLastOrderDate() { // Arrange var inputProperties = new Dictionary<IPropertyInfo, object>(); using var context = new RockContext(); var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>(); lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); var lastOrderProperty = lastOrderPropertyExpectations.Instance(); var isActiveProperty = new IPropertyInfoMakeExpectations().Instance(); var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>(); ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties); ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), true); inputProperties.Add(lastOrderProperty, new DateTime(2025, 9, 24, 18, 3, 40)); // Act var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty); (rule as IBusinessRule).Execute(ruleContextExpectations.Instance()); // Assert is automatically done by Rocks when disposing the context } Notice how the Rocks mock objects have expectations set up for their properties and methods. This allows us to verify that the rule interacts with the context as expected. This is a little different from more explicit `Assert` statements, but it is a powerful way to ensure that the rule behaves correctly. For example, notice how the `Name` property of the `lastOrderProperty` mock is expected to be called twice. If the rule does not call this property the expected number of times, the test will fail when the `context` is disposed at the end of the `using` block: lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); This is a powerful feature of Rocks that allows you to verify the behavior of your code without having to write explicit assertions. The test creates an instance of the `LastOrderDateRule` and calls its `Execute` method, passing in the mock `IRuleContext`. The rule should set the `IsActive` property to true because the `LastOrderDate` is within the last year. When the test completes, Rocks will automatically verify that all expectations were met. If any expectations were not met, the test will fail. 
This is a simple example, but it demonstrates how you can use Rocks to unit test CSLA business rules in isolation. By creating mock objects for the dependencies of the rule, you can focus on testing the rule’s behavior without having to worry about the complexities of the entire business object. ## Conclusion CSLA’s business rules engine is a powerful feature that allows you to encapsulate business logic in a way that is easy to manage and maintain. By using the Rocks mocking framework, you can create unit tests for your business rules that isolate their behavior and ensure that they work as expected. This combination of CSLA and Rocks provides a robust and maintainable way to implement and test business logic in your applications.
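One piece not shown above is how a rule like `LastOrderDateRule` gets attached to a business object. Here is a minimal, hedged sketch of that registration - the `Customer` class and its property names are illustrative, not taken from the Lab 02 code - showing the `AddBusinessRules` override that causes CSLA to run the rule whenever `LastOrderDate` changes:

```csharp
// Hedged sketch: associating LastOrderDateRule with properties in a business class.
// The Customer class and property names are illustrative, not from the lab repository.
using System;
using Csla;
using BusinessLibrary.Rules;

namespace BusinessLibrary;

[Serializable]
public class Customer : BusinessBase<Customer>
{
    public static readonly PropertyInfo<DateTime> LastOrderDateProperty =
        RegisterProperty<DateTime>(nameof(LastOrderDate));
    public DateTime LastOrderDate
    {
        get => GetProperty(LastOrderDateProperty);
        set => SetProperty(LastOrderDateProperty, value);
    }

    public static readonly PropertyInfo<bool> IsActiveProperty =
        RegisterProperty<bool>(nameof(IsActive));
    public bool IsActive
    {
        get => GetProperty(IsActiveProperty);
        private set => LoadProperty(IsActiveProperty, value);
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Runs whenever LastOrderDate changes; the rule sets IsActive as an output value.
        BusinessRules.AddRule(new LastOrderDateRule(LastOrderDateProperty, IsActiveProperty));
    }
}
```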
28.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
Accessing User Identity on a Blazor Wasm Client On the server, Blazor authentication is fairly straightforward because it uses the underlying ASP.NET Core authentication mechanism. I’ll quickly review server authentication before getting to the WebAssembly part so you have an end-to-end understanding. I should note that this post is all about a Blazor 8 app that uses per-component rendering, so there is an ASP.NET Core server hosting Blazor server pages, and there may also be pages using `InteractiveAuto` or `InteractiveWebAssembly` that run in WebAssembly on the client device. ## Blazor Server Authentication Blazor Server components run in an ASP.NET Core hosted web server environment. This means that they can have access to all that ASP.NET Core has to offer. For example, a server-static rendered Blazor server page can use HttpContext, and therefore can use the standard ASP.NET Core `SignInAsync` and `SignOutAsync` methods like you’d use in MVC or Razor Pages. ### Blazor Login Page Here’s the Razor markup for a simple `Login.razor` page from a Blazor 8 server project with per-component rendering: @page "/login" @using BlazorHolWasmAuthentication.Services @using Microsoft.AspNetCore.Authentication @using Microsoft.AspNetCore.Authentication.Cookies @using System.Security.Claims @inject UserValidation UserValidation @inject IHttpContextAccessor httpContextAccessor @inject NavigationManager NavigationManager <PageTitle>Login</PageTitle> <h1>Login</h1> <div> <EditForm Model="userInfo" OnSubmit="LoginUser" FormName="loginform"> <div> <label>Username</label> <InputText @bind-Value="userInfo.Username" /> </div> <div> <label>Password</label> <InputText type="password" @bind-Value="userInfo.Password" /> </div> <button>Login</button> </EditForm> </div> <div style="background-color:lightgray"> <p>User identities:</p> <p>admin, admin</p> <p>user, user</p> </div> <div><p class="alert-danger">@Message</p></div> This form uses the server-static form of the `EditForm` component, which does a standard postback to the server. Blazor uses the `FormName` and `OnSubmit` attributes to route the postback to a `LoginUser` method in the code block: @code { [SupplyParameterFromForm] public UserInfo userInfo { get; set; } = new(); public string Message { get; set; } = ""; private async Task LoginUser() { Message = ""; ClaimsPrincipal principal; if (UserValidation.ValidateUser(userInfo.Username, userInfo.Password)) { // create authenticated principal var identity = new ClaimsIdentity("custom"); var claims = new List<Claim>(); claims.Add(new Claim(ClaimTypes.Name, userInfo.Username)); var roles = UserValidation.GetRoles(userInfo.Username); foreach (var item in roles) claims.Add(new Claim(ClaimTypes.Role, item)); identity.AddClaims(claims); principal = new ClaimsPrincipal(identity); var httpContext = httpContextAccessor.HttpContext; if (httpContext is null) { Message = "HttpContext is null"; return; } AuthenticationProperties authProperties = new AuthenticationProperties(); await httpContext.SignInAsync( CookieAuthenticationDefaults.AuthenticationScheme, principal, authProperties); NavigationManager.NavigateTo("/"); } else { Message = "Invalid credentials"; } } public class UserInfo { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } } The username and password are validated by a `UserValidation` service. That service indicates whether the credentials are valid and, if they are, provides the user’s roles so they can be turned into claims.
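The `UserValidation` service itself isn’t shown here. A minimal sketch, assuming the hard-coded demo users hinted at on the login page (the real lab implementation may differ), could look like this:

```csharp
// Hedged sketch of the UserValidation service used by the login page.
// It simply matches the ValidateUser/GetRoles calls made from LoginUser,
// with the demo credentials (admin/admin, user/user) shown on the page.
using System.Collections.Generic;

namespace BlazorHolWasmAuthentication.Services;

public class UserValidation
{
    private static readonly Dictionary<string, (string Password, string[] Roles)> _users = new()
    {
        ["admin"] = ("admin", new[] { "Admin", "User" }),
        ["user"] = ("user", new[] { "User" })
    };

    public bool ValidateUser(string username, string password)
        => _users.TryGetValue(username, out var user) && user.Password == password;

    public List<string> GetRoles(string username)
        => _users.TryGetValue(username, out var user)
            ? new List<string>(user.Roles)
            : new List<string>();
}
```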
The `LoginUser` code then uses that list of claims to create a `ClaimsIdentity` and `ClaimsPrincipal`. That pair of objects represents the user’s identity in .NET. The `SignInAsync` method is then called on the `HttpContext` object to create a cookie for the user’s identity (or whatever storage option was configured in `Program.cs`). From this point forward, ASP.NET Core code (such as a web API endpoint) and Blazor server components (via the Blazor `AuthenticationStateProvider` and `CascadingAuthenticationState`) all have consistent access to the current user identity. ### Blazor Logout Page The `Logout.razor` page is simpler still, since it doesn’t require any input from the user: @page "/logout" @using Microsoft.AspNetCore.Authentication @using Microsoft.AspNetCore.Authentication.Cookies @inject IHttpContextAccessor httpContextAccessor @inject NavigationManager NavigationManager <h3>Logout</h3> @code { protected override async Task OnInitializedAsync() { var httpContext = httpContextAccessor.HttpContext; if (httpContext != null) { var principal = httpContext.User; if (principal.Identity is not null && principal.Identity.IsAuthenticated) { await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme); } } NavigationManager.NavigateTo("/"); } } The important part of this code is the call to `SignOutAsync`, which removes the ASP.NET Core user token, thus ensuring the current user has been “logged out” from all ASP.NET Core and Blazor server app elements. ### Configuring the Server For the `Login.razor` and `Logout.razor` pages to work, they must be server-static (which is the default for per-component rendering), and `Program.cs` must contain some important configuration. First, some services must be registered: builder.Services.AddHttpContextAccessor(); builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) .AddCookie(); builder.Services.AddCascadingAuthenticationState(); builder.Services.AddTransient<UserValidation>(); The `AddHttpContextAccessor` registration makes it possible to inject an `IHttpContextAccessor` service so your code can access the `HttpContext` instance. > ⚠️ Generally speaking, you should only access `HttpContext` from within a server-static rendered page. The `AddAuthentication` method registers and configures ASP.NET Core authentication, in this case storing the user token in a cookie. The `AddCascadingAuthenticationState` method enables Blazor server components to make use of cascading authentication state. Finally, the `UserValidation` service is registered. This service is implemented by you to verify the user credentials, and to return the user’s claims if the credentials are valid. Some further configuration is required after the services have been registered: app.UseAuthentication(); app.UseAuthorization(); ### Enabling Cascading Authentication State The `Routes.razor` component is where the user authentication state is made available to all Blazor components on the server: <CascadingAuthenticationState> <Router AppAssembly="typeof(Program).Assembly" AdditionalAssemblies="new[] { typeof(Client._Imports).Assembly }"> <Found Context="routeData"> <AuthorizeRouteView RouteData="routeData" DefaultLayout="typeof(Layout.MainLayout)" /> <FocusOnNavigate RouteData="routeData" Selector="h1" /> </Found> </Router> </CascadingAuthenticationState> Notice the addition of the `CascadingAuthenticationState` element, which cascades an `AuthenticationState` instance to all Blazor server components.
Also notice the use of `AuthorizeRouteView`, which enables the use of the `[Authorize]` attribute in Blazor pages, so only an authorized user can access those pages. ### Adding the Login/Logout Links The final step to making authentication work on the server is to enhance the `MainLayout.razor` component to add links for the login and logout pages: @using Microsoft.AspNetCore.Components.Authorization @inherits LayoutComponentBase <div class="page"> <div class="sidebar"> <NavMenu /> </div> <main> <div class="top-row px-4"> <AuthorizeView> <Authorized> Hello, @context!.User!.Identity!.Name <a href="logout">Logout</a> </Authorized> <NotAuthorized> <a href="login">Login</a> </NotAuthorized> </AuthorizeView> </div> <article class="content px-4"> @Body </article> </main> </div> <div id="blazor-error-ui"> An unhandled error has occurred. <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> The `AuthorizeView` component is used, with the `Authorized` block providing content for a logged-in user, and the `NotAuthorized` block providing content for an anonymous user. In both cases, the user is directed to the appropriate page to log in or log out. At this point, all _server-side_ Blazor components can use authorization, because they have access to the user identity via the cascading `AuthenticationState` object. This doesn’t automatically extend to pages or components running in WebAssembly on the browser. That takes some extra work. ## Blazor WebAssembly User Identity There is nothing built into Blazor that automatically makes the user identity available to pages or components running in WebAssembly on the client device. You should also be aware that there are possible security implications to making the user identity available on the client device. This is because any client device can be hacked, and so a bad actor could gain access to any `ClaimsIdentity` object that exists on the client device - including the list of the user’s claims, if those claims are on the client. In my experience, if developers are using client-side technologies such as WebAssembly, Angular, React, WPF, etc., they’ve already reconciled the security implications of running code on a client device, and so it is probably not an issue to have the user’s roles or other claims on the client. I will, however, call out where you can filter the user’s claims to prevent a sensitive claim from flowing to a client device. The basic process of making the user identity available on a WebAssembly client is to copy the user’s claims from the server, and to use that claims data to create a copy of the `ClaimsIdentity` and `ClaimsPrincipal` on the WebAssembly client. ### A Web API for ClaimsPrincipal The first step is to create a web API endpoint on the ASP.NET Core (and Blazor) server that exposes a copy of the user’s claims so they can be retrieved by the WebAssembly client.
For example, here is a controller that provides this functionality: using Microsoft.AspNetCore.Mvc; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Controllers; [ApiController] [Route("[controller]")] public class AuthController(IHttpContextAccessor httpContextAccessor) { [HttpGet] public User GetUser() { ClaimsPrincipal principal = httpContextAccessor!.HttpContext!.User; if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) { // Return a user object with the username and claims var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); return new User { Username = principal.Identity!.Name, Claims = claims }; } else { // Return an empty user object return new User(); } } } public class Credentials { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } public class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } public class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } This code uses an `IHttpContextAccessor` to access `HttpContext` to get the current `ClaimsPrincipal` from ASP.NET Core. It then copies the data from the `ClaimsIdentity` into simple types that can be serialized into JSON for return to the caller. Notice how the code doesn’t have to do any work to determine the identity of the current user. This is because ASP.NET Core has already authenticated the user, and the user identity token cookie has been unpacked by ASP.NET Core before the controller is invoked. The line of code where you could filter sensitive user claims is this: var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); This line copies _all_ claims for serialization to the client. You could filter out claims considered sensitive so they don’t flow to the WebAssembly client. Keep in mind that any code that relies on such claims won’t work in WebAssembly pages or components. In the server `Program.cs` it is necessary to register and map controllers: builder.Services.AddControllers(); and app.MapControllers(); At this point the web API endpoint exists for use by the Blazor WebAssembly client. ### Getting the User Identity in WebAssembly Blazor always maintains the current user identity as a `ClaimsPrincipal` in an `AuthenticationState` object. Behind the scenes, there is an `AuthenticationStateProvider` service that provides access to the `AuthenticationState` object. On the Blazor server we generally don’t need to worry about the `AuthenticationStateProvider` because a default one is provided for our use. On the Blazor WebAssembly client, however, we must implement a custom `AuthenticationStateProvider`. For example: using Microsoft.AspNetCore.Components.Authorization; using System.Net.Http.Json; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Client; public class CustomAuthenticationStateProvider(HttpClient HttpClient) : AuthenticationStateProvider { private AuthenticationState AuthenticationState { get; set; } = new AuthenticationState(new ClaimsPrincipal()); private DateTimeOffset?
CacheExpire; public override async Task<AuthenticationState> GetAuthenticationStateAsync() { if (!CacheExpire.HasValue || DateTimeOffset.Now > CacheExpire) { var previousUser = AuthenticationState.User; var user = await HttpClient.GetFromJsonAsync<User>("auth"); if (user != null && !string.IsNullOrEmpty(user.Username)) { var claims = new List<System.Security.Claims.Claim>(); foreach (var claim in user.Claims) { claims.Add(new System.Security.Claims.Claim(claim.Type, claim.Value)); } var identity = new ClaimsIdentity(claims, "auth_api"); var principal = new ClaimsPrincipal(identity); AuthenticationState = new AuthenticationState(principal); } else { AuthenticationState = new AuthenticationState(new ClaimsPrincipal()); } if (!ComparePrincipals(previousUser, AuthenticationState.User)) { NotifyAuthenticationStateChanged(Task.FromResult(AuthenticationState)); } CacheExpire = DateTimeOffset.Now + TimeSpan.FromSeconds(30); } return AuthenticationState; } private static bool ComparePrincipals(ClaimsPrincipal principal1, ClaimsPrincipal principal2) { if (principal1.Identity == null || principal2.Identity == null) return false; if (principal1.Identity.Name != principal2.Identity.Name) return false; if (principal1.Claims.Count() != principal2.Claims.Count()) return false; foreach (var claim in principal1.Claims) { if (!principal2.HasClaim(claim.Type, claim.Value)) return false; } return true; } private class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } private class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } } This is a subclass of `AuthenticationStateProvider`, and it provides an implementation of the `GetAuthenticationStateAsync` method. This method invokes the server-side web API controller to get the user’s claims, and then uses them to create a `ClaimsIdentity` and `ClaimsPrincipal` for the current user. This value is then returned within an `AuthenticationState` object for use by Blazor and any other code that requires the user identity on the client device. One key detail in this code is that the `NotifyAuthenticationStateChanged` method is only called in the case that the user identity has changed. The `ComparePrincipals` method compares the existing principal with the one just retrieved from the web API to see if there’s been a change. It is quite common for Blazor and other code to request the `AuthenticationState` very frequently, and that can result in a lot of calls to the web API. Even a cache that lasts a few seconds will reduce the volume of repetitive calls significantly. This code uses a 30 second cache. 
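Blazor components pick this provider up automatically through the cascading authentication state, but plain C# services on the client can consume it as well by injecting the abstract `AuthenticationStateProvider`. A small sketch (the `CurrentUserService` class is hypothetical, not part of the sample):

```csharp
// Hedged sketch: a client-side service that checks the current user's roles by
// asking the registered AuthenticationStateProvider (the custom provider above).
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.Authorization;

public class CurrentUserService(AuthenticationStateProvider authStateProvider)
{
    public async Task<bool> IsInRoleAsync(string role)
    {
        var state = await authStateProvider.GetAuthenticationStateAsync();
        var user = state.User;
        return user.Identity is { IsAuthenticated: true } && user.IsInRole(role);
    }
}
```

Such a service would be registered alongside the other client services, for example with `builder.Services.AddScoped<CurrentUserService>();`.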
### Configuring the WebAssembly Client To make Blazor use our custom provider, and to enable authentication on the client, it is necessary to add some code to `Program.cs` _in the client project_: using BlazorHolWasmAuthentication.Client; using Marimer.Blazor.RenderMode.WebAssembly; using Microsoft.AspNetCore.Components.Authorization; using Microsoft.AspNetCore.Components.WebAssembly.Hosting; var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); builder.Services.AddAuthorizationCore(); builder.Services.AddScoped<AuthenticationStateProvider, CustomAuthenticationStateProvider>(); builder.Services.AddCascadingAuthenticationState(); await builder.Build().RunAsync(); The `CustomAuthenticationStateProvider` requires an `HttpClient` service, and relies on the `AddAuthorizationCore` and `AddCascadingAuthenticationState` registrations to function properly. ## Summary The preexisting integration between ASP.NET Core and Blazor on the server makes server-side user authentication fairly straightforward. Extending the authenticated user identity to WebAssembly-hosted pages and components requires a little extra work: creating a controller on the server and a custom `AuthenticationStateProvider` on the client.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Blazor EditForm OnSubmit behavior I am working on the open-source KidsIdKit app and have encountered some “interesting” behavior with the `EditForm` component and how buttons trigger the `OnSubmit` event. An `EditForm` is declared similar to this: <EditForm Model="CurrentChild" OnSubmit="SaveData"> I would expect that any `button` component with `type="submit"` would trigger the `OnSubmit` handler. <button class="btn btn-primary" type="submit">Save</button> I would also expect that any `button` component _without_ `type="submit"` would _not_ trigger the `OnSubmit` handler. <button class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> I’d think this was true _especially_ if that second button was in a nested component, so it isn’t even in the `EditForm` directly, but is actually in its own component, and it uses an `EventCallback` to tell the parent component that the button was clicked. ### Actual Results In Blazor 8 I see different behaviors between MAUI Hybrid and Blazor WebAssembly hosts. In a Blazor WebAssembly (web) scenario, my expectations are met. The secondary button in the sub-component does _not_ cause `EditForm` to submit. In a MAUI Hybrid scenario however, the secondary button in the sub-component _does_ cause `EditForm` to submit. I also tried this using the new Blazor 9 MAUI Hybrid plus Web template - though in this case the web version is Blazor server. In my Blazor 9 scenarios, in _both_ hosting cases the secondary button triggers the submit of the `EditForm` - even though the secondary button is in a sub-component (its own `.razor` file)! What I’m getting out of this is that we must assume that _any button_ , even if it is in a nested component, will trigger the `OnSubmit` event of an `EditForm`. Nasty! ### Solution The solution (thanks to @jeffhandley) is to add `type="button"` to all non-submit `button` components. It turns out that the default HTML for `<button />` is `type="submit"`, so if you don’t override that value, then all buttons trigger a submit. What this means is that I _could_ shorten my actual submit button: <button class="btn btn-primary">Save</button> I probably won’t do this though, as being explicit probably increases readability. And I _absolutely must_ be explicit with all my other buttons: <button type="button" class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> This prevents the other buttons (even in nested Razor components) from accidentally triggering the submit behavior in the `EditForm` component.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Do not throw away your old PCs As many people know, Windows 10 is coming to its end of life (or at least end of support) in 2025. Because Windows 11 requires specialized hardware that isn’t built into a lot of existing PCs running Windows 10, there is no _Microsoft-based_ upgrade path for those devices. The thing is, a lot of those “old” Windows 10 devices are serving their users perfectly well, and there is often no compelling reason for a person to replace their PC just because they can’t upgrade to Windows 11. > ℹ️ If you can afford to replace your PC with a new one, that’s excellent, and I’m not trying to discourage that! However, you can still avoid throwing away your old PC, and you should consider alternatives. Throwing away a PC or laptop - like in the trash - is a _horrible_ thing to do, because PCs contain toxic elements that are bad for the environment. In many places it might actually be illegal. Besides which, whether you want to keep and continue to use your old PC or not, _someone_ can probably make good use of it. > ⚠️ If you do need to “throw away” your old PC, please make sure to turn it in to an e-waste recycling center or hazardous waste collection site. I’d like to discuss some possible alternatives to throwing away or recycling your old PC. Things that provide much better outcomes for people and the environment! It might be that you can continue to use your PC or laptop, or someone else may be able to give it new life. Here are some options. ## Continue Using the PC Although you may be unable to upgrade to Windows 11, there are alternative operating systems that will breathe new life into your existing PC. The question you should ask first is: what do you do on your PC? The following may require Windows: * Windows-only software (like CAD drawing or other software) * Hard-core gaming On the other hand, if you use your PC entirely for things like: * Browsing the web * Writing documents * Simple spreadsheets * Web-based games in a browser Then you can probably replace Windows with an alternative and continue to be very happy with your PC. What are these “alternative operating systems”? They are all variations of Linux. If you’ve never heard of Linux, or have heard it is complicated and only for geeks, rest assured that there are some variations of Linux that are no more complex than Windows 10. ### “Friendly” Variations of Linux Some of the friendliest variations of Linux include: * Linux Mint (Cinnamon) - Linux with a desktop that is very similar to Windows * Ubuntu Desktop - Linux with its own style of graphical desktop that isn’t too hard to learn if you are used to Windows There are many others, these are just a couple that I’ve used and found to be easy to install and learn. > 🛑 Before installing Linux on your PC make sure to copy all the files you want to keep onto a thumb drive or something! Installing Linux will _entirely delete your existing hard drive_ and none of your existing files will be on the PC when you are done. Once you’ve installed Linux, you’ll need software to do the things you do today. ### Browsers on Linux Linux often comes with the Firefox browser pre-installed. Other browsers that you can install include: * Chrome * Edge I am sure other browsers are available as well. Keep in mind that most modern browsers provide comparable features and let you use nearly every web site, so you may be happy with Firefox or whatever comes pre-installed with Linux.
### Software similar to Office on Linux Finally, most people use their PC to write documents, create spreadsheets and do other things that are often done using Microsoft Office. Some alternatives to Office available on Linux include: * OneDrive - Microsoft on-line file storage and web-based versions of Word, Excel, and more * Google Docs - Google on-line file storage and web-based word processor, spreadsheet, and more * LibreOffice - Software you install on your PC that provides word processing, spreadsheets, and more. File formats are compatible with Word, Excel, and other Office tools. Other options exist, these are the ones I’ve used and find to be most common. ## Donate your PC Even if your needs can’t be met by running Linux on your old PC, or perhaps installing a new operating system just isn’t for you - please consider that there are people all over the world, including near you, that would _love_ to have access to a free computer. This might include kids, adults, or seniors in your area who can’t afford a PC (or to have their own PC). In the US, rural and urban areas are _filled_ with young people who could benefit from having a PC to do school work, learn about computers, and more. > 🛑 Before donating your PC, make sure to use the Windows 10 feature to reset the PC to factory settings. This will delete all _your_ files from the PC, ensuring that the new owner can’t access any of your information. Check with your church and community organizations to find people who may benefit from having access to a computer. ## Build a Server If you know people, or are someone, who likes to tinker with computers, there are a lot of alternative uses for an old PC or laptop. You can install Linux _server_ software on an old PC and then use that server for all sorts of fun things: * Create a file server for your photos and other media - can be done with a low-end PC that has a large hard drive * Build a Kubernetes cluster out of discarded devices - requires PCs with at least 2 CPU cores and 8 gigs of memory, though more is better Here are a couple articles with other good ideas: * Avoid the Trash Heap: 17 Creative Uses for an Old Computer * 10 Creative Things to Do With an Old Computer If you aren’t the type to tinker with computers, just ask around your family and community. It is amazing how many people do enjoy this sort of thing, and would love to have access to a free device that can be used for something other than being hazardous waste. ## Conclusion I worry that 2025 will be a bad year for e-waste and hazardous waste buildup in landfills and elsewhere around the world, as people realize that their Windows 10 PC or laptop can’t be upgraded and “needs to be replaced”. My intent in writing this post is to provide some options to consider that may breathe new life into your “old” PC. For yourself, or someone else, that computer may have many more years of productivity ahead of it.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Why MAUI Blazor Hybrid It can be challenging to choose a UI technology in today’s world. Even if you narrow it down to wanting to build “native” apps for phones, tablets, and PCs, there are so many options. In the Microsoft .NET space, there are _still_ many options, including .NET MAUI, Uno, Avalonia, and others. The good news is that these are good options - Uno and Avalonia are excellent, and MAUI is coming along nicely. At this point in time, my _default_ choice is usually something called a MAUI Hybrid app, where you build your app using Blazor, and the app is hosted in MAUI so it is built as a native app for iOS, Android, Windows, and Mac. Before I get into why this is my default, I want to point out that I (personally) rarely build mobile apps that “represent the brand” of a company. Take the Marriott or Delta apps as examples - the quality of these apps and the way they work differently on iOS vs Android can literally cost these companies customers. They are a primary contact point that can irritate a customer or please them. This is not the space for MAUI Blazor Hybrid in my view. ## Common Code MAUI Blazor Hybrid is (in my opinion) for apps that need to have rich functionality, good design, and be _common across platforms_, often including phones, tablets, and PCs. Most of my personal work is building business apps - apps that a business creates to enable their employees, vendors, partners, and sometimes even customers, to interact with important business systems and functionality. Blazor (the .NET web UI framework) turns out to be an excellent choice for building business apps. Though this is a bit of a tangent, Blazor is my go-to for modernizing (aka replacing) Windows Forms, WPF, Web Forms, MVC, Silverlight, and other “legacy” .NET app user experiences. The one thing Blazor doesn’t do by itself is create native apps that can run on devices. It creates web sites (server hosted) or web apps (browser hosted) or a combination of the two. Which is wonderful for a lot of scenarios, but sometimes you really need things like offline functionality or access to per-platform APIs and capabilities. This is where MAUI Hybrid comes into the picture, because in this model you build your Blazor app, and that app is _hosted_ by MAUI, and therefore is a native app on each platform: iOS, Android, Windows, Mac. That means that your Blazor app is installed as a native app (therefore can run offline), and it can tap into per-platform APIs like any other native app. ## Per-Platform In most business apps there is little (or no) per-platform difference, and so the vast majority of your app is just Blazor - C#, html, css. It is common across all the native platforms, and optionally (but importantly) also the browser. When you do have per-platform differences, like needing to interact with serial or USB port devices, or arbitrary interactions with local storage/hard drives, you can do that. And if you do that with a little care, you still end up with the vast majority of your app in Blazor, with small bits of C# that are per-platform. ## End User Testing I mentioned that a MAUI Hybrid app can not only create native apps but that it can also target the browser. This is fantastic for end user testing, because it can be challenging to do testing via the Apple, Google, and Microsoft stores. Each requires app validation, on their schedule not yours, and some have limits on the numbers of test users. > In .NET 9, the ability to create a MAUI Hybrid that also targets the browser is a pre-built template.
Previously you had to set it up yourself. What this means is that you can build your Blazor app, have your users do a lot of testing of your app via the browser, and once you are sure it is ready to go, then you can do some final testing on a per-platform basis via the stores (or whatever scheme you use to install native apps). ## Rich User Experience Blazor, with its use of html and css backed by C#, directly enables rich user experiences and high levels of interactivity. The de facto UI language is html/css after all, and we all know how effective it can be at building great experiences in browsers - as well as native apps. There is a rapidly growing and already excellent ecosystem around Blazor, with open-source and commercial UI toolkits and frameworks available that make it easy to create many different types of user experience, including Material design and others. From a developer perspective, it is nice to know that learning any of these Blazor toolsets is a skill that spans native and web development, as does Blazor itself. In some cases you’ll want to tap into per-platform capabilities as well. The MAUI Community Toolkit is available and often provides pre-existing abstractions for many per-platform needs. Some highlights include: * File system interaction * Badge/notification systems * Images * Speech to text Between the basic features of Blazor, advanced html/css, and things like the toolkit, it is pretty easy to build some amazing experiences for phones, tablets, and PCs - as well as the browser. ## Offline Usage Blazor itself can provide a level of offline app support if you build a Progressive Web App (PWA). To do this, you create a standalone Blazor WebAssembly app that includes the PWA manifest and service worker code (in JavaScript). PWAs are quite powerful and are absolutely something to consider as an option for some offline app requirements. The challenge with a PWA is that it is running in a browser (even though it _looks_ like a native app) and therefore is limited by the browser sandbox and the local operating system. For example, iOS devices place substantial limitations on what a PWA can do and how much data it can store locally. There are commercial reasons why Apple doesn’t like PWAs competing with “real apps” in its store, and the end result is that PWAs _might_ work for you, as long as you don’t need too much local storage or too many native features. MAUI Hybrid apps are actual native apps installed on the end user’s device, and so they can do anything a native app can do. Usually this means asking the end user for permission to access things like storage, location, and other services. As a smartphone user you are certainly aware of that type of request as an app is installed. The benefit then, is that if the user gives your app permission, your app can do things it couldn’t do in a PWA from inside the browser sandbox. In my experience, the most important of these things is effectively unlimited access to local storage for offline data that is required by the app. ## Conclusion This has been a high-level overview of my rationale for why MAUI Blazor Hybrid is my “default start point” when thinking about building native apps for iOS, Android, Windows, and/or Mac. Can I be convinced that some other option is better for a specific set of business and technical requirements? Of course!!
However, having a well-known and very capable option as a starting point provides a shortcut for discussing the business and technical requirements - to determine whether each requirement is or isn’t already met. And in many cases, MAUI Hybrid apps offer very high developer productivity, the functionality needed by end users, and long-term maintainability.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Running Linux on My Surface Go I have a first-generation Surface Go, the little 10” tablet Microsoft created to try and compete with the iPad. I’ll confess that I never used it a lot. I _tried_, I really did! But it is underpowered, and I found that my Surface Pro devices were better for nearly everything. My reasoning for having a smaller tablet was that I travel quite a lot, more back then than now, and I thought having a tablet might be nicer for watching movies and that sort of thing, especially on the plane. It turns out that the Surface Pro does that too, without having to carry a second device. Even when I switched to my Surface Laptop Studio, I _still_ didn’t see the need to carry a second device - though the Surface Pro is absolutely better for traveling in my view. I’ve been saying for quite some time that I think people need to look at Linux as a way to avoid the e-waste involved in discarding their Windows 10 PCs - the ones that can’t run Windows 11. I use Linux regularly, though usually via the command line for software development, and so I thought I’d put it on my Surface Go to gain real-world experience. > I have quite a few friends and family who have Windows 10 devices that are perfectly good. Some of those folks don’t want to buy a new PC, due to financial constraints, or just because their current PC works fine. End of support for Windows 10 is a problem for them! The Surface Go is a bit trickier than most mainstream Windows 10 laptops or desktops, because it is a convertible tablet with a touch screen and specialized (rare) hardware - as compared to most of the devices in the market. So I did some reading, and used Copilot, and found a decent (if old) article on installing Linux on a Surface Go. > ⚠️ One quick warning: Surface Go was designed around Windows, and while it does work reasonably well with Linux, it isn’t as good. Scrolling is a bit laggy, and the cameras don’t have the same quality (by far). If you want to use the Surface Go as a small, lightweight laptop I think it is pretty good; if you are looking for a good _tablet_ experience you should probably just buy a new device - and donate the old one to someone who needs a basic PC. Fortunately, Linux hasn’t evolved all that much or all that rapidly, and so this article remains pretty valid even today. ## Using Ubuntu Desktop I chose to install Ubuntu, identified in the article as a Linux distro (distribution, or variant, or version) that has decent support for the Surface Go. I also chose Ubuntu because this is normally what I use for my other purposes, and so I’m familiar with it in general. However, I installed the latest Ubuntu Desktop (version 25.04), not the older version mentioned in the article. This was a good choice, because support for the Surface hardware has improved over time - though the other steps in the article remain valid. ## Download and Set Up Media The steps to get ready are: 1. Download Ubuntu Desktop - this downloads a file with a `.iso` extension 2. Download software to create a bootable flash drive based on the `.iso` file. I used software called Rufus - just be careful to avoid the flashy (spammy) download buttons, and find the actual download link text in the page 3. Get a flash drive (either new, or one you can erase) and insert it into your PC 4. Run Rufus, and identify the `.iso` file and your flash drive 5. Rufus will write the data to the flash drive, and make the flash drive bootable so you can use it to install Linux on any PC 6.
🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 ## Install Ubuntu on the Surface At this point you have a bootable flash drive and a Surface Go device, and you can do the installation. This is where the zdnet article is a bit dated - the process is smoother and simpler than it was back then, so just do the install like this: 1. 🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 2. Insert the flash drive into the Surface USB port (for the Surface Go I had to use an adapter from type C to type A) 3. Press the Windows key and type “reset” and choose the settings option to reset your PC 4. That will bring up the settings page where you can choose Advanced and reset the PC for booting from a USB device 5. What I found is that the first time I did this, my Linux boot device didn’t appear, so I rebooted to Windows and did step 4 again 6. The second time, an option was there for Linux. It had an odd name: Linpus (as described in the zdnet article) 7. Boot from “Linpus” and your PC will sit and spin for quite some time (the Surface Go is quite old and slow by modern standards), and eventually will come up with Ubuntu 8. The thing is, it is _running_ Ubuntu, but it hasn’t _installed_ Ubuntu. So go through the wizard and answer the questions - especially the wifi setup 9. Once you are on the Ubuntu (really Gnome) desktop, you’ll see an icon for _installing_ Ubuntu. Double-click that and the actual installation process will begin 10. I chose to have the installer totally reformat my hard drive, and I recommend doing that, because the Surface Go doesn’t have a huge drive to start with, and I want all of it available for my new operating system 11. Follow the rest of the installer steps and let the PC reboot 12. Once it has rebooted, you can remove the flash drive ## Installing Updates At this point you should be sitting at your new desktop. The first thing Linux will want to do is install updates, and you should let it do so. I laugh a bit, because people make fun of Windows updates, and Patch Tuesday. Yet all modern and secure operating systems need regular updates to remain functional and secure, and Linux is no exception. Whether automated or not, you should do regular (at least monthly) updates to keep Linux secure and happy. ## Installing Missing Features Immediately upon installation, Ubuntu 25.04 seems to have very good support for the Surface Go, including multi-touch on the screen and trackpad, use of the Surface Pen, speakers, and the external (physical) keyboard. What doesn’t work right away, at least what I found, are the cameras or any sort of onscreen/soft keyboard. You need to take extra steps for these. The zdnet article is helpful here. ### Getting the Cameras Working The zdnet article walks through the process to get the cameras working. I actually think the camera drivers are now just part of Ubuntu, but I did have to take steps to get them working, and even then they don’t have great quality - this is clearly an area where moving to Linux is a step backward. At times I found the process a bit confusing, but just plowed ahead figuring I could always reinstall Linux again if necessary. It did work fine in the end, no reinstall needed. 1. 
Install the Linux Surface kernel - which sounds intimidating, but is really just following some steps as documented in their GitHub repo; other stuff in the document is quite intimidating, but isn’t really relevant if all you want to do is get things running 2. That GitHub repo also has information about the various camera drivers for different Surface devices, and I found that to be a bit overwhelming; fortunately, it really amounts to just running one command 3. Make sure you also run these commands to give your Linux account permissions to use the camera 4. At this point I was able to follow instructions to run `cam` and see the cameras - including some other odd entries I ignored 5. And I was able to run `qcam`, which is a command that brings up a graphical app so you can see through each camera > ⚠️ Although the cameras technically work, I am finding that a lot of apps still don’t see the cameras, and in all cases the camera quality is quite poor. ### Getting a Soft or Onscreen Keyboard Because the Surface Go is _technically_ a tablet, I expected there to be a soft or onscreen keyboard. It turns out that there is a primitive one built into Ubuntu, but it really doesn’t work very well. It is pretty, but I was unable to figure out how to get it to appear via touch, which kind of defeats the purpose (I needed my physical keyboard to get the virtual one to appear). I found an article that has some good suggestions for Linux onscreen keyboard (OSK) improvements. I used what the article calls “Method 2” to install an Extension Manager, which allowed me to install extensions for the keyboard. 1. Install the Extension Manager `sudo apt install gnome-shell-extension-manager` 2. Open the Extension Manager app 3. This is where the article fell down, because the extension they suggested doesn’t seem to exist any longer, and there are numerous other options to explore 4. I installed an extension called “Touch X” which has the ability to add an icon to the upper-right corner of the screen by which you can open the virtual keyboard at any time (it can also do a cool ripple animation when you touch the screen if you’d like) 5. I also installed “GJS OSK”, which is a replacement soft keyboard that has a lot more configurability than the built-in default; you can try both and see which you prefer ## Installing Important Apps This section is mostly editorial, because I use certain apps on a regular basis, and you might use other apps. Still, you should be aware that there are a couple of ways to install apps on Ubuntu: snap and apt. The “snap” concept is specific to Ubuntu, and can be quite nice, as it installs each app into a sort of sandbox that is managed by Ubuntu. The “app store” in Ubuntu lists and installs apps via snap. The “apt” concept actually comes from Ubuntu’s parent, Debian. Since Debian and Ubuntu make up a very large percentage of the Linux install base, the `apt` command is extremely common. This is something you do from a terminal command line. Using snap is very convenient, and when it works I love it. Sometimes I find that apps installed via snap don’t have access to things like speakers, cameras, or other devices. I think that’s because they run in a sandbox. I’m pretty sure there are ways to address these issues - my normal way of addressing them is to uninstall the snap and use `apt`. ### My “Important” Apps I installed apps via snap, apt, and as PWAs. #### Snap and Apt Apps Here are the apps I installed right away: 1.
Microsoft Edge browser - because I use Edge on my Windows devices and Android phone, I want to use the same browser here to sync all my history, settings, etc. - I installed this using the default Firefox browser, then switched the default to Edge 2. Visual Studio Code - I’m a developer, and find it hard to imagine having a device without some way to write code - and I use vscode on Windows, so I’m used to it, and it works the same on Linux - I installed this as a snap via App Center 3. git - again, I’m a developer and all my stuff is on GitHub, which means using git as a primary tool - I installed this using `apt` 4. Discord - I use discord for many reasons - talking to friends, gaming, hosting the CSLA .NET Discord server - so it is something I use all the time - I installed this as a snap via App Center 5. Thunderbird Email - I’m not sold on this yet - it seems to be the “default” email app for Linux, but feels like Outlook from 10-15 years ago, and I do hope to find something a lot more modern - I installed this as a snap via App Center 6. Copilot Desktop - I’ve been increasingly using Copilot on Windows 11, and was delighted to find that Ken VanDine wrote a Copilot shell for Linux; it is in the App Center and installs as a snap, providing the same basic experience as Copilot on Windows or Android - I installed this as a snap via App Center 7. .NET SDK - I mostly develop using .NET and Blazor, and so installing the .NET software developer kit seemed obvious; Ubuntu has a snap to install version 8, but I used apt to install version 9 #### PWA Apps Once I got Edge installed, I used it to install a number of progressive web apps (PWAs) that I use on nearly every device. A PWA is an app that is installed and updated via your browser, and is a great way to get cross-platform apps. Exactly how you install a PWA will vary from browser to browser. Some have a little icon when you are on the web page, others have an “install app” option or “install on desktop” or similar. The end result is that you get what appears to be an app icon on your phone, PC, whatever - and when you click the icon the PWA app runs in a window like any other app. 1. Elk - I use Mastodon (social media) a lot, and my preferred client is Elk - fast, clean, works great 2. Bluesky - I use Bluesky (social media) a lot, and Bluesky can be installed as a PWA 3. LinkedIn - I use LinkedIn quite a bit, and it can be installed as a PWA 4. Facebook - I still use Facebook a little, and it can be installed as a PWA #### Using Microsoft 365 Office Most people want to edit documents and maybe spreadsheets on their PC. A lot of people, including me, use Word and Excel for this purpose. Those apps aren’t available on Linux - at least not directly. Fortunately there are good alternatives, including: 1. Use https://onedrive.com to create and edit documents and spreadsheets in the browser 2. Use https://office.com to access Office online if you have a subscription 3. Install LibreOffice, an open-source office productivity suite sort of like Office I use OneDrive for a lot of personal documents, photos, etc. And I use actual Office for work. The LibreOffice idea is something I might explore at some point, but the online versions of the Office apps are usually enough for casual work - which is all I’m going to do on the little Surface Go device anyway. One feature of Edge is the ability to have multiple profiles. I use this all the time on Windows, having a personal and two work profiles.
This feature works on Linux as well, though I found it had some glitches. My default Edge profile is my personal one, so all those PWAs I installed are connected to that profile. I set up another Edge profile for my CSLA work, and it is connected to my marimer.llc email address. This is where I log into the M365 office.com apps, and I have that page installed as a PWA. When I run “Office” it opens in my work profile and I have access to all my work documents. On my personal profile I don’t use the Office apps as much, but when I do open something from my personal OneDrive, it opens in that profile. The limitation is that I can only edit documents while online, but for my purposes with this device, that’s fine. I can edit my documents and spreadsheets as necessary. ## Conclusion At this point I’m pretty happy. I don’t expect to use this little device to do any major software development, but it actually does run vscode and .NET just fine (and also JetBrains Rider if you prefer a more powerful option). I mostly use it for browsing the web, discord, Mastodon, and Bluesky. Will I bring this with me when I travel? No, because my normal Windows 11 PC does everything I want. Could I live with this as my “one device”? Well, no, but that’s because it is underpowered and physically too small. But could I live with a modern laptop running Ubuntu? Yes, I certainly could. I wouldn’t _prefer_ it, because I like full-blown Visual Studio and way too many high-end Steam games. The thing is, I am finding myself leaving the Surface Go in the living room, and reaching for it to scan the socials while watching TV. Something I could have done just as well with Windows, and can now do with Linux.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
CSLA 2-tier Data Portal Behavior History The CSLA data portal originally treated 2- and 3-tier differently, primarily for performance reasons. Back in the early 2000’s, the data portal did not serialize the business object graph in 2-tier scenarios. That behavior still exists and can be enabled via configuration, but is not the default for the reasons discussed in this post. Passing the object graph by reference (instead of serializing it) does provide much better performance, but at the cost of being behaviorally/semantically different from 3-tier. In a 3-tier (or generally n-tier) deployment, there is at least one network hop between the client and any server, and the object graph _must be serialized_ to cross that network boundary. When different 2-tier and 3-tier behaviors existed, a lot of people did their dev work in 2-tier and then tried to switch to 3-tier. Usually they’d discover all sorts of issues in their code, because they were counting on the logical client and server using the same reference to the object graph. A variety of issues are solved by serializing the graph even in 2-tier scenarios, including: 1. Consistency with 3-tier deployment (enabling location transparency in code) 2. Preventing data binding from reacting to changes to the object graph on the logical server (nasty performance and other issues would occur) 3. Ensuring that a failure on the logical server (especially part-way through the graph) leaves the graph on the logical client in a stable/known state There are other issues as well - and ultimately those issues drove the decision (I want to say around 2006 or 2007?) to default to serializing the object graph even in 2-tier scenarios. There is a performance cost to that serialization, but having _all_ n-tier scenarios enjoy the same semantic behaviors has eliminated so many issues and support questions on the forums that I regret nothing.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
A Simple CSLA MCP Server In a recent CSLA discussion thread, a user asked about setting up a simple CSLA Model Context Protocol (MCP) server. https://github.com/MarimerLLC/csla/discussions/4685 I’ve written a few MCP servers over the past several months with varying degrees of success. Getting the MCP protocol right is tricky (or was), and using semantic matching with vectors isn’t always the best approach, because I find it often misses the most obvious results. Recently, however, Anthropic published a C# SDK (and NuGet package) that makes it easier to create and host an MCP server. The SDK handles the MCP protocol details, so you can focus on implementing your business logic. https://github.com/modelcontextprotocol/csharp-sdk Also, I’ve been reading up on the idea of hybrid search, which combines traditional search techniques with vector-based semantic search. This approach can help improve the relevance of search results by leveraging the strengths of both methods. The code I’m going to walk through in this post can be easily adapted to any scenario, not just CSLA. In fact, the MCP server just searches and returns markdown files from a folder. To use it for any scenario, you just need to change the source files and update the descriptions of the server, tools, and parameters that are in the attributes in code. Perhaps a future enhancement for this project will be to make those dynamic so you can change them without recompiling the code. The code for this article can be found in this GitHub repository. > ℹ️ Most of the code was actually written by Claude Sonnet 4 with my collaboration. Or maybe I wrote it with the collaboration of the AI? The point is, I didn’t do much of the typing myself. Before getting into the code, I want to point out that this MCP server really is useful. Yes, the LLMs already know all about CSLA because CSLA is open source. However, the LLMs often return outdated or incorrect information. By providing a custom MCP server that searches the actual CSLA code samples and snippets, the LLM can return accurate and up-to-date information. ## The MCP Server Host The MCP server itself is a console app that uses Spectre.Console to provide a nice command-line interface. The project also references the Anthropic C# SDK and some other packages. It targets .NET 10.0, though I believe the code should work with .NET 8.0 or later. I am not going to walk through every line of code, but I will highlight the key parts. > ⚠️ The modelcontextprotocol/csharp-sdk package is evolving rapidly, so you may need to adapt to use whatever is latest when you try to build your own. Also, all the samples in their GitHub repository use static tool methods, and I do as well. At some point I hope to figure out how to use instance methods instead, because that will allow the use of dependency injection. Right now the code has a lot of `Console.WriteLine` statements that would be better handled by a logging framework. Although the project is a console app, it does use ASP.NET Core to host the MCP server. var builder = WebApplication.CreateBuilder(); builder.Services.AddMcpServer() .WithHttpTransport() .WithTools<CslaCodeTool>(); The `AddMcpServer` method adds the MCP server services to the ASP.NET Core dependency injection container. The `WithHttpTransport` method configures the server to use HTTP as the transport protocol. The `WithTools<CslaCodeTool>` method registers the `CslaCodeTool` class as a tool that can be used by the MCP server.
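For reference, here is a minimal sketch of what the rest of the hosting code might look like. This is my own assumption rather than a copy of the project’s actual `Program.cs`, and it relies on the `MapMcp` endpoint-mapping extension from the SDK’s ASP.NET Core hosting package, which may change as the SDK evolves.

```csharp
// Minimal Program.cs sketch for hosting an MCP tool type over HTTP.
// Assumes the ModelContextProtocol ASP.NET Core package and a using directive
// for the namespace containing CslaCodeTool; adapt the method names to
// whatever the current version of the SDK exposes.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<CslaCodeTool>();

var app = builder.Build();

// Map the MCP endpoint so LLM clients can connect to the server over HTTP.
app.MapMcp();

app.Run();
```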
There is also a `WithStdioTransport` method that can be used to configure the server to use standard input and output as the transport protocol. This is useful if you want to run the server locally when using a locally hosted LLM client. The nice thing about using the modelcontextprotocol/csharp-sdk package is that it handles all the details of the MCP protocol for you. You just need to implement your tools and their methods. All the subtleties of the MCP protocol are handled by the SDK. ## Implementing the Tools The `CslaCodeTool` class is where the main logic of the MCP server resides. This class is decorated with the `McpServerToolType` attribute, which indicates that this class will contain MCP tool methods. [McpServerToolType] public class CslaCodeTool ### The Search Method The first tool is Search, defined by the `Search` method. This method is decorated with the `McpServerTool` attribute, which indicates that this method is an MCP tool method. The attribute also provides a description of the tool and what it will return. This description is used by the LLM to determine when to use this tool. My description here is probably a bit too short, but it seems to work okay. Any parameters for the tool method are decorated with the `Description` attribute, which provides a description of the parameter. This description is used by the LLM to understand what the parameter is for, and what kind of value to provide. [McpServerTool, Description("Searches CSLA .NET code samples and snippets for examples of how to implement code that makes use of #cslanet. Returns a JSON object with two sections: SemanticMatches (vector-based semantic similarity) and WordMatches (traditional keyword matching). Both sections are ordered by their respective scores.")] public static string Search([Description("Keywords used to match against CSLA code samples and snippets. For example, read-write property, editable root, read-only list.")]string message) #### Word Matching The original implementation (which works very well) uses only word matching. To do this, it gets a list of all the files in the target directory, and searches them for any words from the LLM’s `message` parameter that are 4 characters or longer. It counts the number of matches in each file to generate a score for that file. Here’s the code that gets the list of search terms from `message`: // Extract words 4 characters or longer from the message var searchWords = message .Split(new char[] { ' ', '\t', '\n', '\r', '.', ',', ';', ':', '!', '?', '(', ')', '[', ']', '{', '}', '"', '\'', '-', '_' }, StringSplitOptions.RemoveEmptyEntries) .Where(word => word.Length > 3) .Select(word => word.ToLowerInvariant()) .Distinct() .ToList(); Console.WriteLine($"[CslaCodeTool.Search] Extracted search words: [{string.Join(", ", searchWords)}]"); It then loops through each file and counts the number of matching words. The final result is sorted by score and then file name: var sortedResults = results.OrderByDescending(r => r.Score).ThenBy(r => r.FileName).ToList(); #### Semantic Matching More recently I added semantic matching as well, resulting in a hybrid search approach. The search tool now returns two sets of results: one based on traditional word matching, and one based on vector-based semantic similarity. The semantic search behavior comes in two parts: indexing the source files, and then matching against the message parameter from the LLM. ##### Indexing the Source Files Indexing source files takes time and effort. 
To minimize startup time, the MCP server actually starts and will work without the vector data. In that case it relies on the word matching only. After a few minutes, the vector indexing will be complete and the semantic search results will be available. The indexing is done by calling a text embedding model to generate a vector representation of the text in each file. The vectors are then stored in memory along with the file name and content. Or the vectors could be stored in a database to avoid having to re-index the files each time the server is started. I’m relying on a `vectorStore` object to index each document: await vectorStore.IndexDocumentAsync(fileName, content); This `VectorStoreService` class is a simple in-memory vector store that uses Ollama to generate the embeddings: public VectorStoreService(string ollamaEndpoint = "http://localhost:11434", string modelName = "nomic-embed-text:latest") { _httpClient = new HttpClient(); _vectorStore = new Dictionary<string, DocumentEmbedding>(); _ollamaEndpoint = ollamaEndpoint; _modelName = modelName; } This could be (and probably will be) adapted to use a cloud-based embedding model instead of a local Ollama instance. Ollama is free and easy to use, but it does require a local installation. The actual embedding is created by a call to the Ollama endpoint: var response = await _httpClient.PostAsync($"{_ollamaEndpoint}/api/embeddings", content); The embedding is just a list of floating-point numbers that represent the semantic meaning of the text. This needs to be extracted from the JSON response returned by the Ollama endpoint. var responseJson = await response.Content.ReadAsStringAsync(); var result = JsonSerializer.Deserialize<JsonElement>(responseJson); if (result.TryGetProperty("embedding", out var embeddingElement)) { var embedding = embeddingElement.EnumerateArray() .Select(e => (float)e.GetDouble()) .ToArray(); return embedding; } > 👩‍🔬 All those floating-point numbers are the magic of this whole thing. I don’t understand any of the math, but it obviously represents the semantic “meaning” of the file in a way that a query can be compared later to see if it is a good match. All those embeddings are stored in memory for later use. ##### Matching Against the Message When the `Search` method is called, it first generates an embedding for the `message` parameter using the same embedding model. It then compares that embedding to each of the document embeddings in the vector store to calculate a similarity score. All that work is delegated to the `VectorStoreService`: var semanticResults = VectorStore.SearchAsync(message, topK: 10).GetAwaiter().GetResult(); In the `VectorStoreService` class, the `SearchAsync` method generates the embedding for the query message: var queryEmbedding = await GetTextEmbeddingAsync(query); It then calculates the cosine similarity between the query embedding and each document embedding in the vector store: foreach (var doc in _vectorStore.Values) { var similarity = CosineSimilarity(queryEmbedding, doc.Embedding); results.Add(new SemanticSearchResult { FileName = doc.FileName, SimilarityScore = similarity }); } The results are then sorted by similarity score and the top K results are returned. var topResults = results .OrderByDescending(r => r.SimilarityScore) .Take(topK) .Where(r => r.SimilarityScore > 0.5f) // Filter out low similarity scores .ToList(); ##### The Final Result The final result of the `Search` method is a JSON object that contains two sections: `SemanticMatches` and `WordMatches`. 
Each section contains a list of results ordered by their respective scores. var combinedResult = new CombinedSearchResult { SemanticMatches = semanticMatches, WordMatches = sortedResults }; It is up to the calling LLM to decide which set of results to use. In the end, the LLM will use the fetch tool to retrieve the content of one or more of the files returned by the search tool. ### The Fetch Method The second tool is Fetch, defined by the `Fetch` method. This method is also decorated with the `McpServerTool` attribute, which provides a description of the tool and what it will return. [McpServerTool, Description("Fetches a specific CSLA .NET code sample or snippet by name. Returns the content of the file that can be used to properly implement code that uses #cslanet.")] public static string Fetch([Description("FileName from the search tool.")]string fileName) This method has some defensive code to prevent path traversal attacks and other things, but ultimately it just reads the content of the specified file and returns it as a string. var content = File.ReadAllText(filePath); return content; ## Hosting the MCP Server The MCP server can be hosted in a variety of ways. The simplest is to run it as a console app on your local machine. This is useful for development and testing. You can also host it in a cloud environment, such as Azure App Service or AWS Elastic Beanstalk. This allows you to make the MCP server available to other applications and services. Like most things, I am running it in a Docker container so I can choose to host it anywhere, including on my local Kubernetes cluster. For real use in your organization, you will want to ensure that the MCP server endpoint is available to all your developers from their vscode or Visual Studio environments. This might be a public IP, or one behind a VPN, or some other secure way to access it. I often use tools like Tailscale or ngrok to make local services available to remote clients. ## Testing the MCP Server Testing an MCP server isn’t as straightforward as testing a regular web API. You need an LLM client that can communicate with the MCP server using the MCP protocol. Anthropic has an npm package that can be used to test the MCP server. You can find it here: https://github.com/modelcontextprotocol/inspector This package provides a GUI or CLI tool that can be used to interact with the MCP server. You can use it to send messages to the server and see the responses. It is a great way to test and debug your MCP server. Another option is to use the MCP support built into recent vscode versions. Once you add your MCP server endpoint to your vscode settings, you can use the normal AI chat interface to ask the chat bot to interact with the MCP server. For example: call the csla-mcp-server tools to see if they work This will cause the chat bot to invoke the `Search` tool, and then the `Fetch` tool to get the content of one of the files returned by the search. Once you have the MCP server working and returning the types of results you want, add it to your vscode or Visual Studio settings so all your developers can use it. In my experience the LLM chat clients are pretty good about invoking the MCP server to determine the best way to author code that uses CSLA .NET. ## Conclusion Setting up a simple CSLA MCP server is not too difficult, especially with the help of the Anthropic C# SDK. By implementing a couple of tools to search and fetch code samples, you can provide a powerful resource for developers using CSLA .NET. 
The hybrid search approach, combining traditional word matching with vector-based semantic similarity, helps improve the relevance of search results. This makes it easier for developers to find the code samples they need. I hope this article has been helpful in understanding how to set up a simple CSLA MCP server. If you have any questions or need further assistance, feel free to reach out on the CSLA discussion forums or GitHub repository for the csla-mcp project.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Unit Testing CSLA Rules With Rocks One of the most powerful features of CSLA .NET is its business rules engine. It allows you to encapsulate validation, authorization, and other business logic in a way that is easy to manage and maintain. In CSLA, a rule is a class that implements `IBusinessRule`, `IBusinessRuleAsync`, `IAuthorizationRule`, or `IAuthorizationRuleAsync`. These interfaces define the contract for a rule, including methods for executing the rule and properties for defining the rule’s behavior. Normally a rule inherits from an existing base class that implements one of these interfaces. When you create a rule, you typically associate it with a specific property or set of properties on a business object. The rule is then executed automatically by the CSLA framework whenever the associated property or properties change. The advantage of a CSLA rule being a class, is that you can unit test it in isolation. This is where the Rocks mocking framework comes in. Rocks allows you to create mock objects for your unit tests, making it easier to isolate the behavior of the rule you are testing. You can create a mock business object and set up expectations for how the rule should interact with that object. This allows you to test the rule’s behavior without having to worry about the complexities of the entire business object. In summary, the combination of CSLA’s business rules engine and the Rocks mocking framework provides a powerful way to create and test business rules in isolation, ensuring that your business logic is both robust and maintainable. All code for this article can be found in this GitHub repository in Lab 02. ## Creating a Business Rule As an example, consider a business rule that sets an `IsActive` property based on the value of a `LastOrderDate` property. If the `LastOrderDate` is within the last year, then `IsActive` should be true; otherwise, it should be false. using Csla.Core; using Csla.Rules; namespace BusinessLibrary.Rules; public class LastOrderDateRule : BusinessRule { public LastOrderDateRule(IPropertyInfo lastOrderDateProperty, IPropertyInfo isActiveProperty) : base(lastOrderDateProperty) { InputProperties.Add(lastOrderDateProperty); AffectedProperties.Add(isActiveProperty); } protected override void Execute(IRuleContext context) { var lastOrderDate = (DateTime)context.InputPropertyValues[PrimaryProperty]; var isActive = lastOrderDate > DateTime.Now.AddYears(-1); context.AddOutValue(AffectedProperties[1], isActive); } } This rule inherits from `BusinessRule`, which is a base class provided by CSLA that implements the `IBusinessRule` interface. The constructor takes two `IPropertyInfo` parameters: one for the `LastOrderDate` property and one for the `IsActive` property. The `InputProperties` collection is used to specify which properties the rule depends on, and the `AffectedProperties` collection is used to specify which properties the rule affects. The `Execute` method is where the rule’s logic is implemented. It retrieves the value of the `LastOrderDate` property from the `InputPropertyValues` dictionary, checks if it is within the last year, and then sets the value of the `IsActive` property using the `AddOutValue` method. ## Unit Testing the Business Rule Now that we have our business rule, we can create a unit test for it using the Rocks mocking framework. 
First, we need to bring in a few namespaces: using BusinessLibrary.Rules; using Csla; using Csla.Configuration; using Csla.Core; using Csla.Rules; using Microsoft.Extensions.DependencyInjection; using Rocks; using System.Security.Claims; Next, we can use Rocks attributes to define the mock types we need for our test: [assembly: Rock(typeof(IPropertyInfo), BuildType.Create | BuildType.Make)] [assembly: Rock(typeof(IRuleContext), BuildType.Create | BuildType.Make)] These lines of code only need to be included once in your test project, because they are assembly-level attributes. They tell Rocks to create mock implementations of the `IPropertyInfo` and `IRuleContext` interfaces, which we will use in our unit test. Now we can create our unit test method to test the `LastOrderDateRule`. To do this, we need to arrange the necessary mock objects and set up their expectations. Then we can execute the rule and verify that it behaves as expected. The rule has a constructor that takes two `IPropertyInfo` parameters, so we need to create mock implementations of that interface. We also need to create a mock implementation of the `IRuleContext` interface, which is used to pass information to the rule when it is executed. [TestMethod] public void LastOrderDateRule_SetsIsActiveBasedOnLastOrderDate() { // Arrange var inputProperties = new Dictionary<IPropertyInfo, object>(); using var context = new RockContext(); var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>(); lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); var lastOrderProperty = lastOrderPropertyExpectations.Instance(); var isActiveProperty = new IPropertyInfoMakeExpectations().Instance(); var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>(); ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties); ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), true); inputProperties.Add(lastOrderProperty, new DateTime(2025, 9, 24, 18, 3, 40)); // Act var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty); (rule as IBusinessRule).Execute(ruleContextExpectations.Instance()); // Assert is automatically done by Rocks when disposing the context } Notice how the Rocks mock objects have expectations set up for their properties and methods. This allows us to verify that the rule interacts with the context as expected. This is a little different from more explicit `Assert` statements, but it is a powerful way to ensure that the rule behaves correctly. For example, notice how the `Name` property of the `lastOrderProperty` mock is expected to be called twice. If the rule does not call this property the expected number of times, the test will fail when the `context` is disposed at the end of the `using` block: lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); This is a powerful feature of Rocks that allows you to verify the behavior of your code without having to write explicit assertions. The test creates an instance of the `LastOrderDateRule` and calls its `Execute` method, passing in the mock `IRuleContext`. The rule should set the `IsActive` property to true because the `LastOrderDate` is within the last year. When the test completes, Rocks will automatically verify that all expectations were met. If any expectations were not met, the test will fail. 
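To exercise the other branch of the rule, a second test can mirror the one above, this time providing an old `LastOrderDate` and expecting `false` as the out value. This is a sketch based on the same mock setup; the date and the expected call counts are assumptions on my part.

```csharp
[TestMethod]
public void LastOrderDateRule_SetsIsActiveFalseForOldLastOrderDate()
{
    // Arrange
    var inputProperties = new Dictionary<IPropertyInfo, object>();
    using var context = new RockContext();
    var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>();
    lastOrderPropertyExpectations.Properties.Getters.Name()
        .ReturnValue("name")
        .ExpectedCallCount(2); // assumed to match the call pattern seen in the first test
    var lastOrderProperty = lastOrderPropertyExpectations.Instance();
    var isActiveProperty = new IPropertyInfoMakeExpectations().Instance();
    var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>();
    ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties);
    // This time the rule should report the customer as inactive.
    ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), false);
    // A last order date well over a year in the past (hypothetical value).
    inputProperties.Add(lastOrderProperty, new DateTime(2020, 1, 1));

    // Act
    var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty);
    (rule as IBusinessRule).Execute(ruleContextExpectations.Instance());

    // Assert is automatically done by Rocks when disposing the context
}
```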
This is a simple example, but it demonstrates how you can use Rocks to unit test CSLA business rules in isolation. By creating mock objects for the dependencies of the rule, you can focus on testing the rule’s behavior without having to worry about the complexities of the entire business object. ## Conclusion CSLA’s business rules engine is a powerful feature that allows you to encapsulate business logic in a way that is easy to manage and maintain. By using the Rocks mocking framework, you can create unit tests for your business rules that isolate their behavior and ensure that they work as expected. This combination of CSLA and Rocks provides a robust and maintainable way to implement and test business logic in your applications.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
MCP and A2A Basics I have been spending a lot of time lately learning about the Model Context Protocol (MCP) and Agent to Agent (A2A) protocols. And a little about a slightly older technology called the activity protocol that comes from the Microsoft bot framework. I’m writing this blog post mostly for myself, because writing content helps me organize my thoughts and solidify my understanding of concepts. As they say with AIs, mistakes are possible, because my understanding of all this technology is still evolving. (disclaimer: unless otherwise noted, I wrote this post myself, with my own fingers on a keyboard) ## Client-Server is Alive and Well First off, I think it is important to recognize that the activity protocol basically sits on top of REST, and so is client-server. The MCP protocol is also client-server, sitting on top of JSON-RPC. A2A _can be_ client-server, or peer-to-peer, depending on how you use it. The simplest form is client-server, with peer-to-peer providing a lot more capability, but also complexity. ## Overall Architecture These protocols (in particular MCP and A2A) exist to enable communication between LLM “AI” agents and their environments, or other tools, or other agents. ### Activity Protocol The activity protocol is a client-server protocol that sits on top of REST. It is primarily used for communication between a user and a bot, or between bots. The protocol defines a set of RESTful APIs for sending and receiving activities, which are JSON objects that represent a message, event, or command. The activity protocol is widely used in the Microsoft Bot Framework and is supported by many bot channels, such as Microsoft Teams, Slack, and Facebook Messenger. (that previous paragraph was written by AI - but it is pretty good) ### MCP The Model Context Protocol is really a standard and flexible way to expand the older concept of LLM tool or function calling. The primary intent is to allow an LLM AI to call tools that interact with the environment, call other apps, get data from services, or do other client-server style interactions. The rate of change here is pretty staggering. The idea of an LLM being able to call functions or “tools” isn’t that old. The limitation of that approach was that these functions had to be registered with the LLM in a way that wasn’t standard across LLM tools or platforms. MCP provides a standard for registration and interaction, allowing an MCP-enabled LLM to call tools in-process (via standard IO) or remotely (via HTTP). If you dig a little into the MCP protocol, it is eerily reminiscent of COM from the 1990’s (and I suspect CORBA as well). We provide the LLM “client” with an endpoint for the MCP server. The client can ask the MCP server what it does, and also for a list of tools it provides. Much like `IUnknown` in COM. Once the LLM client has the description of the server and all the tools, it can then decide when and if it should call those tools to solve problems. You might create a tool that deletes a file, or creates a file, or blinks a light on a device, or returns some data, or sends a message, or creates a record in a database. Really, the sky is the limit in terms of what you can build with MCP. ### A2A Agent to Agent (A2A) communication is a newer and more flexible protocol that (I think) has the potential to do a couple of things: 1. I could see it replacing MCP, because you can use A2A for client-server calls from an LLM client to an A2A “tool” or agent. This is often done over HTTP. 2. 
It also can be used to implement bi-directional, peer-to-peer communication between agents, enabling more complex and dynamic interactions. This is often done over WebSockets or (better yet) queuing systems like RabbitMQ. ## Metadata Rules In any case, the LLM that is going to call a tool or send a message to another agent needs a way to understand the capabilities and requirements of that tool or agent. This is where metadata comes into play. Metadata provides essential information about the tool or agent, such as its name, description, input and output parameters, and more. “Metadata” in this context is human language descriptions. Remember that the calling LLM is an AI model that is generally good with language. However, some of the metadata might also describe JSON schemas or other structured data formats to precisely define the inputs and outputs. But even that is usually surrounded by human-readable text that describes the purpose of the schema or data formats. This is where the older activity protocol falls down, because it doesn’t provide metadata like MCP or A2A. The newer protocols include the ability to provide descriptions of the service/agent, and of tool methods or messages that are exchanged. ## Authentication and Identity In all cases, these protocols aren’t terribly complex. Even the A2A peer-to-peer isn’t that difficult if you have an understanding of async messaging concepts and protocols. What does seem to _always_ be complex is managing authentication and identity across these interactions. There seem to be multiple layers at work here: 1. The client needs to authenticate to call the service - often with some sort of service identity represented by a token. 2. The service needs to authenticate the client, so that service token is important 3. HOWEVER, the service also usually needs to “impersonate” or act on behalf of a user or another identity, which can be a separate token or credential Getting these tokens, and validating them correctly, is often the hardest part of implementing these protocols. This is especially true when you are using abstract AI/LLM hosting environments. It is hard enough in code like C#, where you can see the token handling explicitly, but in many AI hosting platforms, these details are abstracted away, making it challenging to implement robust security. ## Summary The whole concept of an LLM AI calling tools, then services, and then having peer-to-peer interactions has evolved very rapidly over the past couple of years, and it is _still_ evolving very rapidly. Just this week, for example, Microsoft announced the Microsoft Agent Framework that replaces Semantic Kernel and AutoGen. And that’s just one example! What makes me feel better though, is that at their heart, these protocols are just client-server protocols with some added layers for metadata. Or a peer-to-peer communication protocol that relies on asynchronous messaging patterns. While these frameworks (to a greater or lesser degree) have some support for authentication and token passing, that seems to be the weakest part of the tooling, and the hardest to solve in real-life implementations.
28.10.2025 16:37 — 👍 0    🔁 0    💬 0    📌 0
Accessing User Identity on a Blazor Wasm Client On the server, Blazor authentication is fairly straightforward because it uses the underlying ASP.NET Core authentication mechanism. I’ll quickly review server authentication before getting to the WebAssembly part so you have an end-to-end understanding. I should note that this post is all about a Blazor 8 app that uses per-component rendering, so there is an ASP.NET Core server hosting Blazor server pages, and there may also be pages using `InteractiveAuto` or `InteractiveWebAssembly` that run in WebAssembly on the client device. ## Blazor Server Authentication Blazor Server components are running in an ASP.NET Core hosted web server environment. This means that they can have access to all that ASP.NET Core has to offer. For example, a server-static rendered Blazor server page can use HttpContext, and therefore can use the standard ASP.NET Core `SignInAsync` and `SignOutAsync` methods like you’d use in MVC or Razor Pages. ### Blazor Login Page Here’s the razor markup for a simple `Login.razor` page from a Blazor 8 server project with per-component rendering: @page "/login" @using BlazorHolWasmAuthentication.Services @using Microsoft.AspNetCore.Authentication @using Microsoft.AspNetCore.Authentication.Cookies @using System.Security.Claims @inject UserValidation UserValidation @inject IHttpContextAccessor httpContextAccessor @inject NavigationManager NavigationManager <PageTitle>Login</PageTitle> <h1>Login</h1> <div> <EditForm Model="userInfo" OnSubmit="LoginUser" FormName="loginform"> <div> <label>Username</label> <InputText @bind-Value="userInfo.Username" /> </div> <div> <label>Password</label> <InputText type="password" @bind-Value="userInfo.Password" /> </div> <button>Login</button> </EditForm> </div> <div style="background-color:lightgray"> <p>User identities:</p> <p>admin, admin</p> <p>user, user</p> </div> <div><p class="alert-danger">@Message</p></div> This form uses the server-static form of the `EditForm` component, which does a standard postback to the server. Blazor uses the `FormName` and `OnSubmit` attributes to route the postback to a `LoginUser` method in the code block: @code { [SupplyParameterFromForm] public UserInfo userInfo { get; set; } = new(); public string Message { get; set; } = ""; private async Task LoginUser() { Message = ""; ClaimsPrincipal principal; if (UserValidation.ValidateUser(userInfo.Username, userInfo.Password)) { // create authenticated principal var identity = new ClaimsIdentity("custom"); var claims = new List<Claim>(); claims.Add(new Claim(ClaimTypes.Name, userInfo.Username)); var roles = UserValidation.GetRoles(userInfo.Username); foreach (var item in roles) claims.Add(new Claim(ClaimTypes.Role, item)); identity.AddClaims(claims); principal = new ClaimsPrincipal(identity); var httpContext = httpContextAccessor.HttpContext; if (httpContext is null) { Message = "HttpContext is null"; return; } AuthenticationProperties authProperties = new AuthenticationProperties(); await httpContext.SignInAsync( CookieAuthenticationDefaults.AuthenticationScheme, principal, authProperties); NavigationManager.NavigateTo("/"); } else { Message = "Invalid credentials"; } } public class UserInfo { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } } The username and password are validated by a `UserValidation` service. That service returns whether the credentials were valid, and if they were valid, it returns the user’s roles, which are then turned into claims. 
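The post treats `UserValidation` as an implementation detail, so here is a minimal sketch of what such a service might look like for this sample. The hard-coded credentials match the demo identities shown on the login page; a real implementation would check a user store, and the role assignments here are assumptions.

```csharp
namespace BlazorHolWasmAuthentication.Services;

public class UserValidation
{
    // Demo credentials only (admin/admin and user/user, as hinted on the login page).
    public bool ValidateUser(string username, string password)
        => (username, password) is ("admin", "admin") or ("user", "user");

    // Roles become role claims in the login page; these assignments are assumed.
    public List<string> GetRoles(string username)
        => username == "admin"
            ? new List<string> { "admin", "user" }
            : new List<string> { "user" };
}
```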
The code then uses that list of claims to create a `ClaimsIdentity` and `ClaimsPrincipal`. That pair of objects represents the user’s identity in .NET. The `SignInAsync` method is then called on the `HttpContext` object to create a cookie for the user’s identity (or whatever storage option was configured in `Program.cs`). From this point forward, ASP.NET Core code (such as a web API endpoint) and Blazor server components (via the Blazor `AuthenticationStateProvider` and `CascadingAuthenticationState`) all have consistent access to the current user identity. ### Blazor Logout Page The `Logout.razor` page is simpler still, since it doesn’t require any input from the user: @page "/logout" @using Microsoft.AspNetCore.Authentication @using Microsoft.AspNetCore.Authentication.Cookies @inject IHttpContextAccessor httpContextAccessor @inject NavigationManager NavigationManager <h3>Logout</h3> @code { protected override async Task OnInitializedAsync() { var httpContext = httpContextAccessor.HttpContext; if (httpContext != null) { var principal = httpContext.User; if (principal.Identity is not null && principal.Identity.IsAuthenticated) { await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme); } } NavigationManager.NavigateTo("/"); } } The important part of this code is the call to `SignOutAsync`, which removes the ASP.NET Core user token, thus ensuring the current user has been “logged out” from all ASP.NET Core and Blazor server app elements. ### Configuring the Server For the `Login.razor` and `Logout.razor` pages to work, they must be server-static (which is the default for per-component rendering), and `Program.cs` must contain some important configuration. First, some services must be registered: builder.Services.AddHttpContextAccessor(); builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) .AddCookie(); builder.Services.AddCascadingAuthenticationState(); builder.Services.AddTransient<UserValidation>(); The `AddHttpContextAccessor` registration makes it possible to inject an `IHttpContextAccessor` service so your code can access the `HttpContext` instance. > ⚠️ Generally speaking, you should only access `HttpContext` from within a server-static rendered page. The `AddAuthentication` method registers and configures ASP.NET Core authentication, in this case storing the user token in a cookie. The `AddCascadingAuthenticationState` method enables Blazor server components to make use of cascading authentication state. Finally, the `UserValidation` service is registered. This service is implemented by you to verify the user credentials, and to return the user’s claims if the credentials are valid. Some further configuration is required after the services have been registered: app.UseAuthentication(); app.UseAuthorization(); ### Enabling Cascading Authentication State The `Routes.razor` component is where the user authentication state is made available to all Blazor components on the server: <CascadingAuthenticationState> <Router AppAssembly="typeof(Program).Assembly" AdditionalAssemblies="new[] { typeof(Client._Imports).Assembly }"> <Found Context="routeData"> <AuthorizeRouteView RouteData="routeData" DefaultLayout="typeof(Layout.MainLayout)" /> <FocusOnNavigate RouteData="routeData" Selector="h1" /> </Found> </Router> </CascadingAuthenticationState> Notice the addition of the `CascadingAuthenticationState` element, which cascades an `AuthenticationState` instance to all Blazor server components. 
Also notice the use of `AuthorizeRouteView`, which enables the use of the authorization attribute in Blazor pages, so only an authorized user can access those pages. ### Adding the Login/Logout Links The final step to making authentication work on the server is to enhance the `MainLayout.razor` component to add links for the login and logout pages: @using Microsoft.AspNetCore.Components.Authorization @inherits LayoutComponentBase <div class="page"> <div class="sidebar"> <NavMenu /> </div> <main> <div class="top-row px-4"> <AuthorizeView> <Authorized> Hello, @context!.User!.Identity!.Name <a href="logout">Logout</a> </Authorized> <NotAuthorized> <a href="login">Login</a> </NotAuthorized> </AuthorizeView> </div> <article class="content px-4"> @Body </article> </main> </div> <div id="blazor-error-ui"> An unhandled error has occurred. <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> The `AuthorizeView` component is used, with the `Authorized` block providing content for a logged in user, and the `NotAuthorized` block providing content for an anonymous user. In both cases, the user is directed to the appropriate page to login or logout. At this point, all _server-side_ Blazor components can use authorization, because they have access to the user identity via the cascading `AuthenticationState` object. This doesn’t automatically extend to pages or components running in WebAssembly on the browser. That takes some extra work. ## Blazor WebAssembly User Identity There is nothing built in to Blazor that automatically makes the user identity available to pages or components running in WebAssembly on the client device. You should also be aware that there are possible security implications to making the user identity available on the client device. This is because any client device can be hacked, and so a bad actor could gain access to any `ClaimsIdentity` object that exists on the client device. As a result, a bad actor could get a list of the user’s claims, if those claims are on the client device. In my experience, if developers are using client-side technologies such as WebAssembly, Angular, React, WPF, etc. they’ve already reconciled the security implications of running code on a client device, and so it is probably not an issue to have the user’s roles or other claims on the client. I will, however, call out where you can filter the user’s claims to prevent a sensitive claim from flowing to a client device. The basic process of making the user identity available on a WebAssembly client is to copy the user’s claims from the server, and to use that claims data to create a copy of the `ClaimsIdentity` and `ClaimsPrincipal` on the WebAssembly client. ### A Web API for ClaimsPrincipal The first step is to create a web API endpoint on the ASP.NET Core (and Blazor) server that exposes a copy of the user’s claims so they can be retrieved by the WebAssembly client. 
For example, here is a controller that provides this functionality: using Microsoft.AspNetCore.Mvc; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Controllers; [ApiController] [Route("[controller]")] public class AuthController(IHttpContextAccessor httpContextAccessor) { [HttpGet] public User GetUser() { ClaimsPrincipal principal = httpContextAccessor!.HttpContext!.User; if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) { // Return a user object with the username and claims var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); return new User { Username = principal.Identity!.Name, Claims = claims }; } else { // Return an empty user object return new User(); } } } public class Credentials { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } public class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } public class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } This code uses an `IHttpContextAccessor` to access `HttpContext` to get the current `ClaimsPrincipal` from ASP.NET Core. It then copies the data from the `ClaimsIdentity` into simple types that can be serialized into JSON for return to the caller. Notice how the code doesn’t have to do any work to determine the identity of the current user. This is because ASP.NET Core has already authenticated the user, and the user identity token cookie has been unpacked by ASP.NET Core before the controller is invoked. The line of code where you could filter sensitive user claims is this: var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); This line copies _all_ claims for serialization to the client. You could filter out claims considered sensitive so they don’t flow to the WebAssembly client. Keep in mind that any code that relies on such claims won’t work in WebAssembly pages or components. In the server `Program.cs` it is necessary to register and map controllers. builder.Services.AddControllers(); and app.MapControllers(); At this point the web API endpoint exists for use by the Blazor WebAssembly client. ### Getting the User Identity in WebAssembly Blazor always maintains the current user identity as a `ClaimsPrincipal` in an `AuthenticationState` object. Behind the scenes, there is an `AuthenticationStateProvider` service that provides access to the `AuthenticationState` object. On the Blazor server we generally don’t need to worry about the `AuthenticationStateProvider` because a default one is provided for our use. On the Blazor WebAssembly client however, we must implement a custom `AuthenticationStateProvider`. For example: using Microsoft.AspNetCore.Components.Authorization; using System.Net.Http.Json; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Client; public class CustomAuthenticationStateProvider(HttpClient HttpClient) : AuthenticationStateProvider { private AuthenticationState AuthenticationState { get; set; } = new AuthenticationState(new ClaimsPrincipal()); private DateTimeOffset? 
CacheExpire; public override async Task<AuthenticationState> GetAuthenticationStateAsync() { if (!CacheExpire.HasValue || DateTimeOffset.Now > CacheExpire) { var previousUser = AuthenticationState.User; var user = await HttpClient.GetFromJsonAsync<User>("auth"); if (user != null && !string.IsNullOrEmpty(user.Username)) { var claims = new List<System.Security.Claims.Claim>(); foreach (var claim in user.Claims) { claims.Add(new System.Security.Claims.Claim(claim.Type, claim.Value)); } var identity = new ClaimsIdentity(claims, "auth_api"); var principal = new ClaimsPrincipal(identity); AuthenticationState = new AuthenticationState(principal); } else { AuthenticationState = new AuthenticationState(new ClaimsPrincipal()); } if (!ComparePrincipals(previousUser, AuthenticationState.User)) { NotifyAuthenticationStateChanged(Task.FromResult(AuthenticationState)); } CacheExpire = DateTimeOffset.Now + TimeSpan.FromSeconds(30); } return AuthenticationState; } private static bool ComparePrincipals(ClaimsPrincipal principal1, ClaimsPrincipal principal2) { if (principal1.Identity == null || principal2.Identity == null) return false; if (principal1.Identity.Name != principal2.Identity.Name) return false; if (principal1.Claims.Count() != principal2.Claims.Count()) return false; foreach (var claim in principal1.Claims) { if (!principal2.HasClaim(claim.Type, claim.Value)) return false; } return true; } private class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } private class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } } This is a subclass of `AuthenticationStateProvider`, and it provides an implementation of the `GetAuthenticationStateAsync` method. This method invokes the server-side web API controller to get the user’s claims, and then uses them to create a `ClaimsIdentity` and `ClaimsPrincipal` for the current user. This value is then returned within an `AuthenticationState` object for use by Blazor and any other code that requires the user identity on the client device. One key detail in this code is that the `NotifyAuthenticationStateChanged` method is only called in the case that the user identity has changed. The `ComparePrincipals` method compares the existing principal with the one just retrieved from the web API to see if there’s been a change. It is quite common for Blazor and other code to request the `AuthenticationState` very frequently, and that can result in a lot of calls to the web API. Even a cache that lasts a few seconds will reduce the volume of repetitive calls significantly. This code uses a 30 second cache. 
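With the custom provider in place (and the registrations shown in the next section), any component rendered in WebAssembly can consume the cascaded authentication state in the usual way. Here is a minimal sketch of a hypothetical component class:

```csharp
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Authorization;

public class CurrentUserDisplay : ComponentBase
{
    // Supplied by AddCascadingAuthenticationState on the client.
    [CascadingParameter]
    public Task<AuthenticationState>? AuthenticationStateTask { get; set; }

    protected string UserName { get; private set; } = "anonymous";

    protected override async Task OnParametersSetAsync()
    {
        if (AuthenticationStateTask is null)
            return;

        var authState = await AuthenticationStateTask;
        var user = authState.User;
        if (user.Identity is { IsAuthenticated: true })
        {
            // These claims were copied from the server by the custom provider.
            UserName = user.Identity.Name ?? "unknown";
        }
    }
}
```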
### Configuring the WebAssembly Client To make Blazor use our custom provider, and to enable authentication on the client, it is necessary to add some code to `Program.cs` _in the client project_ : using BlazorHolWasmAuthentication.Client; using Marimer.Blazor.RenderMode.WebAssembly; using Microsoft.AspNetCore.Components.Authorization; using Microsoft.AspNetCore.Components.WebAssembly.Hosting; var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); builder.Services.AddAuthorizationCore(); builder.Services.AddScoped<AuthenticationStateProvider, CustomAuthenticationStateProvider>(); builder.Services.AddCascadingAuthenticationState(); await builder.Build().RunAsync(); The `CustomAuthenticationStateProvider` requires an `HttpClient` service, and relies on the `AddAuthorizationCore` and `AddCascadingAuthenticationState` registrations to function properly. ## Summary The preexisting integration between ASP.NET Core and Blazor on the server makes server-side user authentication fairly straightforward. Extending the authenticated user identity to WebAssembly-hosted pages and components requires a little extra work: creating a controller on the server and a custom `AuthenticationStateProvider` on the client.
28.10.2025 14:37 — 👍 0    🔁 0    💬 0    📌 0
Do not throw away your old PCs As many people know, Windows 10 is coming to its end of life (or at least end of support) in 2025. Because Windows 11 requires specialized hardware that isn’t built into a lot of existing PCs running Windows 10, there is no _Microsoft-based_ upgrade path for those devices. The thing is, a lot of those “old” Windows 10 devices are serving their users perfectly well, and there is often no compelling reason for a person to replace their PC just because they can’t upgrade to Windows 11. > ℹ️ If you can afford to replace your PC with a new one, that’s excellent, and I’m not trying to discourage that! However, you can still avoid throwing away your old PC, and you should consider alternatives. Throwing away a PC or laptop - like in the trash - is a _horrible_ thing to do, because PCs contain toxic elements that are bad for the environment. In many places it might actually be illegal. Besides which, whether you want to keep and continue to use your old PC or not, _someone_ can probably make good use of it. > ⚠️ If you do need to “throw away” your old PC, please make sure to turn it in at an e-waste recycling center or hazardous waste center. I’d like to discuss some possible alternatives to throwing away or recycling your old PC. Things that provide much better outcomes for people and the environment! It might be that you can continue to use your PC or laptop, or someone else may be able to give it new life. Here are some options. ## Continue Using the PC Although you may be unable to upgrade to Windows 11, there are alternative operating systems that will breathe new life into your existing PC. The question you should ask first is: what do you do on your PC? The following may require Windows: * Windows-only software (like CAD drawing or other software) * Hard-core gaming On the other hand, if you use your PC entirely for things like: * Browsing the web * Writing documents * Simple spreadsheets * Web-based games in a browser Then you can probably replace Windows with an alternative and continue to be very happy with your PC. What are these “alternative operating systems”? They are all variations of Linux. If you’ve never heard of Linux, or have heard it is complicated and only for geeks, rest assured that there are some variations of Linux that are no more complex than Windows 10. ### “Friendly” Variations of Linux Some of the friendliest variations of Linux include: * Linux Mint (Cinnamon) - Linux with a desktop that is very similar to Windows * Ubuntu Desktop - Linux with its own style of graphical desktop that isn’t too hard to learn if you are used to Windows There are many others; these are just a couple that I’ve used and found to be easy to install and learn. > 🛑 Before installing Linux on your PC make sure to copy all the files you want to keep onto a thumb drive or something! Installing Linux will _entirely delete your existing hard drive_ and none of your existing files will be on the PC when you are done. Once you’ve installed Linux, you’ll need software to do the things you do today. ### Browsers on Linux Linux often comes with the Firefox browser pre-installed. Other browsers that you can install include: * Chrome * Edge I am sure other browsers are available as well. Keep in mind that most modern browsers provide comparable features and let you use nearly every web site, so you may be happy with Firefox or whatever comes pre-installed with Linux. 
### Software similar to Office on Linux Finally, most people use their PC to write documents, create spreadsheets and do other things that are often done using Microsoft Office. Some alternatives to Office available on Linux include: * OneDrive - Microsoft on-line file storage and web-based versions of Word, Excel, and more * Google Docs - Google on-line file storage and web-based word processor, spreadsheet, and more * LibreOffice - Software you install on your PC that provides word processing, spreadsheets, and more. File formats are compatible with Word, Excel, and other Office tools. Other options exist, these are the ones I’ve used and find to be most common. ## Donate your PC Even if your needs can’t be met by running Linux on your old PC, or perhaps installing a new operating system just isn’t for you - please consider that there are people all over the world, including near you, that would _love_ to have access to a free computer. This might include kids, adults, or seniors in your area who can’t afford a PC (or to have their own PC). In the US, rural and urban areas are _filled_ with young people who could benefit from having a PC to do school work, learn about computers, and more. > 🛑 Before donating your PC, make sure to use the Windows 10 feature to reset the PC to factory settings. This will delete all _your_ files from the PC, ensuring that the new owner can’t access any of your information. Check with your church and community organizations to find people who may benefit from having access to a computer. ## Build a Server If you know people, or are someone, who likes to tinker with computers, there are a lot of alternative uses for an old PC or laptop. You can install Linux _server_ software on an old PC and then use that server for all sorts of fun things: * Create a file server for your photos and other media - can be done with a low-end PC that has a large hard drive * Build a Kubernetes cluster out of discarded devices - requires PCs with at least 2 CPU cores and 8 gigs of memory, though more is better Here are a couple articles with other good ideas: * Avoid the Trash Heap: 17 Creative Uses for an Old Computer * 10 Creative Things to Do With an Old Computer If you aren’t the type to tinker with computers, just ask around your family and community. It is amazing how many people do enjoy this sort of thing, and would love to have access to a free device that can be used for something other than being hazardous waste. ## Conclusion I worry that 2025 will be a bad year for e-waste and hazardous waste buildup in landfills and elsewhere around the world, as people realize that their Windows 10 PC or laptop can’t be upgraded and “needs to be replaced”. My intent in writing this post is to provide some options to consider that may breathe new life into your “old” PC. For yourself, or someone else, that computer may have many more years of productivity ahead of it.
28.10.2025 14:37 — 👍 0    🔁 0    💬 0    📌 0
Why MAUI Blazor Hybrid It can be challenging to choose a UI technology in today’s world. Even if you narrow it down to wanting to build “native” apps for phones, tablets, and PCs, there are so many options. In the Microsoft .NET space, there are _still_ many options, including .NET MAUI, Uno, Avalonia, and others. The good news is that these are good options - Uno and Avalonia are excellent, and MAUI is coming along nicely. At this point in time, my _default_ choice is usually something called a MAUI Hybrid app, where you build your app using Blazor, and the app is hosted in MAUI so it is built as a native app for iOS, Android, Windows, and Mac. Before I get into why this is my default, I want to point out that I (personally) rarely build mobile apps that “represent the brand” of a company. Take the Marriott or Delta apps as examples - the quality of these apps and the way they work differently on iOS vs Android can literally cost these companies customers. They are a primary contact point that can irritate a customer or please them. This is not the space for MAUI Blazor Hybrid in my view. ## Common Code MAUI Blazor Hybrid is (in my opinion) for apps that need to have rich functionality, good design, and be _common across platforms_, often including phones, tablets, and PCs. Most of my personal work is building business apps - apps that a business creates to enable their employees, vendors, partners, and sometimes even customers, to interact with important business systems and functionality. Blazor (the .NET web UI framework) turns out to be an excellent choice for building business apps. Though this is a bit of a tangent, Blazor is my go-to for modernizing (aka replacing) Windows Forms, WPF, Web Forms, MVC, Silverlight, and other “legacy” .NET app user experiences. The one thing Blazor doesn’t do by itself is create native apps that can run on devices. It creates web sites (server hosted) or web apps (browser hosted) or a combination of the two. Which is wonderful for a lot of scenarios, but sometimes you really need things like offline functionality or access to per-platform APIs and capabilities. This is where MAUI Hybrid comes into the picture, because in this model you build your Blazor app, and that app is _hosted_ by MAUI, and therefore is a native app on each platform: iOS, Android, Windows, Mac. That means that your Blazor app is installed as a native app (therefore can run offline), and it can tap into per-platform APIs like any other native app. ## Per-Platform In most business apps there is little (or no) per-platform difference, and so the vast majority of your app is just Blazor - C#, html, css. It is common across all the native platforms, and optionally (but importantly) also the browser. When you do have per-platform differences, like needing to interact with serial or USB port devices, or arbitrary interactions with local storage/hard drives, you can do that. And if you do that with a little care, you still end up with the vast majority of your app in Blazor, with small bits of C# that are per-platform. ## End User Testing I mentioned that a MAUI Hybrid app can not only create native apps but that it can also target the browser. This is fantastic for end user testing, because it can be challenging to do testing via the Apple, Google, and Microsoft stores. Each requires app validation, on their schedule not yours, and some have limits on the numbers of test users. > In .NET 9, the ability to create a MAUI Hybrid that also targets the browser is a pre-built template. 
Previously you had to set it up yourself. What this means is that you can build your Blazor app, have your users do a lot of testing of your app via the browser, and once you are sure it is ready to go, then you can do some final testing on a per-platform basis via the stores (or whatever scheme you use to install native apps). ## Rich User Experience Blazor, with its use of html and css backed by C#, directly enables rich user experiences and high levels of interactivity. The de facto UI language is html/css after all, and we all know how effective it can be at building great experiences in browsers - as well as native apps. There is a rapidly growing and already excellent ecosystem around Blazor, with open-source and commercial UI toolkits and frameworks available that make it easy to create many different types of user experience, including Material design and others. From a developer perspective, it is nice to know that learning any of these Blazor toolsets is a skill that spans native and web development, as does Blazor itself. In some cases you’ll want to tap into per-platform capabilities as well. The MAUI Community Toolkit is available and often provides pre-existing abstractions for many per-platform needs. Some highlights include: * File system interaction * Badge/notification systems * Images * Speech to text Between the basic features of Blazor, advanced html/css, and things like the toolkit, it is pretty easy to build some amazing experiences for phones, tablets, and PCs - as well as the browser. ## Offline Usage Blazor itself can provide a level of offline app support if you build a Progressive Web App (PWA). To do this, you create a standalone Blazor WebAssembly app that includes the PWA manifest and service worker code (in JavaScript). PWAs are quite powerful and are absolutely something to consider as an option for some offline app requirements. The challenge with a PWA is that it is running in a browser (even though it _looks_ like a native app) and therefore is limited by the browser sandbox and the local operating system. For example, iOS devices place substantial limitations on what a PWA can do and how much data it can store locally. There are commercial reasons why Apple doesn’t like PWAs competing with “real apps” in its store, and the end result is that PWAs _might_ work for you, as long as you don’t need too much local storage or too many native features. MAUI Hybrid apps are actual native apps installed on the end user’s device, and so they can do anything a native app can do. Usually this means asking the end user for permission to access things like storage, location, and other services. As a smartphone user you are certainly aware of that type of request as an app is installed. The benefit, then, is that if the user gives your app permission, your app can do things it couldn’t do in a PWA from inside the browser sandbox. In my experience, the most important of these things is effectively unlimited access to local storage for offline data that is required by the app. ## Conclusion This has been a high-level overview of my rationale for why MAUI Blazor Hybrid is my “default start point” when thinking about building native apps for iOS, Android, Windows, and/or Mac. Can I be convinced that some other option is better for a specific set of business and technical requirements? Of course!! 
However, having a well-known and very capable option as a starting point provides a short-cut for discussing the business and technical requirements - to determine if each requirement is or isn’t already met. And in many cases, MAUI Hybrid apps offer very high developer productivity, the functionality needed by end users, and long-term maintainability.
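To make the hosting model concrete, here is a minimal sketch of the kind of `MauiProgram` class a MAUI Blazor Hybrid project contains. It follows the standard template; the `App` type, namespace, and font registration are placeholders and may differ in your project. The key call is `AddMauiBlazorWebView`, which registers the services that let the same Razor components run inside a native shell on iOS, Android, Windows, and Mac:

```csharp
using Microsoft.Extensions.Logging;

namespace MyHybridApp; // hypothetical project namespace

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder
            .UseMauiApp<App>() // App hosts the BlazorWebView declared in MainPage
            .ConfigureFonts(fonts =>
            {
                fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
            });

        // This is the line that makes it a "Hybrid" app: Razor components are
        // rendered inside a native WebView control on each platform.
        builder.Services.AddMauiBlazorWebView();

#if DEBUG
        builder.Services.AddBlazorWebViewDeveloperTools();
        builder.Logging.AddDebug();
#endif

        return builder.Build();
    }
}
```

Everything else in the project (pages, components, services) is ordinary Blazor, which is what enables the shared-code and browser-testing story described above.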
28.10.2025 14:37 — 👍 0    🔁 0    💬 0    📌 0
Blazor EditForm OnSubmit behavior I am working on the open-source KidsIdKit app and have encountered some “interesting” behavior with the `EditForm` component and how buttons trigger the `OnSubmit` event. An `EditForm` is declared similarly to this: <EditForm Model="CurrentChild" OnSubmit="SaveData"> I would expect that any `button` component with `type="submit"` would trigger the `OnSubmit` handler. <button class="btn btn-primary" type="submit">Save</button> I would also expect that any `button` component _without_ `type="submit"` would _not_ trigger the `OnSubmit` handler. <button class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> I’d think this was true _especially_ if that second button was in a nested component, so it isn’t even in the `EditForm` directly, but is actually in its own component, and it uses an `EventCallback` to tell the parent component that the button was clicked. ### Actual Results In Blazor 8 I see different behaviors between MAUI Hybrid and Blazor WebAssembly hosts. In a Blazor WebAssembly (web) scenario, my expectations are met. The secondary button in the sub-component does _not_ cause `EditForm` to submit. In a MAUI Hybrid scenario, however, the secondary button in the sub-component _does_ cause `EditForm` to submit. I also tried this using the new Blazor 9 MAUI Hybrid plus Web template - though in this case the web version is Blazor server. In my Blazor 9 scenarios, in _both_ hosting cases the secondary button triggers the submit of the `EditForm` - even though the secondary button is in a sub-component (its own `.razor` file)! What I’m getting out of this is that we must assume that _any button_, even if it is in a nested component, will trigger the `OnSubmit` event of an `EditForm`. Nasty! ### Solution The solution (thanks to @jeffhandley) is to add `type="button"` to all non-submit `button` components. It turns out that the default for an HTML `<button />` element is `type="submit"`, so if you don’t override that value, then all buttons trigger a submit. What this means is that I _could_ shorten my actual submit button: <button class="btn btn-primary">Save</button> I probably won’t do this though, as being explicit increases readability. And I _absolutely must_ be explicit with all my other buttons: <button type="button" class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> This prevents the other buttons (even in nested Razor components) from accidentally triggering the submit behavior in the `EditForm` component.
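For illustration, here is a minimal sketch of the nested-component scenario described above - a hypothetical `CancelButton.razor` child component that raises an `EventCallback` and declares an explicit `type="button"` so it can live inside an `EditForm` without triggering a submit:

```razor
@* CancelButton.razor (hypothetical child component) *@
@* Without type="button", clicking this would still submit the parent EditForm. *@
<button type="button" class="btn btn-secondary" @onclick="OnCancel">Cancel</button>

@code {
    // The parent handles the actual cancel logic via this callback.
    [Parameter]
    public EventCallback OnCancel { get; set; }
}
```

A parent form can then place something like `<CancelButton OnCancel="CancelChoice" />` inside its `EditForm`, and the cancel click is no longer treated as a form submit.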
28.10.2025 14:37 — 👍 0    🔁 0    💬 0    📌 0
MCP and A2A Basics I have been spending a lot of time lately learning about the Model Context Protocol (MCP) and Agent to Agent (A2A) protocols. And a little about a slightly older technology called the activity protocol that comes from the Microsoft bot framework. I’m writing this blog post mostly for myself, because writing content helps me organize my thoughts and solidify my understanding of concepts. As they say with AIs, mistakes are possible, because my understanding of all this technology is still evolving. (disclaimer: unless otherwise noted, I wrote this post myself, with my own fingers on a keyboard) ## Client-Server is Alive and Well First off, I think it is important to recognize that the activity protocol basically sits on top of REST, and so is client-server. The MCP protocol is also client-server, sitting on top of JSON-RPC. A2A _can be_ client-server, or peer-to-peer, depending on how you use it. The simplest form is client-server, with peer-to-peer providing a lot more capability, but also more complexity. ## Overall Architecture These protocols (in particular MCP and A2A) exist to enable communication between LLM “AI” agents and their environments, or other tools, or other agents. ### Activity Protocol The activity protocol is a client-server protocol that sits on top of REST. It is primarily used for communication between a user and a bot, or between bots. The protocol defines a set of RESTful APIs for sending and receiving activities, which are JSON objects that represent a message, event, or command. The activity protocol is widely used in the Microsoft Bot Framework and is supported by many bot channels, such as Microsoft Teams, Slack, and Facebook Messenger. (that previous paragraph was written by AI - but it is pretty good) ### MCP The Model Context Protocol is really a standard and flexible way to expand the older concept of LLM tool or function calling. The primary intent is to allow an LLM AI to call tools that interact with the environment, call other apps, get data from services, or do other client-server style interactions. The rate of change here is pretty staggering. The idea of an LLM being able to call functions or “tools” isn’t that old. The limitation of that approach was that these functions had to be registered with the LLM in a way that wasn’t standard across LLM tools or platforms. MCP provides a standard for registration and interaction, allowing an MCP-enabled LLM to call locally hosted tools (via standard IO) or remote tools (via HTTP). If you dig a little into the MCP protocol, it is eerily reminiscent of COM from the 1990s (and I suspect CORBA as well). We provide the LLM “client” with an endpoint for the MCP server. The client can ask the MCP server what it does, and also for a list of tools it provides. Much like `IUnknown` in COM. Once the LLM client has the description of the server and all the tools, it can then decide when and if it should call those tools to solve problems. You might create a tool that deletes a file, or creates a file, or blinks a light on a device, or returns some data, or sends a message, or creates a record in a database. Really, the sky is the limit in terms of what you can build with MCP. ### A2A Agent to Agent (A2A) communication is a newer and more flexible protocol that (I think) has the potential to do a couple of things: 1. I could see it replacing MCP, because you can use A2A for client-server calls from an LLM client to an A2A “tool” or agent. This is often done over HTTP. 2. 
It can also be used to implement bi-directional, peer-to-peer communication between agents, enabling more complex and dynamic interactions. This is often done over WebSockets or (better yet) queuing systems like RabbitMQ. ## Metadata Rules In any case, the LLM that is going to call a tool or send a message to another agent needs a way to understand the capabilities and requirements of that tool or agent. This is where metadata comes into play. Metadata provides essential information about the tool or agent, such as its name, description, input and output parameters, and more. “Metadata” in this context is human language descriptions. Remember that the calling LLM is an AI model that is generally good with language. However, some of the metadata might also describe JSON schemas or other structured data formats to precisely define the inputs and outputs. But even that is usually surrounded by human-readable text that describes the purpose of the schema or data formats. This is where the older activity protocol falls down, because it doesn’t provide metadata like MCP or A2A. The newer protocols include the ability to provide descriptions of the service/agent, and of tool methods or messages that are exchanged. ## Authentication and Identity In all cases, these protocols aren’t terribly complex. Even the A2A peer-to-peer isn’t that difficult if you have an understanding of async messaging concepts and protocols. What does seem to _always_ be complex is managing authentication and identity across these interactions. There seem to be multiple layers at work here: 1. The client needs to authenticate to call the service - often with some sort of service identity represented by a token. 2. The service needs to authenticate the client, so that service token is important. 3. HOWEVER, the service also usually needs to “impersonate” or act on behalf of a user or another identity, which can be a separate token or credential. Getting these tokens, and validating them correctly, is often the hardest part of implementing these protocols. This is especially true when you are using abstract AI/LLM hosting environments. It is hard enough in code like C#, where you can see the token handling explicitly, but in many AI hosting platforms, these details are abstracted away, making it challenging to implement robust security. ## Summary The whole concept of an LLM AI calling tools, then services, and then having peer-to-peer interactions has evolved very rapidly over the past couple of years, and it is _still_ evolving very rapidly. Just this week, for example, Microsoft announced the Microsoft Agent Framework that replaces Semantic Kernel and Autogen. And that’s just one example! What makes me feel better, though, is that at their heart, these protocols are just client-server protocols with some added layers for metadata. Or a peer-to-peer communication protocol that relies on asynchronous messaging patterns. While these frameworks (to a greater or lesser degree) have some support for authentication and token passing, that seems to be the weakest part of the tooling, and the hardest to solve in real-life implementations.
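As a rough illustration of how simple the client-server layer is, here is a minimal sketch (not tied to any particular SDK) of what an MCP `tools/call` request looks like on the wire: a JSON-RPC 2.0 message that names a tool and passes its arguments. The tool name and arguments shown are hypothetical.

```csharp
using System;
using System.Text.Json;

// Build the JSON-RPC 2.0 payload for an MCP "tools/call" request.
var request = new
{
    jsonrpc = "2.0",
    id = 1,
    method = "tools/call",
    @params = new
    {
        name = "get_customer_orders",        // hypothetical tool name
        arguments = new { customerId = 42 }  // hypothetical tool arguments
    }
};

// Serialize it the way it would be sent to an MCP server endpoint.
Console.WriteLine(JsonSerializer.Serialize(
    request, new JsonSerializerOptions { WriteIndented = true }));
```

The metadata discussed above (server description, tool descriptions, parameter descriptions) is what lets the calling LLM decide when to emit a message like this and what to put in `arguments`.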
28.10.2025 14:37 — 👍 0    🔁 0    💬 0    📌 0
Running Linux on My Surface Go I have a first-generation Surface Go, the little 10” tablet Microsoft created to try and compete with the iPad. I’ll confess that I never used it a lot. I _tried_, I really did! But it is underpowered, and I found that my Surface Pro devices were better for nearly everything. My reasoning for having a smaller tablet was that I travel quite a lot, more back then than now, and I thought having a tablet might be nicer for watching movies and that sort of thing, especially on the plane. It turns out that the Surface Pro does that too, without having to carry a second device. Even when I switched to my Surface Studio Laptop, I _still_ didn’t see the need to carry a second device - though the Surface Pro is absolutely better for traveling in my view. I’ve been saying for quite some time that I think people need to look at Linux as a way to avoid the e-waste involved in discarding their Windows 10 PCs - the ones that can’t run Windows 11. I use Linux regularly, though usually via the command line for software development, and so I thought I’d put it on my Surface Go to gain real-world experience. > I have quite a few friends and family who have Windows 10 devices that are perfectly good. Some of those folks don’t want to buy a new PC, due to financial constraints, or just because their current PC works fine. End of support for Windows 10 is a problem for them! The Surface Go is a bit trickier than most mainstream Windows 10 laptops or desktops, because it is a convertible tablet with a touch screen and specialized (rare) hardware - as compared to most of the devices in the market. So I did some reading, and used Copilot, and found a decent (if old) article on installing Linux on a Surface Go. > ⚠️ One quick warning: Surface Go was designed around Windows, and while it does work reasonably well with Linux, it isn’t as good. Scrolling is a bit laggy, and the cameras don’t have the same quality (by far). If you want to use the Surface Go as a small, lightweight laptop I think it is pretty good; if you are looking for a good _tablet_ experience you should probably just buy a new device - and donate the old one to someone who needs a basic PC. Fortunately, Linux hasn’t evolved all that much or all that rapidly, and so this article remains pretty valid even today. ## Using Ubuntu Desktop I chose to install Ubuntu, identified in the article as a Linux distro (distribution, or variant, or version) that has decent support for the Surface Go. I also chose Ubuntu because this is normally what I use for my other purposes, and so I’m familiar with it in general. However, I installed the latest Ubuntu Desktop (version 25.04), not the older version mentioned in the article. This was a good choice, because support for the Surface hardware has improved over time - though the other steps in the article remain valid. ## Download and Set Up Media The steps to get ready are: 1. Download Ubuntu Desktop - this downloads a file with a `.iso` extension 2. Download software to create a bootable flash drive based on the `.iso` file. I used software called Rufus - just be careful to avoid the flashy (spammy) download buttons, and find the actual download link text in the page 3. Get a flash drive (either new, or one you can erase) and insert it into your PC 4. Run Rufus, and identify the `.iso` file and your flash drive 5. Rufus will write the data to the flash drive, and make the flash drive bootable so you can use it to install Linux on any PC 6. 
🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 ## Install Ubuntu on the Surface At this point you have a bootable flash drive and a Surface Go device, and you can do the installation. This is where the zdnet article is a bit dated - the process is smoother and simpler than it was back then, so just do the install like this: 1. 🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 2. Insert the flash drive into the Surface USB port (for the Surface Go I had to use an adapter from type C to type A) 3. Press the Windows key and type “reset” and choose the settings option to reset your PC 4. That will bring up the settings page where you can choose Advanced and reset the PC for booting from a USB device 5. What I found is that the first time I did this, my Linux boot device didn’t appear, so I rebooted to Windows and did step 4 again 6. The second time, an option was there for Linux. It had an odd name: Linpus (as described in the zdnet article) 7. Boot from “Linpus” and your PC will sit and spin for quite some time (the Surface Go is quite old and slow by modern standards), and eventually will come up with Ubuntu 8. The thing is, it is _running_ Ubuntu, but it hasn’t _installed_ Ubuntu. So go through the wizard and answer the questions - especially the wifi setup 9. Once you are on the Ubuntu (really Gnome) desktop, you’ll see an icon for _installing_ Ubuntu. Double-click that and the actual installation process will begin 10. I chose to have the installer totally reformat my hard drive, and I recommend doing that, because the Surface Go doesn’t have a huge drive to start with, and I want all of it available for my new operating system 11. Follow the rest of the installer steps and let the PC reboot 12. Once it has rebooted, you can remove the flash drive ## Installing Updates At this point you should be sitting at your new desktop. The first thing Linux will want to do is install updates, and you should let it do so. I laugh a bit, because people make fun of Windows updates, and Patch Tuesday. Yet all modern and secure operating systems need regular updates to remain functional and secure, and Linux is no exception. Whether automated or not, you should do regular (at least monthly) updates to keep Linux secure and happy. ## Installing Missing Features Immediately upon installation, Ubuntu 25.04 seems to have very good support for the Surface Go, including multi-touch on the screen and trackpad, use of the Surface Pen, speakers, and the external (physical) keyboard. What doesn’t work right away, at least what I found, are the cameras or any sort of onscreen/soft keyboard. You need to take extra steps for these. The zdnet article is helpful here. ### Getting the Cameras Working The zdnet article walks through the process to get the cameras working. I actually think the camera drivers are now just part of Ubuntu, but I did have to take steps to get them working, and even then they don’t have great quality - this is clearly an area where moving to Linux is a step backward. At times I found the process a bit confusing, but just plowed ahead figuring I could always reinstall Linux again if necessary. It did work fine in the end, no reinstall needed. 1. 
Install the Linux Surface kernel - which sounds intimidating, but is really just following some steps as documented in their GitHub repo; other stuff in the document is quite intimidating, but isn’t really relevant if all you want to do is get things running 2. That GitHub repo also has information about the various camera drivers for different Surface devices, and I found that to be a bit overwhelming; fortunately, it really amounts to just running one command 3. Make sure you also run these commands to give your Linux account permissions to use the camera 4. At this point I was able to follow instructions to run `cam` and see the cameras - including some other odd entries I ignored 5. And I was able to run `qcam`, which is a command that brings up a graphical app so you can see through each camera > ⚠️ Although the cameras technically work, I am finding that a lot of apps still don’t see the cameras, and in all cases the camera quality is quite poor. ### Getting a Soft or Onscreen Keyboard Because the Surface Go is _technically_ a tablet, I expected there to be a soft or onscreen keyboard. It turns out that there is a primitive one built into Ubuntu, but it really doesn’t work very well. It is pretty, but I was unable to figure out how to get it to appear via touch, which kind of defeats the purpose (I needed my physical keyboard to get the virtual one to appear). I found an article that has some good suggestions for Linux onscreen keyboard (OSK) improvements. I used what the article calls “Method 2” to install an Extension Manager, which allowed me to install extensions for the keyboard. 1. Install the Extension Manager `sudo apt install gnome-shell-extension-manager` 2. Open the Extension Manager app 3. This is where the article fell down, because the extension they suggested doesn’t seem to exist any longer, and there are numerous other options to explore 4. I installed an extension called “Touch X” which has the ability to add an icon to the upper-right corner of the screen by which you can open the virtual keyboard at any time (it can also do a cool ripple animation when you touch the screen if you’d like) 5. I also installed “GJS OSK”, which is a replacement soft keyboard that has a lot more configurability than the built-in default; you can try both and see which you prefer ## Installing Important Apps This section is mostly editorial, because I use certain apps on a regular basis, and you might use other apps. Still, you should be aware that there are a couple of ways to install apps on Ubuntu: snap and apt. The “snap” concept is specific to Ubuntu, and can be quite nice, as it installs each app into a sort of sandbox that is managed by Ubuntu. The “app store” in Ubuntu lists and installs apps via snap. The “apt” concept actually comes from Ubuntu’s parent, Debian. Since Debian and Ubuntu make up a very large percentage of the Linux install base, the `apt` command is extremely common. This is something you do from a terminal command line. Using snap is very convenient, and when it works I love it. Sometimes I find that apps installed via snap don’t have access to things like speakers, cameras, or other devices. I think that’s because they run in a sandbox. I’m pretty sure there are ways to address these issues - my normal way of addressing them is to uninstall the snap and use `apt`. ### My “Important” Apps I installed apps via snap, apt, and as PWAs. #### Snap and Apt Apps Here are the apps I installed right away: 1. 
Microsoft Edge browser - because I use Edge on my Windows devices and Android phone, I want to use the same browser here to sync all my history, settings, etc. - I installed this using the default Firefox browser, then switched the default to Edge 2. Visual Studio Code - I’m a developer, and find it hard to imagine having a device without some way to write code - and I use vscode on Windows, so I’m used to it, and it works the same on Linux - I installed this as a snap via App Center 3. git - again, I’m a developer and all my stuff is on GitHub, which means using git as a primary tool - I installed this using `apt` 4. Discord - I use Discord for many reasons - talking to friends, gaming, hosting the CSLA .NET Discord server - so it is something I use all the time - I installed this as a snap via App Center 5. Thunderbird Email - I’m not sold on this yet - it seems to be the “default” email app for Linux, but feels like Outlook from 10-15 years ago, and I do hope to find something a lot more modern - I installed this as a snap via App Center 6. Copilot Desktop - I’ve been increasingly using Copilot on Windows 11, and was delighted to find that Ken VanDine wrote a Copilot shell for Linux; it is in the App Center and installs as a snap, providing the same basic experience as Copilot on Windows or Android - I installed this as a snap via App Center 7. .NET SDK - I mostly develop using .NET and Blazor, and so installing the .NET software development kit seemed obvious; Ubuntu has a snap to install version 8, but I used apt to install version 9 #### PWA Apps Once I got Edge installed, I used it to install a number of progressive web apps (PWAs) that I use on nearly every device. A PWA is an app that is installed and updated via your browser, and is a great way to get cross-platform apps. Exactly how you install a PWA will vary from browser to browser. Some have a little icon when you are on the web page, others have an “install app” option or “install on desktop” or similar. The end result is that you get what appears to be an app icon on your phone, PC, whatever - and when you click the icon the PWA app runs in a window like any other app. 1. Elk - I use Mastodon (social media) a lot, and my preferred client is Elk - fast, clean, works great 2. Bluesky - I use Bluesky (social media) a lot, and Bluesky can be installed as a PWA 3. LinkedIn - I use LinkedIn quite a bit, and it can be installed as a PWA 4. Facebook - I still use Facebook a little, and it can be installed as a PWA #### Using Microsoft 365 Office Most people want to edit documents and maybe spreadsheets on their PC. A lot of people, including me, use Word and Excel for this purpose. Those apps aren’t available on Linux - at least not directly. Fortunately there are good alternatives, including: 1. Use https://onedrive.com to create and edit documents and spreadsheets in the browser 2. Use https://office.com to access Office online if you have a subscription 3. Install LibreOffice, an open-source office productivity suite sort of like Office I use OneDrive for a lot of personal documents, photos, etc. And I use actual Office for work. The LibreOffice idea is something I might explore at some point, but the online versions of the Office apps are usually enough for casual work - which is all I’m going to do on the little Surface Go device anyway. One feature of Edge is the ability to have multiple profiles. I use this all the time on Windows, having a personal and two work profiles. 
This feature works on Linux as well, though I found it had some glitches. My default Edge profile is my personal one, so all those PWAs I installed are connected to that profile. I set up another Edge profile for my CSLA work, and it is connected to my marimer.llc email address. This is where I log into the M365 office.com apps, and I have that page installed as a PWA. When I run “Office” it opens in my work profile and I have access to all my work documents. On my personal profile I don’t use the Office apps as much, but when I do open something from my personal OneDrive, it opens in that profile. The limitation is that I can only edit documents while online, but for my purposes with this device, that’s fine. I can edit my documents and spreadsheets as necessary. ## Conclusion At this point I’m pretty happy. I don’t expect to use this little device to do any major software development, but it actually does run vscode and .NET just fine (and also Jetbrains Rider if you prefer a more powerful option). I mostly use it for browsing the web, discord, Mastodon, and Bluesky. Will I bring this with when I travel? No, because my normal Windows 11 PC does everything I want. Could I live with this as my “one device”? Well, no, but that’s because it is underpowered and physically too small. But could I live with a modern laptop running Ubuntu? Yes, I certainly could. I wouldn’t _prefer_ it, because I like full-blown Visual Studio and way too many high end Steam games. The thing is, I am finding myself leaving the Surface Go in the living room, and reaching for it to scan the socials while watching TV. Something I could have done just as well with Windows, and can now do with Linux.
28.10.2025 14:36 — 👍 0    🔁 0    💬 0    📌 0
CSLA 2-tier Data Portal Behavior History The CSLA data portal originally treated 2- and 3-tier differently, primarily for performance reasons. Back in the early 2000s, the data portal did not serialize the business object graph in 2-tier scenarios. That behavior still exists and can be enabled via configuration, but is not the default for the reasons discussed in this post. Passing the object graph by reference (instead of serializing it) does provide much better performance, but at the cost of being behaviorally/semantically different from 3-tier. In a 3-tier (or generally n-tier) deployment, there is at least one network hop between the client and any server, and the object graph _must be serialized_ to cross that network boundary. When different 2-tier and 3-tier behaviors existed, a lot of people did their dev work in 2-tier and then tried to switch to 3-tier. Usually they’d discover all sorts of issues in their code, because they were counting on the logical client and server using the same reference to the object graph. A variety of issues are solved by serializing the graph even in 2-tier scenarios, including: 1. Consistency with 3-tier deployment (enabling location transparency in code) 2. Preventing data binding from reacting to changes to the object graph on the logical server (nasty performance and other issues would occur) 3. Ensuring that a failure on the logical server (especially part-way through the graph) leaves the graph on the logical client in a stable/known state There are other issues as well - and ultimately those issues drove the decision (I want to say around 2006 or 2007?) to default to serializing the object graph even in 2-tier scenarios. There is a performance cost to that serialization, but having _all_ n-tier scenarios enjoy the same semantic behaviors has eliminated so many issues and support questions on the forums that I regret nothing.
28.10.2025 14:36 — 👍 0    🔁 0    💬 0    📌 0
A Simple CSLA MCP Server In a recent CSLA discussion thread, a user asked about setting up a simple CSLA Model Context Protocol (MCP) server. https://github.com/MarimerLLC/csla/discussions/4685 I’ve written a few MCP servers over the past several months with varying degrees of success. Getting the MCP protocol right is tricky (or was), and using semantic matching with vectors isn’t always the best approach, because I find it often misses the most obvious results. Recently, however, Anthropic published a C# SDK (and NuGet package) that makes it easier to create and host an MCP server. The SDK handles the MCP protocol details, so you can focus on implementing your business logic. https://github.com/modelcontextprotocol/csharp-sdk Also, I’ve been reading up on the idea of hybrid search, which combines traditional search techniques with vector-based semantic search. This approach can help improve the relevance of search results by leveraging the strengths of both methods. The code I’m going to walk through in this post can be easily adapted to any scenario, not just CSLA. In fact, the MCP server just searches and returns markdown files from a folder. To use it for any scenario, you just need to change the source files and update the descriptions of the server, tools, and parameters that are in the attributes in code. Perhaps a future enhancement for this project will be to make those dynamic so you can change them without recompiling the code. The code for this article can be found in this GitHub repository. > ℹ️ Most of the code was actually written by Claude Sonnet 4 with my collaboration. Or maybe I wrote it with the collaboration of the AI? The point is, I didn’t do much of the typing myself. Before getting into the code, I want to point out that this MCP server really is useful. Yes, the LLMs already know all about CSLA because CSLA is open source. However, the LLMs often return outdated or incorrect information. By providing a custom MCP server that searches the actual CSLA code samples and snippets, the LLM can return accurate and up-to-date information. ## The MCP Server Host The MCP server itself is a console app that uses Spectre.Console to provide a nice command-line interface. The project also references the Anthropic C# SDK and some other packages. It targets .NET 10.0, though I believe the code should work with .NET 8.0 or later. I am not going to walk through every line of code, but I will highlight the key parts. > ⚠️ The modelcontextprotocol/csharp-sdk package is evolving rapidly, so you may need to adapt to use whatever is latest when you try to build your own. Also, all the samples in their GitHub repository use static tool methods, and I do as well. At some point I hope to figure out how to use instance methods instead, because that will allow the use of dependency injection. Right now the code has a lot of `Console.WriteLine` statements that would be better handled by a logging framework. Although the project is a console app, it does use ASP.NET Core to host the MCP server. var builder = WebApplication.CreateBuilder(); builder.Services.AddMcpServer() .WithHttpTransport() .WithTools<CslaCodeTool>(); The `AddMcpServer` method adds the MCP server services to the ASP.NET Core dependency injection container. The `WithHttpTransport` method configures the server to use HTTP as the transport protocol. The `WithTools<CslaCodeTool>` method registers the `CslaCodeTool` class as a tool that can be used by the MCP server. 
There is also a `WithStdioTransport` method that can be used to configure the server to use standard input and output as the transport protocol. This is useful if you want to run the server locally when using a locally hosted LLM client. The nice thing about using the modelcontextprotocol/csharp-sdk package is that it handles all the details of the MCP protocol for you. You just need to implement your tools and their methods. All the subtleties of the MCP protocol are handled by the SDK. ## Implementing the Tools The `CslaCodeTool` class is where the main logic of the MCP server resides. This class is decorated with the `McpServerToolType` attribute, which indicates that this class will contain MCP tool methods. [McpServerToolType] public class CslaCodeTool ### The Search Method The first tool is Search, defined by the `Search` method. This method is decorated with the `McpServerTool` attribute, which indicates that this method is an MCP tool method. The attribute also provides a description of the tool and what it will return. This description is used by the LLM to determine when to use this tool. My description here is probably a bit too short, but it seems to work okay. Any parameters for the tool method are decorated with the `Description` attribute, which provides a description of the parameter. This description is used by the LLM to understand what the parameter is for, and what kind of value to provide. [McpServerTool, Description("Searches CSLA .NET code samples and snippets for examples of how to implement code that makes use of #cslanet. Returns a JSON object with two sections: SemanticMatches (vector-based semantic similarity) and WordMatches (traditional keyword matching). Both sections are ordered by their respective scores.")] public static string Search([Description("Keywords used to match against CSLA code samples and snippets. For example, read-write property, editable root, read-only list.")]string message) #### Word Matching The original implementation (which works very well) uses only word matching. To do this, it gets a list of all the files in the target directory, and searches them for any words from the LLM’s `message` parameter that are 4 characters or longer. It counts the number of matches in each file to generate a score for that file. Here’s the code that gets the list of search terms from `message`: // Extract words that are 4 characters or longer from the message var searchWords = message .Split(new char[] { ' ', '\t', '\n', '\r', '.', ',', ';', ':', '!', '?', '(', ')', '[', ']', '{', '}', '"', '\'', '-', '_' }, StringSplitOptions.RemoveEmptyEntries) .Where(word => word.Length > 3) .Select(word => word.ToLowerInvariant()) .Distinct() .ToList(); Console.WriteLine($"[CslaCodeTool.Search] Extracted search words: [{string.Join(", ", searchWords)}]"); It then loops through each file and counts the number of matching words. The final result is sorted by score and then file name: var sortedResults = results.OrderByDescending(r => r.Score).ThenBy(r => r.FileName).ToList(); #### Semantic Matching More recently I added semantic matching as well, resulting in a hybrid search approach. The search tool now returns two sets of results: one based on traditional word matching, and one based on vector-based semantic similarity. The semantic search behavior comes in two parts: indexing the source files, and then matching against the message parameter from the LLM. ##### Indexing the Source Files Indexing source files takes time and effort. 
To minimize startup time, the MCP server actually starts and will work without the vector data. In that case it relies on the word matching only. After a few minutes, the vector indexing will be complete and the semantic search results will be available. The indexing is done by calling a text embedding model to generate a vector representation of the text in each file. The vectors are then stored in memory along with the file name and content. Or the vectors could be stored in a database to avoid having to re-index the files each time the server is started. I’m relying on a `vectorStore` object to index each document: await vectorStore.IndexDocumentAsync(fileName, content); This `VectorStoreService` class is a simple in-memory vector store that uses Ollama to generate the embeddings: public VectorStoreService(string ollamaEndpoint = "http://localhost:11434", string modelName = "nomic-embed-text:latest") { _httpClient = new HttpClient(); _vectorStore = new Dictionary<string, DocumentEmbedding>(); _ollamaEndpoint = ollamaEndpoint; _modelName = modelName; } This could be (and probably will be) adapted to use a cloud-based embedding model instead of a local Ollama instance. Ollama is free and easy to use, but it does require a local installation. The actual embedding is created by a call to the Ollama endpoint: var response = await _httpClient.PostAsync($"{_ollamaEndpoint}/api/embeddings", content); The embedding is just a list of floating-point numbers that represent the semantic meaning of the text. This needs to be extracted from the JSON response returned by the Ollama endpoint. var responseJson = await response.Content.ReadAsStringAsync(); var result = JsonSerializer.Deserialize<JsonElement>(responseJson); if (result.TryGetProperty("embedding", out var embeddingElement)) { var embedding = embeddingElement.EnumerateArray() .Select(e => (float)e.GetDouble()) .ToArray(); return embedding; } > 👩‍🔬 All those floating-point numbers are the magic of this whole thing. I don’t understand any of the math, but it obviously represents the semantic “meaning” of the file in a way that a query can be compared later to see if it is a good match. All those embeddings are stored in memory for later use. ##### Matching Against the Message When the `Search` method is called, it first generates an embedding for the `message` parameter using the same embedding model. It then compares that embedding to each of the document embeddings in the vector store to calculate a similarity score. All that work is delegated to the `VectorStoreService`: var semanticResults = VectorStore.SearchAsync(message, topK: 10).GetAwaiter().GetResult(); In the `VectorStoreService` class, the `SearchAsync` method generates the embedding for the query message: var queryEmbedding = await GetTextEmbeddingAsync(query); It then calculates the cosine similarity between the query embedding and each document embedding in the vector store: foreach (var doc in _vectorStore.Values) { var similarity = CosineSimilarity(queryEmbedding, doc.Embedding); results.Add(new SemanticSearchResult { FileName = doc.FileName, SimilarityScore = similarity }); } The results are then sorted by similarity score and the top K results are returned. var topResults = results .OrderByDescending(r => r.SimilarityScore) .Take(topK) .Where(r => r.SimilarityScore > 0.5f) // Filter out low similarity scores .ToList(); ##### The Final Result The final result of the `Search` method is a JSON object that contains two sections: `SemanticMatches` and `WordMatches`. 
Each section contains a list of results ordered by their respective scores. var combinedResult = new CombinedSearchResult { SemanticMatches = semanticMatches, WordMatches = sortedResults }; It is up to the calling LLM to decide which set of results to use. In the end, the LLM will use the fetch tool to retrieve the content of one or more of the files returned by the search tool. ### The Fetch Method The second tool is Fetch, defined by the `Fetch` method. This method is also decorated with the `McpServerTool` attribute, which provides a description of the tool and what it will return. [McpServerTool, Description("Fetches a specific CSLA .NET code sample or snippet by name. Returns the content of the file that can be used to properly implement code that uses #cslanet.")] public static string Fetch([Description("FileName from the search tool.")]string fileName) This method has some defensive code to prevent path traversal attacks and other things, but ultimately it just reads the content of the specified file and returns it as a string. var content = File.ReadAllText(filePath); return content; ## Hosting the MCP Server The MCP server can be hosted in a variety of ways. The simplest is to run it as a console app on your local machine. This is useful for development and testing. You can also host it in a cloud environment, such as Azure App Service or AWS Elastic Beanstalk. This allows you to make the MCP server available to other applications and services. Like most things, I am running it in a Docker container so I can choose to host it anywhere, including on my local Kubernetes cluster. For real use in your organization, you will want to ensure that the MCP server endpoint is available to all your developers from their vscode or Visual Studio environments. This might be a public IP, or one behind a VPN, or some other secure way to access it. I often use tools like Tailscale or ngrok to make local services available to remote clients. ## Testing the MCP Server Testing an MCP server isn’t as straightforward as testing a regular web API. You need an LLM client that can communicate with the MCP server using the MCP protocol. Anthropic has an npm package that can be used to test the MCP server. You can find it here: https://github.com/modelcontextprotocol/inspector This package provides a GUI or CLI tool that can be used to interact with the MCP server. You can use it to send messages to the server and see the responses. It is a great way to test and debug your MCP server. Another option is to use the MCP support built into recent vscode versions. Once you add your MCP server endpoint to your vscode settings, you can use the normal AI chat interface to ask the chat bot to interact with the MCP server. For example: call the csla-mcp-server tools to see if they work This will cause the chat bot to invoke the `Search` tool, and then the `Fetch` tool to get the content of one of the files returned by the search. Once you have the MCP server working and returning the types of results you want, add it to your vscode or Visual Studio settings so all your developers can use it. In my experience the LLM chat clients are pretty good about invoking the MCP server to determine the best way to author code that uses CSLA .NET. ## Conclusion Setting up a simple CSLA MCP server is not too difficult, especially with the help of the Anthropic C# SDK. By implementing a couple of tools to search and fetch code samples, you can provide a powerful resource for developers using CSLA .NET. 
The hybrid search approach, combining traditional word matching with vector-based semantic similarity, helps improve the relevance of search results. This makes it easier for developers to find the code samples they need. I hope this article has been helpful in understanding how to set up a simple CSLA MCP server. If you have any questions or need further assistance, feel free to reach out on the CSLA discussion forums or GitHub repository for the csla-mcp project.
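As a side note, the cosine similarity calculation mentioned above is just the standard dot-product formula: dot(a, b) / (|a| · |b|). The actual helper in the csla-mcp project may differ, but a minimal sketch looks something like this:

```csharp
using System;

public static class SimilarityMath
{
    // Returns a value in roughly [-1, 1]; higher means the two embeddings
    // point in more similar directions (more similar meaning).
    public static float CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Embeddings must have the same length.");

        float dot = 0f, magA = 0f, magB = 0f;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }

        if (magA == 0f || magB == 0f)
            return 0f; // zero vectors carry no meaningful similarity

        return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
    }
}
```

This is why the filter on `SimilarityScore > 0.5f` shown earlier works as a simple relevance cutoff: scores closer to 1 mean the query and the document embeddings are semantically closer.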
28.10.2025 14:36 — 👍 0    🔁 0    💬 0    📌 0
Unit Testing CSLA Rules With Rocks One of the most powerful features of CSLA .NET is its business rules engine. It allows you to encapsulate validation, authorization, and other business logic in a way that is easy to manage and maintain. In CSLA, a rule is a class that implements `IBusinessRule`, `IBusinessRuleAsync`, `IAuthorizationRule`, or `IAuthorizationRuleAsync`. These interfaces define the contract for a rule, including methods for executing the rule and properties for defining the rule’s behavior. Normally a rule inherits from an existing base class that implements one of these interfaces. When you create a rule, you typically associate it with a specific property or set of properties on a business object. The rule is then executed automatically by the CSLA framework whenever the associated property or properties change. The advantage of a CSLA rule being a class, is that you can unit test it in isolation. This is where the Rocks mocking framework comes in. Rocks allows you to create mock objects for your unit tests, making it easier to isolate the behavior of the rule you are testing. You can create a mock business object and set up expectations for how the rule should interact with that object. This allows you to test the rule’s behavior without having to worry about the complexities of the entire business object. In summary, the combination of CSLA’s business rules engine and the Rocks mocking framework provides a powerful way to create and test business rules in isolation, ensuring that your business logic is both robust and maintainable. All code for this article can be found in this GitHub repository in Lab 02. ## Creating a Business Rule As an example, consider a business rule that sets an `IsActive` property based on the value of a `LastOrderDate` property. If the `LastOrderDate` is within the last year, then `IsActive` should be true; otherwise, it should be false. using Csla.Core; using Csla.Rules; namespace BusinessLibrary.Rules; public class LastOrderDateRule : BusinessRule { public LastOrderDateRule(IPropertyInfo lastOrderDateProperty, IPropertyInfo isActiveProperty) : base(lastOrderDateProperty) { InputProperties.Add(lastOrderDateProperty); AffectedProperties.Add(isActiveProperty); } protected override void Execute(IRuleContext context) { var lastOrderDate = (DateTime)context.InputPropertyValues[PrimaryProperty]; var isActive = lastOrderDate > DateTime.Now.AddYears(-1); context.AddOutValue(AffectedProperties[1], isActive); } } This rule inherits from `BusinessRule`, which is a base class provided by CSLA that implements the `IBusinessRule` interface. The constructor takes two `IPropertyInfo` parameters: one for the `LastOrderDate` property and one for the `IsActive` property. The `InputProperties` collection is used to specify which properties the rule depends on, and the `AffectedProperties` collection is used to specify which properties the rule affects. The `Execute` method is where the rule’s logic is implemented. It retrieves the value of the `LastOrderDate` property from the `InputPropertyValues` dictionary, checks if it is within the last year, and then sets the value of the `IsActive` property using the `AddOutValue` method. ## Unit Testing the Business Rule Now that we have our business rule, we can create a unit test for it using the Rocks mocking framework. 
First, we need to bring in a few namespaces: using BusinessLibrary.Rules; using Csla; using Csla.Configuration; using Csla.Core; using Csla.Rules; using Microsoft.Extensions.DependencyInjection; using Rocks; using System.Security.Claims; Next, we can use Rocks attributes to define the mock types we need for our test: [assembly: Rock(typeof(IPropertyInfo), BuildType.Create | BuildType.Make)] [assembly: Rock(typeof(IRuleContext), BuildType.Create | BuildType.Make)] These lines of code only need to be included once in your test project, because they are assembly-level attributes. They tell Rocks to create mock implementations of the `IPropertyInfo` and `IRuleContext` interfaces, which we will use in our unit test. Now we can create our unit test method to test the `LastOrderDateRule`. To do this, we need to arrange the necessary mock objects and set up their expectations. Then we can execute the rule and verify that it behaves as expected. The rule has a constructor that takes two `IPropertyInfo` parameters, so we need to create mock implementations of that interface. We also need to create a mock implementation of the `IRuleContext` interface, which is used to pass information to the rule when it is executed. [TestMethod] public void LastOrderDateRule_SetsIsActiveBasedOnLastOrderDate() { // Arrange var inputProperties = new Dictionary<IPropertyInfo, object>(); using var context = new RockContext(); var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>(); lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); var lastOrderProperty = lastOrderPropertyExpectations.Instance(); var isActiveProperty = new IPropertyInfoMakeExpectations().Instance(); var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>(); ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties); ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), true); inputProperties.Add(lastOrderProperty, new DateTime(2025, 9, 24, 18, 3, 40)); // Act var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty); (rule as IBusinessRule).Execute(ruleContextExpectations.Instance()); // Assert is automatically done by Rocks when disposing the context } Notice how the Rocks mock objects have expectations set up for their properties and methods. This allows us to verify that the rule interacts with the context as expected. This is a little different from more explicit `Assert` statements, but it is a powerful way to ensure that the rule behaves correctly. For example, notice how the `Name` property of the `lastOrderProperty` mock is expected to be called twice. If the rule does not call this property the expected number of times, the test will fail when the `context` is disposed at the end of the `using` block: lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); This is a powerful feature of Rocks that allows you to verify the behavior of your code without having to write explicit assertions. The test creates an instance of the `LastOrderDateRule` and calls its `Execute` method, passing in the mock `IRuleContext`. The rule should set the `IsActive` property to true because the `LastOrderDate` is within the last year. When the test completes, Rocks will automatically verify that all expectations were met. If any expectations were not met, the test will fail. 
This is a simple example, but it demonstrates how you can use Rocks to unit test CSLA business rules in isolation. By creating mock objects for the dependencies of the rule, you can focus on testing the rule’s behavior without having to worry about the complexities of the entire business object. ## Conclusion CSLA’s business rules engine is a powerful feature that allows you to encapsulate business logic in a way that is easy to manage and maintain. By using the Rocks mocking framework, you can create unit tests for your business rules that isolate their behavior and ensure that they work as expected. This combination of CSLA and Rocks provides a robust and maintainable way to implement and test business logic in your applications.
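One piece not shown above is how a rule like `LastOrderDateRule` gets attached to a business class. As a minimal sketch (the `Customer` class and its properties are hypothetical), the rule is typically registered in an `AddBusinessRules` override so the CSLA rules engine runs it whenever `LastOrderDate` changes:

```csharp
using System;
using Csla;
using BusinessLibrary.Rules;

[Serializable]
public class Customer : BusinessBase<Customer>
{
    public static readonly PropertyInfo<DateTime> LastOrderDateProperty =
        RegisterProperty<DateTime>(nameof(LastOrderDate));
    public DateTime LastOrderDate
    {
        get => GetProperty(LastOrderDateProperty);
        set => SetProperty(LastOrderDateProperty, value);
    }

    public static readonly PropertyInfo<bool> IsActiveProperty =
        RegisterProperty<bool>(nameof(IsActive));
    public bool IsActive
    {
        get => GetProperty(IsActiveProperty);
        private set => SetProperty(IsActiveProperty, value);
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Attach the rule to LastOrderDate; the rules engine runs it when that
        // property changes and writes the result into IsActive via AddOutValue.
        BusinessRules.AddRule(
            new LastOrderDateRule(LastOrderDateProperty, IsActiveProperty));
    }
}
```

With that registration in place, the unit test above exercises exactly the same code path the rules engine uses at runtime, just with mocked `IPropertyInfo` and `IRuleContext` instances instead of a real business object.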
28.10.2025 14:36 — 👍 0    🔁 0    💬 0    📌 0