This is part II in the blog series Technical Debt – Deal with it Now or it’ll Deal with you. Catch up with Part I: What is Technical Debt

Types of Technical Debt

Naïve/Reckless/Unintentional Technical Debt aka Mess
Naïve Debt, Reckless Debt, and Unintentional Debt are different names for a form of technical debt that accrues through irresponsible behavior or immature practices on the part of the people involved. In our experience, it is very rare for this kind of technical debt to stem from consciously irresponsible development behavior. It is far more common for it to originate from developers untrained in current, robust development techniques, from a lack of architectural planning, or from an immature toolchain.

Myopic Debt

Myopic Debt is another type of technical debt that can take root during initial development of a major new digital product. When developers or product owners do not look far enough into the future to anticipate the versatility a feature will need, the team can author short-sighted code that must be revisited later to provide the needed flexibility. An example is hard-coding the options in a select list when the end user may eventually need to add options dynamically.
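To make the select-list example concrete, here is a minimal sketch (the class and member names are hypothetical):

---code---
using System.Collections.Generic;

public class EventForm
{
    // Myopic: the options are frozen at compile time, so adding one
    // means a code change, a build, and a redeploy.
    public IList<string> GetCategoryOptions()
    {
        return new List<string> { "Conference", "Webinar", "Meetup" };
    }
}

// Hypothetical abstraction over wherever the options are persisted
// (database table, CMS list, configuration store).
public interface IOptionStore
{
    IList<string> GetOptions(string listName);
}

public class FlexibleEventForm
{
    private readonly IOptionStore store;

    public FlexibleEventForm(IOptionStore store)
    {
        this.store = store;
    }

    // Flexible: end users can add options at runtime through whatever
    // UI maintains the option store.
    public IList<string> GetCategoryOptions()
    {
        return store.GetOptions("event-category");
    }
}
---/code---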

Unavoidable Technical Debt

A form of technical debt that is usually unpredictable and unpreventable and accrues through no fault of the team building the product. An example of the “unavoidable” flavor would be building on ASP.NET Web Forms just before Microsoft’s release of the MVC framework: the developer may have had no idea a new standard in .NET web development was coming, yet within a few years the development community had moved on from Web Forms, leaving the resulting application laden with Unavoidable Technical Debt.

Strategic Technical Debt

A form of technical debt that is used as a tool to help organizations better quantify and leverage the economics of important, often time-sensitive, decisions. Sometimes taking on technical debt for strategic reasons is a sensible business choice, usually as a conscious decision driven by speed-to-market needs or by a plan to remedy the debt after meeting a market-imposed (often regulatory) deadline.

From the Trenches: Don’t Forget the Tests

Unit, integration, and functional tests are all very important to develop in tandem with the actual software that drives your released product. Some developers are versed in Test Driven Development (TDD) and follow it in day-to-day coding. However, when things get rushed and project deadlines are tight, writing tests is often the first thing skipped “to save time”. Maintaining good test coverage, running the tests on every code commit, and stopping development until the full suite passes are key practices for keeping technical debt from mounting in your software.
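As a minimal illustration (the names here are hypothetical), a single unit test like the one below documents the intended behavior and catches regressions on every commit; the discipline is to stop and fix the build the moment it fails:

---code---
using Xunit;

public class PriceCalculator
{
    private readonly decimal taxRate;

    public PriceCalculator(decimal taxRate)
    {
        this.taxRate = taxRate;
    }

    // Business rule: the discount applies before tax.
    public decimal Total(decimal subtotal, decimal discount)
    {
        return (subtotal - discount) * (1 + taxRate);
    }
}

public class PriceCalculatorTests
{
    [Fact]
    public void Total_applies_discount_before_tax()
    {
        var calculator = new PriceCalculator(taxRate: 0.08m);

        decimal total = calculator.Total(subtotal: 100m, discount: 10m);

        // (100 - 10) * 1.08 = 97.20
        Assert.Equal(97.20m, total);
    }
}
---/code---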

Causes of Technical Debt

There are many root causes for the accumulation of technical debt. While it may appear that technical debt is most often a result of developer carelessness or ignorance, the situation is much more nuanced. Holding such a viewpoint would be like blaming the assembly-line worker for your car needing oil changes and brake pad replacements: a car could certainly be built that required no such maintenance, but it would cost so much that no one would buy it.

Rushed Releases

Business pressure to release something sooner, before all of the necessary changes are complete, builds up technical debt comprised of those rushed and unfinished changes.

Short Term Thinking

Lack of process or understanding, where businesses are blind to the concept of technical debt and make decisions without considering the implications.

Tight Code Coupling

A failure to build loosely coupled components: when functions are not modular, the software is not flexible enough to adapt to changes in business needs.
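A small sketch of the difference (the types are hypothetical):

---code---
// Tightly coupled: the generator constructs its own storage dependency,
// so switching to a new store means editing and retesting this class.
public class SqlReportStore
{
    public void Write(string report) { /* write to SQL Server */ }
}

public class TightReportGenerator
{
    private readonly SqlReportStore store = new SqlReportStore();

    public void Save(string report)
    {
        store.Write(report);
    }
}

// Loosely coupled: the dependency arrives through an interface, so a new
// store (cloud, file, in-memory test double) plugs in with no changes here.
public interface IReportStore
{
    void Write(string report);
}

public class LooseReportGenerator
{
    private readonly IReportStore store;

    public LooseReportGenerator(IReportStore store)
    {
        this.store = store;
    }

    public void Save(string report)
    {
        store.Write(report);
    }
}
---/code---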

Isolated Work Efforts

Lack of collaboration, where knowledge isn’t shared around the organization and business efficiency suffers, or junior developers are not properly mentored.

Splintered Code Branches

Parallel development on two or more branches builds up technical debt because of the work that will eventually be required to merge the changes back into a single source base. The more changes made in isolation, the more debt piles up.

Lacking Refactoring

Delayed refactoring: as the requirements for a project evolve, it may become clear that parts of the code have become unwieldy and must be refactored to support future requirements. The longer refactoring is delayed, the more debt piles up that must be paid when the refactoring finally happens.

Inexperienced Development Staff

Lack of knowledge, when the developer simply doesn’t know how to write elegant code. A strong culture of software craftsmanship across the development team will keep technical debt down.

Uncoordinated Outsourced Teams

Lack of ownership, when outsourced software efforts result in in-house engineering being required to refactor or rewrite outsourced code.

Incomplete Testing

Lack of a test suite, which encourages quick and risky Band-Aids to fix bugs. A strong test suite for a software product is both a form of technical documentation and also a strong means to ensure future code changes do not break software functionality.

Tools to Measure Technical Debt

Hopefully at this point we have made clear some of the common symptoms and causes of technical debt. Next up is coverage of tools that can quantitatively sniff out and measure the extent of technical debt in your solutions. When you use such tools, watch the incremental technical debt from check-in to check-in rather than hanging your hat on an absolute assessed level of technical debt. The computation of the specific cost to address technical debt is highly debatable, but the relative change from one state of the code base to a later one is a rigorous assessment of the degradation or improvement in the condition of the code.

SQALE

Software Quality Assessment based on Lifecycle Expectations (SQALE) is an analytical method for evaluating the source code of a software application. It is a generic method, independent of language and source code analysis tools, that normalizes best-practice software development techniques across languages. A SQALE score comprises eight indices measuring key factors such as code reusability and changeability, all of which contribute to an application’s technical debt.

SonarQube

SonarQube is an open source software implementation of the SQALE tests that yields scores of the technical debt level of a given code base. At the time of writing, SonarQube works against the world’s most popular software languages and platforms, including Java, C#, Objective-C, JavaScript, and PHP. SonarQube is most often implemented as part of a continuous deployment pipeline to assess code quality on every code commit and build of an application. Additionally, an organization can write custom rule sets for SonarQube to put more (or less) emphasis on the particular types of technical debt its teams are prone to.
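As a sketch, a typical setup drives the analysis from a small configuration file checked in next to the code; the project key, paths, and server URL below are hypothetical:

---code---
# sonar-project.properties (hypothetical project)
sonar.projectKey=acme:billing-service
sonar.projectName=Billing Service
sonar.projectVersion=1.4

# Where the scanner should look for source code
sonar.sources=src

# The SonarQube server the results are pushed to
sonar.host.url=http://sonarqube.internal:9000
---/code---

The continuous deployment server then runs the scanner against this file on every commit, and a quality gate (for example, “no new technical debt introduced”) passes or fails the build.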

Figure 1: A sample SonarQube dashboard

Figure 2: A deeper dive into a SonarQube run that exposed significant technical debt

Figure 3: A sample snapshot of a quality gate failed by SonarQube

Static Analysis Tools

There are several other software products intended to measure the factors contributing to technical debt. Examples include cyclomatic complexity (a measure of the number of possible routes through your software logic), static analysis (automated review of source code to identify departures from best practices), and coupling (the extent to which one software routine depends on another to complete its intent). Software development tools such as Crucible and FxCop are often implemented to keep a lid on these ill effects.
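For example, cyclomatic complexity counts the independent paths through a routine, with each branch adding one. The hypothetical method below has a complexity of 4 (three decision points plus the default path), and every additional path is one more case a test suite must cover:

---code---
using System;

public static class ShippingCalculator
{
    public static decimal ShippingCost(decimal weight, bool express, bool international)
    {
        if (weight <= 0)                              // decision point 1
            throw new ArgumentOutOfRangeException("weight");

        decimal cost = weight * 0.5m;

        if (express)                                  // decision point 2
            cost *= 2;

        if (international)                            // decision point 3
            cost += 15m;

        return cost;
    }
}
---/code---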

Check out Part III Dealing with Technical Debt.

In this final installment of “Customizing the Sitefinity Content Admin” we’ll be going deep into Sitefinity’s event control system, managed by what are called ‘decorators’. This is pretty much for developers only; however, if you stick around until the end of the post I’ll go over some exciting features that have come out (or will be coming out) since I started this series.

I’ve created/updated a content item, and I need it to DO something

It’s quite common that on creation of a content item in a CMS we need something to happen other than just publishing the piece.

  • I’d like an email to be sent on publish
  • I’d like to generate a page
  • I’d like to update my remote CRM

Out of the box, this is seemingly impossible, but by overriding Sitefinity’s own individual ‘decorators’ we can manage these events. In this post I’ll show you how to implement and override your own decorator for a content item as well as fire off the aforementioned events.

If you haven’t done so already you may want to read part I and part II of the series before continuing. 

The Lifecycle Decorator

To get started we need to implement a variation of our own ‘decorator’ by overriding the LifecycleDecorator class found wrapped in the Sitefinity DLL.

What it does

Sitefinity manages the various stages of a content item’s ‘life’ through this class, whether the item is being created as a draft, being published, being unpublished, or being deleted. Through this cycle we can manipulate the data at various stages and trigger events as we please. There are many different methods and events we can override, so I suggest developers delve into the LifecycleDecorator class using JustDecompile, a very cool Telerik tool used to decompile DLLs and view the raw code of classes. It’s free (free!) and can be downloaded here. Any decompile tool will do, but this one goes hand-in-hand nicely with other Telerik products, including Sitefinity and DevCraft.

How to Implement

You can start by creating a class from scratch and having it inherit from LifecycleDecorator. For this example we’re going to override Events in particular, so let’s name it EventsDecorator. As you can see from the decompiled code, this requires two parameterized constructors in order to satisfy the requirements of the interface. This is tricky (and somewhat unusual) but is necessary to carry out the two major functions of the decorator: delegating the various stages of the content item and handling various actions.
The only variable that needs to change here is the name of your class. The following structure works fine for each Sitefinity content type:

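(A sketch; the constructor parameter list below is an assumption copied by eye from the decompiled LifecycleDecorator for the version I used, so verify it against your own DLL with JustDecompile.)

---code---
using System;
using Telerik.Sitefinity.GenericContent.Model;
using Telerik.Sitefinity.Lifecycle;

public class EventsDecorator : LifecycleDecorator
{
    // Forwards everything to the base class. The decompiled class exposes a
    // second constructor as well; mirror its parameter list the same way,
    // forwarding to base(...). Signatures can differ between Sitefinity
    // versions, so copy them out of JustDecompile rather than from here.
    public EventsDecorator(ILifecycleManager manager,
        Action<Content, Content> copyExtendedProperties, params Type[] itemTypes)
        : base(manager, copyExtendedProperties, itemTypes)
    {
    }
}
---/code---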

Now that we’ve added our Decorator class we need to ‘register’ and override it in the global.asax. This file should already exist in your project, but if not you can create it from scratch and it will automatically fire from the base level of your project.

The specific event we need to handle in order to ‘register’ our decorator is Bootstrapper_Initialized. Using the object factory, we register and override the functionality of the baked-in decorator by injecting ours directly, as you can see below:

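(A sketch; the registration key, the manager type’s full name, is an assumption based on how the default decorators are registered, so confirm it against the decompiled bootstrap code for your version.)

---code---
using System;
using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Data;
using Telerik.Sitefinity.Lifecycle;
using Telerik.Sitefinity.Modules.Events;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Bootstrapper.Initialized += Bootstrapper_Initialized;
    }

    private void Bootstrapper_Initialized(object sender, ExecutedEventArgs e)
    {
        if (e.CommandName != "Bootstrapped")
            return;

        // Override the baked-in decorator for the Events manager with our own.
        ObjectFactory.Container.RegisterType<ILifecycleDecorator, EventsDecorator>(
            typeof(EventsManager).FullName);

        // The same pattern covers other content types, e.g. a NewsDecorator
        // (built the same way as EventsDecorator) for news items:
        // ObjectFactory.Container.RegisterType<ILifecycleDecorator, NewsDecorator>(
        //     typeof(NewsManager).FullName);
    }
}
---/code---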

Inside this method we also set the type of ContentManager we’ll be utilizing; in this case, the Events manager. I’ve also included a News Item decorator as an example.

If we run the project our custom decorator will now handle all Event Content lifecycling and event handling. Since we haven’t changed anything, it will run and handle exactly like the original.

Methods to Note

While there are many exciting methods and events to override on a lifecycle decorator, we’re going to focus on the most important one, ExecuteOnPublish. This handles the entire process of the content item on publish and grants the most access to the item’s properties and current status. You’ll notice it carries a ‘MasterItem’ state and a ‘LiveItem’ state of the content item. We’ll go into more detail on the difference between the two, but generally the ‘MasterItem’ is the current state of the content item and the one we’ll be working with the most. This method can be implemented as follows:

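(A sketch; the parameter types follow my reading of the decompiled class and may differ in your Sitefinity version.)

---code---
// Inside EventsDecorator.
// usings: Telerik.Sitefinity.GenericContent.Model, Telerik.Sitefinity.Events.Model
protected override void ExecuteOnPublish(Content masterItem, Content liveItem)
{
    var eventItem = masterItem as Event;

    if (eventItem != null)
    {
        // Custom publish-time work goes here: sending emails, creating
        // pages, calling remote services (each is covered below).
    }

    // Let Sitefinity finish the normal publish pipeline.
    base.ExecuteOnPublish(masterItem, liveItem);
}
---/code---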

Managing Events

If we debug this method we’ll see that it fires when a content item publish posts back, whether the item is brand new or being edited. From here we can create various methods to handle any tasks we need to fire automatically.

Sending an Email

One of the most important tools for businesses is undoubtedly communication. When a new event is created we want to make sure people know about it! Whether it’s triggering a dynamic newsletter or even sending an internal message to an administrator, we can easily do this in our decorator.

To show this at its simplest functionality, let’s trigger a one-off email event to an administrator to notify them that an event has been added or edited.

Let’s start by adding a simple email method that takes the current Event object being published along with the LiveItem status. The LiveItem basically represents the content item as it was before the publish. Passing its IsPublished property tells us whether this item was live before, that is, whether it’s a newly published event or a live one that’s being updated. With that information we can send two different emails: one for an updated event and one for a fresh one.

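(A sketch; the post used Sitefinity’s mail helper, but plain System.Net.Mail keeps the sample self-contained, and the addresses are placeholders.)

---code---
// Inside EventsDecorator. 'wasLive' carries the LiveItem's IsPublished flag.
// usings: System.Net.Mail, Telerik.Sitefinity.Events.Model
private void SendEventEmail(Event eventItem, bool wasLive)
{
    string subject = wasLive
        ? "Event updated: " + eventItem.Title
        : "New event published: " + eventItem.Title;

    using (var client = new SmtpClient())   // SMTP settings come from web.config
    using (var message = new MailMessage("noreply@example.com", "admin@example.com"))
    {
        message.Subject = subject;
        message.Body = "The event \"" + eventItem.Title + "\" has just been " +
                       (wasLive ? "updated." : "published.");
        client.Send(message);
    }
}
---/code---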

You can use whatever email system you’d like, but in this particular case we just used Sitefinity’s built-in FormHelper.

Keep in mind, all validation has already completed for the event item about to be published, so unless our custom code throws an error the item will absolutely be published, ensuring the email corresponds to a real publish.

Creating a page for our new content item

One very powerful thing the decorator gives us is the ability to create new pages outside of the standard content detail pages. You may find that you want your very own URLs and content for the newly created page, especially if you plan to automatically send emails directing users to the front end of the site to view your event, complete with all the information, images, and specifics of your content.

I won’t go into great detail on using the Sitefinity API to create pages; rather, I’ll explain how it works from the decorator. More documentation about the page API can be found here.

So first we’ll create a method to fire inside of the Publish method:

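(A sketch; the helper method names are mine.)

---code---
// Inside EventsDecorator: called from ExecuteOnPublish.
// usings: Telerik.Sitefinity.Events.Model
private void CreateEventPage(Event eventItem)
{
    // Derive a URL-friendly name from the event title, e.g.
    // "Summer Kickoff 2016" -> "summer-kickoff-2016".
    string urlName = System.Text.RegularExpressions.Regex
        .Replace(eventItem.Title.ToString().ToLowerInvariant(), "[^a-z0-9]+", "-")
        .Trim('-');

    CreatePageForEvent(eventItem, urlName);   // defined in the next snippet
}
---/code---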

We then utilize the Fluent API to pull in a page template and create the page based on our event details.

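(A sketch of the fluent chain as I recall it from the Pages API documentation linked above; treat the method names as approximations and verify them against your Sitefinity version.)

---code---
// Inside EventsDecorator.
// usings: Telerik.Sitefinity, Telerik.Sitefinity.Events.Model,
//         plus the fluent Pages namespace for your version
private void CreatePageForEvent(Event eventItem, string urlName)
{
    App.WorkWith()
        .Page()
        .CreateNewStandardPage(PageLocation.Frontend)
        .Do(page =>
        {
            // Most of the page's information derives from the event itself.
            page.Title = eventItem.Title;
            page.Name = urlName;
            page.UrlName = urlName;
            page.Description = eventItem.Description;
        })
        .SaveChanges();

    // Drop the publishing admin straight into the WYSIWYG for the new page.
    System.Web.HttpContext.Current.Response.Redirect("/" + urlName + "/action/edit");
}
---/code---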

Notice most of the information derives from the event item itself. To top it off, we redirect the admin user who published the event to the newly created page for further editing in the WYSIWYG (denoted by the “/action/edit”). This gives the user a smooth transition instead of making them navigate to the new page manually.

Keep in mind there’s a lot more you can do in page creation, like placing layouts and content widgets. You could do this directly after creating the page by utilizing placeholder IDs from a previously created page template:

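(Again a sketch; the PageManager calls follow the documented pattern for adding widgets programmatically, but verify the names for your version.)

---code---
// Inside EventsDecorator: add a content block to the freshly created page,
// using a placeholder id defined by the page template.
// usings: Telerik.Sitefinity.Modules.Pages, Telerik.Sitefinity.Pages.Model
private void AddEventWidgets(PageNode pageNode, string detailsHtml)
{
    var manager = PageManager.GetManager();
    var draft = manager.EditPage(pageNode.GetPageData().Id);

    var contentBlock = new Telerik.Sitefinity.Modules.GenericContent.Web.UI.ContentBlock
    {
        Html = detailsHtml
    };

    // "Body" is the placeholder id from the template.
    var draftControl = manager.CreateControl<PageDraftControl>(contentBlock, "Body");
    draft.Controls.Add(draftControl);

    manager.PublishPageDraft(draft);
    manager.SaveChanges();
}
---/code---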

The possibilities for page creation are truly endless for streamlining administration work.

Working with Remote Services

In some cases it’s important to integrate Sitefinity and its content with outside services like social media or a CRM. In this example I’ll show how we can call a third-party CRM, Intellipad, to record the event in its system and store the returned id on the event itself.

Let’s start by making another method to handle the call to Intellipad:

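(A sketch; IntellipadClient and its API are hypothetical stand-ins for the CRM’s real client.)

---code---
// Inside EventsDecorator. Wrapped in try/catch so a CRM outage or remote
// error can never disrupt the publishing process.
// usings: System, Telerik.Sitefinity.Abstractions,
//         Telerik.Sitefinity.Events.Model, Telerik.Sitefinity.Model
private void PushEventToIntellipad(Event eventItem)
{
    try
    {
        // Hypothetical client for the Intellipad API.
        int intellipadId = IntellipadClient.UpsertEvent(MapToIntellipadDto(eventItem));

        // "IntellipadEventId" is the custom field we created ahead of time.
        // An id of 0 or less means something failed with the call.
        if (intellipadId > 0)
        {
            eventItem.SetValue("IntellipadEventId", intellipadId);
        }
    }
    catch (Exception ex)
    {
        Log.Write(ex);   // log and move on; base.ExecuteOnPublish still runs
    }
}
---/code---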

In this case we expect to get an id back from Intellipad to signify the new record in its system. Ahead of time we created a custom field just for this called “IntellipadEventId”. If the id isn’t higher than ‘0’ we know something failed with the call.

As you can see we can create an event modeled after our own to record in Intellipad for future transactions:

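(A sketch; the DTO shape is hypothetical.)

---code---
using System;
using System.Linq;
using Telerik.Sitefinity.Events.Model;
using Telerik.Sitefinity.Security;

// Hypothetical shape for Intellipad's event record.
public class IntellipadEventDto
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
    public DateTime? End { get; set; }
    public string CommittedBy { get; set; }
}

// Inside EventsDecorator.
private IntellipadEventDto MapToIntellipadDto(Event eventItem)
{
    // Demo shortcut: grab a user matching "Christopher" to act as the user
    // committing the transaction -- no real coding value here.
    var user = UserManager.GetManager().GetUsers()
        .FirstOrDefault(u => u.UserName.ToLower().Contains("christopher"));

    return new IntellipadEventDto
    {
        Title = eventItem.Title,
        Start = eventItem.EventStart,
        End = eventItem.EventEnd,
        CommittedBy = user != null ? user.Email : null
    };
}
---/code---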

*For demo purposes we grab a random user with the last name “Christopher” to act as the user committing the transaction; this has no real coding value in this demo.

After the DTO is mapped we send it off to our Intellipad event API to be upserted. An id is returned to us and set on the event itself before base.ExecuteOnPublish is called, ultimately saving the Intellipad id to the event. I recommend wrapping events like these in try-catches, as third-party calls may fail due to service outages or remote errors; we wouldn’t want to disrupt the publishing process.

Wrapping up the Lifecycle

Once your various method events have fired in your overridden ExecuteOnPublish method, the publish will complete as normal. What I showed was just a taste of the complexity you can bring into the decorator; you can even fire off other decorators to match data across multiple content types. Be sure to thoroughly test your code before deploying, however, as unhandled errors could cause major issues if you or your client can’t properly publish content items. That said, experimentation is highly recommended!

The Future

I hope you enjoyed and learned some techniques from this series focusing on the various ways you can customize the Sitefinity Content Admin.

When I started this series last year the current Sitefinity version was 7.5. With 9.0 on the horizon I thought I’d pass on some new features that have come out since that first post and those coming out in the near future to help customize your admin experience. Here are some links and descriptions that provide a little food-for-thought:

An engaging webinar that recently aired discussing a plethora of new features coming to Sitefinity 9, including Audience administration and a fresh admin UI:

An in-depth look at some of the new MVC widgets released in the last year, complete with new admin editors:

Another big change coming to Sitefinity is Continuous Delivery. This blog post from Sitefinity talks about their new approach to site marketing and performance in the form of Digital Business Agility:

Much of this blog series focused on direct customization of content fields in the admin. With newer focus on MVC from Sitefinity, it’s even easier to implement and customize both fields and their widget designers:

I have been working towards my MCSD: Web Applications Certification and am gearing up for the last exam in the series, Developing Microsoft Azure and Web Services (70-487). My experience with WCF has been rather limited so it is something I have been dedicating much of my study time to. At the end of this post I will share several resources I have used for studying but first I wanted to walk through one of the topics that I found very interesting: using the Azure Service Bus to relay a call from the client program to the WCF service.

Getting Setup: Assumptions and requirements

  • This tutorial was completed using Visual Studio 2015 Enterprise edition but can be done using the Community edition
  • Basic knowledge of creating a WCF service and client is assumed
  • See the resources section for links on getting up to speed on WCF
  • An Azure account is required

For full disclosure, some of the setup will require services that are not free (more details when we get to that section). You can set up an Azure account for free here, and you will receive $200 in credit, which is more than enough for this and many other things. If you already have an Azure account, you can sign up for an MSDN subscription, which gives you access to free Azure credits, or join Microsoft’s Dev Essentials program, which gives you $25 in Azure credits each month for a year, among other benefits.

Overview

The Azure Service Bus Relay allows a client application to talk to a WCF service living behind a firewall. While there are ways to set this up without a Service Bus Relay, the relay makes it much easier to set up and manage.

Using the Service Bus Relay during development lets you run the service locally while someone outside the firewall – sitting in a Starbucks down the street, across the country, or across the ocean – runs the client application. When the client calls the service you can step through the code and see the state of the data sent to the service.

My test was to send the application to a friend in the UK and while we were Skyping I could read back the input he submitted to the application.

Demo Project

For this demo I created a very simple WCF service and client which can be found on GitHub at https://github.com/susanwilliams/MNMBlog-WCFAzureServiceBus. The starting project can be found in the StartingSample branch and the completed sample will be in the master branch.

It is a very simple console app that takes a name as input, sends it to the service, and receives it back prefixed with “Hello, ”. The initial setup uses a simple console app for self-hosting and uses NetTcpBinding.
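The moving parts are small. As a sketch, the contract, implementation, and self-hosting shell look roughly like this (inferred from the project and type names mentioned in this post; the repository has the real code):

---code---
using System;
using System.ServiceModel;

namespace WCFServiceBus.Contracts
{
    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        string SayHello(string name);
    }
}

namespace WCFServiceBus.Services
{
    public class HelloManager : Contracts.IHelloService
    {
        public string SayHello(string name)
        {
            return "Hello, " + name;
        }
    }
}

namespace WCFServiceBus.Host
{
    class Program
    {
        static void Main()
        {
            // Endpoints and bindings come from App.config, so this same code
            // works before and after the switch to the Service Bus relay.
            using (var host = new ServiceHost(typeof(Services.HelloManager)))
            {
                host.Open();
                Console.WriteLine("Service running. Press Enter to exit.");
                Console.ReadLine();
            }
        }
    }
}
---/code---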

Azure Service Bus Setup – Azure Portal

The client and host rely on an access key and an endpoint address based on the Service Bus Namespace you create in Azure.

After getting the sample code (or starting with your own), the first stop is the Azure portal, where we will set up a Service Bus Namespace.

  1. In the new portal select ‘Browse’ and then find ‘Service Bus Namespaces’ in the list.
  2. This link will take you to the Classic Azure portal. Or you can start from the Classic portal and find Service Bus in the left-hand navigation.

    I already have one Service Bus Namespace set up but I’ll create a new one for this demo. If you do not have any namespaces created there will be a link in the main window to create a new one.

  3. Select the link on the main window or the “Create” button at the bottom of the page (selecting the large “New” button in the bottom left will start the quick create process).
  4. Enter a unique Namespace Name and select Messaging for the Type
    • The Messaging Tier must be set to Standard. This is where there could be cost associated with the tutorial
    • The pricing page for the Service Bus under Relays says that they are only available under the Standard tier and will cost $0.10 for every 100 relay hours and $0.01 for every 10,000 messages
    • How much testing you do with this tutorial will determine how much it will cost so it can be minimal but I wanted to be open about the potential costs
  5. Select the Region that will work best for your application and finally, the subscription
    • I am using the subscription associated with the Dev Essentials benefit
  6. After hitting OK you will see it in your list; when the status changes to Active, select it to view the details
  7. Across the top of the page are a handful of tabs related to the Service Bus. For now we want the “Configure” tab but we will take a look at the “Relays” tab later
  8. On the Configure tab there is a section for Shared Access Policies and Shared Access Key Generator. There is a default policy created for you but we will create a new one for this application
  9. Enter a new policy name and under Permissions select Manage
  10. Then click the save button at the bottom of the page.
  11. After saving, the new policy will show up in the Policy Name dropdown under the Shared Access Key Generator. Either copy the Primary Access Key to an empty notepad file or keep this page open in the background for later reference.

This was all we needed to do in Azure to set up the relay. Now on to the code.

Azure Service Bus Setup – Code

There are only a couple things to do to update the solution to use the Service Bus Relay.

First, install the WindowsAzure.ServiceBus NuGet package in both the WCFServiceBus.Client and WCFServiceBus.Host projects.

Next we need to update the endpoints to use the Service Bus. The NuGet package added some relay binding extensions, including netTcpRelayBinding. There are others listed but we are just going to stick with TCP for this post.

If you refer back to the screenshot of creating the Service Bus Namespace, you will see that the name you entered is followed by the suffix “.servicebus.windows.net”. The namespace name plus that suffix forms the host portion of the address that will be used for the endpoint.

In the host and client App.config files the new endpoint configuration looks like this:

---code---
<endpoint address="sb://wcfservicebusmnm.servicebus.windows.net/HelloService" binding="netTcpRelayBinding" contract="WCFServiceBus.Contracts.IHelloService" />
---/code---
The scheme for the endpoint address is changed to "sb" to specify that we are using the Service Bus. It is followed by the URI described above: the namespace you created followed by .servicebus.windows.net. You also need to specify a path for the service – it can be anything but needs to be the same on both client and host.
The only other section to add to each configuration file is a new behavior where we will store the access key.
---code---
<behaviors>
  <endpointBehaviors>
    <behavior>
      <transportClientEndpointBehavior>
        <tokenProvider>
          <sharedAccessSignature keyName="Your Key Name" key="Enter Your Key Here" />
        </tokenProvider>
      </transportClientEndpointBehavior>
    </behavior>
  </endpointBehaviors>
</behaviors>
---/code---

Once this code is added to the host and client App.config, that’s it! Of course there is more that you can do to build out a full service or even configure it to use credentials but this is the basic setup to use the Azure Service Bus as a relay. You can send the compiled client application to anyone in the world and if they run it while you have the service running on your machine, you will be able to step through the service when it is called.

One way to see the relay working, other than the fact that it is the only endpoint set up in the project, is to go back to the Azure portal and, in the details for the Service Bus Namespace you created, select the Relays tab.

Start the host and put a breakpoint in the implementation of the SayHello operation in WCFServiceBus.Services.HelloManager. Set the Host project as the startup project, begin debugging, and run the Client application from the .exe in the Debug folder, or set the solution to start with multiple projects.

When your breakpoint gets hit, go back to the Relays tab for the Namespace you created in the Azure portal. You will see that it recognizes a relay being processed.

Conclusion

This is just one of the new things I have learned on my path towards achieving my MCSD certification. It has been great learning new things and how to take services and solutions that have been around for a while and integrate them into Azure.

Resources

Azure

WCF Study Resources

  • Pluralsight Course – WCF End-to-End by Miguel Castro
  • Programming WCF Services by Juval Löwy (Book)

Azure Service Bus

Sample Code

  • https://github.com/susanwilliams/MNMBlog-WCFAzureServiceBus