General-use JavaScript libraries and frameworks have been revolutionizing web development for a decade now. Since the release of jQuery in 2006, JavaScript libraries have been steadily replacing traditional hand-written JavaScript; today, more than 65% of the highest-traffic sites on the web use jQuery. Since then, a number of other frameworks and libraries have been introduced. In this post I am going to concentrate on two such libraries, AngularJS and KnockoutJS, and provide some general comparisons from the perspective of a .NET developer who is fairly new to each.

The Basics

Before getting into my personal thoughts on these frameworks, let’s take a look at a little background information. I am not going to walk through full code examples in this post, since those can be found in numerous places around the web, including on each project’s respective website (knockoutjs.com and angularjs.org), though I will sketch a couple of tiny snippets where they help illustrate a point.

AngularJS

AngularJS was initially released in 2009 and has grown in popularity since then. According to the AngularJS Wikipedia article,

AngularJS is an open-source web application framework mainly maintained by Google and by a community of individual developers and corporations to address many of the challenges encountered in developing single-page applications. It aims to simplify both the development and the testing of such applications by providing a framework for client-side model–view–controller (MVC) and model–view–viewmodel (MVVM) architectures, along with components commonly used in rich Internet applications.

Which is technical speak for “it gives developers a convenient and reusable way to maintain a data model on the client side for single-page applications.”

Angular’s stated design goals are to decouple DOM manipulation from application logic, to decouple the client side of an application from the server side, and to provide structure for the entire application-building process.

KnockoutJS

KnockoutJS was initially released in 2010, the year after AngularJS, and its Wikipedia description reads:

KnockoutJS is a standalone JavaScript implementation of the Model-View-ViewModel pattern with templates. The underlying principles are a clear separation between domain data, view components and data to be displayed and the presence of a clearly defined layer of specialized code to manage the relationships between the view components. These features streamline and simplify the specification of complex relationships between view components, which in turn make the display more responsive and the user experience richer. Knockout was developed and is maintained as an open source project by Steve Sanderson, a Microsoft employee. As the author said, "it continues exactly as-is, and will evolve in whatever direction I and its user community wishes to take it", and stressed, "this isn’t a Microsoft product".

On its surface, this description sounds very similar to the Angular description: an open-source JavaScript framework that allows simpler client-side manipulation of a data model and supports separation of the model from the view. The absence of any mention of “single-page applications” will become important when we compare the two frameworks.

Knockout’s stated key concepts are declarative binding of the model to the view, automatic UI updates when the model changes, dependency tracking to automatically update computed values, and HTML templating.

It can be seen from both the description and the key concepts that Knockout puts much less emphasis on decoupling parts of the application than Angular does.

Comparison

The first difference that was immediately obvious to me is that Angular is not designed to play very nicely with non-Angular parts of a site. You can make it work, but it is definitely a challenge. With Knockout, it is quite easy to have even a very small Knockout-driven piece of an otherwise non-Knockout page, and I have done this numerous times in existing .NET applications.

The other thing that struck me fairly quickly is that Angular is a complete framework for single-page apps and, as such, is considerably more involved and feels more intimidating to a beginner. With Knockout, there is very little plumbing or understanding needed to get a simple example running. When I was first introduced to Knockout, it seemed quite intuitive, but Angular has taken me longer to warm up to.

One thing with Knockout that can initially be confusing to a beginner is the conversion that is needed between a regular JSON object and a Knockout observable. For a model to be operated on by Knockout, its properties need to be observables. Unfortunately, since your server-side code will provide and accept only plain JSON objects, conversion is necessary after data comes from the server and before it is sent back. Angular, on the other hand, works directly with JSON objects, eliminating the need for conversion.
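
As a minimal sketch of that round trip (the property names are made up; ko.observable, ko.applyBindings, and ko.toJS are standard Knockout functions):

// data as it arrives from the server
var serverData = { firstName: "Ada", lastName: "Lovelace" };

// wrap each property in an observable before binding
var viewModel = {
    firstName: ko.observable(serverData.firstName),
    lastName: ko.observable(serverData.lastName)
};
ko.applyBindings(viewModel);

// unwrap back to a plain object before posting to the server
var payload = ko.toJS(viewModel);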

Another thing that is less streamlined in Knockout is the ability to reuse components. When combining Knockout with ASP.NET MVC, it is not hard to treat pieces as reusable components by putting the markup in a partial view that can be placed in multiple locations, but the script that contains all of the model interactions must be included on any page where the component is used. In Angular, the ability to add custom directives makes this much more obvious.
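
For comparison, here is a minimal sketch of a reusable Angular 1.x custom directive (the module, directive, and property names are hypothetical):

// registers a reusable <employee-card employee="..."> element
angular.module('app', []).directive('employeeCard', function () {
    return {
        restrict: 'E',
        scope: { employee: '=' },
        template: '<div>{{employee.name}} - {{employee.title}}</div>'
    };
});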

The last notable difference I observed was in the habits that seem to form with the scripts for each framework. In my experience, Knockout tends to encourage long, rambling scripts and does not enforce or encourage separation of concerns among the different parts of your model. Angular is quite the opposite: separation of concerns seems to come more naturally.

The lists below give a summary of the pros and cons for each framework that were discussed:

Angular Pros

  • Complete solution for single page apps
  • Works directly with JSON
  • Built in way to create reusable components
  • Separation of concerns is easier to implement and enforce
  • Great informational links when a console error does show up

Angular Cons

  • Difficult to integrate into non-Angular parts of a site
  • More intimidating to beginners
  • Need to have a fairly good understanding of the overall system to get started
  • Often there is no console error when something fails

Knockout Pros

  • Easily integrated into existing sites and pages
  • Fairly simple concept
  • Can learn as you go

Knockout Cons

  • Developer is responsible for converting between JSON and observables
  • Less obvious how to reuse created components
  • JavaScript files tend to be rambling and have minimal separation of concerns

Sources

https://en.wikipedia.org/wiki/JQuery

https://en.wikipedia.org/wiki/Knockout_(web_framework)

http://knockoutjs.com/

https://en.wikipedia.org/wiki/AngularJS

https://angularjs.org/

Web accessibility is an important criterion for any website or web app, and one that should be considered by everyone involved: designers, developers, site owners, site contributors, site testers, and end users. The United Nations estimates that one in ten of the world’s people lives with a disability, and that number is expected to grow as the world’s population continues to grow older and live longer.

Accessibility in the context of the web includes visual impairments (such as complete blindness, low vision, or color blindness), hearing impairments, physical impairments, and cognitive disabilities (such as dyslexia and ADD/ADHD).

Why should a designer, developer, or site owner care about the accessibility of their site? For one, not providing an accessible website could hurt your site’s business: more visitors and a greater market can only help you improve sales, complete conversions, or meet your goals. For some organizations, accessibility is also required by law.

Building a site or app with accessibility in mind from the start can also reduce development and maintenance costs. It costs less to build an accessible website from the start than to retrofit an existing one into compliance.

Lastly, developing a website with good accessibility standards in mind means your site will most likely work on most browsers and most devices. This matters because there are many smartphone, tablet, and desktop devices on the market, with more appearing all the time.

Let’s jump into some tactics that a designer and developer can take to make their websites and web apps more accessible.

How to Properly Hide Content

To hide elements visually without hiding them from screen readers, use the following CSS snippet:

.screen-reader-only {
    position: absolute;
    width: 1px;
    height: 1px;
    margin: -1px;
    padding: 0;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    border: 0;
}

To hide content both visually and from screen readers, use the following CSS:

.hidden {
    display: none;
    visibility: hidden;
}

Provide Skip to Content & Skip To Navigation Links

You could easily make somebody’s day by simply adding two link elements as the first elements inside your body tag. This is a good accessibility practice to get into the habit of doing when starting any new web project, and it is easy to add to an existing site or app as well.

<body>
  <a href="#main" class="screen-reader-only">Skip to main content</a>
  <a href="#navigation" class="screen-reader-only">Skip to main navigation</a>
  …
  <nav id="navigation" role="navigation">
    …
  </nav>
  <main id="main" role="main">
    …
  </main>
  …
</body>

How to Write Great Alt Text for Images

Most people know alt text is important for web accessibility. The key to great alt text is to describe the function of the image before describing what it depicts. For example, a logo image in the header of your website that always links to your homepage might have alt text that reads as follows:

<img alt="Back to the homepage" src="…" />

If your image doesn’t have a specific function, then add a detailed description of the image. When writing descriptive alt text, it’s also best not to start with “A photo of…” or “A picture of…”, as many screen readers announce this automatically, which would result in redundancy for the user.
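
For example, a purely descriptive image (hypothetical content) might simply read:

<img alt="Golden retriever catching a frisbee in midair" src="…" />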

Many content management systems will add alt text automatically based on the name of the image file you upload (such as Sitefinity version 7 and newer). Although this will technically help validate your website for accessibility, take the time to edit the metadata properties of your images to add good alternate text for your users (especially since file names can be abbreviated or cryptic). Most CMSs provide an easy way to change alt text. The example below is the edit screen in Sitefinity for adding alt text.

Sitefinity admin alt text management

Accessible Tables

The best way to ensure accessible tables is to add several elements and attributes:

  • Add the scope attribute to define rows and columns. This attribute lets screen readers speak tables in the correct, human-readable order
  • Add a caption that provides a brief description of the table’s contents

An example is provided below.

<table>
  <caption>Mercury New Media Employees</caption>
  <tr>
    <th scope="col">Name</th>
    <th scope="col">Title</th>
    <th scope="col">Twitter Handle</th>
  </tr>
  <tr>
    <th scope="row">Donald Bickel</th>
    <td>Partner</td>
    <td>@donaldbickel</td>
  </tr>
  <tr>
    <th scope="row">Zachary Winnie</th>
    <td>Senior Interface Designer</td>
    <td>@zachwinnie</td>
  </tr>
</table>

ARIA Landmark Role Attributes

ARIA (Accessible Rich Internet Applications) landmark roles are used by screen readers to more quickly navigate your website or web app. They are simply attributes added to HTML elements that help define what those elements are. Important attributes to use on your website include:

  • role="banner" describes the header of your site, which normally includes a logo, site search and navigation
  • role="complementary" describes the supporting section of a page or document, such as a blog sidebar that contains related articles
  • role="form" describes a form
  • role="main" describes the main content of a page or document
  • role="navigation" describes links used for navigating to pages and documents
  • role="search" describes a specific form that provides search functionality

An example of using an ARIA landmark for a header might look like the following:

<header id="header" role="banner">
…
</header>

Keyboard Accessibility

Access Keys

The accesskey attribute can be added to interactive HTML elements, such as links, to provide quick navigation actions for keyboard users.

An example access key to quickly go to the homepage would look like the following:

<a href="/home" accesskey="H">Home</a>

A user would then press Alt + [the accesskey] on most Windows browsers or Ctrl + Option + [the accesskey] on most Mac browsers to quickly browse to that page.

Although there is no single standard for which keys should be assigned, there is a fairly common convention for the number keys (a combined example follows the list):

  • accesskey="1" or accesskey="H" for homepage link
  • accesskey="2" for skip to content
  • accesskey="3" for sitemap
  • accesskey="4" for search field focus
  • accesskey="5" for advanced search, if available
  • accesskey="6" for site nav tree
  • accesskey="9" for contact information
  • accesskey="0" for access key details
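
Putting a few of these conventions together might look like the following (the URLs and targets are placeholders):

<a href="/" accesskey="1">Home</a>
<a href="#main" accesskey="2" class="screen-reader-only">Skip to content</a>
<a href="/sitemap" accesskey="3">Sitemap</a>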

Tab Indexing

The tabindex attribute allows designers and developers to customize the order of tabbing through web content and web app features. Tabbing on a site or app lets users skip to key inputs or key elements where an action might take place. An example of using tabindex on a form is as follows:

<form>
  <label for="FirstName">First Name</label>
  <input type="text" id="FirstName" tabindex="1">
  <label for="LastName">Last Name</label>
  <input type="text" id="LastName" tabindex="2">
  <input type="submit" tabindex="3" value="Submit">
</form>

It is worth noting that the values “0” and “-1” are reserved for default order and removing elements from the tab order, respectively.
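
For example (hypothetical markup):

<!-- takes part in the natural tab order; useful for custom widgets -->
<div tabindex="0">Custom widget</div>
<!-- skipped when tabbing, but can still be focused from script -->
<button tabindex="-1">Dismiss</button>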

Conclusion

Using the tips and tricks above, it’s easy for any designer or developer to improve their accessibility skills and the accessibility of their websites and web apps.

Taking the time to incorporate the strategies above will help you craft sites that are accessible to wider audiences, provide better usability, save costs, make more money, and conform to the growing body of worldwide accessibility standards and laws.

For further reading and resources, including free website accessibility validators, free browser extensions, and free screen reader software, check out the links below.

Resources & More

Continued Learning

Accessibility Checklist

Validators & Evaluators

Color Blindness Checkers

Screen Reader Software

Browser Extensions

In this post I am going to be showing a simple example of using the Visual Studio Online REST API to retrieve information about a build using PowerShell. Then we’ll take a look at how we can use the information we retrieved in a vNext build step. I am assuming at least some basic understanding of builds, Visual Studio Online and PowerShell.

vNext Builds

The new builds in VSO allow for easy customization of the build pipeline. You can add and rearrange steps to create a process that works best for your application. For this post I will be adding a few steps for demonstration but will not be going into much detail on all of them.

First, I created a simple console application and added it to a Git repository in my VSO account (you can get one for free at http://www.visualstudio.com). The only code I added to the Main method is a simple try/catch that will come into play later.

<code>
try
{
    // Note: 'ex' is intentionally unused; the resulting compiler warning comes into play later.
    Console.Write("Hello, world!");
}
catch (Exception ex)
{
    Console.Write("There was an error!");
}
</code>

In VSO go to the Build tab and select the green plus symbol to add a new build definition. Start with an Empty definition.

VSO Build Tab

After selecting ‘OK’ you will be taken to the Completed builds screen, which is, of course, empty because we have not queued a build yet. The name of the new definition defaults to “New Empty definition”. Select the Edit link next to the name. On this screen you will see many options for customizing your build.

Select the Repository tab and choose the repository and branch you will use for the build. Mine is a Git repository named BuildPipeline, and I want to use the master branch.

Repository Build Pipeline

Hit the Save button and you will be prompted to change the name and enter a comment. You will be prompted with this on every save, and by selecting the History navigation item you can view all of the changes made to the build definition.

Back in the Build tab, let’s add our first build step (on the modal that comes up it says “Add Tasks” – task and step are used interchangeably). Select Add next to “Visual Studio Build”. It will add the step, but you will need to close the modal to return to the Build screen.

Build Tab Add Next

Set the name for this step to “Compile” and browse for the solution you want built in this step.

Compile

Save your updates and then queue a build.

One last thing to note: select the Variables navigation item and copy the value for system.definitionId somewhere. We will need it later.

Now we’ll step out of VSO for a while and take a look at the REST API.

Visual Studio Online REST API & PowerShell

Last year Microsoft released a new API for accessing Visual Studio Online. Using the REST API you can access pretty much anything in VSO – task boards, Git commits, projects and teams. You can review all of the available services here: https://www.visualstudio.com/en-us/integrate/api/overview.

This post will be focusing on the service for Builds using version 2.0 of the API.

The first URL we will look at returns the list of all builds for the build definition we created previously (https://www.visualstudio.com/integrate/api/build/builds#Getalistofbuilds). The parameters I will use are definitions and $top (the api-version parameter is required in all calls). So our URL will look like this:

<code>
https://{account}.visualstudio.com/defaultcollection/{project}/_apis/build/builds?api-version={version}&definitions={definitionId}&$top={int}
</code>

In our PowerShell script we can create a handful of variables to help us out.

<code>
$projectName = "" #Project name in VSO - can be found in the top left header when looking at builds
$account = "" #Your account name 
$username = "" #Alternate credentials username
$password = "" #Alternate credentials password
$definition = "" #definition Id(s) found in the Variables tab of a build definition
$apiVersion = "2.0"
$tfsUrl = 'https://' + $account + '.visualstudio.com/defaultcollection/' + $projectName   #Base url for all of the VSO API calls 
</code>

Accessing the VSO API through PowerShell requires Alternate Credentials to be set up. I found the following post helpful for the syntax of the headers needed for credentials; it also has some links to information on Alternate Credentials if you do not have them set up: http://stuartpreston.net/2014/05/accessing-visual-studio-online-rest-api-using-powershell-4-0-invoke-restmethod-and-alternate-credentials/.

<code>
$basicAuth = ("{0}:{1}" -f $username,$password)
$basicAuth = [System.Text.Encoding]::UTF8.GetBytes($basicAuth)
$basicAuth = [System.Convert]::ToBase64String($basicAuth)
$headers = @{Authorization=("Basic {0}" -f $basicAuth)}
</code>

Now we’ll put some of the variables together to create the full URL that gets the most recent build, and then call it. For the $top parameter I am specifying 1 to get only the most recently queued build.

<code>
[uri] $uri = $tfsUrl + "/_apis/build/builds?api-version=" + $apiVersion + "&definitions=" + $definition + "&`$top=1"
$allBuildDefs = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
</code>

To review the JSON that gets returned, send it out to a text file.

<code>
$allBuildDefs | ConvertTo-Json | Out-File -FilePath 'd:\topBuildDefinition.txt' -Force
</code>

There is some interesting information about the build in this data, but what we are looking for is the buildNumber, so we can start digging deeper into the build details.

The next call to the VSO API we will look at is the Build Details (https://www.visualstudio.com/integrate/api/build/builds#Getbuilddetails).

<code>
https://{account}.visualstudio.com/defaultcollection/{project}/_apis/build/builds/{buildid}/timeline?api-version={version}
</code>

Notice that the URL ends with the build id and /timeline. This gets us not only general details about the build but also each of the steps that were completed in the build definition.

We will save the buildNumber to a variable and use it in building out the next URL. The build number is found inside the value array.

<code>
$buildNumber = ($allbuildDefs.value).buildNumber
[uri] $uri = $tfsUrl + "/_apis/build/builds/" + $buildNumber + "/timeline?api-version=" + $apiVersion
$currentBuild = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get)
$currentBuild | ConvertTo-Json | Out-File -FilePath 'd:\buildDetails.txt' -Force
</code>

This gets us even more interesting information about the build. We can see the status and progress of each task/step, the number of warnings from each task, and even URLs to the log files that were generated.
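
As a rough illustration of the shape of that data (the values are made up and only a few fields are shown), each entry in the returned records array looks something like this:

<code>
{
  "records": [
    {
      "name": "Compile",
      "state": "completed",
      "result": "succeeded",
      "warningCount": 1,
      "errorCount": 0,
      "log": { "url": "https://{account}.visualstudio.com/..." }
    }
  ]
}
</code>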

Now we can take this information and start using it to customize our build process even more. For example, if there are certain steps that we want to make sure don’t exceed a set number of warnings, we can fail the build.

Let’s try it out.

PowerShell and vNext Builds

We will need to make a few changes to our script before we can use it in the build definition.
First, the script should be reusable, so we will add a couple of parameters: one for the specific step to look at and one for the maximum number of warnings allowed.

<code>
Param(
[string]$maxWarnings,
[string]$taskName)
</code>

Next, instead of having to call the API for the most recent build (which would be the one executing the script), we can use one of the environment variables VSO provides (https://msdn.microsoft.com/en-us/library/hh850448.aspx); one of them holds the build number of the currently running build. Using these environment variables is very helpful because it means less code and one less call to the API.

<code>
$buildNumber = $env:BUILD_BUILDNUMBER
</code>

Now let’s get the number of warnings that were generated for the specified task in the current build. The JSON returned for the current build contains a records array that holds an entry for each task. We will select the object in the array whose name value equals the taskName parameter, and from that object we will read the warningCount value.

<code>
$warningCount = $currentBuild.records | Where { $_.name -eq $TaskName } | select warningCount
</code>

We also want to make sure to fail the build if the number of warnings in the step we are looking at is more than the maximum number of warnings that was passed in. A simple if statement with a descriptive error can take care of this. To fail the build we just exit the script with a non-zero status.

<code>
IF ($warningCount.warningCount -gt $maxWarnings)
{
    Write-Error("The number of warnings (" + $warningCount.warningCount + ") exceeds the number of allowed warnings (" + $maxWarnings + ")")
    exit 1
}
</code>

Below is the full script. I also added a couple of Write-Verbose statements to print the number of warnings and the maximum allowed to the console.

<code>
Param( 
[string]$maxWarnings, 
[string]$taskName )
$account = "YourAccountName"
$username = "YourUsername"
$password = "YourPassword"
$projectName ="YourProjectName"
$definition = "YourDefinitionId"
$apiVersion = "2.0"
$buildNumber = $env:BUILD_BUILDNUMBER
$tfsUrl = 'https://' + $account + '.visualstudio.com/defaultcollection/' + $projectName
$basicAuth = ("{0}:{1}" -f $username,$password)
$basicAuth = [System.Text.Encoding]::UTF8.GetBytes($basicAuth)
$basicAuth = [System.Convert]::ToBase64String($basicAuth)
$headers = @{Authorization=("Basic {0}" -f $basicAuth)}
[uri] $uri = $tfsUrl + "/_apis/build/builds/" + $buildNumber + "/timeline?api-version=" + $apiVersion
$currentBuild = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get)
$warningCount = $currentBuild.records | Where { $_.name -eq $TaskName } | select warningCount
Write-Verbose ("Warnings: " + $warningCount.warningCount) -Verbose
Write-Verbose ("Max warnings allowed: " + $maxWarnings) -Verbose
IF ($warningCount.warningCount -gt $maxWarnings)
{
    Write-Error("The number of warnings (" + $warningCount.warningCount + ") exceeds the number of allowed warnings (" + $maxWarnings + ")")
    exit 1
}
</code>

Save the PowerShell script and add it to your Visual Studio project. I added a new PowerShell folder to the file structure in File Explorer to hold the script. In my solution, I added a new Solution Folder and then added the script using Add > Existing Item. Make sure to sync the changes to the project’s Git repository.

PowerShell Folder

Back in VSO, let’s add a new step to the build definition we created earlier. This time go to the Utility group and add PowerShell. Make sure it comes after the Compile step that is already there, and give it a more meaningful name; I gave it the name Compile Warning Check. For the Script file name, selecting the browse (…) button will allow you to select a script from your repository. Navigate to the PowerShell script created earlier and select OK.

Utility Group

In the Arguments text box, enter the arguments to be passed to the script. These follow standard PowerShell argument formatting; I am using named parameters.

<code>
 -maxWarnings "1" -TaskName "Compile" 
</code>

Save the build definition and let’s see it in action. On the left side of the Build page right-click your build and select Queue Build.

The new builds have a very useful console display that you can watch the output on. It shows when a task is starting and when it has finished. On the left-hand side it has icons indicating which step is running and whether each one passed.

Build Success

Here you notice that the build succeeded and that the two lines we wrote from the PowerShell script have been written out as well. The number of warnings (also seen higher up in the console, written in yellow) does not exceed the max number of warnings we specified.

Head back to the build definition and edit the max number of warnings to be 0 (we have very high expectations for this project). Save and queue another build.

Build Failure

Getting this to run was the first time I was excited to successfully fail a build.

Only the beginning

In this post we took a look at three different tools and services, each of which is worth knowing in depth on its own. Instead of diving deep into one topic, we took a little bit of functionality from each and created something that is itself very customizable for different situations.