
Catching Up


Since this summer I’ve been quite busy both at work and on local projects, and I figured it was time to reflect and look towards the upcoming year.

2013: The Year in Review

Let’s talk about work stuff and home stuff.

@Work

At work, I started 2013 as primarily a C/C++ developer working on connecting our flagship accounting application to connected services. It was extremely interesting, if sometimes frustrating, to enhance a 15-year-old code base to talk to a new generation of more nimble web services. Throughout the summer we stamped out bugs and responded to early feedback from testers, as well as worked on merges out to all the major versions of the code base. At the same time we started prepping documentation and transition materials to ease the transfer of knowledge of a 20+ year-old code base to a new team.

Starting in August I began splitting my time between my desktop development tasks and my new group. I’m now working on ViewMyPaycheck, which is built on Backbone.js/HTML5/SCSS for the front-end and Java Web Services on the backend. It’s really my first time developing in Java, and so I’m learning new things every day.

Luckily, due to my extensive tours of different JavaScript frameworks in my Rails projects at home, I was able to come out of the gate strong in working on the Backbone code. My experience with Sass allowed me to not only pick up our styles quickly, but also re-architect them to ease their evolution and concurrent development by multiple engineers.

There was some tech debt wrapped up in the new code base, but my excellent team has thus far done a great job of prioritizing refactoring and build/infrastructure improvements against the race to be feature-complete for the 2013 tax season.

@Home

The 2012 holiday season was quite a disaster for my family’s annual gift exchange. Each adult had to submit a list of things they wanted to my mom, and then she drew the names for two different exchanges: one for our immediate family and one for our extended family. When Xmas morning arrived we found a ridiculous outcome: since nearly everyone had gone for the ‘easy’ thing on their target’s gift list, most people received two copies of the same item.

It spoke to an age-old problem of holiday gift-giving where, when grandparents, aunts, uncles, etc. all clamor for a list of things a kid wants, they have no way of ensuring that someone else didn’t already buy something on the list. So in the ensuing winter, when I just so happened to have a lot of time on my hands due to the birth of my son, I set out to create a web app to solve the problem.

I finished a pre-alpha version of giftr.us in the spring and my family used it for birthdays throughout the spring and summer. It’s in a pretty good spot right now, but I wasn’t able to finish the formal Gift Exchange functionality for the 2013 Holidays.

This year I also began a journey to help out the MadRailers Meetup, which culminated in my taking over the organization in the fall. I’ve been a member of the meetup since shortly after I moved back from California, and it’s enabled me to meet a ton of great folks in the Madison tech community. I started out volunteering for talks in the spring, and in the summer I started suggesting other topics or speakers. By the time of the Madison Ruby conference we’d laid out a great schedule of speakers through the end of the year, and I’m really pleased with where the group is headed and our new space: the newly-renovated Madison Public Library!

2014: The Year to Come

@Work

I’m incredibly excited to continue the modernization of our Backbone app. A short list of the changes and/or improvements I’m planning:

  • Move to using DevOps-provided Vagrant images on developer machines to more closely align development and deployment environments.
  • Move our static site code (HTML/CSS/JS/Images) to a CDN to improve load times.
  • Move from Ant to Grunt to build the site (transpile SCSS, run JS tests, etc.); a sketch of the kind of Gruntfile I have in mind follows this list.
  • Move from base Backbone.js to Marionette.js to get better support for composing layouts, regions, and views.
  • Router changes to better control the fetching of data.
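
As a concrete sketch of the Grunt item above (task names and paths are illustrative assumptions, not our actual build configuration), the Gruntfile might look something like this:

```js
// Gruntfile.js (sketch): transpile SCSS, run JS specs, then minify
module.exports = function (grunt) {
  grunt.initConfig({
    sass: {
      dist: { files: { 'build/css/app.css': 'src/scss/app.scss' } }
    },
    jasmine: {
      app: {
        src: 'src/js/**/*.js',
        options: { specs: 'spec/**/*Spec.js' }
      }
    },
    uglify: {
      dist: { files: { 'build/js/app.min.js': ['src/js/**/*.js'] } }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-jasmine');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('default', ['sass', 'jasmine', 'uglify']);
};
```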

These are all technical in nature; we’re obviously going to pursue these more architectural and platform-ish changes in parallel with improvements to the user workflow and enhancements based on user feedback. My ultimate goal is to get the app into a continuously deployable build process. We’re a ways off right now, but it’s definitely doable.

@Home

I’m going to continue to develop Giftr’s gift exchange functionality, and I may explore turning it into a single-page app via Ember.js or similar, or I may dip my toe into native mobile development. I’m incredibly interested in learning more about Xamarin for doing cross-platform mobile development.

I’m also going to continue to develop more programming for MadRailers. In 2014 we’d love to explore jointly-sponsored meetups with other local tech groups, as well as move towards more diversity in our speaker lineups and membership. We’ll be sponsoring more Newbie Nights as well as intermittent Hack Days.

And Sooooo…

It’s been a pretty good year! I was somehow able to maintain some level of progress on my own development projects even with the birth of the first kid, and I’m getting out of my comfort zone in my professional development, which is refreshing. Here’s to continuing to learn and improve in 2014!

Automated ClickOnce Build and Deploy Using Powershell and MSBuild


As I was nearing completion on the WPF app I described in an earlier post, I became focused on how to easily build and deploy it to our developer and QA team. I had decided to use ClickOnce to facilitate easy updates, but I wanted to make it dead-simple to build and deploy the installer/updater to the network share so that anyone could easily contribute to the tool development.

At the time I’d been doing quite a bit of Powershell work, and coincidentally I stumbled on the GitHub post about how they build the GitHub for Windows application. In that post I saw a tantalizing screen capture of their build/deploy script output and knew at once that I must have it.

From that image I reverse-engineered the steps my script needed to take, and then I had to figure out how to implement each step. It’s worth noting that ClickOnce setting manipulation and deployment is not available via scripting or MSBuild commands. The code below includes my solution to these issues.

Build & Deployment Script Output

Below is the output of my own build and deployment script. A couple of important notes:

  • My application consisted of one .exe file and one .dll file corresponding to two projects in a single Visual Studio solution. In the example code below I’ve replaced my .exe project name with Executable and my .dll project name with Library. The ClickOnce settings are maintained in the Executable project file.
  • Note that I decided the installer version should always match the executable version. For a small tool like this, that’s simpler than versioning the installer independently of the application it installs.
PS C:\dev\tools\Executable> .\Deploy.ps1
 Checking prerequisites...
 Checking out the AssemblyInfo.cs files for version increment...
 Cleaning the build directory...
 Building Executable application...
 Building ClickOnce installer...
 Deploying updates to network server...
 Committing version increments to Perforce...
PS C:\dev\tools\Executable>

The script is written to be quiet unless an error occurs, so below is a more detailed description. The script does the following:

  • Checks to ensure that the current user has the prerequisites installed (in this case, Perforce).
  • Checks out the appropriate files needed to increment the version number of the DLL, executable, and ClickOnce installer.
  • Cleans the build directory.
  • Builds the DLL and executable.
  • Retrieves the (file) version of the newly-built executable.
  • Forces (hacks) the executable version into the executable’s .csproj definition of the ClickOnce settings.
  • Builds the installer with the new version and the just-built binaries.
  • Copies all of the installer files to the appropriate network fileserver.
  • Commits the changes to the AssemblyInfo and csproj files (i.e., the version changes).

Build & Deployment Script

Please note that I’ve changed a few things about the script below:

  • Perforce repository paths
  • Network deployment paths
  • Binary names

I’ve included the entire (sanitized) script below, and after it I describe the interesting parts in greater detail.

Write-Host "Tool Build/Deployment Script"

$outputPrefix = " "
$msbuild = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.EXE"

Write-Host $outputPrefix"Checking prerequisites..."

$p4Output = p4
if($p4Output -match "'p4' is not recognized as an internal or external command")
{
  Write-Error "Cannot find p4.exe in your PATH."
  Exit
}

Write-Host $outputPrefix"Checking out the AssemblyInfo.cs files for version increment..."
p4 edit //my/project/tool/path/Library/Properties/AssemblyInfo.cs | Out-Null
p4 edit //my/project/tool/path/Executable/Properties/AssemblyInfo.cs | Out-Null
p4 edit //my/project/tool/path/Executable/Executable.csproj | Out-Null

Write-Host $outputPrefix"Cleaning the build directory..."
Invoke-Expression "$msbuild Executable\Executable.csproj /p:Configuration=Release /p:Platform=AnyCPU /t:clean /v:quiet /nologo"

Write-Host $outputPrefix"Building Executable application..."
Invoke-Expression "$msbuild Executable\Executable.csproj /p:Configuration=Release /p:Platform=AnyCPU /t:build /v:quiet /nologo"

$newExeVersion = Get-ChildItem .\Executable\bin\Release\Executable.exe | Select-Object -ExpandProperty VersionInfo | % { $_.FileVersion }
$newLibVersion = Get-ChildItem .\Executable\bin\Release\Library.dll | Select-Object -ExpandProperty VersionInfo | % { $_.FileVersion }

Write-Host $outputPrefix"Building ClickOnce installer..."
#
# Because the ClickOnce target doesn't automatically update or sync the application version
# with the assembly version of the EXE, we need to grab the version off of the built assembly
# and update the Executable.csproj file with the new application version.
#
$ProjectXml = [xml](Get-Content Executable\Executable.csproj)
$ns = new-object Xml.XmlNamespaceManager $ProjectXml.NameTable
$ns.AddNamespace('msb', 'http://schemas.microsoft.com/developer/msbuild/2003')
$AppVersion = $ProjectXml.SelectSingleNode("//msb:Project/msb:PropertyGroup/msb:ApplicationVersion", $ns)
$AppVersion.InnerText = $newExeVersion
$TargetPath = Resolve-Path "Executable\Executable.csproj"
$ProjectXml.Save($TargetPath)

Invoke-Expression "$msbuild Executable\Executable.csproj /p:Configuration=Release /p:Platform=AnyCPU /t:publish /v:quiet /nologo"

Write-Host $outputPrefix"Deploying updates to network server..."
$LocalInstallerPath = (Resolve-Path "Executable\bin\Release\app.publish").ToString() + "\*"
$RemoteInstallerPath = "\\network\path\Executable\DesktopClient\"
Copy-Item $LocalInstallerPath $RemoteInstallerPath -Recurse -Force

Write-Host $outputPrefix"Committing version increments to Perforce..."
p4 submit -d "Updating Executable ClickOnce Installer to version $newExeVersion" //my/project/tool/path/Executable/Executable.csproj | Out-Null
p4 submit -d "Updating Library to version $newLibVersion" //my/project/tool/path/Library/Properties/AssemblyInfo.cs | Out-Null
p4 submit -d "Updating Executable to version $newExeVersion" //my/project/tool/path/Executable/Properties/AssemblyInfo.cs | Out-Null

Automated Version Increment

You may have noticed that I don’t take any specific action to manage the version numbers of Executable.exe and Library.dll even though I explicitly check out the AssemblyInfo.cs files.

The MSBuild Extension Pack is an open-source collection of MSBuild targets that make things like version management much easier. After adding the extension pack at a path relative to my projects, I just needed to add the following near the bottom of Executable.csproj.

<PropertyGroup>
  <ExtensionTasksPath>..\contrib\ExtensionPack\4.0.6.0\</ExtensionTasksPath>
</PropertyGroup>
<Import Project="$(ExtensionTasksPath)MSBuild.ExtensionPack.VersionNumber.targets"
  Condition=" '$(BuildingInsideVisualStudio)'!='true' " />
<PropertyGroup Condition=" '$(BuildingInsideVisualStudio)'!='true' ">
  <AssemblyMajorVersion>1</AssemblyMajorVersion>
  <AssemblyMinorVersion>3</AssemblyMinorVersion>
  <AssemblyFileMajorVersion>1</AssemblyFileMajorVersion>
  <AssemblyFileMinorVersion>3</AssemblyFileMinorVersion>
  <AssemblyInfoSpec>Properties\AssemblyInfo.cs</AssemblyInfoSpec>
</PropertyGroup>

A couple of things to note here:

  • The Condition attributes on lines 5 & 6 ensure that the version increments only occur when I run the Deploy.ps1 script, as opposed to every time I build through the Visual Studio IDE.
  • I am holding the Major and Minor versions fixed via lines 7-10, so that only the Build and Revision numbers are auto-incremented.

The above code is used both in Executable.csproj and Library.csproj, so that both the executable and the library have their version numbers managed. In doing this I can also change the major/minor versions of the executable and library independently.

Propagate Exe Version to ClickOnce Installer

As I mentioned earlier, I wanted to keep the installer version the same as the executable version. The problem was that there’s no way to manage the ClickOnce settings via MSBuild or any other API. Lines 35-41 of the script are the, ahem, workaround that I devised.

Since we want to set the ClickOnce installer version to the same as the executable, we must first fetch the executable version:

$newExeVersion = Get-ChildItem .\Executable\bin\Release\Executable.exe | Select-Object -ExpandProperty VersionInfo | % { $_.FileVersion }

This line uses the powerful object piping capabilities in Powershell to fetch the FileVersion property from the assembly itself.
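
If the pipeline is hard to parse at a glance, the same retrieval can be written in explicit steps (an equivalent, illustrative rewrite rather than what the script actually uses):

```powershell
# Step-by-step version of the pipeline above: Get-Item returns a FileInfo
# object whose VersionInfo property exposes the assembly's FileVersion.
$exe     = Get-Item .\Executable\bin\Release\Executable.exe
$version = $exe.VersionInfo.FileVersion
```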

Once we have the executable version, we must then somehow insert it into Executable.csproj where the ClickOnce settings are defined. For reference, the associated XML from the csproj file is:

<PropertyGroup>
  <ApplicationVersion>1.3.0407.01</ApplicationVersion>
</PropertyGroup>

Lines 35-41 read in the csproj file as XML and extract the ApplicationVersion node, replace that node’s contents with the assembly version we read from the executable, and save the entire XML structure back to the csproj file.

Summary

Through automating the build and deployment process I’ve learned a lot about Powershell and MSBuild, and I’ll definitely be improving this in the future. The great thing about this particular combination of tools is that Powershell provides the glue that holds together the powerful build automation (and logging) that MSBuild offers.

While it’s unfortunate that ClickOnce has so many manual aspects to it (and I think I know why), the ease of XML manipulation and file processing in Powershell makes it easy to work around ClickOnce’s lack of automation.

In the future I may look at moving the install/upgrade process to the WiX Toolset, as it’s much more configurable and automatable. ClickOnce was really a stop-gap solution: this is an internal tool, and ClickOnce is simple enough for my bootstrapping needs.

Responsive WPF Applications With ReactiveUI


Learn about my newfound love for the ReactiveUI MVVM framework.

My day job is to wrangle 12 million lines of 20+ year-old C and C++ using a custom Win32-based UI library that was built in the mid-90s and never fundamentally improved. It does what it was designed to do really well (bind transactions to views, zoom between list items, the transactions they compose, and the reports on those transactions), but sometimes I idly wonder what’s been going on in bleeding-edge Windows app development in the interim.

Last week I downloaded and played around with GitHub’s excellent Windows client. I vaguely remembered a blog post from a while back on the gear underlying the desktop app, and I was interested to see where it was at these days. You know you’re a geek when you read the entire list of licenses in the About view to get a sense of the underlying technology.

After looking around at the various .NET libraries involved and reading some follow-up blog posts, I decided to build a front-end to a developer tool I whipped up for our developers and QA engineers at work.

Essentially, in debug mode QuickBooks Payroll can talk to a variety of backend environments. Configuring the various endpoints involves editing several config files that are in different places depending on whether you have an installed build or a developer build. To make it easier to configure things I wrote a command-line tool that can tell you what environments you’re currently configured to talk to, as well as list available environments and change the current environment. I built the command-line tool on top of a library that implemented all the core logic because I knew that I’d eventually want to build an easier-to-use GUI on top of it.

So as a little weekend project, I combined inspiration from GitHub for Windows’ Metro/Modern visual design with a desire to look further into ReactiveUI’s take on multi-threaded UI.

Multi-threaded UI development typically has one sticking point: it’s easy enough to define a lambda or function and set it up to run on a separate thread, but one has to be very careful when moving the result of that lambda back onto the UI thread to display it. Reactive’s secret sauce is to simplify this dangerous activity and deliver a true ‘fire-and-forget’ multi-threaded solution.
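
For contrast, here is the manual pattern I mean, sketched with the Task Parallel Library (LoadEnvironments and EnvironmentName are hypothetical stand-ins, not code from this app):

```csharp
// Do the slow work on a thread-pool thread...
Task.Factory.StartNew(() => LoadEnvironments())
    // ...then explicitly marshal the result back to the UI thread. Without
    // the scheduler argument, the continuation runs on a thread-pool thread,
    // which is unsafe for work that touches the UI.
    .ContinueWith(t => EnvironmentName = t.Result,
                  TaskScheduler.FromCurrentSynchronizationContext());
```

Forget the scheduler argument once and you’ve got an intermittent threading bug; ReactiveUI’s commands and property helpers take that bookkeeping off your plate.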

Thus far I’ve found that it’s very easy to chain async commands AND provide a nice responsive user experience. In cases where application behaviors are gated on other actions completing, the ReactiveUI framework makes things extremely simple.

An example:

  • App bootstraps.
  • Check for environment definition updates.
  • If updates are available, download and install them.
  • When updates are complete, or if no updates are available, validate the current saved environment settings.
  • If settings are invalid or missing, show the special UI telling the user to edit the app settings.
  • If settings are valid, discover the current environment.
  • Once the current environment has been determined, set the properties on the view model that are bound to the UI and describe the current environment.

The state machine representing the above workflow was implemented with three View Models bound to one XAML MainWindow. The cleanest one is below, and I’ve commented how the commands to check for updates and download and install updates are implemented (I’m still working on tightening up the other two).

I’ve included the entire file below because I think it’s valuable to view the initialization of the commands and observers and their targets in context. This view model encapsulates all of the updater functionality and how the different states of updating are exposed to the UI.

EnvironmentsUpdaterViewModel.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reactive.Linq;
using System.Text;
using System.Threading.Tasks;
using Intuit.Payroll.Tools.SetPayrollEnvironment;
using ReactiveUI;
using ReactiveUI.Xaml;

namespace Intuit.Payroll.Tools.PayrollEnvironments
{
    public enum UpdateState
    {
        Uninitialized,      // No action has been taken yet
        CheckingForUpdates, // Checking the update source for possible updates
        UpdateAvailable,    // An update is available for download
        ApplyingUpdates,    // Currently downloading and installing updates
        Completed           // Used when updates have completed or when no updates are available
    }

    public class EnvironmentsUpdaterViewModel : ReactiveObject
    {
        private PayrollEnvironmentConfiguration _EnvConfig;
        private QuickBooksInformation _LocalSettings;

        public EnvironmentsUpdaterViewModel(QuickBooksInformation localInfo)
        {
            _LocalSettings = localInfo;
            Status = _LocalSettings.IsValid ? Properties.Resources.InitialUpdateMsg
                         : Properties.Resources.InvalidSettingsUpdaterMsg;

            UpdateState = UpdateState.Uninitialized;

            InitializeConfiguration();

            // Creating the EnvironmentsFileUpdater implicitly goes out to the update location
            // (likely on a local or VPN network share), so we want to make sure it's async.
            CheckForUpdates = new ReactiveAsyncCommand(null, 1);
            var updaterFuture = CheckForUpdates.RegisterAsyncFunction(_ =>
                {
                    UpdateState = UpdateState.CheckingForUpdates;
                    return new EnvironmentsFileUpdater(_EnvConfig);
                });
            _Updater = new ObservableAsPropertyHelper<EnvironmentsFileUpdater>(updaterFuture, _ => raisePropertyChanged("Updater"));

            // When the updater has been created, initialized and set then check if any updates
            // are available on the update server.
            this.ObservableForProperty(x => x.Updater).Subscribe(_ =>
                {
                    if (Updater != null)
                    {
                        UpdateState = Updater.UpdateAvailable ? UpdateState.UpdateAvailable : UpdateState.Completed;
                        Status = Updater.UpdateAvailable ? Properties.Resources.EnvironmentUpdatesAvailableMsg
                                        : string.Format(Properties.Resources.CurrentEnvironmentMessage, Updater.LatestVersion);
                        // If updates are available; download and install them
                        if (UpdateState == UpdateState.UpdateAvailable)
                        {
                            UpdateState = UpdateState.ApplyingUpdates;
                            DownloadUpdates.Execute(null);
                        }
                    }
                });

            // After we download and apply updates, update the status and mark the workflow as completed.
            DownloadUpdates = new ReactiveAsyncCommand(null, 1);
            DownloadUpdates.RegisterAsyncAction(_ =>
                {
                    if (Updater.UpdateToVersion(Updater.LatestVersion))
                    {
                        Status = string.Format(Properties.Resources.CurrentEnvironmentMessage, Updater.LatestVersion);
                    }
                    else
                    {
                        Status = Properties.Resources.EnvironmentUpdateErrorMsg;
                    }
                    UpdateState = UpdateState.Completed;
                });
        }

        private void InitializeConfiguration()
        {
            try
            {
                _EnvConfig = EnvironmentManager.GetEnvironmentConfiguration(_LocalSettings);
            }
            catch (Exception)
            {
                _EnvConfig = new PayrollEnvironmentConfiguration { Version = 0,
                                                                   UpdatesLocation = string.Empty };
            }
        }

        private ObservableAsPropertyHelper<EnvironmentsFileUpdater> _Updater;
        private EnvironmentsFileUpdater Updater
        {
            get { return _Updater.Value; }
        }

#pragma warning disable 0649
        private UpdateState _UpdateState;
#pragma warning restore 0649
        public UpdateState UpdateState
        {
            get { return _UpdateState; }
            set { this.RaiseAndSetIfChanged(value); }
        }

#pragma warning disable 0649
        private string _Status;
#pragma warning restore 0649
        public string Status
        {
            get { return _Status; }
            set { this.RaiseAndSetIfChanged(value); }
        }

        public ReactiveAsyncCommand CheckForUpdates { get; protected set; }
        public ReactiveAsyncCommand DownloadUpdates { get; protected set; }
    }
}

The data that the updater object fetches is located on an intranet share, so it was important to check for and apply updates asynchronously (especially if, like me, you’re working over the VPN).

I’m unsure at this point if it’s ok that I’m updating the UpdateState property from within many of the lambdas. I feel that there’s something risky going on there but everything seems to work fine for me as it is.

I’ll break down the two main uses of Reactive’s async commands below. Note that I’m using the UpdateState property primarily outside of this class; the window binds certain elements’ visibility to the state of the updater object, but only when the state has certain values.
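
To illustrate that binding (a sketch, not the app’s actual XAML or converter; the converter name and the state-to-visibility mapping are assumptions), a value converter can map UpdateState onto element visibility:

```csharp
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

namespace Intuit.Payroll.Tools.PayrollEnvironments
{
    // Shows an element (e.g., a progress indicator) only while the
    // updater is actively checking for or applying updates.
    public class UpdateStateToVisibilityConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            var state = (UpdateState)value;
            return (state == UpdateState.CheckingForUpdates || state == UpdateState.ApplyingUpdates)
                       ? Visibility.Visible
                       : Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }
}
```

The XAML side is then a one-liner along the lines of Visibility="{Binding UpdateState, Converter={StaticResource UpdateStateToVisibility}}" on the element in question.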

Initialize the Check for Updates Command
CheckForUpdates = new ReactiveAsyncCommand(null, 1);
var updaterFuture = CheckForUpdates.RegisterAsyncFunction(_ =>
    {
        // Set the current state
        UpdateState = UpdateState.CheckingForUpdates;
        // The network access occurs in the updater constructor; this line is
        // why we want this to be done asynchronously
        return new EnvironmentsFileUpdater(_EnvConfig);
    });
// Connect the future object to the helper object that backs the Updater property.
_Updater = new ObservableAsPropertyHelper<EnvironmentsFileUpdater>(updaterFuture, _ => raisePropertyChanged("Updater"));

When the async function returns the new EnvironmentsFileUpdater instance and sets the backing field of the Updater property, I lean on an Observable. Below you can see that I set it up such that when the Updater field changes, we check whether an update is available and then either kick off the DownloadUpdates command or set the overall state to UpdateState.Completed to signal to external listeners that the update process is complete.

The Status property is the string value that’s bound to the UI that updates the user as to what’s happening in the update process.

Initialize the Updater Observer
// When the updater has been created, initialized and set then check if any updates
// are available on the update server.
this.ObservableForProperty(x => x.Updater).Subscribe(_ =>
    {
        if (Updater != null)
        {
            UpdateState = Updater.UpdateAvailable ? UpdateState.UpdateAvailable : UpdateState.Completed;
            Status = Updater.UpdateAvailable ? Properties.Resources.EnvironmentUpdatesAvailableMsg
                            : string.Format(Properties.Resources.CurrentEnvironmentMessage, Updater.LatestVersion);
            // If updates are available; download and install them
            if (UpdateState == UpdateState.UpdateAvailable)
            {
                UpdateState = UpdateState.ApplyingUpdates;
                DownloadUpdates.Execute(null);
            }
        }
    });

The DownloadUpdates command handles the actual work of downloading the latest environment definitions and installing them to the appropriate local storage mechanism. Again, it must reach out over the intranet, so it’s best to make this action asynchronous. Most of the code you see below is concerned with updating the UI-bound status value; line 5 makes the key call.

Initialize the Download Updates Command
DownloadUpdates = new ReactiveAsyncCommand(null, 1);
DownloadUpdates.RegisterAsyncAction(_ =>
    {
        // Update to the latest version of the environment definitions
        if (Updater.UpdateToVersion(Updater.LatestVersion))
        {
            Status = string.Format(Properties.Resources.CurrentEnvironmentMessage, Updater.LatestVersion);
        }
        else
        {
            Status = Properties.Resources.EnvironmentUpdateErrorMsg;
        }
        // Finalize the state of the update process
        UpdateState = UpdateState.Completed;
    });

I hope this has been a decent, non-trivial example of how to use the ReactiveUI framework to build WPF view models; if it looks interesting, have a look at the great docs that have been synthesized from Paul Betts’ blog posts covering different usages of the framework.

Fitbit Logins via Omniauth in Rails


It all started with a comment and a pull request.

I hadn’t put time into maintaining my fitbit gem reference app in quite a while, and when I got a pull request to update it to the latest Rails version I at first hesitated. After talking with some folks at my coworking space, however, I decided to use some holiday downtime to pull in the changes and see if I could build on them to improve the quality of the site.

At first I was horrified to learn that I’d stopped mid-refactoring, and I realized it was no wonder that Marcel (the submitter) reported he’d had trouble getting the tests to run. The first step was to delete whole directories of files and tests that no longer conformed to the current design of the site and ensure that the files that remained all contributed to that design.

I grabbed Marcel’s pull request and got to work merging it in, at which point I looked at the login code and got embarrassed again. The truth is, the fitgem library wasn’t really built for managing OAuth logins; instead it is optimized for using a token/secret for a given user to fetch data from the API. Yes, you could use fitgem to facilitate an OAuth login process, but it certainly wasn’t built with Rails in mind, and you had to roll your own controllers, views, and token and secret management.

The thing is, we already have a library that does that sort of thing: OmniAuth is a fantastic library for consuming pluggable login strategies. I didn’t have to search long before I found a fitbit strategy for omniauth, and I set to work integrating it into the reference app.

Luckily I had been working a lot with OmniAuth lately, as I’d implemented logins via Facebook and Twitter in another project just the week before. As always I leaned heavily on Ryan Bates’ excellent Railscasts in getting up to speed on how OmniAuth worked with Devise, and specifically with Twitter and Facebook. <small-plug>Railscasts Pro ($9/month) has been totally worth it for me. The Pro episodes go into a lot of depth and I’ve learned a ton from them.</small-plug>

Using the omniauth-fitbit gem I was able to delete a lot of now-redundant code for signing into/up for the application. There were, however, a few hiccups along the way that I wanted to note:

  • Unlike Twitter and Facebook, logging into Fitbit via omniauth-fitbit doesn’t allow Devise to remember you from session to session, so you end up having to log in again every time. I haven’t yet figured out why; it may be something about how Fitbit conducts OAuth logins, or it may be a configuration issue with omniauth-fitbit.
  • Logging in via Fitbit is fine and dandy, but if you want to use the token/secret for the logged-in user to fetch data from the API through the fitgem interface, you’ll need to store them. I just added fields to my user object:
Add OAuth fields to User model (_add_oauth_fields_to_users.rb)
class AddOauthFieldsToUsers < ActiveRecord::Migration
  def change
    add_column :users, :oauth_token, :string
    add_column :users, :oauth_secret, :string
  end
end
  • I made some, ahem, unorthodox decisions that will have to be refactored later. Chief among them was to add the FitbitClient to the User model:
Access to FitbitClient through User model (user.rb)
class User < ActiveRecord::Base

  # Elided for conciseness

  def linked?
    oauth_token.present? && oauth_secret.present?
  end

  def fitbit_data
    raise "Account is not linked with a Fitbit account" unless linked?
    @client ||= Fitgem::Client.new(
                :consumer_key => ENV["FITBIT_CONSUMER_KEY"],
                :consumer_secret => ENV["FITBIT_CONSUMER_SECRET"],
                :token => oauth_token,
                :secret => oauth_secret,
                :user_id => uid
              )
  end

  def has_fitbit_data?
    !@client.nil?
  end

  # Elided for conciseness

end

Eventually I should use a better pattern for this, but for now it’s simple, and anywhere you have a logged-in user in your application you can call fitbit_data to get access to a configured FitbitClient instance.
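
For example, a controller action could hand Fitbit data straight to a view. This is a hypothetical snippet (the controller and action names are assumptions), though user_info is a real fitgem call:

```ruby
class DashboardController < ApplicationController
  def show
    # Only touch the Fitbit API when the account has stored OAuth credentials
    @fitbit_profile = current_user.fitbit_data.user_info if current_user.linked?
  end
end
```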

And with that we’re logging in via Fitbit and using the token/secret returned via omniauth-fitbit to fuel data retrieval from the Fitbit API through fitgem.
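
For completeness, the Devise/OmniAuth callback that captures and stores those credentials looks roughly like this (a sketch with simplified user lookup, not the app’s exact code):

```ruby
# app/controllers/users/omniauth_callbacks_controller.rb (sketch)
class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def fitbit
    auth = request.env["omniauth.auth"]
    # Simplified: find or build the user by their Fitbit uid
    user = User.where(:uid => auth.uid).first_or_initialize
    # Store the OAuth token/secret that fitgem will use later
    user.oauth_token  = auth.credentials.token
    user.oauth_secret = auth.credentials.secret
    user.save!
    sign_in_and_redirect user, :event => :authentication
  end
end
```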


I Seem to Be Getting Along Fine Without Facebook


I’m trying really hard not to read Hacker News these days, as the contrarian, argument-for-argument’s-sake nature of the comments tends to enrage me, but one of the more mystifying memes that seems to persist down in the Comments Dungeon is that the social contract of the United States of America requires you to be on Facebook.

In each thread where the topic appears, there’s someone who says “I wish I didn’t have to have a Facebook account but unfortunately it’s required.” Someone responds and notes that no, it’s not required, and all they have to do is delete their account. That (sensible) response then draws a million replies reciting the now-ubiquitous oath: to be a fully-fledged member of a social group in 2012, one must be on Facebook. And to that I restate the obvious: no, you really don’t.

How many devices do you use on a daily basis? Phones, laptops, tablets, desktops, kiosks, and more. There exists a whole world outside of Facebook, and all you have to do is grasp it. I prefer phone calls or coffee with friends, but email will do in a pinch. I share my photos through Flickr and Twitter (depending on whether they came from my DSLR or iPhone). I blog at a skrillion different venues. People know where to find me online and IRL, and I’m confident that within my social circle I won’t be left off some invite just because I’m not on Facebook. My friends are my friends; we enjoy each other’s company! What I found when I deleted my Facebook account was that none of my friends really used it for anything meaningful. My wall was eternally filled with high school people I hadn’t seen/talked to/cared about in a decade, and extended family that I didn’t talk to much anyway.

In the last 5 years I’ve taken two significant (though very First-World-Problemy) steps: I got rid of my car, and I deleted my Facebook account. Interestingly enough, at the time I was much more uncomfortable and worried about having no car than about missing some crucial social interaction by leaving Facebook.

When I left Facebook there was much more a sense of relief than anything else; I suggest you try it.

Madison Ruby 2012


Please note: Madison Ruby photos are now online at Flickr.

I’m working off the conference hangover. Email catchup, prioritizing tasks, etc. took up my time this morning, but I couldn’t let Madison’s premier software conference fade without writing down some thoughts.

Firstly: much love to Jim and Jen Remsik as well as all the volunteers, speakers, and organizers that helped make v2 of Madison Ruby even more fun and entertaining than the inaugural conf last year. The diversity of topics was again front and center: from team dynamics, to talks about social justice, to funky drummers, to sources of inspiration and the finer points of immigration law, the organizers found a way to communicate technical content while advancing the community in other areas as well.

First up was the Design Eye for the Dev Guy or Gal workshop on Thursday. I misunderstood the focus, mostly because I signed up for it before a comprehensive description had been published. Wynn Netherland did a great job laying out the finer points of HTML5, Sass, Compass, and more. I had expected the workshop to range more towards the philosophy of web design rather than a technical how-to around using the tools. Even so, I learned a lot about Compass. My previous aversion stemmed from when compass and blueprint were sold as a boxed set, but I’m definitely going to check it out on the next major project.

Highlight: In a discussion about the ubiquity of Twitter Bootstrap, Wynn asked how many folks were using Octopress for their dev blog, and then added, “You know, you don’t HAVE to use the default style… you can add some color in there.” sheepish grin

I may have had a little too much fun at the Github-sponsored drinkup on Thursday evening; waking up and riding my bike downtown at 7am was a monumental task.

Friday’s talks were very interesting, especially the Anti-Oppression 101 talk by Lindsey Bieda and Steve Klabnik. Though I had seen it at a previous Mad-Railers meetup, I also enjoyed Matthew Rathbone’s talk about how Foursquare uses Hadoop for its various data processing needs. Unfortunately, I had to take off early on Friday and missed some of the afternoon talks, including what I heard was a fantastic 45 minutes with Clyde Stubblefield.

Saturday I was feeling a lot better, and hit the Farmer’s Market early for coffee and pastries. I’m embarrassed to say that I hadn’t looked at the schedule closely enough, so it was an extremely pleasant surprise that the first talk was by Paolo Perrotta, author of the best Ruby book in existence: Metaprogramming Ruby. It was a great talk about ghosts, fake ghosts, and all manner of potential problems one will encounter when using the metaprogramming aspects of the Ruby language.

Later in the day were several successive talks that excited the hell out of me, but none more so than Leon Gersing’s talk on the Weird in programming. It’s difficult to describe, so I’ll just say that if he’s speaking at a conference, make sure you get your ass to that talk and be prepared to enjoy it.

The Teaching Rails panel discussion late in the day on Saturday was interesting. I almost didn’t attend because I thought it was going to be about how best to teach Rails to new folks. Instead it was more about the business of teaching Rails and how each of the four panelists approached it. Ultimately the panel dovetailed into a talk about certification, payscales, and other highly interesting topics for any software developer. I felt that Jeff Casimir’s innocent suggestion of some kind of certification was not considered for a single second before being shouted down by everyone else on the panel and in attendance. I think his idea has merit even if the operational reality is bad; rather than everyone simply shouting “NO CERTIFICATION,” what’s needed is a discussion about what certification is trying to accomplish and how it can be done without turning into a paradise for grifters and dummies.

This was just a taste of the goings-on at Madison Ruby 2012. Others will likely have a fuller rundown of the events so I just hit what were, to me, the highlights. If you’re down on the Ruby tip I highly suggest you try to get to Madison for next year’s conference. You can already hit the Very Early Bird registration page and get a ticket for the low, low price of $199. Do it, and come hang out with me next year!

I Have an Idea


Once upon a time I was in Vegas with some friends. Over the course of 72 hours, half a dozen friends and I drifted in and out of our hotel room at different times of the day and night as we gambled and boozed our way across the landscape.

At around 3am one friend drunkenly burst into the hotel room and started going on and on about how great he was doing, gambling-wise. He talked at length even as it became obvious that he was winding down like some kind of strange children’s toy. Then he surged out of bed and said, “I have an idea!” Another friend asked what it was and got the classic response, “I’ll need $300 of your money.”

“And then what?” asked the friend, genuinely intrigued.

Alas, friend number one was already passed out. When asked the next morning what the great $300 idea was, he had no idea what we were talking about.

I think about this story every time I see a vaguely defined, ill-conceived Kickstarter campaign that will ultimately result in a fugue state where everyone wonders what happened to their money in the morning.

(Note: this is not a jab at Kickstarter itself; I think there’s some great stuff going on there.)

Becoming the Finisher


Via Hacker News this morning I read Jacques Mattheij’s The Starter, the Architect, the Debugger and the Finisher. It was a great topic and one that I think about often; for me personally, finishing projects smoothly is my biggest challenge.

For me, starting projects is a great time to be alive. I love bootstrapping repeatable builds, testing systems, automatic doc generation, etc. It’s a reflexive response to having seen so many projects doomed from the word ‘go’ by developers who named folders any old thing; even folks who joined the team within the first year were confused about where code was located and what (if any) naming conventions existed.

Similarly, I’ve been writing software for millions of users for ten years now and these days I think intensely about architecture when bootstrapping new components or applications. Making the appropriate trade-offs between short- and long-term design up front will accelerate development while leaving you open to pivoting or scaling when the time is right.

My problem comes at the final touches of a release. For example, often when I’m working on a new release of fitgem I code up the functionality, write new tests, write inline API documentation as I create new methods, etc. After I think I’m done I’ll take the library for a spin in Pry and start coding against the new API functions. Things will go well for a while, and then I’ll hit a snag where I forgot a parameter or had a typo in the code. Great! My ad-hoc testing revealed something I hadn’t included in the unit tests. AND THEN I RELEASE. This is not a good thing, but I often spaz out when I get 95% of the way to finishing, hit a snag, and then fix my issue. I guess I feel exasperated about how long this thing is taking and then just wig out and push it (WE’LL DO IT LIVE!).

To be a good finisher I need to slow down and restart my release process (as light as it is!) when I hit problems. I need to suppress the anxiousness when I hit minor problems before pushing live, go back to the top of my checklist, and start it over. Doing that on small projects like a gem will instill good mental resistance to spazzing for large, important releases.

Don’t Call Me an Engineer


When I go on campus to do university recruiting, or I’m in a non-technical social situation where I must explain what I do for a living, I often take great pains to explain what should be a simple thing. My title in my company is “Senior Software Engineer” but I always very precisely call myself a “software developer,” “programmer,” or “software nerd.”

I am uncomfortable specifically with the “engineer” label, and that discomfort stems primarily from a panel discussion I attended in conjunction with the UW Engineering Career Services group.

For some background, at the University of Wisconsin-Madison, Computer Science is not part of the School of Engineering, but a department in the School of Letters & Science. Even so, Computer Science students are given leave to participate in School of Engineering career fairs and recruiting due to the crossover technical skillset.

As someone who conducts on-campus interviews and attends career fairs for my company at UW I was invited to a panel discussion on “starting your first engineering job after graduating.” It was a fantastic topic for a panel because many seniors are so busy interviewing, wrapping up courses, and thinking about relocation concerns that they often don’t think much about the actual beginning of a professional career. Other folks on the panel included chemical engineers, industrial engineers, civil engineers, mechanical engineers and more.

Topics ranged across the board (Do you work a lot of hours? Do you like your boss? Have you been promoted yet?), but one that stuck with me was the question: “Should I pay/study for my engineer certification? Is it better to do it before interviewing or will my employer fund the cost?” Many engineers on the panel had various opinions and it really did vary widely by field, but when it got to me I just shrugged and told the truth: there is no professional certification to be a software “engineer”. There are certainly certifications for working with a particular software system, but there is no certification that will allow you to be insured against writing buggy software.[1]

I admire engineers and really only want to drive on bridges and use power tools that were designed by professional engineers. Software developers CAN produce incredible feats of redundant, bug-free software system design and implementation, but only given the time and process needed to accomplish such things. There likely do exist folks with sufficient experience and knowledge to be called software engineers, but I am not one of them, and they are few and far between.

In the tech sphere this is really more of an academic conversation: we all speak the same language. Everyone at your software company has some flavor of “engineer” in their title, and no one thinks twice about it. When thrust into the larger world of engineers, however, the loosey-goosey nature of software development and its similarity to a craft rather than a discipline make strict adherence to standards of proven technical excellence difficult.

What do you think? Does the typical professional software development environment make you think of “engineering” in the same way that mechanical or civil engineers practice it? Does the fact that we do TDD/BDD make up for the fact that software tends not to have perfectly understood natural laws it must follow?

  1. This argument has indeed been made many times before; this post is more about my feelings on it.

Whazzing.com Moved to Github


Two things forced my hand to move my development blog off of my own Rails-based blogging codebase and onto Github Pages via Octopress:

  1. My VPS is becoming severely overcrowded with deployed apps; I use it as a testbed for a lot of ideas as well as my main swearing-at-pop-culture blog, and I didn’t want to host yet another heavyweight Rails app chewing through limited resources.
  2. I reaaally like the ‘one-file-per-post’ model of Octopress/Jekyll, and keeping my development blog/presentations right next to my open source code is very appealing.

Since I wrote my latest blog engine to take markdown input it was easy-as-pie to export the existing posts to files and then use them to bootstrap this Octopress site. For now I’ll ride on the default theme and see how it goes. I hope to spend more time iterating on my presentations and a bunch of C++ code that I’m working on for a work project, so it’s also a plus that I don’t have maintain the platform source tree anymore.