Let’s talk about work stuff and home stuff.
At work, I started 2013 as primarily a C/C++ developer working on connecting our flagship accounting application to connected services. It was extremely interesting, if sometimes frustrating, to enhance a 15-year-old code base to talk to a new generation of more nimble web services. Throughout the summer we stamped out bugs, responded to early feedback from testers, and worked on merges out to all the major versions of the code base. At the same time we started prepping documentation and transition materials to ease the transfer of knowledge of a 20+ year-old code base to a new team.
Starting in August I began splitting my time between my desktop development tasks and my new group. I’m now working on ViewMyPaycheck, which is built on Backbone.js/HTML5/SCSS for the front-end and Java Web Services on the backend. It’s really my first time developing in Java, and so I’m learning new things every day.
Luckily, thanks to my extensive tours of different JavaScript frameworks in my Rails projects at home, I was able to come out of the gate strong when working on the Backbone code. My experience with Sass allowed me not only to pick up our styles quickly, but also to re-architect them to ease their evolution and support concurrent development by multiple engineers.
There was some tech debt wrapped up in the new code base but my excellent team has thus far done a great job of prioritizing refactoring and build/infrastructure improvements against the race to be feature-complete for the 2013 tax season.
The holiday season of 2012 was quite a disaster in my family’s annual gift exchange. Each adult had to submit a list of things they wanted to my mom, who then drew the names for two different exchanges: one for our immediate family and one for our extended family. When Xmas morning arrived we found a ridiculous outcome: since nearly everyone had gone for the ‘easy’ thing on their target’s gift list, most people received the same item from both exchanges.
It spoke to an age-old problem of holiday gift-giving where, when grandparents, aunts, uncles, etc. all clamor for a list of things a kid wants, they have no way of ensuring that someone else didn’t already buy something on the list. So in the ensuing winter, when I just so happened to have a lot of time on my hands due to the birth of my son, I set out to create a web app to solve the problem.
I finished a pre-alpha version of giftr.us in the spring and my family used it for birthdays throughout the spring and summer. It’s in a pretty good spot right now, but I wasn’t able to finish the formal Gift Exchange functionality for the 2013 Holidays.
This year I also began a journey to help out the MadRailers Meetup which culminated in me taking over the organization in the fall. I’ve been a member of the meetup since roundabouts shortly after I moved back from California, and it’s enabled me to meet a ton of great folks in the Madison tech community. I started out volunteering for talks in the spring, and in the summer I started suggesting other topics or speakers. By the time of the Madison Ruby conference we’d laid out a great schedule of speakers throughout the end of the year, and I’m really pleased with where the group is headed and our new space: the newly-renovated Madison Public Library!
I’m incredibly excited to continue the modernization of our Backbone app. A short list of the changes and/or improvements I’m planning:
These are all technical in nature; we’re obviously going to parallelize these more architectural and platform-ish changes alongside the improvements to the user workflow and enhancements based on user feedback. My ultimate goal is to get the app into a continuously deployable build process. We’re a ways off right now but it’s definitely doable.
I’m going to continue to develop Giftr’s gift exchange functionality, and I may explore either turning it into a single-page app via Ember.js or similar, or I may dip my toe into mobile native development. I’m incredibly interested in learning more about Xamarin for doing cross-platform mobile development.
I’m also going to continue to develop more programming for MadRailers. In 2014 we’d love to explore jointly sponsored meetups with other local tech groups, as well as move towards more diversity in our speaker lineups and membership. We’ll be sponsoring more Newbie Nights as well as intermittent Hack Days.
It’s been a pretty good year! I was somehow able to maintain some level of progress on my own development projects even with the birth of the first kid, and I’m getting out of my comfort zone in my professional development which is refreshing. Here’s to continuing to learn and improve in 2014!
At the time I’d been doing quite a bit of Powershell work, and coincidentally I stumbled on the Github post on how they build the Github for Windows application. In that post I saw a tantalizing screen capture of their build/deploy script output and knew at once that I must have it.
From that image I reverse engineered the steps my script needed to take, and then I had to figure out how to implement each step. It’s worth noting that ClickOnce setting manipulation and deployment are not available via scripting or MSBuild commands. The code below includes my solution to these issues.
Below is the output of my own build and deployment script. A couple of important notes:
*(code listing omitted)*
The script is written to be very silent unless an error occurs, so below is a more detailed description. The script does the following:
Please note that I’ve changed a few things about the script below:
I’ve included the entire (sanitized) script below, and after it I describe the interesting parts in greater detail.
*(code listing omitted)*
You may have noticed that I don’t take any specific action to manage the version numbers of Executable.exe and Library.dll even though I explicitly check out the AssemblyInfo.cs files.
The MSBuild Extension Pack is an open-source collection of MSBuild targets that make things like version management much easier. After adding the extensions to a relative path to my projects I just needed to add the following near the bottom of `Executable.csproj`.
*(code listing omitted)*
A couple things to note here: the `Condition` attributes on lines 5 & 6 ensure that the version increments only occur when I run the `Deploy.ps1` script, as opposed to every time I build through the Visual Studio IDE. The above code is used in both `Executable.csproj` and `Library.csproj`, so that both the executable and the library have their version numbers managed. In doing this I can also change the major/minor versions of the executable and library independently.
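The increment itself is just string surgery on the last component of a four-part version. A rough sketch of the idea in Ruby (the Extension Pack’s MSBuild task does the real work; this is only an illustration):

```ruby
# Bump the build (last) component of a four-part assembly version,
# mirroring what an auto-incrementing version task does.
def bump_build(version)
  parts = version.split(".").map(&:to_i)
  parts[-1] += 1
  parts.join(".")
end

puts bump_build("1.2.3.7")  # => "1.2.3.8"
```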
As I mentioned earlier, I wanted to keep the installer version the same as the executable version. The problem was that there’s no way to manage the ClickOnce settings via MSBuild or other API. Lines 35-41 of the script are the, ahem, workaround that I devised.
Since we want to set the ClickOnce installer version to the same as the executable, we must first fetch the executable version:
*(code listing omitted)*
This line uses the powerful object piping capabilities in Powershell to fetch the FileVersion property from the assembly itself.
Once we have the executable version, we must then somehow insert it into `Executable.csproj`, where the ClickOnce settings are defined. For reference, the associated XML from the csproj file is:
*(code listing omitted)*
Lines 35-41 read the csproj file in as XML and extract the `ApplicationVersion` node. They then replace the contents of that node with the assembly version we read from the executable and save the entire XML structure back to the csproj file.
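Since the fix boils down to “parse the csproj as XML, swap one node, write it back,” the same move is easy to sketch outside of Powershell. Here it is in Ruby with the stdlib’s REXML, against a pared-down csproj (a real project file adds an XML namespace that the XPath would need to account for, and the version value here is invented):

```ruby
require "rexml/document"

# Simplified stand-in for a csproj file with ClickOnce settings.
csproj = <<~XML
  <Project>
    <PropertyGroup>
      <ApplicationVersion>1.0.0.0</ApplicationVersion>
    </PropertyGroup>
  </Project>
XML

doc = REXML::Document.new(csproj)
# Find the ClickOnce version node and overwrite it with the
# version read from the built executable.
node = doc.elements["//ApplicationVersion"]
node.text = "1.2.3.8"
doc.write($stdout)
```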
Through automating the build and deployment process I’ve learned a lot about Powershell and MSBuild and I’ll definitely be improving this in the future. The great thing about this particular combination of tools is that Powershell provides the glue that holds together the powerful build automation (and logging) that MSBuild offers.
While it’s unfortunate that ClickOnce has so many manual aspects to it (and I think I know why) the ease of XML manipulation and file processing from Powershell make it easy to work around ClickOnce’s lack of automation.
In the future I may look at moving the install/upgrade process to the WiX Toolset, as it’s much more configurable and automatable. ClickOnce was really a stop-gap solution: this is an internal tool, and ClickOnce was simple enough for my bootstrapping needs.
My day job is to wrangle 12 million lines of 20+ year-old C and C++ using a custom Win32-based UI library that was built in the mid-90s and never fundamentally improved. It does what it was designed to do really well (bind transactions to views, zoom between list items, the transactions they compose, and the reports on those transactions) but sometimes I idly wonder what’s been going on in bleeding edge Windows app development in the interim.
Last week I downloaded and played around with GitHub’s excellent Windows client. I vaguely remembered a blog post on the gear underlying the desktop app awhile back, and I was interested to see where it was at these days. You know you’re a geek when you read the entire list of licenses in the About view to get a sense of the underlying technology.
After looking around at the various .NET libraries involved, and reading some follow-up blog posts I decided to build a front-end to a developer tool I whipped up for our developers and QA engineers at work.
Essentially, in debug mode QuickBooks Payroll can talk to a variety of backend environments. Configuring the various endpoints involves editing several config files that are in different places depending on whether you have an installed build or a developer build. To make it easier to configure things I wrote a command-line tool that can tell you what environments you’re currently configured to talk to, as well as list available environments and change the current environment. I built the command-line tool on top of a library that implemented all the core logic because I knew that I’d eventually want to build an easier-to-use GUI on top of it.
So as a little weekend project I combined inspiration from GitHub for Windows’ Metro/Modern visual design with a desire to look further into ReactiveUI’s take on multi-threaded UI.
Multi-threaded UI development typically has one sticking point: it’s easy enough to define a lambda or function and set it up to run on a separate thread, but one has to be very careful when moving the result of that lambda back onto the UI thread to display it. ReactiveUI’s secret sauce is to simplify this dangerous activity and deliver a true ‘fire-and-forget’ multi-threaded solution.
Thus far I’ve found that it’s very easy to chain async commands AND provide a nice, responsive user experience. In cases where certain application behaviors are gated on other actions completing, the ReactiveUI framework makes things extremely simple.
An example:
The state machine representing the above workflow was implemented with three View Models bound to one XAML MainWindow. The cleanest one is below, and I’ve commented how the commands to check for updates and download and install updates are implemented (I’m still working on tightening up the other two).
I’ve included the entire file below because I think it’s valuable to view the initialization of the commands and observers and their targets in context. This view model encapsulates all of the updater functionality and how the different states of updating are exposed to the UI.
*(code listing omitted)*
The data that the updater object fetches lives on an intranet share, so it was important to check for and apply updates asynchronously (especially if, like me, you’re working over the VPN).
I’m unsure at this point if it’s ok that I’m updating the `UpdateState` property from within many of the lambdas. I feel that there’s something risky going on there but everything seems to work fine for me as it is.
I’ll break down the two main uses of ReactiveUI’s async commands below. Note that I’m using the `UpdateState` property primarily outside of this class; the window binds certain elements’ visibility to the state of the updater object, but only when the state has certain values.
*(code listing omitted)*
When the async function returns the new `EnvironmentsFileUpdater` instance and sets the backing field of the `Updater` property, I then lean on an Observable. Below you can see that I set it up such that when the `Updater` field changes we check whether an update is available, and then either kick off the `DownloadUpdates` command or set the overall state to `UpdateState.Completed` to signal to the external listeners that the update process is complete.
The `Status` property is the string value bound to the UI; it keeps the user apprised of what’s happening in the update process.
*(code listing omitted)*
The `DownloadUpdates` command actually handles the work of downloading the latest environment definitions and installing them to the appropriate local storage mechanism. Again, it must reach out over the intranet, and so it’s best to make this action asynchronous. Most of the code you see below is concerned with updating the UI-bound status value; line 5 is the key method.
*(code listing omitted)*
I hope this has been a decent non-trivial example of how to use the ReactiveUI framework to build WPF view models; if it looks interesting, have a look at the great docs that have been synthesized from Paul Betts’ blog-post examples of different usages of the framework.
I hadn’t put time into maintaining my fitbit gem reference app in quite a while, and when I got a pull request to update it to the latest Rails version I at first hesitated. After talking with some folks at my coworking space, however, I decided to use some holiday downtime to pull in the changes and see if I could build on them to improve the quality of the site.
At first I was horrified to learn that I’d stopped in mid-refactoring and I realized it was no wonder that Marcel (the submitter) reported he’d had trouble getting tests to run. The first step was to delete whole directories of files and tests that no longer conformed to the current design of the site and ensure that the files that remained all contributed to that design.
I grabbed Marcel’s pull request and got to work merging it in, at which point I looked at the login code and got embarrassed again. The truth is, the fitgem library wasn’t really built for managing oauth logins; instead it is optimized for using a token/secret for a given user to fetch data from the API. Yes, you could use fitgem to facilitate an oauth login process, but it certainly wasn’t built with rails in mind and you had to roll your own controllers, views, and token and secret management.
The thing is, we already have a library that does that sort of thing: OmniAuth is a fantastic library for consuming pluggable login strategies. I didn’t have to search long before I found a fitbit strategy for omniauth, and I set to work integrating it into the reference app.
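For reference, wiring a strategy up is the standard OmniAuth initializer. Something like the following, where the provider symbol and the ENV variable names are assumptions from my setup rather than anything prescribed by the gem:

```ruby
# config/initializers/omniauth.rb -- a sketch; consult the
# omniauth-fitbit README for the exact provider options.
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :fitbit, ENV["FITBIT_CONSUMER_KEY"], ENV["FITBIT_CONSUMER_SECRET"]
end
```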
Luckily I had been working a lot with OmniAuth lately, as I’d implemented logins via Facebook and Twitter in another project just the week before. As always I leaned heavily on Ryan Bates’ excellent Railscasts in getting up to speed on how OmniAuth worked with Devise, and specifically with Twitter and Facebook. <small-plug>Railscasts Pro ($9/month) has been totally worth it for me. The Pro episodes go into a lot of depth and I’ve learned a ton from them.</small-plug>
Using the omniauth-fitbit gem I was able to delete a lot of now-redundant code for signing into/up for the application. There were, however, a few hiccups along the way that I wanted to note:
*(code listings omitted)*
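The crux of the integration is mapping OmniAuth’s auth hash onto the token/secret pair that fitgem wants. Assuming the standard OAuth 1.0a auth-hash schema (`credentials.token` / `credentials.secret`), a minimal extraction helper might look like:

```ruby
# Pull the OAuth token/secret pair out of an OmniAuth auth hash
# (standard OAuth 1.0a schema: credentials.token / credentials.secret).
def fitbit_credentials(auth_hash)
  creds = auth_hash.fetch("credentials")
  { token: creds.fetch("token"), secret: creds.fetch("secret") }
end

# A hypothetical auth hash, shaped like what the strategy hands back.
auth = {
  "provider"    => "fitbit",
  "uid"         => "ABC123",
  "credentials" => { "token" => "t0ken", "secret" => "s3cret" },
}
fitbit_credentials(auth)  # => { token: "t0ken", secret: "s3cret" }
```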
Eventually I should use a better pattern for this, but for now it’s simplistic, and anywhere you have a logged-in user in your application you will be able to call `fitbit_data` to get access to a configured FitbitClient instance.
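As a sketch of what such a helper can look like (the names and the stand-in client class here are mine, not the reference app’s; the real helper would build a `Fitgem::Client` from the stored credentials):

```ruby
# Stand-in for the fitgem client, purely to show the helper's shape.
FakeClient = Struct.new(:token, :secret)

module FitbitHelper
  # Build the client once per controller instance and reuse it on
  # every subsequent call.
  def fitbit_data
    @fitbit_data ||= FakeClient.new(current_user[:token], current_user[:secret])
  end
end

class SessionsController
  include FitbitHelper

  # Hypothetical credential lookup; a real app reads the user record.
  def current_user
    { token: "t0ken", secret: "s3cret" }
  end
end

client = SessionsController.new.fitbit_data
client.token  # => "t0ken"
```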
And with that we’re logging in via Fitbit and using the token/secret returned via omniauth-fitbit to fuel data retrieval from the Fitbit API through fitgem.
In each thread where the topic appears there’s someone who says, “I wish I didn’t have to have a Facebook account, but unfortunately it’s required.” Someone responds and notes that no, it’s not required; all they have to do is delete their account. That (sensible) response is then met a million times over with the now-ubiquitous refrain: to be a fully-fledged member of a social group in 2012 requires one to be on Facebook. And to that I restate the obvious: no, it really doesn’t.
How many devices do you use on a daily basis? Phones, laptops, tablets, desktops, kiosks, and more. There exists a whole world outside of Facebook and all you have to do is grasp it. I prefer phone calls or coffee with friends over email but it’ll do in a pinch. I share my photos through Flickr and Twitter (depending on whether they came from my DSLR or iPhone). I blog at a skrillion different venues. People know where to find me online and IRL, and I’m confident that within my social circle I won’t be left off some invite just because I’m not on Facebook. My friends are my friends— we enjoy each others’ company! What I found when I deleted my Facebook account was that none of my friends really used it for anything meaningful. My wall was eternally filled with high school people I hadn’t seen/talked to/cared about in a decade, and extended family that I didn’t talk to much anyways.
In the last 5 years I’ve taken two significant (though very First-World-Problemy) steps: I got rid of my car, and I deleted my Facebook account. Interestingly enough at the time I did each of these things I was much more uncomfortable and worried about having no car than that I would miss some crucial social interaction by leaving Facebook.
When I left Facebook there was much more a sense of relief than anything else; I suggest you try it.
I’m working off the conference hangover. Email catch-up, prioritizing tasks, etc. took up my time this morning, but I couldn’t let Madison’s premier software conference fade without writing down some thoughts.
Firstly: much love to Jim and Jen Remsik as well as all the volunteers, speakers, and organizers that helped make v2 of Madison Ruby even more fun and entertaining than the inaugural conf last year. The diversity of topics was again front and center: from team dynamics, to talks about social justice, to funky drummers, to sources of inspiration and the finer points of immigration law, the organizers found a way to communicate technical content while advancing the community in other areas as well.
First up was the Design Eye for the Dev Guy or Gal workshop on Thursday. I misunderstood the focus, which is mostly because I signed up for it before a comprehensive description had been published. Wynn Netherland did a great job laying out the finer points of HTML5, Sass, Compass, and more. I had expected the workshop to range more towards the philosophy of web design, rather than a technical how-to around using the tools. Even so, however, I learned a lot about Compass. My previous aversion stemmed from when compass and blueprint were sold as a boxed set, but I’m definitely going to check it out on the next major project.
Highlight: In a discussion about the ubiquity of Twitter Bootstrap Wynn asked how many folks were using Octopress for their dev blog, and then added, “You know, you don’t HAVE to use the default style… you can add some color in there.” sheepish grin
I may have had a little too much fun at the Github-sponsored drinkup on Thursday evening; waking up and riding my bike downtown at 7am was a monumental task.
Friday’s talks were very interesting, especially the Anti-Oppression 101 talk by Lindsey Bieda and Steve Klabnik. Though I had seen it at a previous MadRailers meetup, I also enjoyed Matthew Rathbone’s talk about how Foursquare uses Hadoop for its various data processing needs. Unfortunately, I had to take off early on Friday and missed some of the afternoon talks, including what I heard was a fantastic 45 minutes with Clyde Stubblefield.
Saturday I was feeling a lot better, and hit the Farmer’s Market early for coffee and pastries. I’m embarrassed to say that I hadn’t looked at the schedule closely enough, so it was an extremely pleasant surprise that the first talk was by Paolo Perrotta, author of the best Ruby book in existence: Metaprogramming Ruby. It was a great talk about ghosts, fake ghosts, and all manner of potential problems one will encounter when using the metaprogramming aspects of the Ruby language.
Later in the day came several successive talks that excited the hell out of me, but none more so than Leon Gersing’s talk on the Weird in programming. It’s difficult to describe, so I’ll just say that if he’s speaking at a conference, get your ass to that talk and be prepared to enjoy it.
The Teaching Rails panel discussion late in the day on Saturday was interesting. I almost didn’t attend because I thought it was going to be about how best to teach Rails to new folks. Instead it was more about the business of teaching Rails and how each of the four panelists approached it. Ultimately the panel dovetailed into a discussion of certification, payscales, and other topics of high interest to any software developer. Jeff Casimir’s innocent suggestion of some kind of certification wasn’t considered for a single second before being shouted down by everyone else on the panel and in attendance. I think his idea has merit even though the operational reality is messy; rather than a flat “NO CERTIFICATION,” what’s needed is a discussion about what certification is trying to accomplish and how it could be done without turning into a paradise for grifters and dummies.
This was just a taste of the goings-on at Madison Ruby 2012. Others will likely have a fuller rundown of the events so I just hit what were, to me, the highlights. If you’re down on the Ruby tip I highly suggest you try to get to Madison for next year’s conference. You can already hit the Very Early Bird registration page and get a ticket for the low, low price of $199. Do it, and come hang out with me next year!
At around 3am one friend drunkenly burst into the hotel room and started going on and on about how great he was doing, gambling-wise. He talked at length even as it became obvious that he was winding down like some kind of strange children’s toy. Then he surged out of bed and said, “I have an idea!” Another friend asked what it was and got the classic response, “I’ll need $300 of your money.”
“And then what?” asked the friend, genuinely intrigued.
Alas, friend one was already passed out. When asked the next morning what the great $300 was he had no idea what we were talking about.
I think about this story every time I see a vaguely defined, ill-conceived Kickstarter campaign that will ultimately result in a fugue state where everyone wonders what happened to their money in the morning.
(Note: this is not a jab at Kickstarter itself; I think there’s some great stuff going on there.)
For me, starting projects is a great time to be alive. I love bootstrapping repeatable builds, testing systems, automatic doc generation, etc. It’s a reflexive response having seen so many projects that were doomed from the word ‘go’ by developers who named folders any old thing. Even folks that joined the team within the first year were confused as to where code was located and what (if any) naming conventions existed.
Similarly, I’ve been writing software for millions of users for ten years now and these days I think intensely about architecture when bootstrapping new components or applications. Making the appropriate trade-offs between short- and long-term design up front will accelerate development while leaving you open to pivoting or scaling when the time is right.
My problem comes with the final touches of a release. For example, often when I’m working on a new release of fitgem I code up the functionality, write new tests, write inline API documentation as I create new methods, etc. After I think I’m done I’ll take the library for a spin in Pry and start coding against the new API functions. Things will go well for a while and then I’ll hit a snag where I forgot a parameter or had a typo in the code. Great! My ad-hoc testing revealed something I hadn’t included in the unit tests. AND THEN I RELEASE. This is not a good thing, but I often spaz out when I get 95% of the way to finishing, hit a snag, and then fix my issue. I guess I feel exasperated about how long the thing is taking and then just wig out and push it (WE’LL DO IT LIVE!)
To be a good finisher I need to slow down and restart my release process (as light as it is!) when I hit problems. I need to suppress the anxiousness when I hit minor problems before pushing live, go back to the top of my checklist, and start it over. Doing that on small projects like a gem will instill good mental resistance to spazzing for large, important releases.
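One way to build that resistance is to make the checklist itself executable, so that any failed step forces the whole run to start over from the top. A toy sketch of the idea (the step names are invented, and each step here is a stub standing in for a real command):

```ruby
# Release steps run in order; any failure aborts the run so the next
# attempt restarts from the top of the checklist.
RELEASE_STEPS = {
  "run unit tests"    => -> { true },
  "smoke-test in Pry" => -> { true },
  "build gem"         => -> { true },
}

def run_checklist(steps)
  steps.each do |name, step|
    return "stopped at: #{name}" unless step.call
  end
  "released"
end

puts run_checklist(RELEASE_STEPS)  # => "released"
```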
I am uncomfortable specifically with the “engineer” label, and that discomfort stems primarily from a panel discussion I attended in conjunction with the UW Engineering Career Services group.
For some background, at the University of Wisconsin-Madison, Computer Science is not part of the School of Engineering, but a department in the School of Letters & Science. Even so, Computer Science students are given leave to participate in School of Engineering career fairs and recruiting due to the crossover technical skillset.
As someone who conducts on-campus interviews and attends career fairs for my company at UW I was invited to a panel discussion on “starting your first engineering job after graduating.” It was a fantastic topic for a panel because many seniors are so busy interviewing, wrapping up courses, and thinking about relocation concerns that they often don’t think much about the actual beginning of a professional career. Other folks on the panel included chemical engineers, industrial engineers, civil engineers, mechanical engineers and more.
Topics ranged across the board (Do you work a lot of hours? Do you like your boss? Have you been promoted yet?) but one that stuck with me was the question: “Should I pay/study for my engineer certification? Is it better to do it before interviewing or will my employer fund the cost?” Many engineers on the panel had various opinions and it really did vary widely by field, but when it got to me I just shrugged and told the truth: there is no professional certification to be a software “engineer”. There are certainly certifications for working with a particular software system, but there is no certification that will allow you to be insured against writing buggy software.1
I admire engineers and really only want to drive on bridges and use power tools that were designed by professional engineers. Software developers CAN produce incredible feats of redundant, bug-free software system design and implementation but with the stipulation that time and process are needed to accomplish such things. There likely do exist folks with the sufficient experience and knowledge to be called software engineers, but I am not one of them and they are few and far between.
In the tech sphere this is really more of an academic conversation; we all speak the same language. Everyone at your software company has some flavor of “engineer” in their title, and no one thinks twice about it. When thrust into the larger world of engineers, however, the loosey-goosey nature of software development and its similarity to a craft more than a discipline make strict adherence to standards of proven technical excellence difficult.
What do you think? Does the typical professional software development environment make you think of “engineering” in the same way that mechanical or civil engineers practice it? Does the fact that we do TDD/BDD make up for the fact that software tends not to have perfectly understood natural laws it must follow?
Since I wrote my latest blog engine to take markdown input it was easy-as-pie to export the existing posts to files and then use them to bootstrap this Octopress site. For now I’ll ride on the default theme and see how it goes. I hope to spend more time iterating on my presentations and a bunch of C++ code that I’m working on for a work project, so it’s also a plus that I don’t have to maintain the platform source tree anymore.
New features include glucose, blood pressure, and heart rate logging and retrieval methods. Documentation has also been updated to include new endpoints for the Time Series interface.
Source | Gem | Issues | Documentation
```ruby
comments = Comment.all
```
becomes
comments = Comment.all
When I rewrote the code for this web site awhile back I decided to support markdown, but didn’t really think I’d use it much; maybe to do code blocks but not much else. Now I find myself writing markdown for all content, and only translating it through various interpreters when I need the HTML. In retrospect I’m extremely glad I went through the effort to add markdown support for the site— I use it all the time now.
There’s an incredible app for writing markdown on the fly: Mou is a Mac application that lets you input markdown on the left pane and the HTML is dynamically updated in the right pane. I’m literally using it right now to write this post, just to get a preview of what it will look like when I drop it into the <textarea> on the web site.
I basically use it for all markdown content authoring, and then I copy/paste the result into whatever application/content field I will use it in. It allows me to do edit/revision cycles on content as well as seeing the application of styles. Highest recommendation.
I also recently forked git-wiki and started working on improving it for my own use. It also uses Markdown to write the wiki entries, which gets a little weird if you’ve ever used MediaWiki or Wikipedia. The default way to define wikiwords in git-wiki didn’t really work for me so I updated it and checked everything back into my own repo. I’ll be updating more about my git-wiki fork once I get it refactored and a little more configurable.
So there you go: markdown is awesome and you should write your content in it; if you’ve never worked with it, try it out on Github (just make a public repo and create a README.md).
In addition to the new features, I thoroughly documented the entire library API (available here), added many more unit tests, refactored the subscriptions methods, and moved the oauth process documentation to the project wiki on github.
Source | Gem | Issues | Documentation
Back in 2002 I got my first exposure to SQL while doing elementary PHP programming. I read a little about MySQL back then, but never really went deep into things such as indexes and optimizations. Rails obscures a lot of that stuff, but I wanted to be able to optimize efficiently and learn more about the underlying technology. So far this book has been really good at diving down underneath your SELECT, UPDATE, CREATE, and DROP actions.
This is bar none my favorite Ruby book, and in my opinion is far better than the Pickaxe book for those who are coming to Ruby from another language/ecosystem. The fellas at Bendyworks have been doing a bi-weekly book club on this book and I’ve happily attended as many as I could to talk about it. This book goes into incredible, understandable detail about the object model and dynamic nature of programming in Ruby. It gets my highest recommendation.
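For a taste of the territory the book covers, here is a minimal “ghost method” sketch of my own (a toy example, not taken from the book): the class defines no accessors, yet `method_missing` conjures them from a hash at call time.

```ruby
# Profile has no explicit reader methods; method_missing answers for
# any attribute present in the hash, and respond_to_missing? keeps
# respond_to? honest about those ghost methods.
class Profile
  def initialize(attrs)
    @attrs = attrs
  end

  def method_missing(name, *args)
    @attrs.key?(name) ? @attrs[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    @attrs.key?(name) || super
  end
end

profile = Profile.new(name: "Ada", city: "Madison")
profile.city  # => "Madison"
```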
My college computer science curriculum was in C++ and Java. My job at Intuit involves C/C++/C# development, but it tends to fall heavily into the C++/C# side of things. I respect the hell out of Zed Shaw’s outlook on programming, and truly believe that a working knowledge of C development will take you far. To that end, I’ve been working through Learn C the Hard Way online while reading this superb book. I also purchased the original The C Programming Language book, but A Modern Approach is, in my opinion, superior as a reference and primer.
My technical reading list changes often; I’ll usually read through three books over the course of several weeks and then pull more off my shelf and refresh myself on a different topic. It’s why I also buy books on the periphery of my interests just to have them on my shelf. I enjoy having the references available, even if I don’t read them as soon as they arrive.
]]>Upcoming plans include:
Source | Gem | Issues | Documentation
activities_on_date
pp client.activities_on_date("2011-10-24")
{"activities"=>
[{"activityId"=>2130,
"activityParentId"=>2130,
"activityParentName"=>
"Weight lifting (free, nautilus or universal-type), light or moderate effort, light workout, general",
"calories"=>228,
"description"=>"",
"duration"=>3600000,
"hasStartTime"=>true,
"isFavorite"=>true,
"logId"=>2136964,
"name"=>
"Weight lifting (free, nautilus or universal-type), light or moderate effort, light workout, general",
"startTime"=>"16:15"}],
"goals"=>
{"activeScore"=>1000,
"caloriesOut"=>2911,
"distance"=>5,
"floors"=>10,
"steps"=>10000},
"summary"=>
{"activeScore"=>495,
"activityCalories"=>927,
"caloriesOut"=>2458,
"distances"=>
[{"activity"=>"total", "distance"=>2.07},
{"activity"=>"tracker", "distance"=>2.07},
{"activity"=>"loggedActivities", "distance"=>0},
{"activity"=>"veryActive", "distance"=>0.3},
{"activity"=>"moderatelyActive", "distance"=>0.68},
{"activity"=>"lightlyActive", "distance"=>1.09},
{"activity"=>"sedentaryActive", "distance"=>0},
{"activity"=>
"Weight lifting (free, nautilus or universal-type), light or moderate effort, light workout, general",
"distance"=>0}],
"elevation"=>90,
"fairlyActiveMinutes"=>118,
"floors"=>9,
"lightlyActiveMinutes"=>102,
"marginalCalories"=>562,
"sedentaryMinutes"=>1210,
"steps"=>4590,
"veryActiveMinutes"=>10}
}
data_by_time_range
pp client.data_by_time_range("/activities/log/floors", {:base_date => "2011-10-24", :period => "1d"})
{"activities-log-floors"=>[{"dateTime"=>"2011-10-24", "value"=>"9"}]}
foods_on_date
The foods_on_date method returns the food log, goals, and nutrition summary for a given date:
pp client.foods_on_date("2011-10-24")
{"foods"=>[],
"goals"=>{"calories"=>2911},
"summary"=>
{"calories"=>0,
"carbs"=>0,
"fat"=>0,
"fiber"=>0,
"protein"=>0,
"sodium"=>0,
"water"=>0}
}
pp client.devices
[{"battery"=>"Full",
"id"=>*****,
"lastSyncTime"=>"2011-10-25T09:54:40.000",
"type"=>"TRACKER"}]
food_info
There is a new method that will fetch detailed food information given a food ID. Food IDs are generally retrieved by searching for a food using find_food.
# Get food info for an apple
pp client.food_info 20711
{"food"=>
{"accessLevel"=>"PUBLIC",
"brand"=>"Shoney's",
"calories"=>81,
"defaultServingSize"=>5,
"defaultUnit"=>{"id"=>226, "name"=>"oz", "plural"=>"oz"},
"foodId"=>20711,
"name"=>"Apple",
"units"=>[226, 180, 147, 389]}
}
find_food
Searching for foods by name with find_food returns a list of matching results:
pp client.find_food("GoLean Raisin Bran Crunch Cereal")
{"foods"=>
[{"accessLevel"=>"PUBLIC",
"brand"=>"Kellogg's",
"calories"=>110,
"defaultServingSize"=>1,
"defaultUnit"=>{"id"=>17, "name"=>"bar", "plural"=>"bars"},
"foodId"=>20942,
"name"=>"Cereal Bar, Raisin Bran Crunch",
"units"=>[17]},
{"accessLevel"=>"PUBLIC",
"brand"=>"Safeway",
"calories"=>230,
"defaultServingSize"=>1,
"defaultUnit"=>{"id"=>304, "name"=>"serving", "plural"=>"servings"},
"foodId"=>79051,
"name"=>"Raisin Bran Crunch Cereal",
"units"=>[304, 91, 256, 279]}]
}
]]>These days the information in question is often a tool, configuration, or piece of code that radically shortcuts an oft-repeated activity. It has nothing to do with luck or the ability to use Google, however. The delight in finding the perfect tool or option is often that you never conceived it could exist in the first place.
The starkest example I’ve had recently came in my quest to learn vim. In changing over from TextMate to vim I would often end up pondering really newb questions: how do I cut a line of text? How do I move to the end of a file? A couple of HI-YAH’s of Google-Fu later I’ve got the answer and I’m trucking along. Some days, however, I’ll be reading something unrelated and suddenly learn that dt followed by a character will delete from the cursor position to the next instance of that character. It’s weird, but that was a game-changer for me. Changing out a string became: dt” and then insert and go.
I could also characterize my use of guard this way. I was at a Mad-Railers hack day a while back and Brad happened to be using guard to run rspec during TDD. It was a revelation that I could just edit files and tests and the suite would automatically rerun. Here’s where things get interesting, though: it had previously never occurred to me to even look for a way to speed up my tests or optimize the time spent manually executing them. It was a problem I didn’t know I had. After learning about guard I looked through the list of available guards and found some other interesting ones. These additional plugins for the guard model were also solving problems I didn’t even know were bugging me until I found they were solved problems.
This all came to me this morning as I was watching Ryan Bates’ Railscast on using Spork to speed up test execution further by preloading the Rails framework and keeping it in memory between tests.
When worrying about getting a primary goal completed (say, a feature in an application) you don’t often think about solutions to one-off problems or annoyances, let alone prioritize solving them. This is partially why I invest time at the beginning of a project to develop and optimize the code-test-fix cycle, why I obsess over the deployment model when I only have a README, and why I like continuous integration before a test has been written. Once your process for coding is solid, you can optimize it piecemeal as you need to or as you find unexpected tools or processes that would increase your productivity drastically.
An example: my tests were already running really well under guard, but after 15 minutes of installing and configuring guard-spork I saw immediate speed gains.
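For reference, a minimal Guardfile along these lines might look like the following. This is a sketch of the era’s guard-spork + guard-rspec setup, not my exact configuration; the watch patterns are illustrative:

```ruby
# Guardfile (hypothetical example)
# Spork preloads the Rails environment; rspec runs against it over DRb.
guard 'spork' do
  watch('config/application.rb')
  watch('config/environment.rb')
  watch('spec/spec_helper.rb')
end

guard 'rspec', :cli => '--drb' do
  # Rerun a spec when it changes...
  watch(%r{^spec/.+_spec\.rb$})
  # ...and rerun the matching spec when app code changes.
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end
```

With this in place, `guard` watches the filesystem and reruns only the affected specs on save.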
]]>After some googling and reading Stack Overflow I quickly came across the likely culprit: Paperclip’s :command_path option wasn’t being set correctly. The problem, however, was that it looked like I was setting the ImageMagick path correctly:
$ which identify
/usr/local/bin/identify
I kept double- and triple-checking that the option was set correctly in my config/environments/staging.rb file:
WhazzingCom::Application.configure do
  # ...
Paperclip.options[:command_path] = "/usr/local/bin"
end
That seemed to be correct, but I still kept getting the same error. Eventually I came across an excellent suggestion that perhaps my installed version of ImageMagick was incomplete, corrupt, or missing libs or dependencies. First I reran the ImageMagick apt-get install…
$ sudo apt-get install imagemagick --fix-missing
…which installed identify to /usr/bin rather than the existing version in /usr/local/bin. I then updated my environment file to the following…
WhazzingCom::Application.configure do
  # ...
Paperclip.options[:command_path] = "/usr/bin"
end
…and refreshed the page and everything worked as expected!
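To catch this class of problem earlier next time, a small sanity check could be dropped into an initializer. This is a hypothetical helper, not part of Paperclip; the path and binary name are the ones from this post:

```ruby
# config/initializers/check_imagemagick.rb (hypothetical)
# Fail loudly at boot if the identify binary isn't where we expect it.
command_path = "/usr/bin"
identify = File.join(command_path, "identify")

unless File.executable?(identify)
  warn "ImageMagick's identify binary not found at #{identify}; " \
       "image processing will fail until it's installed."
end
```

A check like this turns a confusing runtime Paperclip error into an obvious startup warning.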
]]>I’d recently read the excellent Responsive Web Design and it made me very excited to design a site that effortlessly works on phones, tablets, monitors, and TVs. Another project I have in mind is primarily aimed at mobile form factors, so I thought a semi-major experiment was in order to ensure I was thinking in the right direction when it came to media queries and mobile design.
I built whazzing.com with ruby 1.9.2, rails 3.1, mysql, and various gems. My first commit was August 19th, 2011 and I deployed version 1.0 to production on September 20th, 2011.
As I started coding this site, Rails 3.1 was entering its final release candidate testing phase, so I decided to start fresh on it. The asset pipeline is just a terrific addition to the framework, and combining it with the style organization I’ve been developing off of Dale Sande’s incredible Axle framework provided for a very smooth visual design experience. I added a little bit to Dale’s CSS organization to support the media queries I wanted.
assets
|-- stylesheets
    |-- admin.css.scss
    |-- application.css.scss
    |-- design.css.scss
    |-- devices
    |   |-- phones.css.scss
    |   |-- tablets.css.scss
    |   |-- high-resolution.css.scss
    |-- imports
    |   |-- colors.css.scss
    |   |-- form.css.scss
    |   |-- mixins.css.scss
    |   |-- navigation.css.scss
    |   |-- text.css.scss
    |-- reset.css.scss
    |-- typography.css.scss
One of the interesting takeaways from the excellent testing panel at Madison Ruby was that people are divided about the usefulness of cucumber. I had always stumbled in wrangling cucumber to be really useful for the applications I was writing, so I decided to eschew it this time. I focused instead on getting good coverage via my RSpec controller and model tests. I found that leaving cucumber aside really streamlined the process and kept me from eternally feeling like I was losing focus on the functionality I wanted due to the myriad tests demanded by all the tools I was using.
Madison Ruby was a fantastic source of information on how to write better ruby code and rails applications. In particular, Jeff Casimir’s talk on using view-models and decorators as opposed to helpers in rails was food for thought. Halfway through implementation on this site I switched over to using draper for my presentation logic and I love it.
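The decorator idea itself doesn’t require any particular gem. A hand-rolled version in the spirit of draper can be built on the standard library’s SimpleDelegator; the Post struct and formatting rules here are hypothetical:

```ruby
require 'delegate'
require 'date'

# A stand-in for an ActiveRecord model.
Post = Struct.new(:title, :published_at)

# Wraps a Post and adds presentation logic, keeping view formatting
# out of the model and out of global helpers.
class PostDecorator < SimpleDelegator
  def formatted_date
    published_at.strftime("%B %d, %Y")
  end

  def headline
    title.upcase
  end
end

post = PostDecorator.new(Post.new("Hello", Date.new(2011, 9, 20)))
puts post.headline        # => "HELLO"
puts post.formatted_date  # => "September 20, 2011"
```

Draper adds niceties on top of this pattern (access to view helpers, collection decoration), but the core idea is the same: the decorator delegates everything it doesn’t define back to the wrapped model.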
Another tool I wanted to test out for this project was git-flow. At Scott Chacon’s eye-opening Git workshop I increased my git knowledge 10 times, easily. Going through the underlying structures of git taught me a lot about what’s really going on during commits, push, and branching, and I wanted to try to formalize the release process of this project a bit more using git-flow’s branching model. The result: I loved the structure of the branches, even if this was a one-man project, and will definitely use it on more projects in the future.
Finally, I’ve been more and more intrigued by Markdown since Github and many other sites support it these days. Instead of pouring time into creating a WYSIWYG editor or trying to drop one in, I instead grabbed the Redcarpet gem and simply use markdown to create the posts and project descriptions. I also use albino and pygments for code highlighting.
The problem I had to overcome this time (and every time I jumpstart a project) is that too often I get bogged down on tangents. I’ll get fixated on one unimportant feature or line of code, I’ll get frustrated that I can’t get it the way I want, and eventually I won’t even want to open the text editor due to feelings of dread. This time I was determined to mercilessly cut features until I had something simple I could get coded and deployed, and then iterate from there. A few times I almost got bogged down, but forced myself to leave features on the table for future releases so that I could get my high priority functionality into production.
I settled on four main features for the first release:
My major feature list for the next release includes:
I’ll also, of course, be correcting anything that shakes out while using it.
I’m looking forward to extending this project, while using it as a platform to document the other things I’m working on. I would really like to use redis for some analytics/tracking ideas I have, and I’m interested in how to tie in akismet spam-blocking for comments.
The images along the left side don’t scan well on mobile devices, so I’m going to look into replacing them in the next release as well. I like the structure of the site, but the specific details could be tightened a little.
I’d like to get a styleguide up and running so that my CSS can be standardized a little. I tried as much as possible to be OO in my CSS but failed in a few places that I’d like to clean up.
Many, many thanks to Joe Nelson, from whom I shamelessly stole the idea for the micro-resume. Thanks so much for the idea; it jump-started what became the new site, blog, etc.
Also, thanks to Kevin Burke. I read his blog post on designing your personal site and was inspired to go beyond the usual “reverse chronological order of posts” home page. Instead I tried to create a homepage that advertises all the things I’m working on, blog posts being only one of the pieces.
Thanks to Dale Sande, who helped me step up my CSS game with his excellent Axle framework and thoughts on OOCSS (object-oriented CSS). If you ever get a chance to attend his OOCSS talk definitely do it; as a primarily backend dude I found it immensely helpful for structuring my CSS and semantic markup.
]]>