Easily add/remove Vim scripts

Posted by ryan Tue, 20 Jul 2010 01:39:00 GMT

I like being able to easily add and remove Vim scripts, whether it's to try one out or to upgrade to a newer version down the line. Since the directory structure of a script almost always follows the standard runtime directory structure, I simply wrote a script that adds each directory under $HOME/.vim/vendor to Vim's runtimepath, so that Vim includes the vendor directories when it searches for scripts. That way, I can download something like rails.vim (which has files in autoload, doc, and plugin, and would be very annoying to remove manually), uncompress it into its own directory under $HOME/.vim/vendor, restart Vim, and the script is loaded. Removing the script is as easy as deleting its directory under vendor and restarting Vim.

To add this behavior, simply put the following in your $HOME/.vimrc file:

let vendorpaths = globpath("$HOME/.vim", "vendor/*")

let vendorruntimepaths = substitute(vendorpaths, "\n", ",", "g")
" Use :execute so the value of the variable (not its name) is prepended
execute "set runtimepath^=" . vendorruntimepaths

let vendorpathslist = split(vendorpaths, "\n")
for vendorpath in vendorpathslist
  if isdirectory(vendorpath."/doc")
    execute "helptags ".vendorpath."/doc"
  endif
endfor

For the latest and greatest version of this code, refer to my vimrc.

Auto-saving sessions in Vim

Posted by ryan Tue, 20 Jul 2010 00:42:00 GMT

Back when I used TextMate, I liked that you could save projects to a file so that you could quit TextMate and come back later, load the project file, and have all the files and tabs open and arranged the way you had them. Since switching to Vim, I've gotten the same functionality (and more!) using the :mksession command. One thing that's missing in Vim, however, is the ability to auto-save a session. There are a few add-ons that do this, but they add a bunch of other functionality that I find unnecessary. Below is a little script I wrote that adds auto-saving sessions to Vim:

function! AutosaveSessionOn(session_file_path)
  augroup AutosaveSession
    au!
    exec "au VimLeave * mks! " . a:session_file_path
  augroup end
  let g:AutosaveSessionFilePath = a:session_file_path

  echo "Auto-saving sessions to \"" . a:session_file_path . "\""
endfunction

function! AutosaveSessionOff()
  if exists("g:AutosaveSessionFilePath")
    unlet g:AutosaveSessionFilePath
  endif

  augroup AutosaveSession
    au!
  augroup end
  augroup! AutosaveSession

  echo "Auto-saving sessions is off"
endfunction

command! -complete=file -nargs=1 AutosaveSessionOn call AutosaveSessionOn(<f-args>)
command! AutosaveSessionOff call AutosaveSessionOff()

augroup AutosaveSession
  au!
  au SessionLoadPost * if exists("g:AutosaveSessionFilePath") != 0|call AutosaveSessionOn(g:AutosaveSessionFilePath)|endif
augroup end

To begin auto-saving sessions, simply run:

:AutosaveSessionOn <session filename>

Your session will then be automatically saved to the given session filename when Vim exits. Also, if you have globals in your sessionoptions list (e.g., :set sessionoptions+=globals), then when you load the auto-saved session, auto-saving will continue to that same session file. To turn auto-saving to the session file off, run:

:AutosaveSessionOff

For the latest and greatest version, refer to my vimrc.

Approaching pure REST: Learning to love HATEOAS

Posted by ryan Thu, 24 Jun 2010 05:06:00 GMT

We've been building up a REST API at work for a couple of months now, have an iPhone client, an Android client, and a browser-based client built on it, and are well on our way to using it for a number of other purposes. As far as client and API development are concerned, things are going pretty smoothly. So, when I read Michael Bleigh's article on how he thinks that building a pure REST API is too hard, and probably not worth the time, I was pretty surprised. I started wondering if maybe I'd misunderstood something, despite spending quite some time poring over Roy Fielding's dissertation and scads of other articles by a variety of authors. After some reflection, I've decided that I'm not missing anything, and it's a lot easier than people think to build a pure REST API, once you understand what one is, and have determined that REST is an appropriate architecture for your system.

Learn about REST from credible sources

There's a lot of information out there about REST, so naturally there's also a lot of inaccurate, incomplete, confusing, and misleading information out there. The key to learning about REST, as with everything else, is finding good sources of information. In my research, there were a handful of people who were instrumental in my understanding of the topic: Roy Fielding, Jim Webber, Ian Robinson, Mark Baker, Sam Ruby, Leonard Richardson, and Subbu Allamaraju. Specifically, the following are great sources of information on REST:

REST is a fairly broad topic to understand completely. Only after reading through the above sources, and then some, did I begin to fully grasp what it is, and why it's so important. Due to the contextualized nature of this article, I strongly suggest that you at least skim through the links above before continuing on here.

HATEOAS: The "hard part" about REST

The most sparsely-documented aspect of REST in Roy Fielding's dissertation is the "hypermedia as the engine of application state" (HATEOAS) constraint; it's also the aspect of REST that has the fewest practical examples. It's no surprise, then, that HATEOAS is the part of REST most-often neglected or misinterpreted by API developers. There is one rather popular and ubiquitous hypertext-driven application out there to learn from, however: The World Wide Web.

A popular, yet misleading example

The World Wide Web is a great example of utilizing hypermedia; it's also a source of much confusion about the role of hypermedia in APIs. In his article, Michael Bleigh uses the same example that Fielding uses in a comment on his rant: that web browsers and spiders don't distinguish between an online-banking resource and a wiki resource, and only need to be aware of the "links and forms" and "what semantics/actions are implied by traversing those links". Fielding uses the example to illustrate how clients of REST services need only know about the standard media types used in a service response in order to make use of the service. The problem with this analogy is that browsers and spiders are extremely general consumers of hypermedia.

A web site can be thought of as a REST API, except the media types and link relations that are used are not structured enough to allow most applications to determine the meaning of individual links and forms. Instead, a web site typically relies on a human user to determine the semantics of the site from the natural language text, sounds, and visuals presented to them via HTML, Javascript, Flash, JPEGs, or any other standard data formats at the web site author's disposal. That's fine if you're building a system designed to be used by human beings, but if you're designing one to be consumed by other software, then your system and the systems that use it are going to have to agree on something up-front in order to play nicely.

A more common example

It's an exciting, web-driven world we're building, and clients usually need to do quite a bit more than allow their users to browse or crawl the systems we build. The Twitter API, for example, has services that allow clients to update their status, or retweet one that already exists. Twitter's API is not RESTful, so the documentation for retweeting a status instructs developers to call the service by sending an HTTP POST or PUT request to http://api.twitter.com/1/statuses/retweet/[id].[format].

If the Twitter API were RESTful, clients would need to understand what it means to follow a link to retweet a status. The semantics of such a service are deeper than what Fielding talks about in his comment about browsers and crawlers. At the same time, I think that this deep level of understanding is a more common requirement for clients of web APIs. It's this need for clients to have deep semantic understanding of an API that has Michael Bleigh questioning whether it's even worth the effort to make an API like Twitter's hypertext-driven.

It's not that hard!

HATEOAS is not as difficult to adhere to as people think. If you've ever built an interactive website that includes links and forms in the HTML pages it generates and returns to its users, then congratulations: you've conformed to HATEOAS! For an API to conform to HATEOAS, it must provide all valid operations, what they mean, and how to invoke them, in the representations that it sends to clients. In order to provide this knowledge in-band, it must utilize standard media types and link relations. If the API can't use standard types and relations, then custom ones must be defined. Regardless of what types and relations are used, the point is that clients should be bound to the services they consume at a higher, more generalized level than a specific communication channel, URI pattern, and set of invocation rules. The only difference between building a web site and a web API that conforms to HATEOAS is that the majority of media types and link relations that you'll need for a web site are already defined, whereas you'll most likely need to define some of your own for an API.
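To make this concrete, here's a rough sketch of what a hypermedia-driven retweet might look like to a client. The representation, the "retweet" link relation, and the URIs are all invented for illustration (this is not Twitter's actual API); the point is that the client looks up a relation it understands and follows whatever URI the representation supplies, rather than constructing the URI itself:

require 'net/http'
require 'uri'
require 'json'   # assumes the hypothetical media type is JSON-based

# A representation the server might return for a status, advertising the
# operations that are currently valid on it as links.
status = JSON.parse(<<-REPRESENTATION)
  {
    "text": "Hello, world",
    "links": [
      { "rel": "self",    "href": "http://api.example.com/statuses/123" },
      { "rel": "retweet", "href": "http://api.example.com/statuses/123/retweet" }
    ]
  }
REPRESENTATION

# The client is bound to the meaning of the "retweet" relation as defined by
# the media type, not to a URI pattern or invocation recipe.
retweet_link = status["links"].find { |link| link["rel"] == "retweet" }
Net::HTTP.post_form(URI.parse(retweet_link["href"]), {}) if retweet_link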

Michael Bleigh states in his article that REST requires "too much work from the [service] provider in defining and supporting custom media types with complex modeled relationships", but defining media types and link relations for a REST API simply takes the place of other forms of documentation that would otherwise have to be produced. Subbu Allamaraju has a pretty good article on documenting RESTful applications. Among other things, he highlights how you no longer need to specify the details of constructing requests to specific services within a REST API. The hypermedia constraint of REST requires that all possible requests be constructible at runtime and provided by the API itself; clients must know how to interpret the hypermedia controls, but are then only responsible for interpreting the semantics of specific links and structural elements of the data format. This allows for greater flexibility on the service side to make changes, and greater resilience on the client side to those changes.

Michael states that it's too much work to define "complex modeled relationships", but defining link relations in a REST API's media type definitions is not any more complex than leaving those relationships undefined and requiring clients to figure them out on their own by poking around the documentation. The difference is just that the effort of figuring those relationships out and working with them in a consistent way gets shifted from the (single) service developer to the (multitude of) client developers. If links are provided, but relationship types are not defined or specified, clients must base their behavior on specific links, thus making it harder to change those links. As was stated earlier, having well-defined relationships also encourages consistency and sound design from the service developers, and improves the ease with which clients make sense of and build solutions against an API.

Benefits of HATEOAS

There are many short- and long-term benefits of HATEOAS. Many of the benefits that Roy Fielding talks about, such as supporting unanticipated use-cases, the ability of generalized clients to crawl a service, and reducing or eliminating coupling between a service and its clients, tend only to be fully realized after a service has been available to the public for a while. Craig McClanahan -- author of the Sun Cloud API -- suggested some short-term benefits of adhering to the HATEOAS constraint of REST. One of the benefits mentioned by McClanahan is the improved ability of a service to make changes without breaking clients. Subbu Allamaraju describes, in his previously-mentioned article, the simplification that REST lends to the documentation of services that are written within its constraints.

Another short-term benefit of HATEOAS is that it simplifies testing. I've worked on a number of projects where there was a QA team made up of non-technical people, and my current project is no exception. The QA team performs primarily manual testing of our applications and leaves the automated testing to the developers. They do, however, have a few people on their team capable of writing automated tests. So, I wrote a simple client against our REST API with knowledge of only the hypermedia semantics, which translates the hypermedia controls of our API into HTML links and forms that can then be driven by testing tools like Selenium. This had the added benefit of allowing me to kick the tires, as it were, of the API early on and make sure that we were getting things right by writing a simple client against it. Since the client is bound only to the hypermedia semantics of the API, it's incredibly resilient to change. Also, having the QA team rely on an HTML client ensures that all aspects of the API are hypertext-driven; if they weren't, they couldn't be tested and would be kicked back to development.
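The translation itself can be quite mechanical. As a rough sketch, using the same invented "links"/"rel"/"href" structure as in the earlier example, the test client just turns each link in a representation into an anchor that Selenium can click:

require 'cgi'

# Render the hypermedia links of a representation as plain HTML anchors so
# that a browser-driving tool like Selenium can follow them.
def links_to_html(representation)
  anchors = representation["links"].map do |link|
    rel  = CGI.escapeHTML(link["rel"])
    href = CGI.escapeHTML(link["href"])
    %(<a rel="#{rel}" href="#{href}">#{rel}</a>)
  end
  "<html><body>#{anchors.join("\n")}</body></html>"
end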

It's also possible that defining the media types and link relations used by an API in a standardized, generalized fashion, as required by REST, encourages the developers of an application to think about the consistency and structural clarity of their services; I definitely feel like this is the case when I work on or with REST APIs. The reason for this higher level of thought about the API may be that, once the hypermedia controls have been defined, the technical details are pretty much out of the way and developers are left with determining the structure of the system they're building; it's classic separation of concerns, which has yielded great results from conscientious developers for decades. In addition to encouraging better API design, defining the hypermedia controls and link relations in a consistent, standardized fashion also improves the client developers' ability to make assumptions about the API, thus improving their productivity.

A solution built on top of a true REST API can be bound to the relations and semantics in a media type, whereas a solution built on top of a partially-RESTful API, as they are typically built, is bound to each individual service in the API and the (often-times undocumented and implied) relationships between those services. Michael Bleigh suggests in his article that it's "too much work for clients and library authors to perform complex aggregation and re-formulation of data" when they're built on a true REST API. With the advantages of REST that have been mentioned so far, it should be the architecture of choice for client or library developers that are concerned with building systems that are resilient to or tolerant of common changes in and challenges of a web-based system. Developers that would like the flexibility to bind to the semantic subset of an API that is appropriate to their client or library (general hypermedia, or deep semantic understanding) would also probably prefer a REST API.

Of course, there are also the often-mentioned reasons for using REST: scalability and maintainability. You can't work on a web application these days without having to build an API to drive iPhone, Android, mobile web, or some other client. Twitter's API has quite a few clients already (318 as of 6/21/2010). With so many clients likely to be built against a modern web API, it's important that one be written in a way that is as scalable and maintainable as possible, for the sake of both the clients and the system providing the services. The advantage of exposing services over the web is that the web itself is already designed to be massively scalable and maintainable; the better a web API plays by the rules of the web, as REST requires, the more it can take advantage of those properties.

Introducing Spectie, a behavior-driven-development library for RSpec

Posted by ryan Mon, 02 Nov 2009 03:34:00 GMT

I'm a firm believer in the importance of top-down and behavior-driven development. I often start writing an integration test as the first step to implementing a story. When I started doing Rails development, the expressiveness of Ruby encouraged me to start building a DSL to easily express the way I most-often wrote integration tests. In the pre-RSpec days, this was just a subclass of ActionController::IntegrationTest that encapsulated the session management code to simplify authoring tests from the perspective of a single user. As the behavior-driven development idea started taking hold, I adapted the DSL to more-closely match those concepts, and finally integrated it with RSpec. The result of this effort was Spectie (rhymes with necktie).

The primary goal of Spectie is to provide a simple, straight-forward way for developers to write BDD-style integration tests for their projects in a way that is most natural to them, using existing practices and idioms of the Ruby language.

Here is a simple example of the Spectie syntax in a Rails integration test:

Feature "Compelling Feature" do
  Scenario "As a user, I would like to use a compelling feature" do
    Given :i_have_an_account, :email => "ryan@kinderman.net"
    And   :i_have_logged_in

    When  :i_access_a_compelling_feature

    Then  :i_am_presented_with_stunning_results
  end

  def i_have_an_account(options)
    @user = create_user(options[:email])
  end

  def i_have_logged_in
    log_in_as @user
  end

  def i_access_a_compelling_feature
    get compelling_feature_path
    response.should be_success
  end 

  def i_am_presented_with_stunning_results
    response.should have_text("Simply stunning!")
  end
end

Install

Spectie is available on GitHub, Gemcutter, and RubyForge. The following should get it installed quickly for most people:

% sudo gem install spectie

For more information on using Spectie, visit http://github.com/ryankinderman/spectie.

Why not Cucumber or Coulda?

At the time that this is being written, Cucumber is the new hotness in BDD integration testing. My reasons for sticking with Spectie instead of switching to Cucumber like the rest of the world are as follows:

  • Using regular expressions in place of normal Ruby method names seems like a potential maintenance nightmare, above and beyond the usual potential.
  • The layer of indirection that is created in order to write tests in plain text doesn't seem worth the cost of maintenance in most cases.
  • Separating a feature from its "step definitions" seems mostly unnecessary. I like keeping my scenarios and steps in one file until the feature becomes sufficiently big that it warrants extra organizational consideration.

These reasons are more-or-less the same as those given by Evan Light, who recently published Coulda, which is his solution for avoiding the cuke. What sets Spectie apart from Coulda is its reliance on and integration with RSpec. The Spectie 'Feature' statement has the same behavior as an RSpec 'describe' statement, and the 'Scenario' statement is the same as the RSpec 'example' and 'it' statements. By building on RSpec, Spectie can take advantage of the contextual nesting provided by RSpec, and rely on RSpec to provide the BDD-style syntax within what I've been calling a scenario statement (the words after the Given/When/Thens). Coulda is built directly on Test::Unit. I'm a firm believer in code reuse, and RSpec is the de facto standard for writing BDD-style tests. Spectie, then, is a feature-driven skin on top of RSpec for writing BDD-style integration tests. To me, it only makes sense to do things that way; as RSpec evolves, so will Spectie.
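To illustrate the relationship, the Spectie example above is roughly equivalent to writing the following directly in RSpec (a sketch of the desugaring, not Spectie's actual generated code):

describe "Compelling Feature" do
  it "As a user, I would like to use a compelling feature" do
    i_have_an_account(:email => "ryan@kinderman.net")
    i_have_logged_in
    i_access_a_compelling_feature
    i_am_presented_with_stunning_results
  end
end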

Rails Plugin for Mimicking SSL requests and responses

Posted by ryan Fri, 14 Nov 2008 23:33:42 GMT

The Short

I've written a plugin for Ruby on Rails that allows you to test SSL-dependent application behavior that is driven by the ssl_requirement plugin without the need to install and configure a web server with SSL.

Learn more

The Long

A while back, I wanted the Selenium tests for a Ruby on Rails app I was working on to cover the SSL requirements and allowances of certain controller actions in the system, as defined using functionality provided by the ssl_requirement plugin. I also wanted this SSL-dependent behavior to occur when I was running the application on my local development machines. I had two options:

  1. Get a web server configured with SSL running on my development machines, as well as on the build server.

  2. Patch the logic used by the system to determine if a request is under SSL or not, as well as the logic for constructing a URL under SSL, so that the system can essentially mimic an SSL request without a server configured for SSL.

Since I had multiple Selenium builds on the build server, setting up an SSL server would have involved adding a host name to the loopback interface for each build, so that Apache could switch between virtual hosts for the different server ports. I also occasionally ran web servers on my development machines on ports other than the default 3000, as did everyone else on the team, so we'd all have had to go through the setup process for multiple servers on those machines as well. We would need to do all of this work in order to test application logic that, strictly speaking, didn't even require the use of an actual SSL server. Given that the only thing I was interested in testing was that requests to certain actions either redirected or didn't, depending on their SSL requirements, all I really needed was to make the application mimic an SSL request.

Mimicking an SSL request in conjunction with the ssl_requirement plugin, without an SSL server, consisted of patching four things (a rough sketch of two of the patches follows the list):

  1. ActionController::UrlRewriter#rewrite_url - Provides logic for constructing a URL from options and route parameters

    If provided, the :protocol option normally serves as the part before the :// in the constructed URL.

    The method was patched so that the constructed URL always starts with "http://". If :protocol is equal to "https", this causes an "ssl" key to be added to the query string of the constructed URL, with a value of "1".

  2. ActionController::AbstractRequest#protocol - Provides the protocol used for the request.

    The normal value is one of "http" or "https", depending on whether the request was made under SSL or not.

    The method was patched so that it always returns "http".

  3. ActionController::AbstractRequest#ssl? - Indicates whether or not the request was made under SSL.

    The normal value is determined by checking if request header HTTPS is equal to "on" or HTTP_X_FORWARDED_PROTO is equal to "https".

    The method was patched so that it checks for a query parameter of "ssl" equal to "1".

  4. SslRequirement#ensure_proper_protocol - Used as the before_filter on a controller that includes the ssl_requirement plugin module, which causes the redirection to an SSL or non-SSL URL to occur, depending on the requirements defined by the controller.

    This method was patched so that, instead of replacing the protocol used on the URL with "http" or "https", it either adds or removes the "ssl" query parameter.
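For illustration, here's roughly what patches 2 and 3 might look like as monkey-patches. This is a hedged sketch, not the plugin's actual code; it follows the descriptions above and assumes a Rails 2.x-era request class with a query_parameters method:

ActionController::AbstractRequest.class_eval do
  # Patch 2: always report the plain-HTTP protocol.
  def protocol
    "http"
  end

  # Patch 3: consider the request to be under SSL when the mimic query
  # parameter is present, rather than inspecting SSL-related headers.
  def ssl?
    query_parameters["ssl"] == "1"
  end
end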

For more information, installation instructions, and so on, please refer to the plugin directly at:

http://github.com/ryankinderman/mimic_ssl

Enabling/disabling observers for testing

Posted by ryan Thu, 10 Apr 2008 02:53:50 GMT

If you use ActiveRecord observers in your application and are concerned about the isolation of your model unit tests, you probably want some way to disable/enable observers. Unfortunately, Rails doesn't provide an easy way to do this. So, here's some code I threw together a while ago to do just that.

module ObserverTestHelperMethods
  def observer_instances
    ActiveRecord::Base.observers.collect do |observer|
      observer_klass = \
        if observer.respond_to?(:to_sym)
          observer.to_s.camelize.constantize
        elsif observer.respond_to?(:instance)
          observer
        end
      observer_klass.instance
    end
  end

  def observed_classes(observer=nil)
    observed = Set.new
    (observer.nil? ? observer_instances : [observer]).each do |observer|
      observed += (observer.send(:observed_classes) + observer.send(:observed_subclasses))
    end
    observed
  end

  def observed_classes_and_their_observers
    observers_by_observed_class = {}
    observer_instances.each do |observer|
      observed_classes(observer).each do |observed_class|
        observers_by_observed_class[observed_class] ||= Set.new
        observers_by_observed_class[observed_class] << observer
      end
    end
    observers_by_observed_class
  end

  def disable_observers(options={})
    except = options[:except]
    observed_classes_and_their_observers.each do |observed_class, observers|
      observers.each do |observer|
        unless observer.class == except
          observed_class.delete_observer(observer)
        end
      end
    end
  end

  def enable_observers(options={})
    except = options[:except]
    observer_instances.each do |observer|
      unless observer.class == except
        observed_classes(observer).each do |observed_class|
          observer.send :add_observer!, observed_class
        end
      end
    end
  end
end

Include this in a Test::Unit::TestCase, or 'include' it in your RSpec configuration, whatever floats your boat. Here's a stupid example:

class SomethingCoolTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers
  end

  def teardown
    enable_observers
  end

  def test_without_observers
    # ...
  end

end
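If you're using RSpec instead of Test::Unit, the equivalent setup might look something like the following (a sketch against the RSpec 1.x configuration API of the time):

Spec::Runner.configure do |config|
  config.include ObserverTestHelperMethods

  # Disable observers around every example; individual example groups can
  # still re-enable the one they're testing.
  config.before(:each) { disable_observers }
  config.after(:each)  { enable_observers }
end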

When you go to test the behavior of the observer itself, simply disable/enable like the following to disable/enable all observers except the one you're testing:

class DispassionateObserverTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers :except => DispassionateObserver
  end

  def teardown
    enable_observers :except => DispassionateObserver
  end

  def test_without_observers_except_dispassionate_observer
    # ...
  end

end

Plugin to Support composed_of Aggregations in ActiveRecord Finder Methods

Posted by ryan Thu, 03 Jan 2008 23:11:00 GMT

In Rails, hooking up an ActiveRecord model to use a value object to aggregate over a set of database fields is a piece of cake. With the accessor methods that are created for a composed_of association, you can now deal exclusively with the composed_of field on your model, instead of directly manipulating or querying the individual database fields that it aggregates. Or can you? As long as all you're doing with the aggregate field is getting and setting its value, your aggregated database fields remain encapsulated. However, if you want to retrieve instances of your model from the database through a call to a finder method, you must do so on the individual database fields.

Consider the following ActiveRecord model definition:

class Customer < ActiveRecord::Base
  composed_of :balance, :class_name => "Money", :mapping => %w(balance_amount amount)
end

Given such a model, we can do something like this with no problem:

customer = Customer.new
customer.balance = Money.new(512.08)
customer.balance                      # returns #<Money:abc @amount=512.08>
customer.save!

However, now that we've saved the record, we might want to get that record back from the database at some point with code that looks something like:

customers = Customer.find(:all, :conditions => { :balance => Money.new(512.08) })

or like:

customers = Customer.find_all_by_balance( Money.new(512.08) )

This would provide full encapsulation of the aggregated database fields for the purposes of both record creation and retrieval. The problem is, at the time of my posting this article, it doesn't work. Instead, you have to do this:

customers = Customer.find(:all, :conditions => { :balance_amount => 512.08 })

To deal with this problem, I've submitted a ticket with a patch, which is currently scheduled to be included in Rails 2.1.

If you need this functionality, but your project is using a pre-2.1 release of Rails, I've also created a plugin version of the changes I submitted in the aforementioned ticket. To install:

script/plugin install git://github.com/ryankinderman/find_conditions_with_aggregation.git

Addendum: The patch has been committed to changeset 8671. Yay!

Testing on High: Bottom-up versus Top-down Test-driven Development

Posted by ryan Mon, 19 Nov 2007 02:13:21 GMT

I recently talked to a number of Rails developers about their general approach to testing some new functionality they're about to code. I asked these developers whether they found it more useful to start testing from the bottom up or the top down. I suggested to them that, since Rails uses the MVC pattern, it's easy to think of the view, or user interface, as the "top", and the model as the "bottom". Surprisingly, nearly every developer I asked answered that they prefer to start from the bottom, or model, and test upwards. Nearly every one! I expected a much more mixed response. In fact, I think the correct place to start testing is precisely at the highest level possible, to reduce the risk of building software based on incorrect assumptions about how best to solve a user requirement.

Bottom-up Testing

Bottom-up testing implies bottom-up design in TDD. In bottom-up design, a developer would probably consider the high-level objectives and break them up into manageable components that interact with each other to provide the desired functionality. The developer thinks about how each component will be used by its client components, and tests accordingly.

The problem with the bottom-up approach is that it's difficult to really know how a component needs to be used by its clients until the clients are implemented. To consider how the clients will be implemented, the developer must also think about how those clients will be used by their clients. This thought process continues until we reach the summit of our mighty design! Hopefully, when the developer is done pondering, they can write a suite of tests for a component that directly solves the needs of its client components. In my experience, however, this is rarely the case. What really happens is that the lower-level components tend to do too much, too little, or the right amount in a way that is awkward or complicated to make use of.

The advantage of bottom-up testing is that, since we're starting with the most basic, fundamental components, we guarantee that we'll have some working software fairly quickly. However, since the software being written may not be closely associated with the high-level user requirements, it may not produce results that are valuable to the user. A simple client could quickly be written to demonstrate to the user how the components work, but that's beside the point unless the application being developed is itself a simple application. In such a case, the bottom level of components is probably close enough to the top level that there is little risk involved in choosing either the bottom-up or top-down approach.

Unless you're writing a small application, the code is probably going to have to support unforeseen use cases. When those use cases come as a result of ungrounded assumptions about the software that's already been written, it can mean a lot of rework. I can tell you from experience: once you realize that your lower-level components don't fit the bill for the higher levels of the system, it can be quite a chore to go back and fix, remove, or replace all of that unnecessary or incorrect code.

Top-down Testing

Top-down testing implies top-down design in TDD. Following the top-down approach, the developer will pick the highest level of the system to be tested; that is to say, the part of the system that has the closest correlation to the user requirements. This approach is sometimes referred to as Behavior Driven Development. Whatever it's called, the point is that you test the most critical parts of the application first.

Since software is often written for human users, the most critical parts usually involve the front-end as it relates to the value being provided by the system being developed. When testing from the top-down, the effort is the inverse of bottom-up testing: Instead of spending a lot of time thinking about how the components to be developed will be used by other components to be developed, the focus is on how the user needs to interact with the system. Testing involves proving that the system supports the required usability. For an application with a graphical front-end, this might involve testing for a minimal version of that front-end.

The disadvantage of top-down testing is that you can end up with a lot of stubbed or mocked code that you then have to go back and implement. This means it might take longer before you have software that actually does something besides pass tests. However, there are ways that you can minimize this sort of recursive development problem.
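As a hypothetical illustration of what that stubbed code looks like, a top-down spec might drive a higher-level component before the lower-level one behind it exists; every stub is a reminder of code that still has to be implemented. The names here are invented, written in RSpec 1.x-style syntax:

describe "the sales report screen" do
  it "shows a headline for each recent sale" do
    # SalesData#recent_sales isn't implemented yet, so it's stubbed out;
    # the real implementation gets test-driven later, from this usage.
    sales_data = mock("sales data", :recent_sales => [{ :title => "Gizmo", :amount => 10 }])

    screen = ReportScreen.new(sales_data)

    screen.render.should include("Gizmo")
  end
end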

One way to minimize the time between starting development of a feature and demonstrating functionality that is valuable to the user is to focus on a thin slice of the overall architectural pie of the application. For example, there may be a number of views that need to be implemented before the system provides some major piece of functionality. However, the developer can focus on one view at a time, or on one part of the view. That way, the number of components that need to be implemented before the system does something useful stays small: ideally, one component in each architectural layer that needs to be built out, and often only a part of the overall functionality of each component.

Another way to minimize the amount of time before the system does something useful is to code a small bit of functionality without worrying about breaking the problem up into classes until you have some tested, working code to analyze. You can then use established methods for refactoring to bring the code to an acceptable level of quality.

The advantage of top-down testing is that you address the most critical functionality first. This generally means starting development at a high level. When the system eventually does something besides pass tests, what it does will provide value to its users. Additionally, because development starts at a high level, the code that is written is based on the current understanding of the problem, not on assumptions. This helps ensure that the tests and code that are written are not superfluous.

Conclusion

The challenge with top-down testing is that you must be highly disciplined to ensure that the code you write is being refactored and is properly evolving into a cohesive domain model for the application. This is compared with bottom-up testing, where you start with the domain model and build your system around it. Either way, you're going to be refactoring code. The difference is in where the time in refactoring is spent. In my experience, when doing bottom-up testing, more time is spent correcting incorrect assumptions about how the domain model will be used than on actually improving code that already works to solve the user requirements. In order to avoid making assumptions about the code being written, it must be written at the level that is closest to providing actual value to the end-user. In so doing, the developer focuses on continuous refinement of code that already provides value, as opposed to speculative design and development.

Bug: composite_primary_keys and belongs_to with :class_name option

Posted by ryan Sat, 17 Nov 2007 02:39:44 GMT

For those of you using the composite_primary_keys gem as of version 0.9.0, you may encounter an issue if you try to do something like:

class Reading < ActiveRecord::Base
  belongs_to :reader, :class_name => "User"
end 

When a User is loaded up from the database via the reader association, the CPK modification to ActiveRecord::Reflection::AssociationReflection#primary_key_name incorrectly returns "user_id" as the primary key name. If you encounter this issue, I've submitted a patch against revision 124 that can be obtained here.
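To illustrate, with the model above, the symptom shows up directly on the association's reflection (a hypothetical check, assuming Rails 2.x reflection methods):

Reading.reflect_on_association(:reader).primary_key_name
# => "user_id" under composite_primary_keys 0.9.0; the expected value,
#    based on the association name, is "reader_id"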

Hopefully this will get fixed in the next release. More hopefully, I won't need to care by then.

AppleScript: Reverse screen colors on a Mac and keep a black background in Terminal

Posted by ryan Mon, 12 Nov 2007 06:21:32 GMT

Sometimes, it's such a hassle to get up and turn on a light when I'm in the middle of coding. Other times, I don't have control over the lighting, like in a dim pub or cafe. In dim lighting, it's a strain on my eyes to be looking at black text on a white background. I've tried dimming the brightness, but that decreases the contrast, which is also a strain on my eyes. Then, I discovered Control+Option+Command+8, which is a keyboard shortcut for the "Reverse black and white" Universal Access feature. Contrary to its name, it does more than reverse black and white; it reverses all of the colors on the screen. This, however, is acceptable and requires only a slight mental adjustment for me. Now I use a nice black-on-white color scheme in TextMate for coding, and when I'm in dim lighting, I simply reverse the colors and it's white-on-black, still full-contrast. I'm happy, except for one thing.

I often have a number of Terminal windows open when I'm working, and I open and close them frequently. Although I'm happy with a black-on-white color scheme for TextMate, I can't tolerate anything but white-on-black for my normal Terminal windows. So, what happens when I throw the screen colors in reverse is that all of my lovely white-on-black Terminal windows go to black-on-white. Unacceptable! Thus began my first foray into AppleScript.

After referencing here, here, and here, along with the Terminal "dictionary", I came up with this:

tell application "System Events"
  tell application processes
    key code 28 using {command down, option down, control down}
  end tell
end tell

tell application "Terminal"
  if default settings is equal to settings set "White on Black" then
    set settingsSet to "Black on White"
  else if default settings is equal to settings set "Black on White" then
    set settingsSet to "White on Black"
  end if
  set default settings to settings set settingsSet
  repeat with w in every window
    set current settings of w to settings set settingsSet
  end repeat
end tell

This script first invokes the Control+Option+Command+8 keyboard shortcut to reverse the screen colors. It then toggles the default "settings set" for Terminal to one of two pre-defined sets I've created: "White on Black" and "Black on White". Hopefully the names are self-explanatory. Then, the script iterates over all currently-open Terminal windows and changes their current "settings set" to whatever the new default is.

Pretty simple. I named the script "Reverse Screen Colors" and dropped it in /Users/[user account]/Library/Scripts. I invoke it via the keyboard with QuickSilver.