Laptop Setup for an Awesome Development Environment

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We’ve published guides in 2009, 2011, and 2012 for setting up your Mac OS X laptop as a Ruby on Rails development machine. Many others have published and shared similar tutorials.

Instead of copying and pasting a series of steps from a blog post, a better approach is to leverage automation and the open source community to save time and get a more stable result.

For the past three years, we’ve been developing and maintaining Laptop, a shell script which turns a Linux or Mac OS X laptop into an awesome development machine.

Mac OS X

After setting up the prerequisites described in the project’s README, we run one line:

bash <(curl -s https://raw.githubusercontent.com/thoughtbot/laptop/master/mac)

Linux

There are no prerequisites for Linux except that we must use a supported version of Ubuntu (currently Trusty Tahr, Saucy Salamander, and Precise Pangolin), Debian stable (currently Wheezy), or Debian testing (currently Jessie).

Given a supported version, it is also one line to set up a Linux machine:

bash <(wget -qO- https://raw.githubusercontent.com/thoughtbot/laptop/master/linux)

How it works

Either script should take less than 15 minutes to run, depending on the machine.

The linux and mac scripts are short. They are intended to be human-readable, so that we know exactly what is installed, and idempotent, in case an error requires the script to be run two or more times.

What it sets up

Laptop currently sets up these common components:

  • Zsh for the Unix shell
  • A systems package manager (Aptitude or Homebrew)
  • A Ruby version manager (rbenv)
  • A Ruby installer (ruby-build)
  • The latest stable version of Ruby
  • A Ruby package manager (Bundler)
  • A JavaScript package manager (NPM)
  • Our most commonly-needed databases (Postgres and Redis)
  • ImageMagick for cropping and resizing images
  • Qt for headless JavaScript testing via Capybara Webkit
  • A fast code search tool (The Silver Searcher)
  • A terminal multiplexer (tmux)
  • A dotfile manager (rcm)
  • CLIs for interacting with GitHub and Heroku

Extending the script

Individuals can add their own customizations in ~/.laptop.local. An example ~/.laptop.local might look like this:

#!/bin/sh

brew tap caskroom/cask
brew install brew-cask

brew cask install dropbox
brew cask install google-chrome
brew cask install rdio

The ~/.laptop.local script can take advantage of the preparation the Laptop script does, such as its shared functions and exit trap, to provide better script output and aid debugging.

Vagrant boxes

We publish Vagrant boxes for each supported Linux distribution. These boxes have the Laptop script applied and ready to run. Setup looks like this:

vagrant init thoughtbot/ubuntu-14-04-server-with-laptop
vagrant up
vagrant ssh

We currently supply these Vagrant Cloud boxes:

thoughtbot/debian-wheezy-64-with-laptop
thoughtbot/debian-jessie-64-with-laptop
thoughtbot/ubuntu-14-04-server-with-laptop
thoughtbot/ubuntu-13-10-server-with-laptop
thoughtbot/ubuntu-12-04-server-with-laptop

Vagrant >= 1.5.0 is required to use Vagrant Cloud images directly.

What’s next?

After using Laptop to set up a development machine, a great next step is to use thoughtbot/dotfiles to configure Vim, Zsh, Git, and Tmux with well-tested settings that we’ve evolved since 2011.

Our dotfiles use the same ~/*.local convention as the Laptop script in order to manage team and personal dotfiles together with rcm.

Episode #473 - June 17th, 2014

Posted about 1 month back at Ruby5

Nate and Gregg are back at it again, talking about a new release of Bundler, AREL, a new app with Ruby Shoes, attr_searchable, a Passenger screencast, and SmartListing.

Listen to this episode on Ruby5

Sponsored by Codeship.io

Codeship is a hosted Continuous Deployment Service that just works.

Set up Continuous Integration in a few steps and automatically deploy when all your tests have passed. Integrate with GitHub and BitBucket and deploy to cloud services like Heroku and AWS, or your own servers.

Visit http://codeship.io/ruby5 and sign up for free. Use discount code RUBY5 for a 20% discount on any plan for 3 months.

Codeship

Bundler 1.6.3 has been released

Bundler 1.6.3 was released yesterday with a few new features and bug fixes. As of Bundler 1.6, Bundler now allows you to store private gem source URLs outside of your Gemfile using a bundle config source-url command. But, unfortunately, the Gemfile.lock still stored the secret keys. That’s now been fixed.
Bundler 1.6.3 has been released

The Definitive Guide to AREL the SQL Manager

Jiri Pospisil wrote up an amazing guide to AREL the SQL Manager, showing how you might use AREL separately from ActiveRecord to create SQL statements. If you want to know what's going on under the covers, this is where to get started.
The Definitive Guide to AREL the SQL Manager

How to create post-it notes app in Ruby Shoes

Milos Dolobac put together an article detailing how to create a Post-It Note application using Ruby Shoes. Shoes is the Ruby GUI framework originally created by why the lucky stiff that is Mac, Windows, and Linux compatible.
How to create post-it notes app in Ruby Shoes

attr_searchable

The attr_searchable gem by Benjamin Vetter allows you to run full text search queries against ActiveRecord models; it parses the queries and generates the proper SQL with AREL. Right now just PostgreSQL and MySQL are supported.
attr_searchable

Phusion Passenger Code Walkthrough

Hongli Lai recently put together a free 30-minute video that walks you through the Phusion Passenger codebase. In the video he starts with an architectural overview, then dives into the code for initialization, request handling, process management, and application spawning.
Phusion Passenger Code Walkthrough

SmartListing

At some point, if you're building Rails applications, you'll inevitably need to build smart tables that do things like AJAX pagination, sorting, filtering, and maybe even in-place editing. The SmartListing gem from the folks at Sology gives you all this functionality and more.
SmartListing

Sponsored by TopRubyJobs

Just two jobs on the top ruby job board this week. Braintree is looking for a Software Engineer in San Francisco, CA. PhishMe is looking for a mid-level Rails Developer in Chantilly, VA or remote.
TopRubyJobs

Foolproof I18n Setup in Rails

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Let’s make I18n on Rails better, quickly and easily. These tips have helped us here at thoughtbot and caught some easy-to-fix but hard-to-track-down mistakes.

Raise an exception on missing translations

When a translation is missing, Rails will fall back to a default translation. For example, the code t(:hello) will output "hello" if there is no provided translation for :hello. This is almost certainly not what you want, especially for more complicated I18n keys like t('users.sign_in').

Most versions of Rails include a way to raise exceptions when a translation is missing. Raising an exception ensures that all of your calls to t() are using your copy, instead of using a default string. I recommend raising exceptions in the test and development environments, so that you can find missing translations by running tests and by browsing around on localhost.

To raise exceptions on missing translations in Rails 4.1.0 and higher:

# config/environments/test.rb
# and
# config/environments/development.rb
Rails.application.configure do
  config.action_view.raise_on_missing_translations = true
end

And in Rails 3:

# config/initializers/i18n.rb
if Rails.env.development? || Rails.env.test?
  I18n.exception_handler = lambda do |exception, locale, key, options|
    raise "Missing translation: #{key}"
  end
end

If you’re using a version of Rails 4 between 4.0.0 and 4.1.0, it requires a monkey patch, from Henrik Nyh:

# config/initializers/i18n.rb
if Rails.env.test? || Rails.env.development?
  module ActionView::Helpers::TranslationHelper
    def t_with_raise(*args)
      value = t_without_raise(*args)

      if value.to_s.match(/title="translation missing: (.+)"/)
        raise "Translation missing: #{$1}"
      else
        value
      end
    end
    alias_method :translate_with_raise, :t_with_raise

    alias_method_chain :t, :raise
    alias_method_chain :translate, :raise
  end
end

Time’s a-Wasting

Using I18n.t in tests is more typing than you really need. Here’s how you can use t() instead of I18n.t() in all of your tests:

RSpec.configure do |config|
  config.include AbstractController::Translation
end
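
Once that module is included, specs can reference copy through the same helper the views use. Here's a minimal sketch (the spec, path helper, and key are hypothetical):

# spec/features/sign_in_spec.rb (hypothetical)
feature "Sign in" do
  scenario "prompts the user to sign in" do
    visit sign_in_path

    # t() resolves through config/locales; with the settings above,
    # a missing key raises instead of silently falling back
    expect(page).to have_content t("users.sign_in")
  end
end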

Excellent.

What’s next?

Read about better tests through internationalization. You can also use the i18n-tasks gem to find and manage missing and unused translations in your application.

Announcing new, revamped developer documentation

Posted about 1 month back at Phusion Corporate Blog

We are excited to announce a set of new, revamped developer documentation for Phusion Passenger. While we’ve always had detailed user documentation, our developer documentation was quite sparse in comparison. We had a Contributors Guide that briefly showed you around, an Architectural Overview that mostly covered the I/O model and spawning system, and code comments. Despite this, it was hard for new developers to understand how things work.

Today, we’re changing this with the following:

  • The Contributors Guide has been much improved. It provides a clear, concise starting point for contributors.
  • The 8-minute Developer Quickstart video shows you how you can get started with developing Passenger through our Vagrant-based development environment. For those who don’t want to watch the video, there is also a Developer Quickstart document.
  • The Design and Architecture document explains in detail what the design and architecture look like. It supersedes the Architectural Overview.
  • The Code Walkthrough video walks you through the Passenger codebase, showing you step-by-step how things fit together. It complements the Design and Architecture document.

We need your feedback

At its heart, Passenger is and remains open source, and that means it’s important that people can easily understand and modify the code. We hope that with this, it is now significantly easier to contribute to Passenger. But is it easy enough now? Is there anything we should change or add? We don’t know.

Read the documents. Watch the videos. Use the comment box below and let us know what you think. :)

The post Announcing new, revamped developer documentation appeared first on Phusion Corporate Blog.

How Rails' Type Casting Works

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Have you ever noticed that when you assign a property to an Active Record model and read it back, the value isn’t always the same? Here’s an example:

class StoreListing < ActiveRecord::Base
  connection.create_table :store_listings, force: true do |t|
    t.integer :price_in_cents
  end
end

store_listing = StoreListing.new
store_listing.price_in_cents = "100" # Note, this is a string
store_listing.price_in_cents # => 100

This is because Active Record automatically type casts all input so that it matches the database schema. Depending on the type, this may be incredibly simple, or extremely complex. Let’s take a look into how the internals work in 4.1.

The first method we need to understand in the above code is where the price_in_cents method is defined. In older versions of Rails, your models would look up the database schema and define the attribute methods as soon as the class was loaded. However, this caused problems on platforms like Heroku, where you might want to load the application when you don’t have a real database connection.

Today, the loading is lazy, and happens in a call to method_missing (source). The important line here is the call to define_attribute_methods.

def method_missing(method, *args, &block) # :nodoc:
  self.class.define_attribute_methods
  if respond_to_without_attributes?(method)
    # make sure to invoke the correct attribute method, as we might have gotten here via a `super`
    # call in a overwritten attribute method
    if attribute_method = self.class.find_generated_attribute_method(method)
      # this is probably horribly slow, but should only happen at most once for a given AR class
      attribute_method.bind(self).call(*args, &block)
    else
      return super unless respond_to_missing?(method, true)
      send(method, *args, &block)
    end
  else
    super
  end
end

Active Record’s definition of define_attribute_methods does little of note, other than call super with column_names (source).

def define_attribute_methods # :nodoc:
  return false if @attribute_methods_generated
  # Use a mutex; we don't want two thread simultaneously trying to define
  # attribute methods.
  generated_attribute_methods.synchronize do
    return false if @attribute_methods_generated
    superclass.define_attribute_methods unless self == base_class
    super(column_names)
    @attribute_methods_generated = true
  end
  true
end

We won’t look into how column_names gets determined today, but that method call is what causes Rails to go perform the SQL query that loads information about the model’s schema. Inside of Active Model, we’ll do some metaprogramming magic and ultimately end up calling define_method_attribute (source). Finally, in the body of define_method_attribute, we can see the method that gets called is read_attribute (source). Quite a bit of legwork!

If you decide to read along with us, make sure you’re on the 4-1-stable branch. A lot of this code has changed significantly on master. One of the most important changes to keep in mind is that @attributes_cache has been renamed to @attributes, and @attributes has been renamed to @raw_attributes.

The body of read_attribute looks like this:

def read_attribute(attr_name)
  # If it's cached, just return it
  # We use #[] first as a perf optimization for non-nil values. See https://gist.github.com/jonleighton/3552829.
  name = attr_name.to_s
  @attributes_cache[name] || @attributes_cache.fetch(name) {
    column = @column_types_override[name] if @column_types_override
    column ||= @column_types[name]

    return @attributes.fetch(name) {
      if name == 'id' && self.class.primary_key != name
        read_attribute(self.class.primary_key)
      end
    } unless column

    value = @attributes.fetch(name) {
      return block_given? ? yield(name) : nil
    }

    if self.class.cache_attribute?(name)
      @attributes_cache[name] = column.type_cast(value)
    else
      column.type_cast value
    end
  }
end

Let’s go through each segment and understand what it’s doing. First we call to_s on the argument, as it’s possible we were passed a symbol (this method is part of the public API). Next we check to see if we’ve already type cast this attribute, as we cache the results. The next two lines are not always obvious.

column = @column_types_override[name] if @column_types_override
column ||= @column_types[name]

@column_types_override is sometimes given to us when the model in question was built as part of the result of a SQL query. If you’ve done something like

Post
  .joins(:comments)
  .select('posts.*, COUNT(comments.*) AS comments_count')
  .group('comments.post_id')

then we sometimes have to do additional leg work to type cast the count to an integer. If you ran that code while using the MySQL adapter or PostgreSQL adapter (keep in mind that most MySQL users are using the MySQL2 adapter), then @column_types_override would look like: { 'comments_count' => an_object_that_type_casts_to_int }. Continuing to the next line, @column_types will contain the column object that is crucial to this behavior, except for a few special cases (which we will have to cover another time).

The next block of code causes model.id to return the primary key, even if the primary key for the table is a column other than id.

return @attributes.fetch(name) {
  if name == 'id' && self.class.primary_key != name
    read_attribute(self.class.primary_key)
  end
} unless column

Next we need to grab the raw, un-typecast version of the attribute, which came either from user input, or from the database (“user” in this case refers to you, the programmer using Rails). However, there’s an interesting fork in behavior here.

value = @attributes.fetch(name) {
  return block_given? ? yield(name) : nil
}

The first question is whether or not a block was given. This is based on how read_attribute ended up being called. If you called it as post.title, no block would have been given. If you called it as post[:title], then a block would have been given to raise an exception. The reason the title method would exist in this case, even when we don’t have a 'title' key in our attributes hash, is that you performed a custom select statement. (This is an excellent example of how one feature can cause a surprising amount of complexity if not sufficiently isolated.)

The conditional around caching attributes is actually bugged, and will always return true for most users, so we’ll ignore it. This leaves us with the line of importance:

@attributes_cache[name] = column.type_cast(value)

column in this case, will be an instance of ActiveRecord::ConnectionAdapters::Column, or one of its adapter specific subclasses. The behavior in question lives on the type_cast method (source).

# Casts value (which is a String) to an appropriate instance.
def type_cast(value)
  return nil if value.nil?
  return coder.load(value) if encoded?

  klass = self.class

  case type
  when :string, :text        then value
  when :integer              then klass.value_to_integer(value)
  when :float                then value.to_f
  when :decimal              then klass.value_to_decimal(value)
  when :datetime, :timestamp then klass.string_to_time(value)
  when :time                 then klass.string_to_dummy_time(value)
  when :date                 then klass.value_to_date(value)
  when :binary               then klass.binary_to_string(value)
  when :boolean              then klass.value_to_boolean(value)
  else value
  end
end

As any good method should, we start with an extremely misleading comment (the value will be anything you passed to the writer, not necessarily a String). Like many methods in Column, we also have a case statement based on type, and will call one of many helper methods depending on which type it is. type will have been set back in the constructor, using a regex on sql_type (source). sql_type will be the raw type string from the database schema, such as varchar(255).

At this point, the behavior is linear. All of the helper methods called exist in the class, and most are no more than a few lines long.
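
To make this concrete, here’s a quick console sketch using the StoreListing model from earlier (the return values follow from the 4.1 source shown above):

column = StoreListing.columns_hash['price_in_cents']

column.sql_type         # => "integer", the raw type string from the schema
column.type             # => :integer, extracted from sql_type in the constructor
column.type_cast('100') # => 100, via value_to_integer
column.type_cast(nil)   # => nil, short-circuited on the first line of type_cast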

Also of note is the method type_cast_for_write, which gets called during the writer, before we store the attributes for type casting later. (Note: Anything that happens in this method will be applied to the _before_type_cast version of the attribute as well.)

If you’ve been cringing looking through the Column class, you’re justified. Luckily, it’s gotten much better. In preparation for adding a public API to hook into the type casting behavior in 4.2, this class has been heavily refactored to focus on polymorphism, rather than conditionals and regular expressions. In part 2, we’ll dig into some of the refactoring that’s been done, and the decisions behind it.

What’s Next?

Does your code base resemble the code we looked at today? Learn about common smells and refactorings with Ruby Science.

Episode #472 - June 13th, 2014

Posted about 1 month back at Ruby5

5 Reasons why you'll Love Swift and doing gooder with ruby

Listen to this episode on Ruby5

Sponsored by NewRelic

Surviving the Switch from Startup to Enterprise Dev
NewRelic

5 Reasons Why Rubyists Will Love Swift

On Tuesday, Ruby5 discussed why RubyMotion will not die anytime soon and won't be replaced by Swift, but today we have 5 Reasons Why Rubyists Will Love Swift.
5 Reasons Why Rubyists Will Love Swift

Add Some Jazz to Your Rails App

Why are you doing jazz hands? I'm talking about Jazz Hands, the gem that makes your Rails console jazzy.
Add Some Jazz to Your Rails App

Business gem

Have you ever wanted to calculate dates based on the number of business days?
Business gem

Invoker 1.2

Invoker is a replacement for Pow and Foreman, but it also handles HTTPS.
Invoker 1.2

Ruby for Good

Ruby for Good is a 3-day hackathon and workshop to help make the world a better place with Ruby.
Ruby for Good

Software Apprenticeship Podcast Update

Posted about 1 month back at Jake Scruggs

Episode 4 “Time to Exercise!” is out right now! Search for it on your favorite podcast app or check out our free temp website here: http://softwareapprenticeship.libsyn.com

With 4 weeks under his belt (plus 9 weeks of Dev Bootcamp), our apprentice, Jonathan Howden, continues his quest to become an enterprise software developer at an amazingly rapid pace.  Can a dedicated man become a good developer without a college degree?  Tune in and find out (spoiler: he’s doing well but it’s intense).

Topics this week:

  • Doing push-ups to break up the lethargy of coding
  • Migrating from Authlogic to Devise/Warden and the perils of using a framework’s column in the database for activation.
  • Why senior programmers avoid becoming mentors
  • Rails’ Asset Pipeline  
  • The usual screwing around and one censored F-bomb (sorry - it was me).

Yesterday we all sat in a room and reviewed Jon’s chess code (his outside of work coding project).  I’ll try to put up a more detailed article about it soon, but in brief it went well.  It’s fun to watch a junior developer try to encode all the crazy logic of chess while keeping the code clean, tested, and understandable.  Other than making the classic mistake of mocking/stubbing the very object he was testing, Jon has some pretty readable code that will “mostly” let 2 people play chess against each other.

Episode 5’s in the can (on the SSD?) and we’re recording episode 6 later today with Dave Hoover, co-founder of Dev Bootcamp.  Jonathan attended Chicago’s DBC in September of 2013 so Dave will get to check in with how Jon is doing in the “wild.”  Also, Dave and I worked at both ThoughtWorks and Obtiva together so it should be quite an interesting conversation.

I really wanted this to be a weekly podcast but here we are at the end of the 9th week and only recording our 6th ep.  Now that we’ve been through the recording and editing process a few times it should be easier to stick to a weekly schedule.  Even if we weren’t recording the conversation I would still have a weekly wrap-up with members of the team and the apprentice — it is a very nice way to sum up the week and lessons learned.  It's sort of a weekly, recorded, low-stakes retrospective.



The views and opinions expressed here are my own and don’t necessarily represent positions, strategies, or opinions of Backstop Solutions Group.

Applicative Options Parsing in Haskell

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I’ve just finished work on a small command line client for the Heroku Build API written in Haskell. It may be a bit overkill for the task, but it allowed me to play with a library I was very interested in but hadn’t had a chance to use yet: optparse-applicative.

In figuring things out, I again noticed something I find common to many Haskell libraries:

  1. It’s extremely easy to use and solves the problem exactly as I need.
  2. It’s woefully under-documented and appears incredibly difficult to use at first glance.

Note that when I say under-documented, I mean it in a very specific way. The Haddocks are stellar. Unfortunately, what I find lacking are blogs and example-driven tutorials.

Rather than complain about the lack of tutorials, I’ve decided to write one.

Applicative Parsers

Haskell is known for its great parsing libraries and this is no exception. For some context, here’s an example of what it looks like to build a Parser in Haskell:

type CSV = [[String]]

csvFile :: Parser CSV
csvFile = do
    lines <- many csvLine
    eof

    return lines

  where
    csvLine = do
        cells <- csvCell `sepBy` comma
        eol

        return cells

    csvCell = quoted (many anyChar)

    comma = char ','

    eol = try (string "\r\n") <|> string "\n"

    -- etc...

As you can see, Haskell parsers have a fractal nature. You make tiny parsers for simple values and combine them into slightly larger parsers for slightly more complicated values. You continue this process until you reach the top level csvFile which reads like exactly what it is.

When combining parsers from a general-purpose library like parsec (as we’re doing above), we typically do it monadically. This means that each parsing step is sequenced together (that’s what do-notation does) and that sequencing will be respected when the parser is ultimately executed on some input. Sequencing parsing steps in an imperative way like this allows us to make decisions mid-parse about what to do next or to use the results of earlier parses in later ones. This ability is essential in most cases.

When using libraries like optparse-applicative and aeson we’re able to do something different. Instead of treating parsers as monadic, we can treat them as applicative. The Applicative type class is a lot like Monad in that it’s a means of describing combination. Crucially, it differs in that it has no ability to define an order – there’s no sequencing.

If it helps, you can think of applicative parsers as atomic or parallel while monadic parsers would be incremental or serial. Yet another way to say it is that monadic parsers operate on the result of the previous parser and can only return something to the next; the overall result is then simply the result of the last parser in the chain. Applicative parsers, on the other hand, operate on the whole input and contribute directly to the whole output – when combined and executed, many applicative parsers can run “at once” to produce the final result.

Taking values and combining them into a larger value via some constructor is exactly how normal function application works. The Applicative type class lets you construct things from values wrapped in some context (say, a Parser State) using a very similar syntax. By using Applicative to combine smaller parsers into larger ones, you end up with a very convenient situation: the constructed parsers resemble the structure of their output, not their input.

When you look at the CSV parser above, it reads like the document it’s parsing, not the value it’s producing. It doesn’t look like an array of arrays, it looks like a walk over the values and down the lines of a file. There’s nothing wrong with this structure per se, but contrast it with this parser for creating a User from a JSON value:

data User = User String Int

-- Value is a type provided by aeson to represent JSON values.
parseUser :: Value -> Parser User
parseUser (Object o) = User <$> o .: "name" <*> o .: "age"

It’s hard to believe the two share any qualities at all, but they are in fact the same thing, just constructed via different means of combination.

In the CSV case, parsers like csvLine and eof are combined monadically via do-notation:

You will parse many lines of CSV, then you will parse an end-of-file.

In the JSON case, parsers like o .: "name" and o .: "age" each contribute part of a User and those parts are combined applicatively via (<$>) and (<*>) (pronounced fmap and apply):

You will parse a user from the value for the “name” key and the value for the “age” key

Just by virtue of how Applicative works, we find ourselves with a Parser User that looks surprisingly like a User.

I go through all of this not because you need to know about it to use these libraries (though it does help with understanding their error messages), but because I think it’s a great example of something many developers don’t believe: not only can highly theoretical concepts have tangible value in real world code, but they in fact do in Haskell.

Let’s see it in action.

Options Parsing

My little command line client has the following usage:

% heroku-build [--app COMPILE-APP] [start|status|release]

Where each sub-command has its own set of arguments:

% heroku-build start SOURCE-URL VERSION
% heroku-build status BUILD-ID
% heroku-build release BUILD-ID RELEASE-APP

The first step is to define a data type for what you want out of options parsing. I typically call this Options:

import Options.Applicative -- Provided by optparse-applicative

type App = String
type Version = String
type Url = String
type BuildId = String

data Command
    = Start Url Version
    | Status BuildId
    | Release BuildId App

data Options = Options App Command

If we assume that we can build a Parser Options, using it in main would look like this:

main :: IO ()
main = run =<< execParser
    (parseOptions `withInfo` "Interact with the Heroku Build API")

parseOptions :: Parser Options
parseOptions = undefined

-- Actual program logic
run :: Options -> IO ()
run opts = undefined

Where withInfo is just a convenience function to add --help support given a parser and description:

withInfo :: Parser a -> String -> ParserInfo a
withInfo opts desc = info (helper <*> opts) $ progDesc desc

So what does an Applicative Options Parser look like? Well, if you remember the discussion above, it’s going to be a series of smaller parsers combined in an applicative way.

Let’s start by parsing just the --app option using the library-provided strOption helper:

parseApp :: Parser App
parseApp = strOption $
    short 'a' <> long "app" <> metavar "COMPILE-APP" <>
    help "Heroku app on which to compile"

Next we make a parser for each sub-command:

parseStart :: Parser Command
parseStart = Start
    <$> argument str (metavar "SOURCE-URL")
    <*> argument str (metavar "VERSION")

parseStatus :: Parser Command
parseStatus = Status <$> argument str (metavar "BUILD-ID")

parseRelease :: Parser Command
parseRelease = Release
    <$> argument str (metavar "BUILD-ID")
    <*> argument str (metavar "RELEASE-APP")

Looks familiar, right? These parsers are made up of simpler parsers (like argument) combined in much the same way as our parseUser example. We can then combine them further via the subparser function:

parseCommand :: Parser Command
parseCommand = subparser $
    command "start"   (parseStart   `withInfo` "Start a build on the compilation app") <>
    command "status"  (parseStatus  `withInfo` "Check the status of a build") <>
    command "release" (parseRelease `withInfo` "Release a successful build")

By re-using withInfo here, we even get sub-command --help flags:

% heroku-build start --help
Usage: heroku-build start SOURCE-URL VERSION
  Start a build on the compilation app

Available options:
  -h,--help                Show this help text

Pretty great, right?

All of this comes together to make the full Options parser:

parseOptions :: Parser Options
parseOptions = Options <$> parseApp <*> parseCommand

Again, this looks just like parseUser. You might’ve thought that o .: "name" was some kind of magic, but as you can see, it’s just a parser. It was defined in the same way as parseApp, designed to parse something simple, and is easily combined into a more complex parser thanks to its applicative nature.

Finally, with option handling thoroughly taken care of, we’re free to implement our program logic in terms of meaningful types:

run :: Options -> IO ()
run (Options app cmd) = do
    case cmd of
        Start url version  -> -- ...
        Status build       -> -- ...
        Release build rApp -> -- ...

Wrapping Up

To recap, optparse-applicative allows us to do a number of things:

  • Implement our program input as a meaningful type
  • State how to turn command-line options into a value of that type in a concise and declarative way
  • Do this even in the presence of something complex like sub-commands
  • Handle invalid input and get a really great --help message for free

Hopefully, this post has piqued some interest in Haskell’s deeper ideas which I believe lead to most of these benefits. If not, at least there’s some real world examples that you can reference the next time you want to parse command-line options in Haskell.

Shared Terminology Yet Different Concepts Between Ember.js and Rails

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Developers who are well versed in Ruby on Rails (or other MVC implementations) and start learning Ember.js may find it surprising that, even though there’s shared vocabulary, denoted concepts are sometimes very different.

The first and obvious difference comes from the fact both are web frameworks. Rails facilitates the creation of web apps that offer mainly an HTTP interface to interact with. On the other hand, Ember helps create web apps that interface directly with humans (through clicks, taps, key presses, etc). They are both web application frameworks, but the former is server side and the latter client side.

A look into their workflows will shed light over the main differences.

Rails: Request Life Cycle

The Rails request life cycle works as follows:

  1. The Router receives an HTTP request from the browser/client, and calls the controller that will handle it.
  2. The Controller receives the HTTP parameters, and instantiates necessary Model or Models.
  3. The Model fetches the requested objects from the database.
  4. The Controller passes Models to the View, and renders them.
  5. The View generates a text response (HTML, JSON, etc), interpolating Ruby objects where necessary.
  6. The Controller response is sent back to the Router, and from there to the client.

Rails Request Life Cycle
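
As a rough sketch, the steps above map onto a familiar controller action (the model and action names here are hypothetical):

class PostsController < ApplicationController
  # 1-2. The router dispatched the HTTP request to this action,
  #      and params holds the HTTP parameters
  def show
    # 3. The model fetches the requested object from the database
    @post = Post.find(params[:id])

    # 4-5. The view renders a text response, interpolating @post where needed
    render :show
  end
end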

Ember.js: Run Loop

The Ember.js run loop works as follows (don’t forget that Model and Controller refer now to Ember concepts rather than Rails):

  1. The Router instantiates a Model and sets it in a Controller.
  2. The Router renders the Template.
  3. The Template gets its properties from the Controller, which acts as a decorator for the Model. The template doesn’t know where a property it displays is defined; the controller provides it either by itself or through its Model.

Ember Run Loop

At this point the cycle ends, and it can be restarted by:

  • An event (like a click on a link) that triggers an action that updates the route.
  • A new URL is directly visited by writing in the browser’s address bar.

Model

Models are similar in both frameworks. It is a common situation that a model in Ember maps one-to-one with a model in Rails.

In Rails, a Model is almost always backed by a database like PostgreSQL, whereas in Ember it is common that a model lives only in memory, and is fetched, changed or deleted via a JSON API.

Note that in Ember, Template and Model are always automatically in sync thanks to two-way binding. This means that if we edit a Model’s attribute in a form, the attribute changes in real time anywhere the model is rendered (say, in the title of the page), even if we don’t submit the form or persist the changes. This is another surprise coming from Rails, where a change in a form is stateless and nothing really changes until we successfully submit it.

View

It is a good Rails practice to have simple views, with presenters/decorators providing any necessary logic. Ember enforces this good practice, with its Templates being logic-less by nature of its engine, Handlebars. A similar enforcement may be used in Rails via gems like curly.

Ember has both the concept of Views and of Templates, though Templates are more akin to Rails Views. An Ember View renders Templates, and it provides re-usable components and more complex event handling.

Controller

A Controller in Rails is a Rack application that talks to Models to return an HTTP response. A Controller in Ember is a Model decorator, and it’s called from the templates.

Router

A Router in Rails is responsible for HTTP requests/responses. In Ember a Route (not a Router, which is just a mapping between strings and Routes) is concerned with the current state of the application (what models and controllers should be set up), and with keeping the URL up to date as the application’s state changes (like after a click on a link).

Final Thoughts

As you dig more into Ember you’ll find more similarities and differences. This blog post should provide a good start to avoid possible confusions due to similar vocabulary that refers to different things.

Swift Sequences

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We’re incredibly excited about the new Swift programming language announced by Apple at this year’s WWDC. As a way of experimenting, we’ve begun looking into what it would be like if we rewrote Liftoff, our command line Xcode project generation/configuration tool, in Swift.

Liftoff supports a few options on the command line, so the first thing we’re trying to do is write a small command line parsing library in Swift.

We want to try to avoid importing Foundation, so we are relying on the top level constants C_ARGV and C_ARGC to get the arguments passed on the command line. Instead of working with these primitive types, we’d really rather have our own object that can hold onto a native String[]. By implementing the Sequence protocol, we could quickly iterate over these options to do whatever we need to do with them.

Creating the Argument List

The requirements for the ArgumentList object are as follows:

  • Instantiate it with C_ARGV and C_ARGC
  • Transform those into a native property with the type String[]

C_ARGV is of the type UnsafePointer<CString>. It contains all of the arguments passed to our process from the command line. From the type definition alone, we know that the internal contents of the object are instances of CString. This is good, because it means that once we get to those contents, we can use the method fromCString() on String to convert them to a nicer type. We also know that we’ll be able to access the contents via subscripting, but since UnsafePointer doesn’t conform to Sequence itself, we can’t iterate through it.

C_ARGC is of the type CInt. It represents the number of arguments that were passed to our object on the command line. We can use this to generate a loop so that we can convert each CString inside C_ARGV into a String.

We can start with a struct:

struct ArgumentList {
    var arguments: String[]

    init(argv: UnsafePointer<CString>, count: CInt) {
    }
}

Here, we’ve defined a basic constructor that will take C_ARGV and C_ARGC, and a property named arguments of the type String[]. So now, we can implement our constructor to loop through the provided input from the command line and convert the arguments into String instances:

init(argv: UnsafePointer<CString>, count: CInt) {
    // The property must be initialized before use; start with an empty array
    arguments = []

    // Start at index 1 to skip the process name in C_ARGV[0]
    for i in 1..count {
        let index = Int(i)
        let arg = String.fromCString(C_ARGV[index])
        arguments.append(arg)
    }
}

This gives us an object that satisfies our basic requirements. Now we can start to look into what it would take to conform this object to Sequence.

Inspecting Sequence

Now that we have an object that behaves how we want as a container, we can start to implement the methods that will let us transparently iterate through the internal list.

The protocol that lets us do this is called Sequence, and although it seems very straightforward, it took three of us in a room watching the Advanced Swift session video, looking through the session slides, and implementing it three times to fully understand what we needed to do.

So here’s a quick overview of how the protocol works when used with for in:

When you use the for <object> in <sequence> syntax, Swift actually does some re-writing of the code under the covers. As described in the Advanced Swift session, when you write:

for x in mySequence {
    // iterations here
}

Swift actually turns that into:

var __g: Generator = mySequence.generate()
while let x = __g.next() {
    // iterations here
}

So, breaking this down:

  • Swift calls generate() on the provided Sequence, returning a Generator. This object is stored in a private variable, __g.
  • __g then gets called with next(), which returns a type of Element?. This object is conditionally unwrapped in the while let statement, and assigned to x.
  • It then performs this operation until next() has nothing left to return.

For the record, I’m not crazy about the naming here. I think it’s probably best to think Enumerator instead of Generator, at least in this use case. I’ve filed a radar to this effect, but have already gotten some feedback that this change might not be so simple.

So it looks like we’ll need to actually implement two protocols to conform to Sequence. We’ll need our ArgumentList to conform to Sequence, and we’ll need another object to conform to Generator. We can start with Generator, since it’s the one that’s actually going to be doing the work.

Implementing Generator

As previously shown, we’ll need to implement one method for Generator: next(). This method has the return type of Element?, which is really just a catch-all type defined internally due to some weirdness with protocols and Generics. For now, we’ll ignore that, and think of it as being <T>?. The important thing to get is that we need to return an Optional.

In order to iterate through our array of arguments, we’re going to use a new type: Slice. This type holds a reference to a range of an existing array. This is a bit odd, but essentially, if we create a Slice with a range from an Array, and then update that Array, the Slice is updated as well:

let array: Array = ["foo", "bar", "baz"]
let slice: Slice<String> = array[1...2]
println(slice) // prints ["bar", "baz"]

array[1] = "bat"
println(slice) // prints ["bat", "baz"]

Note that I’m adding the type signatures for those constants for illustrative purposes. The return type of a range of an array is already Slice<T>, so Swift is able to infer this information.

We’ll create a lightweight ArgumentListGenerator that conforms to Generator, and has an internal items property:

struct ArgumentListGenerator: Generator {
    var items: Slice<String>
}

If you try to compile, you’ll see that the compiler throws an error, because we haven’t implemented Generator properly. We need to implement next() for the compiler to be happy:

mutating func next() -> String? {
  if items.isEmpty { return .None }
  let element = items[0]
  items = items[1..items.count]
  return element
}

Our implementation performs a quick check to see if our Slice is empty, and performs an early return with Optional.None if so. Note that since the return type is already Optional<String>, we can omit the Optional prefix for the enum.

We can then grab the top item from items, then reset items to the rest of the Slice. This is why we declared items as mutable, and also why we declared next() as mutating.

Now note that none of this implementation is specific to Strings, or even to our ArgumentList. In fact, with a quick refactor, we can modify this object to use Generics:

struct CollectionGenerator<T>: Generator {
    var items: Slice<T>

    mutating func next() -> T? {
        if items.isEmpty { return .None }
        let item = items[0]
        items = items[1..items.count]
        return item
    }
}

This is such a generic object, solving so much of the common use case here, that I’m a bit baffled as to why it hasn’t been included as part of the standard library. I’ve already filed a radar on the issue.

Now that we have our Generator, we can finally conform our ArgumentList to Sequence.

Implementing Sequence

We can start by creating an extension on ArgumentList to hold the required method:

extension ArgumentList: Sequence {
}

We can then declare the required method, generate():

extension ArgumentList: Sequence {
    func generate() -> CollectionGenerator<String> {
    }
}

Note that we’re using our generic CollectionGenerator<T> type as the return type here. All that’s left is to create a Collection Generator with our arguments:

extension ArgumentList: Sequence {
    func generate() -> CollectionGenerator<String> {
        return CollectionGenerator(items: arguments[0..arguments.endIndex])
    }
}

Now, we can quickly and easily create a list of arguments passed on the command line, and iterate through them using for in:

let arguments = ArgumentList(argv: C_ARGV, count: C_ARGC)

for argument in arguments {
    println(argument)
}

What’s next

Episode #471 - June 10th, 2014

Posted about 1 month back at Ruby5

The Rails/Merb Merge in Retrospect, Opinionated Rails Application Templates with orats, Why Swift Will Never Replace RubyMotion, RubyMotion 3.0 Sneak Peek, Docker 1.0 and RubyConf Portugal taking place in October.

Listen to this episode on Ruby5

Sponsored by Codeship.io

Codeship is a hosted Continuous Deployment Service that just works.

Set up Continuous Integration in a few steps and automatically deploy when all your tests have passed. Integrate with GitHub and BitBucket and deploy to cloud services like Heroku and AWS, or your own servers.

Visit http://codeship.io/ruby5 and sign up for free. Use discount code RUBY5 for a 20% discount on any plan for 3 months.

Codeship

The Rails/Merb Merge in Retrospect

Giles Bowkett wrote an article last week called "The Rails/Merb Merge in Retrospect" where he looks back at the result of merging the Merb framework into what would become Rails 3.0. Giles recognizes improvements, but he argues that despite all of the hard work put into making Rails more modular, most developers haven't embraced this modularity on their projects.
The Rails/Merb Merge in Retrospect

Orats

Orats is a Ruby gem by Nick Janetakis that stands for Opinionated Rails Application Templates. It's a wrapper around the rails command for creating new apps and it does a bunch of stuff for us like setting up redis, sidekiq, puma and a lot more. It can also set up authentication with Devise and a playbook for Ansible.
Orats

Why Swift Will Never Replace RubyMotion

Jack Watson-Hamblin wrote a blog post with pretty solid arguments on why RubyMotion will not die anytime soon. In short, he says that RubyMotion is not just a *language*, but a whole toolchain, with a command line tool that doesn't tie you to an editor like Xcode. There's also the fact that it's Ruby, so we still have access to all the existing gems out there. Lastly, he points out that life is going to go on for a while without any large portion of the App Store containing apps written in Swift.
Why Swift Will Never Replace RubyMotion

RubyMotion 3.0

Upcoming RubyMotion 3.0 will add support for Android. You will be able to build and run your apps in the Android emulator by running `rake emulator`, or on a USB-connected Android device by running `rake device`. To prepare and sign builds for a Google Play submission, you run `rake release`. All pretty similar to how it works with iOS and OS X.
RubyMotion 3.0

Docker 1.0

This week Docker released version 1.0. This release signifies a level of quality, feature completeness, backward compatibility and API stability to meet enterprise IT standards.
Docker 1.0

RubyConf Portugal

RubyConf Portugal is happening on October 13th and 14th in Braga. This is the first RubyConf taking place in Portugal, and Braga is one of the oldest Portuguese cities, established in Roman times. Tickets are going fast, so make sure to grab yours at http://rubyconf.pt/ and use the special discount code RUBY5<3PT
RubyConf Portugal

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

Sponsored by Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

The Hampshire Hub: open and linked data for Hampshire

Posted about 1 month back at RicRoberts :

Some great news - we’ve recently been selected to work on a new project called the Hampshire Hub, which is a partnership project for Hampshire County Council and public service providers in and around the area.

The Hub will be powered by PublishMyData and will centre around a linked datastore for open data on the web, just like our work with DCLG and Glasgow (which is coming soon as part of their future cities project). And as well as just publishing the raw data, of course we’ll be creating some cool stuff with it too.

So how will it work? Well, if you’re technical and want to use the data for an app or web site then it’ll be available in a variety of computer formats via APIs. And for non-technical users, the whole thing will be human friendly too, with filtered searches, visualisations and tools so you can publish, share and consume any data you need.

Whatever type of user you are, the whole point of the Hub is to make it easy to find and use the exact data you want in the format that you want it. You can link to the exact data you need because all of the data (including metadata and attachments) has a URL. And, because we’ll be publishing 5-star open data, you can also combine the exact data you want - whether it’s used for reference or to create apps with.

We’re really pleased to be working on this with Hampshire Council and all the other partners. They already have lots of ideas on their Protohub and this project will be an evolution of that.

Protohub site screenshot

The whole project’s a great example of how public sector organisations can use open data to their advantage. And, it’s great for PublishMyData too because we’ll be adding loads of new features to support new functionality for the Hub. If you want to read more, check out both Bill’s guest post for the current Hub and Hampshire’s post too.

The Definition of Garbage

Posted about 1 month back at Jake Scruggs

The views and opinions expressed here are my own and don’t necessarily represent positions, strategies, or opinions of Backstop Solutions Group.

Recently we released episode 3 of the Software Apprenticeship Podcast but had to pull it back for re-editing because of some problems with how developers talk to each other.  Developers are not kind to ANY code.  Even our own.  Especially our own.  Sitting next to a dev while he or she discusses the code they are working on can be a shocking experience.  Words like “Crap”, “Junk”, “Garbage” and many worse are used often.  A lot of this type of talk was on episode 3 and when someone at Backstop (whose job it is to protect us from ourselves and comments taken out of context) heard it they asked us to edit the podcast to take out some of the more offensive comments. This is why episode 3 sometimes fades into music and then comes back mid-conversation.  Sorry about that.

I don’t know where I first heard the definition of developer as “Whiny Optimist” but it is uncannily accurate.  We developers are forever complaining about previously written code.  Code is awful. Code is crap.  Code is the worst spaghetti wrapped around horse manure we’ve ever seen. 

And yet…

We couldn’t go on if we thought we’d have to live out our lives fighting the very thing we create.  There is this optimism about future code.  It will be bright and shiny.  The next project to re-write the <whatever> is going to make everything better.  So much better…  The code will be pristine and new features slide in like rum into coke.  Ponies and rainbows are coming.

Also…

Every year I get better at what I do, so even code I thought wonderful 3 years ago can be “crap” to me today.  I look back and see a developer who didn’t keep orthogonal concepts separate who coupled code that should not be coupled and I am sad.  I regret my past inefficiencies and curse them.

But…

How bad is this code really?  Backstop’s code is rigorously tested many times automatically before being pored over by humans.  Any code change in my product gets tested first on my machine (by automated tests) then on another “Build Server” (which runs the tests I was supposed to run and a bunch more), then another series of “Regression Servers” will run some even longer regression tests that literally use the app as our customers do.  If it passes all that then we’ll have our Quality Assurance people go over it again to make sure the machines haven’t missed anything.  The last thing the Q.A. people do is write a new automated regression test to make sure this functionality doesn’t break in the future.

What the heck are we complaining about then?  The software works!  It helps many people make a lot of money, it makes the company money, and is a leader in the industry.  We developers are, in some ways, a bunch of ungrateful jerks.

Let me see if I can explain why.  Writing software that solves hard problems is hard.  Duh. There are only so many people who can do it and we struggle through.  Writing software that solves hard problems and can continue to accept new features easily is the HOLY GRAIL of software development.  Rarely has it been done, even though every company claims their code is the “best in the industry.”  If you were to get your hands on the unedited version of episode 3 you would hear a lot of developers complaining that we wish we had written code in the past that could be easily changed today.  We might even call such code “garbage” but what does “garbage” really mean?  In our app it has come to mean code that works, is well tested, but resists change more than we would like.  We are whining about having to do more work.  If only our past selves had properly separated the concerns more, if only there was more time for refactoring.  But some day we will reach that shining castle on the hill.  And there will be ponies and rainbows for all.

Expanding the Raleigh Office

Posted about 1 month back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Last September, we opened an office in Raleigh, NC. Jason moved down from the Boston office, and I moved back to the Triangle after a tour of Atlanta, Charlotte, and Boston. We couldn’t wait to get involved with the fast-growing tech scene, enjoy the small-town feel, and settle into the best city in the country for raising a family.

We moved into the American Underground in Raleigh, downtown on Fayetteville Street. We’re in a private office in an awesome co-working space. It’s an exciting environment, with folks from Uber, Google, Mozilla, and local startups like Bandwidth Labs, Groundfloor, and Photofy.

American Underground Raleigh

We’ve met hundreds of people in the community, given talks at the JavaScript and Ruby meetups, consulted with teams out of The Startup Factory, and argued about barbecue.

We finally feel like we’ve put down our roots, and now we’re looking to grow.

We’re looking for an experienced developer

Web developers at thoughtbot are able to rapidly build high-quality, fully test-driven Ruby on Rails applications. Well-qualified candidates have an excellent knowledge of HTML, CSS, JavaScript, SQL, Unix, deployment, performance, debugging, refactoring, design patterns, and other best practices.

We’re looking for an experienced designer

Product designers at thoughtbot are fully capable of creating great visual design as well as doing great product design and user experience, and then implementing their designs with HTML and CSS (Sass). We don’t have front-end developers, preferring smaller, more integrated teams that can directly realize their vision.

Benefits and perks

These are full-time positions with competitive salary and exceptional benefits, which include unlimited time off, paid conference expenses, and 100% of medical premiums paid.

Quit your job and come work with us. Head to thoughtbot.com/jobs and get in touch.

Three Things To Know About Composition

Posted about 1 month back at Luca Guidi - Home

A few days ago Aaron Patterson wrote an interesting article about composition vs inheritance with Ruby.

He says that when inheriting our classes directly from Ruby’s core objects such as Array, our public API for that object will become too large and difficult to maintain.

Consider a powerful object like String, which has 164 public methods. Once our library is released, we will have to maintain all that surface area. It isn’t worth the trouble, probably because we only wanted to pick a few methods from it. It’s better to compose an object that hides all the complexity derived from String, and to expose only the wanted behaviors.
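
As a minimal sketch of that idea (Name is a hypothetical class):

class Name
  def initialize(value)
    @value = String(value) # composed, not inherited
  end

  # Expose only the behaviors we actually want from String
  def upcase
    @value.upcase
  end
end

Name.new('luca').upcase             # => "LUCA"
Name.new('luca').respond_to?(:gsub) # => false, String's 164 methods stay hidden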

I was already aware of these issues, but that article was a reminder for fixing my OSS projects. For this reason I refactored Lotus::Utils::LoadPaths. It used to inherit from Array (169 methods), but after breaking the inheritance structure, I discovered that I only needed 2 methods.

However, there are some hidden corners that are worth sharing.

Information escape

A characteristic that I want for LoadPaths is the ability to add paths to it. After the refactoring, for the sake of consistency, I decided to name this method after Array’s #push, and to mimic its behavior.

The initial implementation of this method was:

it 'returns self so multiple operations can be performed' do
  paths = Lotus::Utils::LoadPaths.new

  paths.push('..').
        push('../..')

  paths.must_include '..'
  paths.must_include '../..'
end

class Lotus::Utils::LoadPaths
  # ...

  def push(*paths)
    @paths.push(*paths)
  end
end

When we use this Ruby method, the return value is the array itself, because the language’s designers wanted to make chainable calls possible. If we look at the implementation of our method, the implicit return value was @paths (instead of self), so subsequent invocations were directly manipulating @paths.

The test above was passing because arrays are referenced by their memory address, so the changes that happened on the outside (after the accidental escape) were also visible to the wrapping object (LoadPaths). Because our main goal is to encapsulate that object, we want to prevent situations like this.
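
Concretely, this is the kind of leak the first implementation allowed:

paths     = Lotus::Utils::LoadPaths.new
returning = paths.push('..')

returning.equal?(paths) # => false, we leaked the internal @paths array
returning.push('../..') # mutates @paths directly, bypassing LoadPaths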

it 'returns self so multiple operations can be performed' do
  paths = Lotus::Utils::LoadPaths.new

  returning = paths.push('.')
  returning.must_be_same_as(paths)

  paths.push('..').
        push('../..')

  paths.must_include '.'
  paths.must_include '..'
  paths.must_include '../..'
end

class Lotus::Utils::LoadPaths
  # ...

  def push(*paths)
    @paths.push(*paths)
    self
  end
end

Dup and Clone

LoadPaths is used by other Lotus libraries, such as Lotus::View. This framework can be “duplicated” with the goal of easing a microservices architecture, where a developer can define MyApp::Web::View and MyApp::Api::View as “copies” of Lotus::View that can independently coexist in the same Ruby process. In other words, the configurations of one “copy” shouldn’t be propagated to the others.

Until LoadPaths was inheriting from Array, a simple call to #dup was enough to get a fresh, decoupled copy of the same data. Now the object is duplicated but not the variables that it encapsulates (@paths).

paths1 = Lotus::Utils::LoadPaths.new
paths2 = paths1.dup

paths2.push '..'
paths1.include?('..') # => true, which is an unwanted result

The reason for this failure is the same as the information escape problem: we’re referencing the same array. Ruby has a special method callback, #initialize_copy, designed for cases like this.

class Lotus::Utils::LoadPaths
  # ...

  def initialize_copy(original)
    @paths = original.instance_variable_get(:@paths).dup
  end
end

Now, when paths1.dup is called, the @paths instance variable is duplicated as well, and we can safely change paths2 without affecting paths1.
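
We can verify the fix with the example that failed before:

paths1 = Lotus::Utils::LoadPaths.new
paths2 = paths1.dup

paths2.push '..'
paths1.include?('..') # => false, the copies are now independent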

Freeze

A similar problem arises for #freeze. I want Lotus::View to freeze its configurations after the application is loaded. This immutability will prevent accidental changes that may lead to software defects.

When we try to alter the state of a frozen object, Ruby raises a RuntimeError, but this wasn’t the case for LoadPaths.

paths = Lotus::Utils::LoadPaths.new
paths.freeze
paths.frozen? # => true

paths.push '.' # => It wasn't raising RuntimeError

This had an easy fix:

class Lotus::Utils::LoadPaths
  # ...

  def freeze
    super
    @paths.freeze
  end
end
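
With this fix, the earlier example raises as expected:

paths = Lotus::Utils::LoadPaths.new
paths.freeze

paths.push '.' # => RuntimeError: can't modify frozen Array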

Conclusion

Composition should be preferred over inheritance, but beware of unexpected behaviors.

I discovered these problems in a matter of minutes, because the client code of this object (Lotus::View) has integration tests that assert all these features without assuming anything about the underlying objects. For instance, it checks the attributes of a configuration one by one after its duplication, without trusting that they can safely duplicate themselves. This double-layered testing strategy is fundamental for me while building Lotus.