Rails Models in a Namespace

Posted over 6 years back at Mike Mondragon

If you are starting to get a cluttered model space in your Rails application, you should consider placing your models in a namespace. As an example, I’m going to go through a Rails application I’m calling Recipes. If my models were starting to have the namespace implied in the class names, such as AppleFruit in app/models/apple_fruit.rb, then that’s starting to smell like rotten apples. A better namespace would be Fruit::Apple in app/models/fruit/apple.rb.
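The move is plain Ruby namespacing. Here is a minimal, framework-free sketch (the class names come from the example; everything else is ordinary Ruby):

```ruby
# The namespace that was crammed into the class name (AppleFruit)
# becomes a real Ruby module, and Rails' constant-to-path convention
# maps Fruit::Apple to app/models/fruit/apple.rb.
module Fruit
  class Apple
  end
end

puts Fruit::Apple.name  # prints "Fruit::Apple"
```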

This is what we’ll be modeling: Fruits (Apples and Oranges) via single table inheritance, and Vegetables (Potatoes and Carrots), also via single table inheritance.

We’ll have Ingredients that belong to Fruit, Vegetables, and Recipes. Ingredients are a limited kind of join model going from the recipe through to the kind of ingredient (i.e. fruit or vegetable). Ingredients are a true join model from the fruit or vegetables back to their associated recipes. The Ingredient is polymorphic because Fruits and Vegetables are different kinds of objects.

Finally, Recipes are another single table inheritance model, but by convention they will only have ingredients; they won’t be associated with the kinds of ingredients through the polymorphic Ingredient class. To access the specific kinds of ingredients from the recipe’s perspective, you must access the collection of ingredients and then program the desired behavior to access the kinds of ingredients in your application logic.

Here’s a graph of the models we are designing:

The graph was made with Railroad, which uses Graphviz to generate the graphs.

All of the source for this example is available at the following Subversion code repository:

svn checkout http://svn.mondragon.cc/svn/recipes/trunk/ recipes

http://svn.mondragon.cc/svn/recipes/trunk/

Setup

For simplicity we’ll be using a SQLite3 database for this application; now that we are eating fruits and vegetables, we don’t need to get any fatter with an external database server floating around. This example is done in Rails 1.2.3.

Before we go on let me give you a quote that Eric Hodel has been putting in the footer of his emails:

Poor workers blame their tools. Good workers build better tools. The
best workers get their tools to do the work for them. -- Syndicate Wars

I’ve been learning many things from Eric and I try to emulate what he does as a developer. Two things he always does are practice test-driven development and use a tool he wrote called autotest to make TDD easier to accomplish. autotest supports Rails out of the box. Now on with our recipes…

This is the config/database.yml we’ll be using:

market: &market
  adapter: sqlite3

development:
  database: "db/development.db" 
  <<: *market

production:
  database: "db/production.db" 
  <<: *market

test:
#  database: ":memory:" 
  database: "db/test.db" 
  <<: *market

And we’ll start off by putting our sessions in the database and then running the migration to ensure we have our database settings correct.

rake db:sessions:create
rake db:migrate
rake test

Now, in a separate console, cd into the root of your application and start autotest:

autotest

It will watch your directory, and whenever you save a test or code file the corresponding unit tests will be run for those files.

Fruits and Vegetables models

Fruits

Now let’s make a Base for our Apple and Orange models with single table inheritance; after the fixture is generated we need to fix where it’s placed, as in the example. Note we are declaring a string attribute named ‘type’ to the model generator. The string is really a column, and having a column named ‘type’ is a Rails idiom signaling single table inheritance.

ruby script/generate model Fruit::Base type:string
mv test/fixtures/fruit/fruit_bases.yml test/fixtures/
rmdir test/fixtures/fruit/

In your unit test test/unit/fruit/base_test.rb you need to clue the test in to which table/fixture is to be used in the namespace. Incidentally, note that your tables and fixtures will still look somewhat flat even though your model classes have depth. After you save the test, autotest should complain about an error with the SQL, since you haven’t yet migrated your schema. Let’s also change the default truth test the generator writes so that autotest is testing something of value, for better test-driven development:

require File.dirname(__FILE__) + '/../../test_helper'

class Fruit::BaseTest < Test::Unit::TestCase
  fixtures :fruit_bases
  set_fixture_class :fruit_bases => Fruit::Base

  # if our notion of a valid new fruit changes then we'll catch it here
  def test_should_be_valid
    f = Fruit::Base.new
    assert f.valid?
  end

end

In your base fruit model you also have to set which table the namespaced class maps to:

class Fruit::Base < ActiveRecord::Base
  set_table_name :fruit_bases
end

Now migrate your schema and then re-save your app/models/fruit/base.rb.

rake db:migrate

autotest should now be happy, with no errors or failures:

/usr/local/bin/ruby -I.:lib:test -rtest/unit -e "%w[test/unit/fruit/base_test.rb].each { |f| require f }" | unit_diff -u
Loaded suite -e
Started
.
Finished in 0.48792 seconds.

1 tests, 1 assertions, 0 failures, 0 errors

Now let’s generate our Apples and Oranges. The generator is going to create test/fixtures/fruit/fruit_apples.yml and test/fixtures/fruit/fruit_oranges.yml, and we won’t need those fixtures because we are using single table inheritance; we’ll only have one fixture for all of the fruits: test/fixtures/fruit_bases.yml. Migrations for Fruit::Orange and Fruit::Apple are also generated. We don’t need those either, because we are doing single table inheritance from Fruit::Base. Migrate the schema while you are at it.

ruby script/generate model Fruit::Apple
rm db/migrate/*_create_fruit_apples.rb
rm test/fixtures/fruit/fruit_apples.yml
ruby script/generate model Fruit::Orange
rm db/migrate/*_create_fruit_oranges.rb
rm test/fixtures/fruit/fruit_oranges.yml
rmdir test/fixtures/fruit/
rake db:migrate

For simplicity of this example we want to have an Apple and an Orange in our fruits fixture, test/fixtures/fruit_bases.yml:

one:
  id: 1
  type: Apple
two:
  id: 2
  type: Orange
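Under the hood, the type column is just a string that ActiveRecord turns back into a class when it loads a row. Here is a framework-free sketch of that string-to-class step (fully qualified names are used for clarity; ActiveRecord also resolves short names like the fixture’s “Apple” relative to the base class’s namespace):

```ruby
# Plain Ruby, no database: simulate rows with a 'type' column and show
# how the string in the column becomes the class that gets instantiated.
module Fruit
  class Base; end
  class Apple < Base; end
  class Orange < Base; end
end

rows    = [{ "type" => "Fruit::Apple" }, { "type" => "Fruit::Orange" }]
records = rows.map { |row| Object.const_get(row["type"]).new }

puts records.map { |r| r.class.name }.inspect  # prints ["Fruit::Apple", "Fruit::Orange"]
```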

This is what their models and tests should look like. As you go through saving your changes and lining up the files, watch what autotest is telling you; don’t react to autotest until after you have set up the apple.rb, orange.rb, apple_test.rb, and orange_test.rb files. See how our inheritance is denoted in the models: Fruit::Apple < Fruit::Base and Fruit::Orange < Fruit::Base.

app/models/fruit/apple.rb

class Fruit::Apple < Fruit::Base
end

test/unit/fruit/apple_test.rb

require File.dirname(__FILE__) + '/../../test_helper'

class Fruit::AppleTest < Test::Unit::TestCase
  fixtures :fruit_bases
  set_fixture_class :fruit_bases => Fruit::Base

  # loading from the fixture there is only one Apple
  def test_there_should_only_be_one_apple_in_the_fixture
    assert_equal 1, Fruit::Apple.find(:all).length
  end

end

Orange will follow the same pattern as Apple.

Once completed, run a sanity check with the rake test:units task

rake test:units

Vegetables

Do everything we just did for Fruits, but this time for Vegetables. We want to end up with Vegetable::Base, Vegetable::Carrot, and Vegetable::Potato. Don’t forget to trigger single table inheritance when you generate the base, and trim out the extra non-STI migrations and fixtures for the Carrots and Potatoes.
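The commands mirror the fruit ones; this is an untested sketch by analogy, so check the exact filenames your generator produces:

```shell
ruby script/generate model Vegetable::Base type:string
mv test/fixtures/vegetable/vegetable_bases.yml test/fixtures/
rmdir test/fixtures/vegetable/
ruby script/generate model Vegetable::Carrot
rm db/migrate/*_create_vegetable_carrots.rb
rm test/fixtures/vegetable/vegetable_carrots.yml
ruby script/generate model Vegetable::Potato
rm db/migrate/*_create_vegetable_potatoes.rb
rm test/fixtures/vegetable/vegetable_potatoes.yml
rmdir test/fixtures/vegetable/
rake db:migrate
```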

Ingredient model

Now we’ll make the Ingredient model. It will use a polymorphic association so that it can refer to fruits and vegetables. From a Fruit::Base perspective, the Ingredient model is a join model to recipes (we’ll update our Fruit::Base code shortly to accomplish this). From the recipe’s perspective (we’ll generate the Recipe model shortly), the Ingredient model cannot be used as a join model to fruits and vegetables, because the polymorphic side of the ingredient cannot be traversed in this manner.

Generate the model with ‘kind’ being the name used in the polymorphic idiom (kind_id, kind_type) for heterogeneous ingredients and recipe_id used to join a kind of ingredient (fruit, vegetable, etc.) back to the recipe that uses it.

ruby script/generate model Ingredient::Base kind_id:integer kind_type:string recipe_id:integer
mv test/fixtures/ingredient/ingredient_bases.yml test/fixtures/ingredient_bases.yml
rmdir test/fixtures/ingredient/

app/models/ingredient/base.rb

##
# A polymorphic model to associate different kinds of
# specific ingredients with a recipe.  The joining nature
# of the ingredient is one way from its kind to the recipe.
# The recipe cannot go through the ingredient to its kind
# due to a limitation in the polymorphic model.

class Ingredient::Base < ActiveRecord::Base
  set_table_name :ingredient_bases
  belongs_to :kind, :polymorphic => true
  belongs_to :recipe, :class_name => "Recipe::Base", :foreign_key => "recipe_id"
end

There is nothing of particular significance about the polymorphic declaration in the Ingredient model. However, since Recipe is itself in a namespace, we need to help ActiveRecord with the recipe association by declaring the class_name of the recipe and the foreign key to it.
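The (kind_id, kind_type) pair can be demystified with a framework-free sketch: the type string picks the class (and therefore the table), and the id picks the row. The hash-of-hashes “tables” below are a stand-in invented for illustration, not anything ActiveRecord provides:

```ruby
# Two unrelated kinds that an ingredient can point at.
module Fruit
  class Apple; end
end
module Vegetable
  class Carrot; end
end

# stand-in "tables", keyed by id
tables = {
  "Fruit::Apple"      => { 1 => Fruit::Apple.new },
  "Vegetable::Carrot" => { 7 => Vegetable::Carrot.new },
}

# an ingredient row: kind_type chooses the table, kind_id chooses the row
ingredient = { :kind_id => 7, :kind_type => "Vegetable::Carrot", :recipe_id => 1 }
kind = tables[ingredient[:kind_type]][ingredient[:kind_id]]
puts kind.class.name  # prints "Vegetable::Carrot"
```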

Don’t forget to write a unit test for your Ingredient model.

We also need to update the basic fruit and vegetable base models.

Here is the updated app/models/fruit/base.rb

##
# A fruit base class that uses single table inheritance.
# Specific kinds of fruits should inherit from this class.
# A fruit has ingredients as a join model through which
# recipes that include the fruit can be found.

class Fruit::Base < ActiveRecord::Base
  set_table_name :fruit_bases
  has_many :ingredient, :class_name => 'Ingredient::Base',
           :foreign_key => :kind_id, :conditions => "kind_type LIKE 'Fruit::%'"
  has_many :recipes, :through => :ingredient, :uniq => true
end

Notice that the fruit base has many ingredients. But because ingredients are polymorphic (an ingredient has a kind_id column and a kind_type column), the fruit base needs to declare the foreign key the ingredient uses to refer to it and what the kind_type column will look like when an ingredient points to a fruit. Once that is established, the ingredient model can be used as a join model through which we get to the recipes that include this kind of ingredient.
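What the two declarations buy you can be simulated with plain Ruby data (this is a sketch of the logic, not ActiveRecord’s actual SQL): select this fruit’s ingredient rows (the foreign key plus the LIKE condition), then hop through them to unique recipe ids (has_many :through with :uniq).

```ruby
# A bare struct standing in for rows of the ingredient_bases table.
Ingredient = Struct.new(:kind_id, :kind_type, :recipe_id)

ingredients = [
  Ingredient.new(1, "Fruit::Apple",      10),
  Ingredient.new(1, "Fruit::Apple",      11),
  Ingredient.new(1, "Vegetable::Carrot", 12),  # same kind_id, different kind_type
  Ingredient.new(2, "Fruit::Orange",     10),
]

# the fruit with id 1 asking for its recipes
apple_rows = ingredients.select { |i| i.kind_id == 1 && i.kind_type =~ /\AFruit::/ }
recipe_ids = apple_rows.map { |i| i.recipe_id }.uniq
puts recipe_ids.inspect  # prints "[10, 11]"
```

Note how the kind_type condition is what keeps the carrot row out, even though it shares kind_id 1 with the apple.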

Update your vegetable base model accordingly.
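By analogy with the fruit base, app/models/vegetable/base.rb might look like the following sketch; only the names and the conditions string change from the fruit version:

```ruby
##
# A vegetable base class that uses single table inheritance,
# mirroring Fruit::Base above.

class Vegetable::Base < ActiveRecord::Base
  set_table_name :vegetable_bases
  has_many :ingredient, :class_name => 'Ingredient::Base',
           :foreign_key => :kind_id, :conditions => "kind_type LIKE 'Vegetable::%'"
  has_many :recipes, :through => :ingredient, :uniq => true
end
```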

Recipe model

Now let’s generate the Recipe model. It uses single table inheritance, and we’ll give each recipe a title, so this is what our generation looks like. Don’t forget to flatten the fixtures again.

ruby script/generate model Recipe::Base type:string title:string
mv test/fixtures/recipe/recipe_bases.yml test/fixtures/recipe_bases.yml
rmdir test/fixtures/recipe/

app/models/recipe/base.rb

class Recipe::Base < ActiveRecord::Base
  set_table_name :recipe_bases
  has_many :ingredients, :class_name => 'Ingredient::Base', :foreign_key => :recipe_id

  # If we could go through ingredients to their kinds this is how we would make
  # the association.  However polymorphic models cannot be used as a join model
  # when the join is towards the heterogeneous type referenced by the model
  # has_many :kinds, :through => :ingredients
end

Again, we need to clue ActiveRecord in on which ingredient model we are associating with and which foreign key is used. Don’t forget to write your tests.

Runtime

Integration test

Be sure to check out the source code of the example. It has an integration test that runs the models through their paces using predefined fixtures. This is the test:

test/integration/recipes_test.rb

require File.dirname(__FILE__) + '/../test_helper'

##
# Tests the recipes system with the simple yaml fixtures

class RecipesTest < ActionController::IntegrationTest

  # load up all the fixtures
  fixtures :fruit_bases
  set_fixture_class :fruit_bases => Fruit::Base
  fixtures :vegetable_bases
  set_fixture_class :vegetable_bases => Vegetable::Base
  fixtures :ingredient_bases
  set_fixture_class :ingredient_bases => Ingredient::Base
  fixtures :recipe_bases
  set_fixture_class :recipe_bases => Recipe::Base

  def test_fruit_salad_recipe_should_have_apples_and_oranges
    r = Recipe::Base.find(:first, :conditions => {:title => "Fruit Salad"})
    assert r
    r.ingredients.each do |i|
      assert(i.kind.class == Fruit::Apple || i.kind.class == Fruit::Orange)
    end
  end

  def test_apple_pie_recipe_should_only_have_apples
    r = Recipe::Base.find(:first, :conditions => {:title => "Apple Pie"})
    assert r
    r.ingredients.each do |i|
      assert_equal Fruit::Apple, i.kind.class
    end
  end

  def test_apple_should_be_in_fruit_salad_and_apple_pie
    a = Fruit::Apple.find(:first)
    # there are 3 recipes, but check that the :through across the polymorphic
    # ingredients is limited to this fruit's recipes
    assert_equal 2, a.recipes.length
    a.recipes.each do |r|
      assert(r.title == "Apple Pie" || r.title == "Fruit Salad")
    end

  end

end

You can explicitly run only the integration test with rake thus:

rake test:integration

Or run a specific test method within the integration test, such as:

ruby test/integration/recipes_test.rb -n test_apple_pie_recipe_should_only_have_apples

Rails console

In the Rails console the following code also shows some behavior that can be exercised with our Recipes, Ingredients, Fruits and Vegetables:

ruby script/console

Run this code in the console

# create an apple and orange ingredient
a = Fruit::Apple.create!
o = Fruit::Orange.create!
apple = Ingredient::Base.create! :kind => a
orange = Ingredient::Base.create! :kind => o

# notice that the recipe hasn't been assigned
# for this ingredient  "recipe_id"=>nil
apple.attributes

r = Recipe::Base.create! :title => "Fruit Salad"
r.ingredients << apple
r.ingredients << orange

# now the apple ingredient is associated with the
# recipe "recipe_id"=>1
apple.attributes

# look at the ingredients in this recipe, we have to go
# through the ingredient to inspect their kinds because
# we can not go through the join model from its polymorphic side
r.ingredients.collect{|i| i.kind}
r.ingredients.collect{|i| i.kind.type}

# make another recipe using the apple object
# (not the first apple ingredient) so the apple
# object can tell us which recipes it belongs to
r = Recipe::Base.create! :title => "Apple Pie"
apple = Ingredient::Base.create! :kind => a
r.ingredients << apple

# and we can see that the apple instance knows which recipes
# it is included with now
a.recipes
a.recipes.collect{|r| r.title}

a.ingredient.collect{|i| i.recipe}
a.ingredient.collect{|i| i.recipe.title}

# note STI finders are scoped by class: Base
# returns all fruit, Orange returns only oranges
Fruit::Base.find(:all)
Fruit::Orange.find(:all)

Wrap-up

All of the source for this example is available at the following Subversion code repository:

svn checkout http://svn.mondragon.cc/svn/recipes/trunk/ recipes

http://svn.mondragon.cc/svn/recipes/trunk/

Here is my lib/tasks/diagrams.rake to generate Railroad’s graphs with these Rake tasks:

rake doc:diagram:controllers   # generate controllers diagram
rake doc:diagram:models        # generate models diagram
rake doc:diagrams              # generate object graphs of models and controllers
namespace :doc do
  namespace :diagram do
    desc "generate models diagram"
    task :models do
      sh "railroad -i -l -a -m -M | dot -Tsvg | sed 's/font-size:14.00/font-size:11.00/g' > doc/models.svg"
    end

    desc "generate controllers diagram"
    task :controllers do
      sh "railroad -i -l -C | neato -Tsvg | sed 's/font-size:14.00/font-size:11.00/g' > doc/controllers.svg"
    end
  end

  desc "generate object graphs of models and controllers"
  task :diagrams => %w(diagram:models diagram:controllers)
end

Project Management and Best Practices in Retrospect-iva

Posted over 6 years back at Wood for the Trees

I don’t hear a lot about project management, even though there’s a lot about how to manage a project. Testing, deployment and source code management get the most attention, and project management seems to get the least. Maybe that’s because it hasn’t been done properly yet and all the solutions out there only address pieces of the overall problem.

So I’m going to try to clarify, for myself mostly, the kind of project management that is needed and why it is so important in development.

Getting It Right

When I say management, I mean a combination of something like Lighthouse and Basecamp, with a serious overhaul of perspective. An integral part of good development is developing ideas hand-in-hand with the code. Management is all about keeping this communication as agile as the coding process, sticking to priorities, and addressing the right things at the right time.

Is Basecamp sufficient for managing a project? No. DHH even says it is not meant for managing Rails projects; it’s for marketers and managers. It’s only a piece of the puzzle, because it provides no way to easily track code.

Is Trac sufficient for managing a project? No. It is too much like a big todo list and a bug tracker combined. It is very developer-centric—even when the developer is also the designer and manager, there’s no way to make known the other roles. Trac too is only a piece, because it provides no way to easily communicate ideas. A wiki doesn’t cut it.

Are help desks sufficient? No. They are too customer/support-centric. They have no way to easily communicate the ideas of developers and designers.

What about Lighthouse or Unfuddle? For a hosted solution, Lighthouse and Unfuddle combine Trac and Basecamp. That’s going in a decent direction. Anything which integrates different parts of the development process is addressing the need for management. But it’s not enough, because it has no integration with the customer.

What about [insert monolithic answer to everything]? No. It’s too complicated, has too many options and forms, too much information on each page. Something as complicated as Google Analytics, for example, is pushing the boundary of what is acceptable. Complicated applications get in the way of communication and understanding, even if they integrate everything. Simplicity first.

What all of these solutions lack is a focus on the different kinds of users for a project, ways of easily communicating their needs and ways of addressing those needs. Even when the developer, designer, marketer and manager are rolled into the same person, it is important to separate the roles, make them clear and integrate each one’s concerns at the right points.

It’s all about the development process.

The Development Process

I see that there are 9 stages in the development process:

  1. Management: find out the next need to address
  2. Specification/Testing: specify how the need is addressed
  3. Coding: code until the specification passes
  4. Continuous Integration: combine the efforts of multiple coders
  5. Refactoring: clean up the code
  6. Graphic Design (if needed): make the new feature appealing
  7. Deployment: release the latest revision
  8. Marketing: advertise the latest feature
  9. Customer Feedback: find out what is going well and what isn’t

If you see this process as organic, the importance of management becomes much clearer. In fact, I think management is the most important stage, more important than the code itself, for a number of reasons:

  • Management is the first step; without management, testing/coding is arbitrary
  • Management brings everyone into the development process
  • Management gives everyone an overview so they can see the wood for the trees
  • Management encourages communication between everyone
  • Management naturally focuses on the most important aspects
  • Management reinforces and rewards good development
  • Management operates organically, reflecting needs and their importance
  • Management begins the specification/coding cycle
  • Management draws from and feeds into all other stages of development

Disconnecting from these aspects of development is a serious mistake because it denies the organic element of development. Everything needs to converge at some point and management is the most natural way of acknowledging and converging all of a project’s members, roles, ideas, problems and concerns.

Poor management will try to force the development process into a linear pattern. It will approach things as ‘things to do’, ‘features to have’, ‘milestones to reach’, ‘deadlines to meet’, ‘code to test’. Everything will have its place, need to be addressed by a particular person… in short, it’ll look like a Trac installation. The code will feel strained, regimented and will generally be a rather boring thing to deal with. The developer is being strait-jacketed.

On the other hand, a lack of management will result in the process becoming chaotic. The developer will code whatever takes his fancy. Occasionally e-mails or posts containing feedback will find their way into the code, but mostly the code will diverge from the customer’s interests. The organic element has gone mad in this case because the developer is too isolated.

But good management acknowledges the organic aspect of development and lets the code flow. It translates ideas into specifications just as test-first translates specifications into code. Good project management will create and maintain strong channels of communication between developer, designer, marketer, manager and customer. The real needs for the project will appear of their own accord as different ideas converge in one place.

But that sounds much easier than it actually is. There isn’t yet an application out there which integrates all of those roles together, but some are closer than others. I think Retrospectiva could be the one which gets there first.

What is needed for good management

The three major aspects of project management are development, collaboration and integration. There needs to be a way to develop, track that development, and focus it. There needs to be collaboration and communication surrounding that development. There needs to be a process of integration between management and the other stages of development.

Ideally, a project management system will have the following aspects:

  • Stories: isolated stories to be resolved (bug, feature, question, idea)
  • Dynamic properties: status, milestone, persona, feature, assigned user, assigned group
  • Reinforcement: aspects of the application NOT to change (positive feedback, robust code)
  • Personae: ability to define personae, like power users, buyers, sellers, novices, etc.
  • Milestones: rough organisation of stories and deadlines
  • Messages: site-wide (like Basecamp) and for each milestone
  • Roles: developer, guest, manager, designer, customer, marketer, administrator
  • Groups: optional story development by groups
  • Interfaces: different interfaces for each role and/or group
  • Cross-referencing: referencing between stories, messages, milestones, source
  • Testing integration: update stories with progress on tests (e.g. Tesly)
  • Coverage Integration: stories for area of test coverage, whether 100% covered or not
  • SCM integration: update stories through commit logs
  • Continuous Build Integration: create stories for failed builds
  • Error Notification Integration: create stories for application errors
  • Customer Integration: create stories for customer feedback (positive & negative)

Most importantly, the interface(s) needs to be extremely clean. Lighthouse goes a long way in making a highly readable, even pleasurable interface. Most of the aspects I list above can exist on their own, meaning the application itself will have many facets, but each very easily understood. Cross-referencing is probably the most important of them all, since it will bring together the various aspects.

The shift in perspective I suggest for project management is to not focus on managing people (like Basecamp) or code (like Trac) or users, but ideas. Those ideas will never disappear from view, unlike tickets on Trac or todo lists on Basecamp. As stories grow and connect with new stories (like associating tickets), everyone will see the evolution and development of features and which way the project is going, and be able to react better to the movement of the project. In a way, project management also begins to document the project, but more importantly, it shows in black and white how ideas become code and how they evolve. At the centre of the project should be a cloud of ideas which each role can see differently.

Just to give a little hypothetical situation: at the beginning of your project you had a simple user authentication system. Over time, users talked about adding Open ID. Management wanted an authorisation system and an admin interface. Designers wanted a cute widget that pops down with AJAX. Developers wanted to extract it into a plugin. All of these ideas would be associated and appear together in a good management system, showing the time each one was added and completed, the role which initially suggested it, and the group or user responsible for implementing it. All the bugs, notes, support questions, requests and feature stories will clump together and naturally point towards what is needed next, if anything.

This form of project management could very well revolutionise the way development is perceived. Or is it a load of bullshit?

Project Management and Best Practices in Retrospect-iva

Posted over 6 years back at Wood for the Trees

I don’t hear a lot about project management, even though there’s a lot about how to manage a project. Testing, deployment and source code management get the most attention, and project management seems to get the least. Maybe that’s because it hasn’t been done properly yet and all the solutions out there only address pieces of the overall problem.

So I’m going to try to clarify, for myself mostly, the kind of project management that is needed and why it is so important in development.

Getting It Right

When I say management, I mean a combination of something like Lighthouse and Basecamp, with a serious overhaul of perpsective. An integral part of good development is developing ideas hand-in-hand with the code. Management is all about keeping this communication as agile as the coding process, sticking to priorities, and addressing the right things at the right time.

Is Basecamp sufficient for managing a project? No. DHH even says it is not meant for managing Rails projects; it’s for marketers and managers. It’s only a piece of the puzzle, because it provides no way to easily track code.

Is Trac sufficient for managing a project? No. It is too much like a big todo list and a bug tracker combined. It is very developer-centric—even when the developer is also the designer and manager, there’s no way to make known the other roles. Trac too is only a piece, because it provides no way to easily communicate ideas. A wiki doesn’t cut it.

Are help desks sufficient? No. They are too customer/support-centric. They have no way to easily communicate the ideas of developers and designers.

What about Lighthouse or Unfuddle? For a hosted solution, Lighthouse and Unfuddle combine Trac and Basecamp. That’s going in a decent direction. Anything which integrates different parts of the development process is addressing the need for management. But it’s not enough, because it has no integration with the customer.

What about [insert megolithic answer to everything]? No. It’s too complicated, has too many options and forms, too much information on each page. Something as complicated as Google Analytics, for example, is pushing the boundary of what is acceptable. Complicated applications get in the way of communication and understanding, even if they integrate everything. Simplicity first.

What all of these solutions lack is a focus on the different kinds of users for a project, ways of easily communicating their needs and ways of addressing those needs. Even when the developer, designer, marketer and manager are rolled into the same person, it is important to separate the roles, make them clear and integrate each one’s concerns at the right points.

It’s all about the development process.

The Development Process

I see that there are 9 stages in the development process:

  1. Management: find out the next need to address
  2. Specification/Testing: specify how the need is addressed
  3. Coding: code until the specification passes
  4. Continuous Integration: combine the efforts of multiple coders
  5. Refactoring: clean up the code
  6. Graphic Design (if needed): make the new feature appealing
  7. Deployment: release the latest revision
  8. Marketing: advertise the latest feature
  9. Customer Feedback: find out what is going well and what isn’t

If you see this process as organic, the importance of management becomes much clearer. In fact, I think management is the most important stage, more important than the code itself, for a number of reasons:

  • Management is the first step; without management, testing/coding is arbitrary
  • Management brings everyone into the development process
  • Management gives everyone an overview so they can see the wood for the trees
  • Management encourages communication between everyone
  • Management naturally focuses on the most important aspects
  • Management reinforces and rewards good development
  • Management operates organically, reflecting needs and their importance
  • Management begins the specification/coding cycle
  • Management draws from and feeds into all other stages of development

Disconnecting from these aspects of development is a serious mistake because it denies the organic element of development. Everything needs to converge at some point and management is the most natural way of acknowledging and converging all of a project’s members, roles, ideas, problems and concerns.

Poor management will try to force the development process into a linear pattern. It will approach things as ‘things to do’, ‘features to have’, ‘milestones to reach’, ‘deadlines to meet’, ‘code to test’. Everything will have its place, need to be addressed by a particular person… in short, it’ll look like a Trac installation. The code will feel strained, regimented and will generally be a rather boring thing to deal with. The developer is being strait-jacketed.

On the other hand, a lack of management will result in the process becoming chaotic. The developer will code whatever takes his fancy. Occasionally e-mails or posts containing feedback will find their way into the code, but mostly the code will diverge from the customer’s interests. The organic element has gone mad in this case because the developer is too isolated.

But good management acknowledges the organic aspect of development and lets the code flow. It translates ideas into specifications just as test-first translates specifications into code. Good project management will create and maintain strong channels of communication between developer, designer, marketer, manager and customer. The real needs for the project will appear of their own accord as different ideas converge in one place.

But that sounds much easier than it actually is. There isn’t yet an application out there which integrates all of those roles together, but some are closer than others. I think Retrospectiva could be the one which gets there first.

What is needed for good management

The three major aspects of project management are development, collaboration and integration. There needs to be a way to develop, track that development, and focus it. There needs to be collaboration and communication surrounding that development. There needs to be a process of integration between management and the other stages of development.

Ideally, a project management system will have the following aspects:

  • Stories: isolated stories to be resolved (bug, feature, question, idea)
  • Dynamic properties: status, milestone, persona, feature, assigned user, assigned group
  • Reinforcement: aspects of the application NOT to change (positive feedback, robust code)
  • Personae: ability to define personae, like power users, buyers, sellers, novices, etc.
  • Milestones: rough organisation of stories and deadlines
  • Messages: site-wide (like Basecamp) and for each milestone
  • Roles: developer, guest, manager, designer, customer, marketer, administrator
  • Groups: optional story development by groups
  • Interfaces: different interfaces for each role and/or group
  • Cross-referencing: referencing between stories, messages, milestones, source
  • Testing integration: update stories with progress on tests (e.g. Tesly)
  • Coverage Integration: stories for area of test coverage, whether 100% covered or not
  • SCM integration: update stories through commit logs
  • Continuous Build Integration: create stories for failed builds
  • Error Notification Integration: create stories for application errors
  • Customer Integration: create stories for customer feedback (positive & negative)

Most importantly, the interfaces need to be extremely clean. Lighthouse goes a long way in making a highly readable, even pleasurable interface. Most of the aspects I list above can exist on their own, meaning the application itself will have many facets, each very easily understood. Cross-referencing is probably the most important of them all, since it will bring together the various aspects.

The shift in perspective I suggest for project management is to focus not on managing people (like Basecamp) or code (like Trac) or users, but ideas. Those ideas will never disappear from view, unlike tickets on Trac or todo lists on Basecamp. As stories grow and connect with new stories (like associating tickets), everyone will see the evolution and development of features, see which way the project is going, and be able to react better to the movement of the project. In a way, project management also begins to document the project, but more importantly, it shows in black and white how ideas become code and how they evolve. At the centre of the project should be a cloud of ideas which each role can see differently.

Just to give a little hypothetical situation: at the beginning of your project you had a simple user authentication system. Over time, users talked about adding Open ID. Management wanted an authorisation system and an admin interface. Designers wanted a cute widget that pops down with AJAX. Developers wanted to extract it into a plugin. All of these ideas would be associated and appear together in a good management system, showing the time each one was added and completed, the role which initially suggested it, and the group or user responsible for implementing it. All the bugs, notes, support questions, requests and feature stories will clump together and naturally point towards what is needed next, if anything.

This form of project management could very well revolutionise the way development is perceived. Or is it a load of bullshit?

Capistrano 2.0, upgrading & fitting into a size 0 dress

Posted over 6 years back at Wood for the Trees

The improvements to Capistrano are much welcomed. My deployment recipe is now half the length it used to be and it is much easier to follow what is happening for my many types of deployment. I love the new features added, mostly dealing with manipulating scopes and enhancing the user’s ability to extend the core framework.

Review of new features

namespaces: Like Rake, you can namespace your tasks and group them together more sensibly. This feature alone is worth upgrading for just to make your scripts more sensible and easier to read.

events: Like Rails, you can now perform tasks before or after other ones rather than using the hacky ‘before_something’ and ‘after_something’. Much cleaner and much faster too.

strategies: In addition to checkout, you can now deploy via export and copy and use different strategies for deployment, such as using export for your copy_strategy rather than zips and tarballs.

scoping: All sorts of scoping has been introduced in Capistrano 2.0, from namespacing to single execution of “run” and “sudo”, allowing you to define specific roles or hosts in which your commands run.

help: Capistrano 2.0 now has a more verbose way of explaining tasks with cap -e task_name. You’ll realise how useful this is when you use it for the built-ins as well as your own.

All in all, Capistrano is pretty simple, but it is the way it is written that makes it appear so much simpler than it really is. Capistrano 2.0 takes that to a new level, not groundbreaking perhaps, but definitely a lot cleaner than its previous releases.

Upgrading from 1.4.1

There is no need to change config/deploy.rb out of the box. Capistrano 2.0 is nicely backwards compatible, unlike other things out there, and, at least for me, nothing broke because of the upgrade.

You can look at Capistrano’s instructions for upgrading, if you want to know what is being done, but for the impatient, here are the steps you have to follow before we can start drying up your deploy script.

1. Install the new version of Capistrano:

sudo gem install capistrano

2. Change into your project root and run capify:

~# cd projroot
projroot# capify .

3. Upgrade previous deployments to use the new revision tracking system:

projroot# cap -f upgrade -f Capfile upgrade:revisions

4. Rinse and repeat for each of your deployment targets.

Getting your deploy.rb into its new size 0 dress

You may now have the very understandable urge to slim down your deployment recipes. With the introduction of Capistrano 2.0, I found my deploy.rb reduced to less than half the size. Below, I cover the areas which you should focus on to get that deploy script into its new size 0 dress.

Anatomy of my deploy.rb

  1. requires: capistrano-ext, mongrel_cluster, etc.
  2. global, stage and custom variables
  3. event chains
  4. rewriting built-ins: web:disable and web:enable
  5. extra tasks: fixing permissions, copying mongrel confs, etc.
  6. custom deploy tasks: long, normal, quick
  7. maintenance tasks: backup, restore

Variables

More than before, variables are the linchpin of slimming everything down. The first thing you should do is look over every task rewrite or custom task and see how it can be turned into a simple set :var, true/false/whatever. Capistrano 2.0 makes this very easy.

With Capistrano 2.0, you should use the set command religiously, both for built-in and custom tasks.

I personally set the following at the top of my recipe.

  • Global variables: stages, deploy_via
  • Application specific: application, repository, user, scm_username
  • Deployment specific: deploy_to, rails_env
  • Custom variables: serving_via, suexec, suexec_user, suexec_group, disable_template
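As a rough illustration of what that looks like in practice (every value below is a hypothetical placeholder, not from my actual recipe), the top of such a deploy.rb might read:

```ruby
# deploy.rb -- all values are placeholders for illustration
set :stages,       %w(staging production testing)
set :deploy_via,   :export

set :application,  "example.com"
set :repository,   "http://svn.example.com/example/trunk"
set :user,         "deployer"
set :scm_username, "deployer"

set :deploy_to,    "/var/www/#{application}"
set :rails_env,    "production"

set :serving_via,  :mongrels
set :suexec,       false
set :disable_template, "app/views/admin/maintenance.rhtml"
```

Grouping them this way keeps everything stage-specific or host-specific in one obvious place near the top of the recipe.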

Deployment Strategy

I would personally suggest using export for your deploy_via strategy unless you have a reason for using checkout or copy.

Using Namespaces

Namespaces make it dead simple to group common tasks, like different restart methodologies. I use a serving_via variable which translates into the reload:whatever task to run for restarting the application. For example:

namespace :reload do
  desc "Default reloading procedure"
  task :default do
    mongrels
  end
  desc "Reload an FCGI application"
  task :fcgi, :roles => :app do
    sudo "#{current_path}/script/process/reaper -a graceful -d #{current_path}/public/dispatch.fcgi"
  end
  desc "Reload an LSAPI application"
  task :lsapi, :roles => :app do
    sudo "/usr/local/litespeed/bin/lswsctrl restart"
  end
  desc "Give the mongrels a bath"
  task :mongrels, :roles => :app do
    restart_mongrel_cluster
  end
end

Note: I warn against using restart as a namespace because it clashes with the built-in task and, in certain instances, results in infinite recursion.
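To hook the reload namespace into the deploy cycle, overriding the built-in deploy:restart task (a task override, not a restart namespace, so it avoids the clash above) is the usual customisation point. A hypothetical sketch, assuming find_and_execute_task is available as Capistrano's internal task dispatcher:

```ruby
# deploy.rb -- hypothetical glue: dispatch to reload:fcgi,
# reload:lsapi or reload:mongrels based on serving_via
set :serving_via, :mongrels

namespace :deploy do
  desc "Restart via the reload task matching serving_via"
  task :restart, :roles => :app do
    find_and_execute_task("reload:#{serving_via}")
  end
end
```

Switching restart behaviour for a deployment then becomes a one-line change to the serving_via variable.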

Maintenance Splash

The biggest change in Capistrano you may need to worry about is the removal of delete and render. Don’t despair, though, because creating a maintenance splash is still easy. This is my rewrite:

desc "Generate a maintenance.html to disable requests to the application."
deploy.web.task :disable, :roles => :web do
  remote_path = "#{shared_path}/system/maintenance.html"
  on_rollback { run "rm #{remote_path}" }
  template = File.read(disable_template)
  deadline, reason = ENV["UNTIL"], ENV["REASON"]
  maintenance = ERB.new(template).result(binding)
  put maintenance, "#{remote_path}", :mode => 0644
end

desc "Re-enable the web server by deleting any maintenance file."
deploy.web.task :enable, :roles => :web do
  run "rm #{shared_path}/system/maintenance.html"
end

Using events

Like the before and after filters in Rails, you can now cleanly chain together tasks. I’m a sucker for one-line solutions and these are really so simple that it makes my heart bleed:

before "deploy:restart", "fix:permissions"
before "deploy:migrate", "db:backup"
after "deploy:symlink", "deploy:cleanup"
after "deploy:update_code", "deploy:web:disable"
after "deploy:restart", "deploy:web:enable"

capistrano-ext & multistage

I highly recommend the use of multistage. It comes with the capistrano-ext gem (which has been upgraded to Capistrano 2.0, of course).

Basically, it separates the concerns of different deployments. If, like me, you like having a few other versions of your application out there, like a staging area, a testing area for bleeding edge features, and, of course, the production site, separating these in Capistrano before 2.0 was very irritating. Multistage sorts that out very nicely.

By default, you must specify the stage you wish to deploy. This behaviour can be overridden by setting the default_stage variable, but I like being explicit. This is what using stages looks like:

# cap production deploy

If you don’t provide ‘production’, it’ll complain and abort.

Using multistage is dead easy. Put this at the top of your deploy.rb:

  require 'capistrano/ext/multistage'
  set :stages, %w(staging production testing)

Run the task for generating your stage deploy files:

projroot# cap multistage:prepare

This will create a recipe file for each stage in a new config/deploy directory (exactly like Rails environments). Now, in each stage recipe, add all of your stage-specific tasks and variables. For example:

set :rails_env, "stage"
set :application, "staging.example.com"
set :deploy_to, "/var/www/#{application}"

Now switching between different deployments is a breeze. Just make a new recipe file for it with the necessary variables and you’re set.

Great Plugin for Facebook Apps

Posted over 6 years back at Liverail - Home

If you’ve been working through the Facebook/Rails tutorials you might find this plugin useful.

Facebook on Rails is a sexy plugin for developing Facebook apps

It adds some useful functions to Rails for creating a Facebook application.

acts_as_fb_user


class CreateUsers < ActiveRecord::Migration
  def self.up
    create_table :users do |t|
      t.column :uid, :integer, :null => false
      t.column :session_key, :string
    end

    add_index :users, :uid, :unique
  end

  def self.down
    drop_table :users
  end
end

class User < ActiveRecord::Base
  acts_as_fb_user

  def self.import(fbsession)
    user = self.find_or_initialize_by_uid(fbsession.session_user_id)

    # Assumes session_key never expires
    if fbsession.session_key != user.session_key
      user.session_key = fbsession.session_key
      user.save!
    end

    return user
  end
end


You can now do things with the user object, such as getting a user’s friends:


>> u = User.find(1)
=> #<User:...>
>> u.friends
=> [1, 2, 3]

FBMLController

You can create FBMLControllers such as


class ApplicationController < Facebook::FBMLController
  before_filter :require_facebook_install
  before_filter :import_user

  private
  def import_user
    @user = User.import(fbsession)
  end
end

Although I still feel this doesn’t need to be inherited; it could simply extend ApplicationController instead.

API Calls

API calls are now easier: there’s no need to parse the Hpricot XML yourself. You can also use fbsession in your model objects (where it belongs).


class MyController < Facebook::FBMLController
  def friends
    @me         = Facebook::Users.get_info(fbsession.session_user_id, ['first_name', 'last_name'])
    @first_name = @me.first_name
    @last_name  = @me.last_name
    @friends    = Facebook::Friends.get
  end
end

Notifications like ActionMailer


class StampNotificationPublisher < Facebook::NotificationPublisher
  def stamp(friends)
    @to_ids = friends.map(&:uid)
    @text   = "just stamped on you" 
  end
end

I advise you check it out if you plan to write any applications for Facebook.

Episode 63: Model Name in URL

Posted over 6 years back at Railscasts

By default, Rails uses the model's id in the URL. What if you want to use the name of the model instead? You can change this behavior by overriding the to_param method in the model. Watch this episode for details.
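The idea, sketched here in plain Ruby with a hypothetical Product class rather than the episode's actual code, is that to_param supplies whatever string appears in the URL; keeping the id as a prefix means Model.find(params[:id]) still works, because String#to_i reads digits only up to the first non-digit:

```ruby
# A plain-Ruby sketch of the pattern (Product and its fields
# are hypothetical, not from the episode).
class Product
  attr_reader :id, :name

  def initialize(id, name)
    @id, @name = id, name
  end

  # Rails calls to_param when building URLs for this record.
  # "42-wool-sweater" is friendlier than "42", and to_i still
  # recovers the id.
  def to_param
    "#{id}-#{name.downcase.gsub(/[^a-z0-9]+/, '-')}"
  end
end

product = Product.new(42, "Wool Sweater")
product.to_param       # => "42-wool-sweater"
product.to_param.to_i  # => 42, so find(params[:id]) keeps working
```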

Refactoring by Martin Fowler - Developer must have

Posted almost 7 years back at work.rowanhick.com

Late last year I was spending a lot of time on trains between T.O. and Montreal; on one of those trips I picked up this book and was thoroughly engrossed. Refactoring by Martin Fowler is one of those simply must-have books for any developer's bookshelf. Of all the technical books I've read, it's the one that most compels you to be a better programmer. The first chunk of the book is dedicated to the whats, whys, and testing of how you should turn your mish-mash of spaghetti code into beautiful, elegant, world-class code; the rest is a catalogue of refactorings. Each refactoring comes with a small class diagram, some theory about why you want to do it, then a detailed walkthrough of the steps to get there. Well written, and most of them are a doddle to understand. Even though the book uses examples in Java, they apply equally to your Ruby, PHP or ActionScript code, or whatever else tickles your fancy. If you want to change from a programmer into a code artisan, this is the book for you. If not, we'll leave you alone with your unmaintainable spaghetti code. Some of the refactorings are obvious; with others the light switches on and you go 'a-ha'. Certainly, if I'd had this book from day one, many moons ago, I would've been a much happier chappy. Using principles from the book, I came away refactoring a working but messy piece of code days after reading it; what I'd learned made the job a lot easier and quicker.

One big personal improvement was no longer being afraid to break up functions for readability. For example, I went from this (guilty as charged):

class MyClass
  def able_to_checkout
    if ugly_condition_1 == a && ugly_condition_2 == c && you_get_the_picture
      self.set_book_to_checked_out
    end
  end
end

to the following:

class MyClass
  def able_to_checkout
    self.set_book_to_checked_out if no_books_checked_out?
  end

  def no_books_checked_out?
    ugly_condition_1 == a && ugly_condition_2 == c && you_get_the_picture
  end
end

To round off the book, it's a hard cover with a little red cloth-tape bookmark. What other technical book have you seen with that in recent history? It's been designed to stay on your shelf for a very long time, unlike some faded examples gathering cobwebs on mine. A rare timeless classic? Quite possibly. Go out and buy it today if you haven't already. Another 10 | 10

How do you process literature?

Posted almost 7 years back at Saaien Tist

A quick glance at the side of my desk reveals two stacks of manuscripts to read, each about 20 cm high. Sound familiar? There is clearly a major task in front of me to process all that.
First thing to do is to identify what caused those piles in the first place. The answer: no system that I'm satisfied with for reference management. Of course, there is software like Reference Manager and EndNote as well as websites like Connotea and CiteULike. But they all have one major flaw: they are not suited to store the knowledge gained from those papers. Entering a reference to those papers in the software is not the same as going through them and extracting useful information. Sure, they do have a notes field where you can jot down some short remarks, but often knowledge is much easier recorded and remembered in little graphs and pictures than in words. There's reference management, and there's knowledge management.

What do I want my system to look like? First of all, it should be searchable. The tagging system provided by CiteULike/Connotea seems good for that. Also (and this might seem illogical for a bioinformatician), the system should not be fully automatic or even electronic, but analog. Why? Just pressing a button to, for example, add the abstract of a paper to the system gives a sense of... what's the word in English: volatility? For some things you should use the help of a computer, and for some you shouldn't. There's a difference between using Excel to repeat the same calculation 50 times and trying to use a PC for storing knowledge. It's me who needs to store that knowledge, not the computer; otherwise, I could always go back to Google instead of making the effort of using a reference manager in the first place. I've played around with Zotero and personal wikis in the past, and they just didn't do the trick: I still ended up copy-pasting the information instead of absorbing it.

Another advantage of using an analog system, is that when you feel your productivity behind your computer is suboptimal, you can always take your cards, find yourself a quiet place, put your feet on a desk, and flick through the things you wrote down. Slippers and pipe are optional.

During my PhD a few years ago, I used a system that was exclusively based on index cards. The inspiration came from Umberto Eco's book "Come si fa una tesi di laurea" ("How to Write a Thesis") (1977), in which he explains how he handled the knowledge for his book research. For each manuscript, I'd make a new card. The front contained an identifier, the paper title, the full reference and keywords. On the back I'd write down what I had to remember from that paper, including little graphs, schemas and such. I've got to admit that a drawback of these cards was that they were not easily searchable, but linking them worked quite well with a bit of discipline.
During those years, I used the index card system both as reference manager and as knowledgebase. Although it did work to satisfaction, the role of reference manager should be fulfilled by a better tool.

Now how could I implement something like that into a workflow? Basically, any new paper to be read should be entered in CiteULike and tagged as 'to_read'. When I've got time to read it: see if it's necessary to print out, or preferably read from the screen (we want to be nice to the trees, don't we?). When I've read the manuscript and there is interesting information to remember, only then create an index card. In case it's a landmark paper and/or I've been adding a lot of comments and markings in the text: keep the printout as well, and mark the index card that I've got that printout as well.
Let's try this out for a few weeks and see where it goes...

BTW: for a knowledgebase system based on index cards taken to the extreme, see PoIC (Pile of Index Cards).

Episode 62: Hacking ActiveRecord

Posted almost 7 years back at Railscasts

Have you ever wanted to temporarily disable all validations? Well, ActiveRecord doesn't support this, but that doesn't mean we can't add it. This episode will show you how to open up an existing class and change its behavior.
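The general trick, shown here with a plain hypothetical class rather than ActiveRecord itself, is that Ruby classes are always open: you can reopen one, alias an existing method to preserve it, and add a variant that bypasses the original behaviour.

```ruby
# Hypothetical stand-in for an ActiveRecord model: save runs
# validations before "persisting".
class Record
  def valid?
    false  # pretend validation always fails
  end

  def save
    valid? ? "saved" : "not saved"
  end
end

# Reopen the class: keep the old save around under a new name,
# and add a variant that skips validation entirely.
class Record
  alias_method :save_with_validation, :save

  def save_without_validation
    "saved"
  end
end

r = Record.new
r.save                     # => "not saved"
r.save_without_validation  # => "saved"
```

The same reopening technique works on ActiveRecord::Base itself, which is what makes this kind of hack possible without patching Rails.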

Impressed With Comatose CMS

Posted almost 7 years back at zerosum dirt(nap) - Home

Often after I’m finished building the bulk of a web app, I find there are some secondary pages that need to be built. These pages are largely informational, such as an about page, a contact page, an FAQ, etc. They’re relatively static in terms of the content, but it’s always nice if we can supply our client (or ourselves) with a nice CMS-style interface to make updating them easy, and within the context of our existing application layouts. Keep it simple, keep it DRY.

The obvious thing is to cook up some sort of PagesController from scratch. This is nice because it’ll make use of your existing facilities, your authentication/authorization system, layouts, etc. It is custom, after all, and a custom fit is almost always the best fit. But it’s a fair bit of work for something that’s probably not ‘core’ to the application, and takes cycles away from other places they could be better spent.

On the other hand, you can integrate with a 3rd-party CMS or blogging package like Typo, Radiant CMS, or Mephisto. They’re all great packages and do what they do really well. The downside is you’ve got to write a fair amount of glue to hook everything together and make it look (and feel) uniform.

Another option is to use Matt McCray’s Comatose plugin, a micro CMS. It’s got all the basic functionality you want for this sort of stuff out of the box, and it couldn’t be much easier to use. The real bonus is that integration is almost completely seamless, which makes it (imho) the best of both worlds for this sort of project.

Installing the plugin gets you a ComatoseController and a ComatoseAdminController. You add an entry (or multiple entries, if you like) in your routes file to tell your application when to invoke the ComatoseController. You might prefer a scheme where all URLs starting with /pages are passed to Comatose, for example. Then you log into the admin controller (which also needs an entry in routes) to create the pages. All the basic management tools we need are here; pages are organized hierarchically and can be edited with great ease, using a variety of markup filters. Each page gets a number of attributes, including a title, keywords, author, etc.
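Wiring both controllers up looks something like the following; the route helper names are from memory of the plugin's README, so treat them as assumptions and verify against your copy:

```ruby
# config/routes.rb -- Rails 1.2-era routing; helper names and
# path prefixes here are illustrative assumptions
ActionController::Routing::Routes.draw do |map|
  map.comatose_admin 'admin/pages'  # the ComatoseAdminController
  map.comatose_root  'pages'        # everything under /pages served by Comatose
end
```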

Basically it’s everything we need for the bare-minimum no-frills CMS experience and nothing we don’t. Which is just the way I like it. Check it out for your next project.

UPDATE

Anyone having issues with Comatose and authentication should check out this bug report. If you’re specifying an alternate session key, you should put it in environment.rb instead of in ApplicationController:

ActionController::Base.session_options[:session_key] = "_your_custom_session_id"

Comatose controllers inherit directly from ActionController::Base instead of from your application controller. So if you specify the session key in application.rb, the Comatose-driven sections of your app will be blissfully unaware of it. This means a method like logged_in? (which checks the session for your login status) will always report back as false.

Episode 61: Sending Email

Posted almost 7 years back at Railscasts

This is a brief guide to sending email in Rails. See how to configure the environment, generate a mailer, create a template, and deliver the mail.
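For a rough idea of what those steps involve, here is a Rails 1.2-era sketch. The server names, addresses, and the `Notifier` class are placeholders of mine, not taken from the episode:

```ruby
# config/environment.rb -- tell ActionMailer how to deliver
ActionMailer::Base.delivery_method = :smtp
ActionMailer::Base.smtp_settings = {
  :address => "smtp.example.com",  # placeholder SMTP host
  :port    => 25,
  :domain  => "example.com"
}

# app/models/notifier.rb -- e.g. from `script/generate mailer Notifier welcome`
class Notifier < ActionMailer::Base
  def welcome(user)
    recipients user.email
    from       "noreply@example.com"
    subject    "Welcome!"
    body       :user => user  # rendered via app/views/notifier/welcome.rhtml
  end
end

# Anywhere in the app, deliver with the generated class method:
#   Notifier.deliver_welcome(@user)
```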

XChain - it has a name, license, and home.

Posted almost 7 years back at work.rowanhick.com

The open source Rails/Flex eCommerce application now has a name - XChain - pronounced cross-chain. I had some air travel time over the weekend and put it to good use coming up with a name. The basic premise is that the app should really be classified as a Supply Chain Management system, as it deals with a lot more than simply taking people's money for a product (order fulfillment, shipment tracking, eventually CRM and inventory management), and being a cross of two major technologies the name seems to fit. Catchy enough and has meaning. Well, I think so anyway; others might disagree (that's what the comments form at the bottom is for...). The only downer is that XChain.com is taken by some spammer, but I've snapped up .ca and other appropriate domains.

To this end, I've also created a home for it here http://code.google.com/p/xchain/ which seems to be a good location for now, and satisfies the free and accessible-by-SVN requirements. Alas, no RSS feed, so stick to here for the updates.

I'm not a lawyer by any stretch, and have to do a little more digging, but I've basically decided on a license: the Mozilla Public License. My wish is that anyone will be able to use it for either non-commercial or commercial applications free of charge. The only restriction is that if you extend it, you have to provide source code back to the community. I believe this is fair enough, as you will be getting a tonne of usable code out of the box to start with. I just need to check that we can provide mechanisms for companies' sensitive proprietary processes - to make sure these can be wrapped up in a separate lib, or a DSL stored in the db, so they don't need to be opened up to the outside world.

Final decision of the day is to go with Cairngorm - it has more exposure so will hopefully appeal to a wider base of developers, and seems more suited to the scale of this application.
I also found a Cairngorm Rails code generator via onrails.org which looks like it will do the business, saving time generating code. Right, time to get some code into that repository.... In the meantime, feel free to let me know your thoughts on the name and license choice.

New DRYML - Part 1

Posted almost 7 years back at The Hobo Blog

As I’ve mentioned a few times, there are lots of breaking changes in the “new DRYML”. In some ways there’s not a huge amount of new functionality, but we really feel it’s much cleaner and more elegant now. I’ll go over the new features in this and probably one or two further posts.

If you want to try any of this out, Hobo 0.6-pre1 has been tagged in the repository:

  • svn://hobocentral.net/hobo/tags/rel_0.6-pre1

It’s just a preview though and you’ll likely encounter bugs. We won’t be updating the website or releasing a gem.

Changes to current features

Let’s start by working through some features from the perspective of changing an app to work with the new DRYML.

A bunch of name changes

The hobolib directory (app/views/hobolib) is now taglibs. We wanted to make Hobo feel less like an add-on and more integrated.

<taglib> is now <include>

The part_id attribute for creating ajax parts is now simply part. Hopefully this will help avoid some confusion about the different roles in the ajax mechanism of part names and DOM IDs.

The xattrs attribute for adding a hash-full of attributes to a tag in one go is now merge_attrs.

Code attributes

Code attributes are now signified with & instead of a #, so for example:

<if q="&logged_in">

Instead of

<if q="#logged_in">

The reason for the change is that you can now use #{...} at the start of an attribute, e.g.:

<button label="#{name} is my name" />

field and with instead of attr and obj

To set the context to some field of the current context you now say field="..." instead of attr="...". You can also use a new shorthand:

<repeat:comments> ... </repeat>

Notice the :comments part is missing from the close tag (that’s optional).

To set the context to any value, use with="&..." instead of obj="#..."
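Putting the two renamings together, a loop over a collection that used to use obj might now read like this (`recent_posts` here is a hypothetical helper, just for illustration):

```
<!-- Old DRYML -->
<repeat obj="#recent_posts"> ... </repeat>

<!-- New DRYML, equivalent -->
<repeat with="&recent_posts"> ... </repeat>
```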

Bye bye parameter tags, hello templates

Parameter tags used to offer an alternate syntax to attributes for passing parameters to tags, so

<page title="My Page"/>

Was the same as

<page><:title>My Page</:title></page>

It looks great in a simple example like that, but in practice this feature was getting messy. Looking at some DRYML code, it was far from obvious why you’d be using a <:parameter_tag> in one place and a <normal_tag> in another. And what happened when you had a tag-body and parameter tags? That was even messier. And what about inner tags? Are they another name for parameter tags or something different? And then there was the mysterious content_option and replace_option. Which one should I use? Why?

So now there are no parameter tags, and instead there are template tags. Templates are a new kind of tag you can define:

  • They are distinguished from normal tags because the name is in <CamelCase>
  • They don’t have a tag-body. You never use <tagbody> in a template definition
  • Templates have attributes like regular defined tags. They also have parameters
  • A parameter is a section of content that the caller of the template can augment or replace
  • You create a parameter by adding the param attribute to any tag inside the template

Here’s an example:

<def tag="Page">
  <html>
    <head param="head">
      <title param="title" />
    </head>
    <body param="body">
      <div class="header" param="header" />
      <div class="main" param="main" />
      <div class="footer" param="footer" />
    </body>
  </html>
</def>

(Note: it’s quite common for the parameter name to be the same as the tag name, as in <head param="head">; in this case you can omit the parameter name, e.g. <head param>.)

When calling this template, you can provide content and attributes for any of the parameters. You can also append and prepend to parameters, or even replace parameters entirely:

<Page>
  <title>My Page</title>
  <head.append>
    <script src="..."/>
  </head.append>
  <body onclick="runMyScript()"/>
  <header>
    ...my header...
  </header>
  <main>
    ...my content...
  </main>
  <footer.replace/>
</Page>

To explain what’s going on in terms of “old DRYML”, it’s as if every child of <page> is a parameter tag. <title>, <head>, <body> etc. are neither defined tags nor plain HTML tags. They are template parameters. The attributes and content of these tags are passed to the template and appear in the appropriate places.

Parameters can be called with various modifiers. In the above example we see <head.append>, which adds a script tag to the head section, and <footer.replace/>, which replaces the footer entirely – in this case with nothing at all, so the footer is simply removed. The full set of these modifiers is:

  • append: append content to the body of the parameter
  • prepend: prepend content to the body of the parameter
  • before: insert content just before the parameter
  • after: insert content just after the parameter
  • replace: replace the parameter entirely

The power really kicks in with the fact that you can nest parameters, but I think that will have to go in part 2, along with local tags, control attributes and a few other bits and bobs…