Episode #419 - November 15th, 2013

Posted 5 months back at Ruby5

Keep track of your consoles with marco-polo, get a head start on Sass with Bitters, smaller payloads with Rack::Deflater, Heroku open-sources its authentication, Heroku Postgres 2.0, and the MotionInMotion screencasts, all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic
New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

marco-polo
What app and environment is this console for? Never forget again with the help of marco-polo.

Bitters
Use Bitters with Bourbon and Neat to get your Sass off the ground at lightning speed.

Honey, I shrunk the internet!
A great new article on thoughtbot's blog explains asset compression and how it affects the performance of your site. An interesting read full of good info!

Heroku Open-Sources Auth
Announcing the release of a key part of Heroku's authentication infrastructure.

Heroku Postgres 2.0
Announcing an evolution of what it means to be a database as a service provider.

MotionInMotion Screencasts
The MotionInMotion screencasts will be out soon and a free pre-launch episode on motion-layout is available on the RubyMotion blog.

No Newline at End of File

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Have you ever seen "No newline at end of file" in your git diffs? Us, too.

~/.dotfiles% git diff
diff --git a/vimrc b/vimrc
index 7e31913..a8b5f95 100644
--- a/vimrc
+++ b/vimrc
@@ -2,4 +2,4 @@
 let configs = split(glob("~/.vim/configs/*"), "\n")
 for filename in configs
   execute 'source ' filename
-endfor
+endfor
\ No newline at end of file

Why does this happen and what does it mean?

Try it in your shell

Here I have made a text file and, by definition, a non-text file:

~% echo foo > text-file
~% od -c text-file
0000000   f   o   o  \n
0000004
~% wc -l text-file
1 text-file
~% echo -n foo > binary-file
~% od -c binary-file
0000000   f   o   o
0000003
~% wc -l binary-file
0 binary-file

If you open each file in vim, they will display similarly. The intuition behind what vim is doing is "separate each line with a line break", which is different from "display each \n as a line break".

However, the binary-file will cause vim to display [noeol] in its status line (with the default status line).

History lesson

This comes from an old C decision that has been passed down through Unix history:

A source file that is not empty shall end in a new-line character, which shall not be immediately preceded by a backslash character.

Since this is a "shall" clause, we must emit a diagnostic message for a violation of this rule.

So, it turns out that, according to POSIX, every text file (including Ruby and JavaScript source files) should end with a \n, or "newline" (not "a new line") character. This acts as the eol, or the "end of line" character. It is a line "terminator".

Following the rules in your editor

You can make sure you follow this rule easily:

  • For Vim users, you're all set out of the box! Just don't change your eol setting.
  • For TextMate users, you can install the Avian Missing Bundle and add TM_STRIP_WHITESPACE_ON_SAVE = true to your .tm_properties file.
  • For Sublime users, set the ensure_newline_at_eof_on_save option to true.
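
Outside of an editor, a quick Ruby sketch like the following (ours, not from the original post) will flag any file whose last byte isn't a newline:

# newline_check.rb -- a quick sketch, not from the original post.
# Usage: ruby newline_check.rb file1 file2 ...
ARGV.each do |path|
  next if File.zero?(path) # an empty file is allowed to have no newline

  last_byte = File.open(path, "rb") do |file|
    file.seek(-1, IO::SEEK_END)
    file.read(1)
  end

  puts "#{path}: no newline at end of file" unless last_byte == "\n"
end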


Should there be only One? Merging User Profiles and Discussions

Posted 5 months back at entp hoth blog - Home

Tender has an awesome feature which allows you to merge users and discussions. We are often asked why this feature is not always available, so here’s the breakdown:

Merging will become available to you when someone replies by email to a discussion they do not have access to. This typically happens when someone uses two email addresses; they start a private conversation with one, but then reply from the other one (like work and home). Since the conversation is private, and the new email address has not been authorized on that discussion, we can’t add it to the discussion, so we split that comment into a new discussion. We do recognize that it was a reply to the original discussion though, and that’s when we offer you the chance to merge it. This allows you to validate that the comment is legitimate before granting access to the private discussion.


When the merge dialog comes up, you have the choice of just merging the new comments into the original discussion, or of also merging the users, in which case the two users mentioned in the dialog box effectively become one and the same.

Tender Tip: While the ability to merge users is particularly useful, it can indeed lead to some weird stuff when they are not the same person. We’ve made sure you can’t merge your support staff and Jane Doe user profile together, but just keep an eye out when clicking that merge button so two unconnected user profiles aren’t merged unintentionally.

Down the road, we’d like to expand the feature to enable the merging of several discussions from many different users.  This would come in handy when replying to several users experiencing the same issue.

Phusion Passenger 4.0.24 released

Posted 5 months back at Phusion Corporate Blog


Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, New York Times, AirBnB, Juniper and American Express are already using it, as are over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.24 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

  • Introduced the `PassengerNodejs` (Apache) and `passenger_nodejs` (Nginx) configuration options.
  • [Apache] Introduced the `PassengerErrorOverride` option, so that HTTP error responses generated by applications can be intercepted by Apache and customized using the `ErrorDocument` directive.
  • [Standalone] It is now possible to specify some configuration options in a configuration file `passenger-standalone.json`. When Passenger Standalone is used in Mass Deployment mode, this configuration file can be used to customize settings on a per-application basis.
  • [Enterprise] Fixed a potential crash when a rolling restart is triggered while a process is already shutting down.
  • [Enterprise] Fixed Mass Deployment support for Node.js and Meteor.

Installing or upgrading to 4.0.24

Installation and upgrade instructions are available for OS X, Debian, Ubuntu, Heroku, the Ruby gem and the tarball.

Final

Fork us on Github!

Phusion Passenger’s core is open source. Please fork or watch us on Github. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



Sass Variables

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Variables in Sass can be seen as simple versions of symbols in Fireworks or Illustrator, or as Smart Objects in Photoshop: the most basic way of making a value appear in multiple places while still being able to update it where it was defined.

They prevent you from having to search through your code, trying to find all the places where you specified a certain number or name. Change the value of the variable and all the instances where it's used will update along with it.

Any single property that will be repeated in the code, such as a color or a size, usually benefits from being declared as a variable, allowing the designer to make changes more easily and quickly.

For instance, if the height of an element needs to be the same as the height of another element, it can be useful to set that height as a variable, making sure it will stay the same wherever it occurs. An example of this is the height of controls such as buttons, selects, inputs or the default size of a radius. This could be declared like so:

$control-height: 40px; // applied to any control that needs the same height.
$radius: 5px;

Whenever these sizes are needed, they are simply entered instead of a value:

height: $control-height; // instead of height: 40px;

The reason for naming the first variable control-height and not just control is that you don't know if the name control defines a color, a width or a height, so a descriptive naming convention like this one is useful.

When specifying the properties of text, it's often helpful to let the base size of the font steer other sizes, in order to keep the typography manageable and consistent. If your base font size is 16 pixels you can specify all the other typography-related sizes with 16 pixels as a base. If the base changes, every other type-related size will update to prevent relationships from breaking.

$font-size: 16px;
$line-height: $font-size * 1.6;
$h1-size: $font-size * 2; // all other fonts are related to $font-size.

To cater for all the tweaking and updating that graphic design requires, it's not only a good idea to put colors into variables, but even to name the colors in a generic way, so that if the contrast color of a page suddenly needs to change from blue to green, you won't have to go through the entire code to find all the variable instances of $blue-color.

Instead, naming it something like $contrast-color can be a good idea. The generic naming concept goes for simple things such as fundamental colors like white or black as well. You might want the color black to be a dark gray instead of pitch black, or the white to be slightly off-white. Declaring variables such as $white-color and $black-color can then be helpful.

$default-color: gray;
$font-color: darken($default-color, 60%);
$light-color: lighten($default-color, 20%);
$dark-color: darken($default-color, 20%);
$contrast-color: teal;
$warning-color: yellow;
$important-color: red;
$information-color: blue;
$black-color: lighten(black, 5%);
$white-color: darken(white, 5%);

The $font-color in the example above is based on the $default-color. While this is a handy way of making sure everything stays consistent and has the same foundation, it's important to keep these dependencies in mind when updating the variables, to prevent unwanted changes from happening when one variable is updated.

Variable dependencies are also very useful for complex layout calculations. Create inter-dependent variables to be able to reference things like widths with paddings included, widths without paddings, width of a text field in a div, and so on:

$container-width: 960px;
$container-padding: 80px;
$container-inner-width: $container-width - (2 * $container-padding);
$gutter-in-container: 10px;
$text-width: 60%;
$image-in-container-width: $container-inner-width - ($text-width - $gutter-in-container);


Great Gray Landing

Posted 5 months back at Mike Clark

[Photo: Great Gray Landing]

A wild great gray owl returns to its perch after hunting the fields of a ranch near Jackson, WY. Spending a few days photographing this magnificent creature up close is an experience I'll never forget.

FactoryGirl for seed data?

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Occasionally, somebody recommends or asks about using FactoryGirl to generate seed data for an application. We recommend against it for a few reasons:

  1. The default attributes of a factory may change over time. This means that, when you're using FactoryGirl to generate data in seeds.rb, you'll need to explicitly assign each attribute in order to ensure that the data is correct, defeating the purpose.
  2. Attributes may be added, renamed, or removed. This means your seeds file will always need to be up to date with your factories as well as your database schema. You likely won't be running rake db:seed every time you change a migration. So, your seeds file may become out of sync and it won't be immediately obvious. You'll likely notice a breakage when a new developer comes onto the project.
  3. Data will still need to be checked for presence before insertion. ActiveRecord gives you this with the "find or create" methods, which locate a record or create it if it can't be found. In addition, those methods effectively force you to define each attribute you want assigned explicitly, making FactoryGirl unnecessary (see the sketch below).
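
A seeds file written with ActiveRecord's find-or-create methods might look like this minimal sketch (the Plan model and its attributes are illustrative, not from the original post):

# db/seeds.rb -- a sketch; the Plan model and its attributes are made up.
# Every attribute is assigned explicitly, so a factory would add nothing here.
[
  { name: "Basic",   price_in_cents: 0 },
  { name: "Premium", price_in_cents: 2900 }
].each do |attributes|
  Plan.where(name: attributes[:name]).first_or_create!(attributes)
end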


Honey, I shrunk the internet! - Content Compression via Rack::Deflater

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Speed is key. The snappier the site, the more visitors like you. Speed is so important that Google uses it in site rankings and as a major component of its PageSpeed tools.

Rack::Deflater middleware compresses responses at runtime using deflate or trusty ol' gzip. Inserted correctly into your Rack app, it can drastically reduce the size of your HTML / JSON controller responses (numbers below!). On a default Heroku Rails 3 deployment, it can be configured to compress assets delivered via your dynos.

There are other (possibly better) places to handle content compression, for example:

  • A frontend proxy / load balancer,
  • Your CDN,
  • By pre-compressing content and serving that from your web server.

We're going to talk about the simplest thing that'll work for most Heroku-hosted Rails apps.

Rails 3 and 4

Add it to config/application.rb thusly:

module YourApp
  class Application < Rails::Application
    config.middleware.use Rack::Deflater
  end
end

And your HTML, JSON and other Rails-generated responses will be compressed.

Rails 3 and Runtime Asset Compression

If you're running Rails 3 (which will serve static assets for you), you can use Rack::Deflater for runtime asset compression as well. Configure it thusly:

module YourApp
  class Application < Rails::Application
    config.middleware.insert_before ActionDispatch::Static, Rack::Deflater
  end
end

Inserting Rack::Deflater before ActionDispatch::Static means you'll get runtime compression of assets served from Heroku in addition to the HTML, JSON, XML and other content your app returns.

Rails 4 assumes you're serving assets from a CDN or via your webserver (and not your Rails processes) so ActionDispatch::Static middleware isn't enabled by default. If you try to insert Rack::Deflater before it, you'll get errors.

spec'ing

Controller specs skip Rack middleware, so you need to assert that content is or isn't compressed in a feature spec, as feature specs exercise the full Rails stack. We're using Capybara's RSpec awesomeness for our integration specs. Example:

# spec/integration/compression_spec.rb
require 'spec_helper'

feature 'Compression' do
  scenario "a visitor has a browser that supports compression" do
    ['deflate', 'gzip', 'deflate,gzip', 'gzip,deflate'].each do |compression_method|
      get root_path, {}, { 'HTTP_ACCEPT_ENCODING' => compression_method }
      response.headers['Content-Encoding'].should be
    end
  end

  scenario "a visitor's browser does not support compression" do
    get root_path
    response.headers['Content-Encoding'].should_not be
  end
end

Compression Overhead - Dynamic Content

There is overhead to compressing content - let's see how significant it is. These tests were run against a fairly typical Rails 3.2 app.

We'll run siege against a dynamic content page running under Thin for 30 seconds, simulating 10 concurrent users. I'm picking typical results: I ran siege numerous times for each scenario.

siege -t30s -c 10 'http://127.0.0.1:3000/contact'
                     Before Rack::Deflater   After Rack::Deflater
Transactions         271 hits                265 hits
Data transferred     1.24 MB                 0.45 MB
Transaction rate     9.19 trans/sec          8.93 trans/sec

So we were a little slower, but the content is around 1/3rd the size.

Compression Overhead - Static Content

Let's try against static asset content.

siege -t10s -c 10 'http://localhost:3000/assets/application.css'
                     Before Rack::Deflater   After Rack::Deflater
Transactions         22601 hits              4052 hits
Data transferred     658.26 MB               21.44 MB
Transaction rate     2374.05 trans/sec       443.33 trans/sec

So when we benchmark the raw speed of compressing static assets, yes, there is overhead: it's around 5 times slower. The benefit, though, is that application.css is around 6 times smaller - 6.2KB instead of 30.1KB.

Plus we're still serving 443 requests per second. That is far beyond the demands that'd be put on any one dyno, and at traffic levels like this you're probably already using a CDN.

Real world impact

Once you've enabled the Rack::Deflater middleware, you should see compression statistics in the Chrome web inspector for (at least) your Rails-generated content. For example:

[Image: compression results in the Chrome web inspector]

Assets are also being compressed in this Rails 3.2 app via Rack::Deflater, hence the compression ratios for application.css and application.js.

                            Before Rack::Deflater   After Rack::Deflater   Compression Rate
Google PageSpeed analysis   79 of 100               93 of 100              -
application.css             30.1KB                  6.2KB                  79%
application.js              117.0KB                 40.8KB                 65%
Page HTML                   4.3KB                   2.6KB                  40%
Total size                  151.4KB                 49.6KB                 67%

So with minimal effort we were able to decrease our page sizes significantly (1/3 of their original size!) and bump up our pagespeed analysis 14 points.

Less Painful Heroku Deploys

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

When a Rails app boots, the ActiveRecord classes reflect on the database to determine which attributes and associations to add to the models. The config.cache_classes setting is true in production mode and false in development mode.

During development, we can write and run a migration and see the change take effect without restarting the web server but in production we need to restart the server for the ActiveRecord classes to learn about the new information from the database. Otherwise, the database will have the column but the ActiveRecord classes will have been cached based on the old information.

An app running on Heroku must therefore be restarted post migration.
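
To make the restart hard to forget, you could wrap the two steps in a small Rake task along these lines (a sketch of ours, not from the post; the :deploy namespace and the HEROKU_APP environment variable are made up):

# lib/tasks/deploy.rake -- a sketch; the task name and HEROKU_APP variable
# are illustrative. It runs migrations on Heroku, then restarts the dynos
# so every process picks up the new columns.
namespace :deploy do
  desc "Run pending migrations on Heroku, then restart the dynos"
  task :migrate_and_restart do
    app = ENV.fetch("HEROKU_APP")

    system("heroku run rake db:migrate --app #{app}") or abort "migration failed"
    system("heroku restart --app #{app}") or abort "restart failed"
  end
end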

You may sometimes run into an issue I call "stuck dynos", where not every process seems to be aware of new columns. Restarting your Heroku app will fix this problem.

You can introspect your running Heroku application to see if this problem has occurred.

Here's a real example:

$ production ps
=== web: `bundle exec rails server thin start -p $PORT -e $RACK_ENV`
web.1: up 2013/01/25 16:33:07 (~ 18h ago)
web.2: up 2013/01/25 16:47:15 (~ 18h ago)

=== worker: `bundle exec rake jobs:work`
worker.1: up 2013/01/25 17:30:58 (~ 17h ago)

The ups tell me the processes are running. They will say crashed if there's a problem.

If I were to run production tail (from Parity), I might just be watching the stream looking for anything unusual, like 500 errors. Sometimes error reporting services are delayed so there is no faster way to know about a post-deploy issue than tailing the logs while running through some critical workflows in the app.

If something looks unusual, I might then move over to the logging service we have set up (typically Splunk Storm or Papertrail) to run some searches to see how often the problem is coming up or if it looks new post-deploy.

New Relic or Airbrake will likely have more backtrace information by this time and we can make a decision about whether to roll back the deploy, or work on a hot fix as the next action, or record the bug and place it lower in the backlog.


Don’t Talk to (Just) Me

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Having chatrooms for your company is great. They aid communication, especially across multiple locations; they provide an archive for past conversations for those who were not around; and they can serve to refresh the memory of those who were. As your team grows, you may need to create multiple rooms with more specific topics, but discourage one-on-one chats as they do not provide the same benefits.

Our Setup

[Image: Campfire lobby]

At thoughtbot, we have Everyone, Code, Design, and a couple of Meeting rooms that can be used for one-off conversations that take longer and might not be interesting to everyone in the company. The entire company has access to these rooms. We also have rooms specific to projects, to which only the developers and designers working on those projects have access.

We have arrived at this layout after a lot of experimentation, and it is certainly a living thing—we are always interested in trying something new out in case it works better.

One thing we have determined is not a good idea is one-on-one chats. These types of conversations tend to contain information that would be valuable to the entire team ("hey, do you remember why you made this decision?") or that could benefit from the insight of others ("do you know why this test is failing?"). From our Playbook:

When things are only said on the phone, in person, in emails that don’t include the whole group, or in one-on-one chats, information gets lost, forgotten, or misinterpreted. The problems expand when someone joins or leaves the project.

Benefits

Having these discussions in a more public location encourages others to participate in the conversation and allows them to make the decision themselves to contribute, read and possibly learn from, or ignore if it doesn't apply to them. Here is an example of where both happen:

Ben: So this is odd. When I moved to ruby 2.0 I had to add "host: localhost" to my database.yml or it couldn't find the server.

Caleb: I used to need that. It turned out to be a problem with the pg gem, and I had to reinstall postgres and pg to fix it.

Joe: Works best while listening to Tool and Radiohead simultaneously

Joel: thoughts on this as an alternative to Postgres via homebrew - http://postgresapp.com

Sean: I use it, and love it.

Joe: That's what Heroku recommends: https://devcenter.heroku.com/articles/heroku-postgresql#set-up-postgres-on-mac

Best Practices

Things like mentioning a person’s name and setting up notifications for things that interest you can help you keep up with several chat rooms. Mine are: my first name, my GitHub and Twitter usernames, ‘coffee’, ‘alaska’, /game ?night/, and ‘griddler’.

It is also important to remember that when company-wide transient communication is happening in shared rooms, you need not worry about keeping up with everything. The most important information should be shared in more permanent ways, such as through Basecamp or email.

Keep conversations public. It encourages participation, builds culture, and reduces the cognitive overhead of one-on-one conversations, which you feel more obligated to budget attention for. Many chat programs provide a searchable archive, which is useful when looking up specific conversations or topics that have been discussed. With good discipline in putting communications in the right room or in a more asynchronous place, inter-organization communication can improve greatly.

Episode #418 – November 9th, 2013

Posted 6 months back at Ruby5

Live from RubyConf Miami Beach 2013

Listen to this episode on Ruby5

New Relic
New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

Terence Lee
Terence talks about the upcoming release for Bundler

Jim Gay
Jim tells us about his RubyConf talk and his thoughts on presenters and application architecture

Mike Perham
Mike talks about Sidekiq and his open source projects.

Thank You for Listening to Ruby5
Ruby5 is released Tuesday and Friday mornings.

A Tour of Rails’ jQuery UJS

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

If you have a look at the default application.js file generated by Rails, you’ll see //= require jquery_ujs. You might know exactly what this require is for, but if you’re like me you know you need it but have only a vague idea of what it’s responsible for.

Maybe you’ve looked at that line and thought, “I should really figure out what that does someday.” Well, for me, today is that day. I thought you might want to join me.

Unobtrusive JavaScript

The UJS in jquery-ujs stands for unobtrusive JavaScript. This is a rather broad term that generally refers to using JavaScript to progressively enhance the user experience for capable browsers without negatively impacting clients that do not support or do not enable JavaScript.

jquery-ujs wires event handlers to eligible DOM elements to provide enhanced functionality. In most cases, the eligible DOM elements are identified by HTML5 data-* attributes.

Let's have a look at the progressive enhancements jquery-ujs provides.

POST, PUT, DELETE Links

<%= link_to 'Delete', item, method: :delete %>

Clicking a link will always result in an HTTP GET request. If your link represents an action on a resource, it may be more semantically correct for it to be performed with a different HTTP verb; in the case above, we want to use DELETE.

jquery-ujs attaches a handler to links with the data-method attribute. When the link is clicked, the handler constructs an HTML form with a hidden input that sets the _method parameter to the requested HTTP verb, and submits that form rather than following the link.

Confirmation Dialogs

<%= form_for item, data: { confirm: 'Are you sure?' } %>

jquery-ujs attaches a handler to links or forms with the data-confirm attribute that displays a JavaScript confirmation dialog. The user can choose to proceed with or cancel the action.

Disable With

<%= form.submit data: { disable_with: 'Submitting...' } %>

Users double click links and buttons all the time. This causes duplicate requests but is easily avoided thanks to jquery-ujs. A click handler is added that updates the text of the button to that which was provided in the data-disable-with attribute and disables the button. If the action is performed via AJAX, the handler will re-enable the button and reset the text when the request completes.

AJAX Forms

<%= form_for item, remote: true %>

Adding remote: true to your form_for calls in Rails causes jquery-ujs to handle the form submission as an AJAX request. Your controller can handle the AJAX request and return JavaScript to be executed in the response. Thanks to jquery-ujs and Rails’ respond_with, setting remote: true is likely the quickest way to get your Rails application making AJAX requests. The unobtrusive nature of the implementation makes it simple to support both AJAX and standard requests at the same time.
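
For context, a controller action that serves such a remote form might look something like this sketch (the Item model and the template names are illustrative, not from the post):

# app/controllers/items_controller.rb -- a sketch only. jquery-ujs submits
# the form via AJAX, and the format.js branch returns JavaScript that will
# be executed in the browser. (On Rails 4 you'd use strong parameters here.)
class ItemsController < ApplicationController
  def create
    @item = Item.new(params[:item])

    respond_to do |format|
      if @item.save
        format.html { redirect_to @item }
        format.js   # renders app/views/items/create.js.erb
      else
        format.html { render :new }
        format.js { render 'errors' } # e.g. app/views/items/errors.js.erb (illustrative)
      end
    end
  end
end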

AJAX File Uploads

Browsers do not natively support AJAX file uploads. If you have an AJAX form that contains a populated file input, jquery-ujs will fire the ajax:aborted:file event. If this event is not stopped by an event handler, the AJAX submission will be aborted and the form will submit as a normal form.

Remoteipart is one Rails gem that hooks into this event to enable AJAX file uploads.

Required Field Validation

HTML5 added the ability to mark an input as required. Browsers with full support for this feature will stop form submission and add browser-specific styling to the inputs that are required but not yet provided. jquery-ujs adds a polyfill that brings this behavior, minus the styling, to all JavaScript-enabled browsers. There’s no default styling provided by the polyfill, which means users of impacted browsers may be puzzled as to why the form will not submit. There’s an ongoing discussion about the appropriateness of this polyfill.

You can opt out of this behavior by setting the "novalidate" attribute on your form. This will cause both the jquery-ujs polyfill and browsers with native support to skip HTML5 input validation. Given both the potential for confusion in browsers without native support and the fact that browsers with native support apply styles that may clash with your site design, handling validation is probably better left up to each developer.

Cross-Site Request Forgery Protection

Cross-Site Request Forgery (CSRF) is an attack wherein the attacker tricks the user into submitting a request to an application the user is likely already authenticated to. The user may think he's simply signing up for an email newsletter, but the attacker controlling that sign-up form is actually turning it into a request to post a status to some other website, using the user's preexisting session.

Rails has built-in protection which requires a token, only available on your actual site, to accompany every POST, PUT, or DELETE request. In collaboration with <%= csrf_meta_tags %> in your application layout HEAD, jquery-ujs augments this protection by adding the CSRF token to a header on outgoing AJAX requests.

jquery-ujs also updates the CSRF token on all non-AJAX forms on page load, which may be out-of-date if the form was rendered from a fragment cache.
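
On the Rails side, that protection is the familiar protect_from_forgery line; here is a minimal sketch for context (not from the original post):

# app/controllers/application_controller.rb -- a minimal sketch. Rails checks
# the token on non-GET requests; jquery-ujs reads it from the csrf_meta_tags
# output in the layout and sends it as the X-CSRF-Token header on AJAX requests.
class ApplicationController < ActionController::Base
  protect_from_forgery
end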

Extensibility

jquery-ujs exposes its functions in the $.rails namespace and fires many events when submitting AJAX forms. jquery-ujs behavior can be customized by overriding these methods or handling the appropriate events. We’ve already seen Remoteipart as an example of custom event handling. There also exist several gems that override $.rails.allowAction to replace JavaScript confirmation dialogs with application-specific modal dialogs.

Docker-friendly Vagrant boxes

Posted 6 months back at Phusion Corporate Blog

Vagrant

We heavily utilize Vagrant in our development workflow. Vagrant is a tool for easily setting up virtual machines as development environments, making it easy to distribute development environments and making them reconstructible and resettable. It has proven to be an indispensable tool when working in development teams with more than one person, especially when not everybody uses the same operating system.

Lately we’ve been working with Docker, which is a cool new OS-level virtualization technology. Docker officially describes it as “iPhone apps for your server”, but being the hardcore system-level guys that we are, we dislike this description. Instead we’d like to describe Docker as “FreeBSD jails for Linux + an ecosystem to make it a joy to use”. Docker, while still young and not production-ready, is very promising and can make virtualization cheap and efficient.

Googling for Vagrant and Docker will yield plenty of information and tutorials.

Today, we are releasing Docker-friendly Vagrant boxes based on Ubuntu 12.04. Docker requires at least kernel 3.8, but all the Ubuntu 12.04 Vagrant boxes that we've encountered so far come with kernel 3.2 or 3.5, so installing Docker on them requires a reboot. This makes provisioning a VM significantly more painful than it should be.

The Vagrant boxes that we’re releasing also come with a bigger virtual hard disk (40 GB) so that you don’t have to worry about running out of disk space inside your VM.

The Vagrant boxes can be found here:

https://oss-binaries.phusionpassenger.com/vagrant/boxes/

Please feel free to link to them from your Vagrantfile.
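
A Vagrantfile referencing one of the boxes might look like this sketch (the box name and the .box filename are placeholders, not real entries from the listing; pick the actual file from the directory above):

# Vagrantfile -- a sketch; "docker-friendly.box" and the box name are
# placeholder values, not file names taken from the directory above.
Vagrant.configure("2") do |config|
  config.vm.box     = "phusion-docker-friendly-ubuntu-12.04"
  config.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/docker-friendly.box"
end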

These Vagrant boxes are built automatically from Veewee definitions so that you can rebuild them yourself. Our definitions can be found on GitHub: https://github.com/phusion/open-vagrant-boxes

Enjoy these Vagrant boxes!

You may also want to check out our other products, such as Phusion Passenger, an application server for Ruby, Python, Node.js and Meteor that makes deployment extremely simple.

Discuss this on Hacker News.

Announcing Bitters, a Dash of Sass Stylesheets for Bourbon and Neat

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

The designers here like to have simple stylesheets when starting a new project: styles that are better looking than browser defaults, but not something that will dictate our visual design moving forward. These styles should also remove a lot of the duplicated work we do when getting a project off the ground, so we can start solving harder problems faster.

Our long-time solution to this problem was the stylesheets in Flutie, but Flutie's stylesheets had grown outdated; its defaults and reset were getting in the way. We were continually overriding the styles, which added unnecessary bloat to our CSS. These styles weren't doing their job; they were slowing us down at the start of a project, not helping us speed up.

Instead of trying to refine the styles in Flutie, we decided to remove them and let Flutie stand alone as ActionView helpers. We didn't want to duplicate the problems we had with Flutie's styles, so we started from scratch. We also wanted to integrate this more fully with Bourbon and Neat.

Bitters aims to solve the same set of problems the Flutie stylesheets did, as well as the problems that arose from Flutie's neglect. Unlike Flutie, the Bitters files should be installed into your Sass directory and imported at the top of your main stylesheet. Once installed, the files should be edited to fit the style of the site or application. This way you won't be overriding styles or adding unnecessary cruft to your stylesheets; instead you'll be building on top of the foundation that Bitters provides.

Bitters gives you plain variables for type, sizes, and color, a simple grid using Neat, smart defaults for typography and simply styled flashes for notifications or errors. Most importantly it should set up a consistent language and structure for your Sass.

We are still working out some of the details and I appreciate any feedback you might have.

The Product Design Sprint

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

A Product Design Sprint is a 5-phase exercise which uses design thinking to reduce the inherent risks in successfully bringing products to market. We've done six product design sprints so far and have made them a default part of our consulting engagements.

Participating in a Design Sprint orients the entire team and aims their efforts at hitting clearly defined goals. Sprints are useful starting points when kicking off a new feature, workflow, product, or business, or when solving problems with an existing product.

Integrating design sprints and design thinking into our product development process keeps us aligned with our goals, and helps us invest our time and money wisely.

Design Thinking

Design Thinking combines empathy, creativity and rationality to solve human-centered problems. It is the foundation on which a Design Sprint is built.

Empathy

With Design Thinking, we use empathy to see the world through our customers' eyes and understand their problems as they experience them. There may be many technological, financial, political, religious, human, social, and cultural forces involved. It is our job to develop a holistic understanding of these problems and forces and contextualize them in a greater world schema.

In addition to our own perspective, we aim to understand the perspectives of as many other people as possible to better diversify our understanding.

Empathy is the primary focus of Phase 1 (Understand) and a major part of Phase 5 (Test and Learn). We should aim to always maintain empathy when solving problems and building products for humans.

Creativity

Creativity is opportunity discovery. We use creativity to generate insights and solution concepts.

The most creative solutions are inspired by unique insights and intersecting perspectives. Empathy, as described above, empowers our ability to understand different perspectives and be more creative.

Collaboration inspires creativity. More perspectives, ideas, and insights lead to more opportunity.

Creativity is the focus of Phase 2 (Diverge), but is present in all phases (developing prototypes, testing/interviewing, researching/observing, creating experiments, etc.)

Rationality

We use rationality to fit solutions to the problem context through experimentation, testing and qualitative/quantitative measurements. This is the primary focus of Phase 3 (Converge) and Phase 5 (Test and Learn).

Design Thinking should pervade all of our processes outside the design sprint as well, from engineering to marketing to business development. In a complex business ecosystem design thinking can be used as a holistic approach to facilitating and maintaining a symbiotic relationship with your customers.

The Sprint Phases

A typical length for a project kick-off sprint is five days, with each day representing a different phase.

This timeframe is not rigid and should adapt to the specific needs of the problem. For example, some phases may need more than a full day and others may need less.

The aim is to develop a product or feature idea into a prototype that can be tested to help us fill our riskiest knowledge gaps, validate or invalidate our riskiest assumptions and guide future work.


Phase 1: Understand

Goal:

Develop a common understanding of the working context including the problem, the business, the customer, the value proposition, and how success will be determined.

By the end of this phase, we also aim to have identified some of our biggest risks and started to make plans for reducing them.

Why:

Common understanding will empower everyone's decision-making and contributions to the project.

Understanding our risks enables us to stay risk-averse and avoid investing time and money on things that rely on unknowns or assumptions.

Activities:

  • Define the Business Opportunity.
  • Define the Customer.
  • Define the Problem.
  • Define the Value Proposition (why will people pay you?).
  • Define context-specific terms (this will act as a dictionary).
  • Discuss short term and long term business goals (What’s the driving vision?).
  • Gather and analyze existing research.
  • Fill out the Business Model Canvas (this should be continually revisited).
  • Capture our analysis of competitive products.
  • Gather inspirational and informative examples of other people/products solving similar or analogous problems.
  • If there is an existing site/app, map out the screens.
  • As they come up in discussion, capture assumptions and unknowns on a wall or board with sticky notes. Later we can revisit this wall, group related items together and make plans to eliminate risky unknowns and address risky assumptions.

All of these definitions are expected to change as we move forward and learn more.

Deliverables:

  • Notes & documentation capturing the definitions and goals we discussed throughout the day. These notes should provide a solid reference and help with onboarding others later on.
  • A plan for initiating the next phase of the sprint.

Phase 2: Diverge

Goal:

Generate insights and potential solutions to our customers' problems.

Explore as many ways of solving the problems as possible, regardless of how realistic, feasible, or viable they may or may not be.

Why:

The opportunity this phase generates enables us to evaluate and rationally eliminate options and identify potentially viable solutions to move forward with. This phase is also crucial to innovation and marketplace differentiation.

Activities:

  • Constantly ask, “How might we…”.
  • Generate, develop, and communicate new ideas.
  • Quick and iterative individual sketching.
  • Group sketching on whiteboards.
  • Mind Mapping individually and as a group.

Deliverables:

  • Critical path diagram: highlights the story most critical to the challenge at hand. Where does your customer start, where should they end up and what needs to happen along the way?


  • Prototype goals: What is it we want to learn more about? What assumptions do we need to address?

Phase 3: Converge

Goal:

Take all of the possibilities exposed during phases 1 and 2, eliminate the wild and currently unfeasible ideas and hone in on the ideas we feel the best about.

These ideas will guide the implementation of a prototype in phase 4 that will be tested with existing or potential customers.

Why:

Not every idea is actionable or feasible and only some will fit the situation and problem context. Exploring many alternative solutions helps provide confidence that we are heading in the right direction.

Activities:

  • Identify the ideas that aim to solve the same problem in different ways.
  • Eliminate solutions that can’t be pursued currently.
  • Vote for good ideas.
  • Storyboard the core customer flow. This could be a work flow, or the story (from the customer's perspective) of how they engage with, learn about, and become motivated to purchase or utilise a product or service.

Deliverables:

  • The Prototype Storyboard: a comic book-style story of your customer moving through the previously-defined critical path. The storyboard is the blueprint for the prototype that will be created in phase 4.


  • Assumptions Table: A list of all assumptions inherent in our prototype, how we plan on testing them, and the expected outcomes which validate those assumptions.


Phase 4: Prototype

Goal:

Build a prototype that can be tested with existing or potential customers.

The prototype should be designed to learn about specific unknowns and assumptions. Its medium should be determined by time constraints and learning goals. Paper, Keynote, and simple HTML/CSS are all good prototyping media.

The prototype storyboard and the first three phases of the sprint should make prototype-building fairly straightforward. There shouldn't be much uncertainty around what needs to be done.

Why:

A prototype is a very low cost way of gaining valuable insights about what the product needs to be. Once we know what works and what doesn’t we can confidently invest time and money on more permanent implementation.

Activities:

  • Prototype implementation.

Deliverables:

  • A testable prototype.
  • A plan for testing. If we are testing workflows, we should also have a list of outcomes we can ask our testers to achieve with our prototype.

Phase 5: Test & Learn

Goal:

Test the prototype with existing or potential customers.

It is important to test with existing or potential customers because they are the ones you want your product to work for and be valuable to. Their experience with the problem and knowledge of the context shape their interaction with your product in ways that non-customers' won't.

Why:

Your customers will show you the product they need. Testing our ideas helps us learn more about things we previously knew little about and gives us a much clearer understanding of which directions we should move in next. It can also help us course-correct and avoid building the wrong product.

Activities:

  • Observe and interview customers as they interact with your prototype.
  • Observe and interview customers as they interact with competitive products.

Deliverables:

  • Summary/report of our learnings from testing the prototype.
  • A plan for moving forward beyond the design sprint.

Closing

Our Product Design Sprint process has been heavily informed by IDEO’s Human Centered Design Toolkit and a series of blog posts by Google Ventures and we are grateful for the information they have shared.

We want the work we do to have a positive impact on the world. Our goal is not just to build “a” product, but to build the “right” product. A meaningful product that meets real people’s needs and can support a viable business.

We believe that Product Design Sprints and Design Thinking will help us bring more successful products and businesses to market.

If you'd like to hire us for a product design sprint, get in touch. Also, feel free to contact me if you're interested in talking and learning more about product design sprints.