Episode #437 - February 4th, 2014

Posted 2 months back at Ruby5

Token Based Authentication, Recommundle, git_pretty_accept, PStore, Practicing Ruby, and RailsBricks 2 all in this episode of the Ruby5!

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

Token Based Authentication in Rails

This week our very own Carlos Souza wrote up a blog post about how to use Token Based Authentication in your Rails app.
Token Based Authentication in Rails
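
Rails ships with controller-level helpers for this style of authentication; a minimal sketch of the general idea (the User model and auth_token column are assumptions, not necessarily what the post uses):

class ApplicationController < ActionController::Base
  before_action :authenticate

  private

  # Expects a header like: Authorization: Token token="abc123"
  def authenticate
    authenticate_or_request_with_http_token do |token, options|
      User.find_by(auth_token: token).present?
    end
  end
end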

Recommundle

Chris Tonkinson released recommundle, a recommendation engine for Gemfiles. You upload your project's Gemfile and it recommends gems it thinks you might be interested in checking out.
Recommundle

git_pretty_accept

George Mendoza released the git_pretty_accept gem this week, which automates his team's preferred method of accepting GitHub pull requests in order to keep their project history readable.
git_pretty_accept

Persisting data in Ruby with PStore

Rob Miller wrote up a blog post about how to persist data in Ruby in situations where using a database might seem like overkill.
Persisting data in Ruby with PStore
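
For context, PStore ships with Ruby's standard library: it persists a Hash-like structure to a single file and wraps all access in transactions. A minimal sketch (file name and keys are illustrative):

require 'pstore'

store = PStore.new('data.pstore')

# writes must happen inside a transaction
store.transaction do
  store[:users] = ['Alice', 'Bob']
end

# pass true for a read-only transaction
store.transaction(true) do
  store[:users] # => ["Alice", "Bob"]
end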

Practicing Ruby journal moves to open-access

This week Gregory Brown of Prawn fame announced that he's giving open access to 68 articles from the Practicing Ruby journal.
Practicing Ruby journal moves to open-access

RailsBricks 2

Nico Schuele dropped us an email to let us know about RailsBricks 2. This new version is written 100% in Ruby, no longer relies on any Bash commands, and includes a test framework.
RailsBricks 2

Automatically wait for AJAX with Capybara

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Capybara's very good about waiting for AJAX. For example, this code will keep checking the page for the element for Capybara.default_wait_time seconds, allowing AJAX calls to finish:

expect(page).to have_css('.username', text: 'Gabe B-W')

But there are times when that's not enough. For example, in this code:

visit users_path
click_link 'Add Gabe as friend via AJAX'
reload_page
expect(page).to have_css('.favorite', text: 'Gabe')

We have a race condition between click_link and reload_page. Sometimes the AJAX call will go through before Capybara reloads the page, and sometimes it won't. This kind of nondeterministic test can be very difficult to debug, so I added a little helper.

Capybara's Little Helper

Here's the helper, via Coderwall:

# spec/support/wait_for_ajax.rb
module WaitForAjax
  def wait_for_ajax
    Timeout.timeout(Capybara.default_wait_time) do
      loop until finished_all_ajax_requests?
    end
  end

  def finished_all_ajax_requests?
    page.evaluate_script('jQuery.active').zero?
  end
end

RSpec.configure do |config|
  config.include WaitForAjax, type: :feature
end

We automatically include every file in spec/support/**/*.rb in our spec_helper.rb, so this file is automatically required. Since only feature specs can interact with the page via JavaScript, I've scoped the wait_for_ajax method to feature specs using the type: :feature option.
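
That auto-requiring is usually just a glob in spec_helper.rb; a minimal sketch, assuming a standard Rails project layout:

# spec/spec_helper.rb
Dir[Rails.root.join('spec/support/**/*.rb')].each { |file| require file }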

The helper uses the jQuery.active variable, which tracks the number of active AJAX requests. When it's 0, there are no active AJAX requests, meaning all of the requests have completed.

Usage

Here's how I use it:

visit users_path
click_link 'Add Gabe as friend via AJAX'
wait_for_ajax # This is new!
reload_page
expect(page).to have_css('.favorite', text: 'Gabe')

Now there's no race condition: Capybara will wait for the AJAX friend request to complete before reloading the page.

Change we can believe in (and see)

This solution can hide a bad user experience. We're not making any DOM changes on AJAX success, meaning Capybara can't automatically detect when the AJAX completes. If Capybara can't see it, neither can our users. Depending on your application, this might be OK.

One solution might be to have an AJAX spinner in a standard location that gets shown when AJAX requests start and hidden when AJAX requests complete. To do this globally in jQuery:

jQuery.ajaxSetup({
  beforeSend: function(xhr) {
    $('#spinner').show();
  },
  // runs after AJAX requests complete, successfully or not
  complete: function(xhr, status){
    $('#spinner').hide();
  }
});
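
With a spinner like that in place, a feature spec can also wait on the UI itself rather than on jQuery internals. A minimal sketch, assuming the #spinner element above (Capybara's have_no_css matcher retries for up to default_wait_time):

click_link 'Add Gabe as friend via AJAX'
expect(page).to have_no_css('#spinner') # passes once the spinner has been hidden again
reload_page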

What's next?

There is no official documentation on jQuery.active, since it's an internal variable, but this Stack Overflow answer is helpful. To see how we require all files in spec/support, read through our spec_helper template.

Credits

Thanks to Jorge Dias and Ancor Cruz on Coderwall for the original and refactored helper implementations.

Opening an Austin Office

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We're pleased to announce that we're opening an office in Austin, Texas!

Starting in early March we'll have a team in town consisting of myself and Alex (at least temporarily). Caleb will join shortly thereafter.

This new office will do the same work we're currently doing at all of our existing offices. We'll build high quality mobile and web apps for our clients and we'll do it face-to-face with clients in Austin.

Get in touch if you're interested in hiring or joining our Austin team.

We're looking forward to many years of Ruby meetups, iOS meetups, design meetups, 512 Pecan Porters, BBQ, afternoons at the Comal and Zilker Park, SXSW, Austin City Limits and nights at Stubb's.

See y'all there.

Episode #436 - January 31st, 2014

Posted 3 months back at Ruby5

Weekly Elixir news, control your AR Drone with Argus, use STI with an hstore, learning about Rails validators, sparklines in Ruby, and readme searching with HandCooler all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Elixir Fountain

Keeping up with what's going on in the Elixir community has never been easier. The Elixir Fountain weekly mailing list has you covered.
Elixir Fountain

Argus

Have a Parrot AR Drone and a command line? The Argus gem lets you control your quadcopter in Ruby!
Argus

STI + Hstore

Have a better STI experience in Rails by leveraging the Postgres Hstore with hstore_accessor.
STI + Hstore

Rails Errors and Validators

Learn the ins and outs of how Rails validators work with this detailed blog post.
Rails Errors and Validators

Sparkr

All the goodness of Spark now in your Ruby CLI!
Sparkr

HandCooler

Finding that gem readme has never been easier!
HandCooler

Optimizing Web Font Rendering Performance

Posted 3 months back at igvita.com

Web font adoption continues to accelerate across the web: according to HTTP Archive, ~37% of top 300K sites are using web fonts as of early 2014, which translates to a 2x+ increase over the past twelve months. Of course, this should not be all that surprising to most of us. Typography has always been an important part of good design, branding, and readability, and web fonts offer many additional benefits: the text is selectable, searchable, zoomable, and high-DPI friendly. What's not to like?

Ah, but what about the rendering speed, don't web fonts come with a performance penalty? Fonts are an additional critical resource on the page, so yes, they can impact rendering speed of our pages. That said, just because the page is using web fonts doesn't mean it will (or has to) render slower.

There are four primary levers that determine the performance impact of web fonts on the page:

  1. The total number of fonts and font-weights used on the page.
  2. The total byte size of fonts used on the page.
  3. The transfer latency of the font resource.
  4. The time when the font downloads are initiated.

The first two levers are directly within the control of the designer of the page. The more fonts you use, the more requests will be made and the more bytes will be transferred. The general UX best practice is to keep the number of fonts used to a minimum, which also aligns with our performance goals. Step one: use web fonts, but audit your font usage periodically and try to keep it lean.

Measuring web font latencies

The transfer latency of each font file is dependent on its byte size, which in turn is determined by the number of glyphs, font metadata (e.g. hinting for Windows platforms), and the compression method used. Techniques such as font subsetting, UA-specific optimization, and more efficient compression (e.g. Google Fonts recently switched to Zopfli for WOFF resources) are all key to optimizing the transfer size. Plus, since we're talking about latency, where the font is served from makes a difference also – i.e. a CDN, and ideally the user's cache!

That said, instead of talking in the abstract, how long does it actually take the visitor to download the web font resource on your site? The best way to answer this question is to instrument your site via the Resource Timing API, which allows us to get the DNS, TCP, and transfer time data for each font - as a bonus, Google Fonts recently enabled Resource Timing support! Here is an example snippet to report font latencies to Google Analytics:

// check if visitor's browser supports Resource Timing
if (typeof window.performance == 'object') {
  if (typeof window.performance.getEntriesByName == 'function') {

    function logData(name, r) {
      var dns = Math.round(r.domainLookupEnd - r.domainLookupStart),
          tcp = Math.round(r.connectEnd - r.connectStart),
          total = Math.round(r.responseEnd - r.startTime);
      _gaq.push(
        ['_trackTiming', name, 'dns', dns],
        ['_trackTiming', name, 'tcp', tcp],
        ['_trackTiming', name, 'total', total]
      );
    }

    var _gaq = _gaq || [];
    var resources = window.performance.getEntriesByType("resource");
    for (var i in resources) {
      if (resources[i].name.indexOf("themes.googleusercontent.com") != -1) {
        logData("webfont-font", resources[i]);
      }
      if (resources[i].name.indexOf("fonts.googleapis.com") != -1) {
        logData("webfont-css", resources[i]);
      }
    }
  }
}

The above example captures the key latency metrics both for the UA-optimized CSS file and the font files specified in that file: the CSS lives on fonts.googleapis.com and is cached for 24 hours, and font files live on themes.googleusercontent.com and have a long-lived expiry. With that in place, let's take a look at the total (responseEnd - startTime) timing data in Google Analytics for my site:

For privacy reasons, the Resource Timing API intentionally does not provide a "fetched from cache" indicator, but we can nonetheless use a reasonable timing threshold - say, 20ms - to get an approximation. Why 20ms? Fetching a file from spinning rust, and even flash, is not free. The actual cache-fetch timing will vary based on hardware, but for our purposes we'll go with a relatively aggressive 20ms threshold.

With that in mind and based on the above data for visitors coming to my site, the median time to get the CSS file is ~100ms, and ~26% of visitors get it from their local cache. Following that, we need to fetch the required font file(s), which take <20ms at the median – a significant portion of the visitors have them in their browser cache! This is great news, and a confirmation that the Google Fonts strategy of long-lived and shared font resources is working.

Your results will vary based on the fonts used, amount and type of traffic, plus other variables. The point is that we don't have to argue in the abstract about the latency and performance costs of web fonts: we have the tools and APIs to measure the incurred latencies precisely. And what we can measure, we can optimize.

Timing out slow font downloads

Despite our best attempts to optimize delivery of font resources, sometimes the user may simply have a poor connection due to a congested link, poor reception, or a variety of other factors. In this instance, the critical resources – including font downloads – may block rendering of the page, which only makes the matter worse. To deal with this, and specifically for web fonts, different browsers have taken different routes:

  • IE immediately renders text with the fallback font and re-renders it once the font download is complete.
  • Firefox holds font rendering for up to 3 seconds, after which it uses a fallback font, and once the font download has finished it re-renders the text once more with the downloaded font.
  • Chrome and Safari hold font rendering until the font download is complete.

There are many good arguments for and against each strategy and we won't go into that discussion here. That said, I think most will agree that the lack of any timeout in Chrome and Safari is not a great approach, and this is something that the Chrome team has been investigating for a while. What should the timeout value be? To answer this, we've instrumented Chrome to gather font-size and fetch times, which yielded the following results:

Webfont size range | Percent | 50th   | 70th   | 90th   | 95th   | 99th
0KB - 10KB         | 5.47%   | 136 ms | 264 ms | 785 ms | 1.44 s | 5.05 s
10KB - 50KB        | 77.55%  | 111 ms | 259 ms | 892 ms | 1.69 s | 6.43 s
50KB - 100KB       | 14.00%  | 167 ms | 882 ms | 1.31 s | 2.54 s | 9.74 s
100KB - 1MB        | 2.96%   | 198 ms | 534 ms | 2.06 s | 4.19 s | 10+ s
1MB+               | 0.02%   | 370 ms | 969 ms | 4.22 s | 9.21 s | 10+ s

First, the good news is that the majority of web fonts are relatively small (<50KB). Second, most font downloads complete within several hundred milliseconds: picking a 10 second timeout would impact ~0.3% of font requests, and a 3 second timeout would raise that to ~1.1%. Based on this data, the conclusion was to make Chrome mirror the Firefox behavior: timeout after 3 seconds and use a fallback font, and re-render text once the font download has completed. This behavior will ship in Chrome M35, and I hope Safari will follow.

Hands-on: initiating font resource requests

We've covered how to measure the fetch latency of each resource, but there is one more variable that is often omitted and forgotten: we also need to optimize when the fetch is initiated. This may seem obvious on the surface, except that it can be a tricky challenge for web fonts in particular. Let's take a look at a hands-on example:

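/* stylesheet.css (the external stylesheet referenced by the <link> below) */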
@font-face {
  font-family: 'FontB';
  src: local('FontB'), url('http://mysite.com/fonts/fontB.woff') format('woff');
}
p { font-family: FontA }
<!DOCTYPE html>
<html>
<head>
  <link href='stylesheet.css' rel='stylesheet'> <!-- see content above -->
  <style>
    @font-face {
     font-family: 'FontA';
     src: local('FontA'), url('http://mysite.com/fonts/fontA.woff') format('woff');
   }
  </style>
  <script src='application.js'></script>
</head>
<body>
<p>Hello world!</p>
</body>
</html>

There is a lot going on above: we have an external CSS file, an external JavaScript file, an inline CSS block, and two font declarations. Question: when will the font requests be triggered by the browser? Let's take it step by step:

  1. Document parser discovers external stylesheet.css and a request is dispatched.
  2. Document parser processes the inline CSS block which declares FontA - we're being clever here, we want the font request to go out as early as possible. Except, it doesn't. More on that in a second.
  3. Document parser blocks on external script: we can't proceed until that's fetched and executed.
  4. Once the script is fetched and executed we finish constructing the DOM, style calculation and layout is performed, and we finally dispatch request for fontA. At this point, we can also perform the first paint, but we can't render the text with our intended font since the font request is inflight... doh.

The key observation in the above sequence is that font requests are not initiated until the browser knows that the font is actually required to render some content on the page - e.g. we never request FontB since there is no content that uses it in above example! On one hand, this is great since it minimizes the number of downloads. On the other, it also means that the browser can't initiate the font request until it has both the DOM and the CSSOM and is able to resolve which fonts are required for the current page.

In the above example, our external JavaScript blocks DOM construction until it is fetched and executed, which also delays the font download. To fix this, we have a few options at our disposal: (a) eliminate the JavaScript, (b) add an async attribute (if possible), or (c) move it to the bottom of the page. However, the more general takeaway is that font downloads won't start until the browser can compute the render tree. To make fonts render faster we need to optimize the critical rendering path of the page.

Tip: in addition to measuring the relative request latencies for each resource, we can also measure and analyze the request start time with Resource Timing! Tracking this timestamp will allow us to determine when the font request is initiated.

Optimizing font fetching in Chrome M33

Chrome M33 landed an important optimization that will significantly improve font rendering performance. The easiest way to explain the optimization is to look at a pre-M33 example timeline that illustrates the problem:

  1. Style calculation completed at ~840ms into the lifecycle of the page.
  2. Layout is triggered at ~1040ms, and font request is dispatched immediately after.

Except, why did we wait for layout if we already resolved the styles two hundred milliseconds earlier? Once we know the styles we can figure out which fonts we'll need and immediately initiate the appropriate requests – that's the new behavior in Chrome M33! On the surface, this optimization may not seem like much, but based on our Chrome instrumentation the gap between style and layout is actually much larger than one would think:

Percentile               | 50th   | 60th   | 70th   | 80th   | 90th
Time from Style → Layout | 132 ms | 182 ms | 259 ms | 410 ms | 820 ms

By dispatching the font requests immediately after first style calculation the font download will be initiated ~130ms earlier at the median and ~800ms earlier at 90th percentile! Cross-referencing these savings with the font fetch latencies we saw earlier shows that in many cases this will allow us to fetch the font before the layout is done, which means that we won't have to block text rendering at all – this is a huge performance win.

Of course, one also should ask the obvious question... Why is the gap between style calculation and layout so large? The first place to start is in Chrome DevTools: capture a timeline trace and check for slow operations (e.g. long-running JavaScript, etc). Then, if you're feeling adventurous, head to chrome://tracing to take a peek under the hood – it may well be that the browser is simply busy processing and laying out the page.

Optimizing web fonts with Font Load Events API

Finally, we come to the most exciting part of this entire story: Font Load Events API. In a nutshell, this API will allow us to manage and define how and when the fonts are loaded – we can schedule font downloads at will, we can specify how and when the font will be rendered, and more. If you're familiar with the Web Font Loader JS library, then think of this API as that and more but implemented natively in the browser:

var font = new FontFace("FontA", "url(http://mysite.com/fonts/fontA.woff)", {});
font.ready().then(function() {
  // font loaded.. swap in the text / define own behavior.
});

font.load(); // initiate immediate fetch / don't block on render tree!

Font Load Events API gives us complete control over which fonts are used, when they are swapped in (i.e. should they block rendering), and when they're downloaded. In the example above we construct a FontFace object directly in JavaScript and trigger an immediate fetch – we can inline this snippet at the top of our page and avoid blocking on CSSOM and DOM entirely! Best of all, you can already play with this API in Canary builds of Chrome, and if all goes well it should find its way into stable release by M35.

Web font performance checklist

Web fonts offer a lot of benefits: improved readability, accessibility (searchable, selectable, zoomable), branding, and when done well, beautiful results. It's not a question of if web fonts should be used, but how to optimize their use. To that end, a quick performance checklist:

  1. Audit your font usage and keep it lean.
  2. Make sure font resources are optimized - see Google Web Fonts tricks.
  3. Instrument your font resources with Resource Timing: measure → optimize.
  4. Optimize the transfer latency and time of initial fetch for each font.
  5. Optimize your critical rendering path, eliminate unnecessary JS, etc.
  6. Spend some time playing with the Font Load Events API.

Just because the page is using a web font, or several, doesn't mean it will (or has to) render slower. A well optimized site can deliver a better and faster experience by using web fonts.

How To Edit An Existing Vim Macro

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Here's the situation:

You've just written an awesome vim macro and stopped recording. However, when you try to run the macro you realize that you forgot to add a ^ to the beginning of it, and now it only works if you go back to the beginning of the line before running it. You might be thinking that it's time to re-record, but there are two simple ways to edit an existing macro instead.

Yanking into a register:

  • "qp paste the contents of the register to the current cursor position
  • I enter insert mode at the beginning of the pasted line
  • ^ add the missing motion to return to the front of the line
  • <Escape> return to normal mode
  • "qyy yank this new modified macro back into the q register
  • dd delete the pasted line from the file you're editing

Editing the register visually:

  • :let @q=' open the q register
  • <Ctrl-r><Ctrl-r>q paste the contents of the q register into the command line
  • ^ add the missing motion to return to the front of the line
  • ' add a closing quote
  • <Enter> finish editing the macro

What's next?

If you found this useful, you might also enjoy:

Episode #435 - January 28, 2014

Posted 3 months back at Ruby5

We destroy Rake with Thor, sit back for a Mina to go over Lite Config, hit some Rubygem Development Tips, and share a Weekly dose of Vim on this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

Configure Rails with YAML with Lite Config

Last week, Gabe da Silveira released lite_config, a small, environment-aware, YAML configuration manager for Rails applications. It provides conveniences like lazy loading your config/ YAML files, indifferent access to keys, automatic scoping to your currently-running Rails environment, and the ability to locally override these settings.
Configure Rails with YAML with Lite Config

Replace Rake with Thor

Thor is incredibly useful and gives you an easy way to create Ruby-based command line applications. Did you know that Thor has extensions available? And your Thor calls can be testable? Check out Ryan Sonnek's recent post for details.
Replace Rake with Thor

6 Tips for Full Stack Open Source RubyGems Development

Last week, Giovanni Intini posted an article on the Mikamai blog covering 6 tips for open source Rubygem development. They cover considerations you should make when creating your gems, as well as services available to help you track and maintain them.
6 Tips for Full Stack Open Source RubyGems Development

Mina Deployment for Rails

Sakchai Siripanyawuth wrote to us this week about a two part video on Rails deployment with Mina, part of a series called DevOps for Developers. Mina is a deployment manager, like Capistrano or Vlad, and works over SSH. Check out the videos for more info.
Mina Deployment for Rails

Vim Weekly

Vim Weekly is a new mailing list (old school, right? Like Vim!) that sends out just five new Vim tips per week. If you're already somewhat familiar with Vim and are looking to hone your skills, these bite size tips may be just what you need.
Vim Weekly

Replacing NERDTree with Ctrl-P

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

For many months, I used NERDTree to view my project directory within Vim. During my introduction to Rails application development, it was valuable to see the Rails tree structure in my left sidebar. Two weeks ago, I un-installed NERDTree from my .vimrc because I realized that I could improve my workflow without it.

[Image: I used to depend so much on NERDTree.]

With NERDTree, the visual tree structure fixed to your left sidebar is valuable because it tells you how your project is organized. However, I realized that I was using NERDTree as a crutch to lean on whenever I needed to locate and open an existing file. By depending so much on this tool, I never questioned how I could optimize my Vim workflow in other ways.

Replacing NERDTree with Ctrl-P

I discovered that I could develop a more efficient workflow by eliminating NERDTree completely and using Ctrl-P to navigate through my project files. With Ctrl-P, you search using a "fuzzy" file finder. Therefore, Ctrl-P forces you to be familiar with your project structure. Additionally, the search finder shows you only information relevant to your search keyword, so you save screen space by not seeing all of your files at once.

Now that I don’t use NERDTree, I have greater incentives to enforce strict naming conventions throughout my applications. If I don’t use best practices to name and organize my files, I will have trouble locating them using Ctrl-P and be less efficient. Because my files are more properly organized, other team members will also locate them more efficiently.

[Image: Now, I use Ctrl-P exclusively to locate files and navigate through my project.]

What's next?

If you have our dotfiles installed on your machine, you should already have Ctrl-P. If you are not already using Ctrl-P, install the Vim plugin to enhance your workflow. Review our style guides to see how we organize our projects.

Phusion Passenger 4.0.37 released

Posted 3 months back at Phusion Corporate Blog


Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, New York Times, AirBnB, Juniper, American Express, etc. are already using it, as well as over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.37 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

  • Improved Node.js compatibility. Calling on() on the request object now returns the request object itself. This fixes some issues with Express, Connect and Formidable. Furthermore, some WebSocket-related issues have been fixed.
  • Improved Meteor support. Meteor application processes are now shut down more quickly. Previously, they lingered around for 5 seconds while waiting for all connections to terminate, but that didn’t work well because WebSocket connections were kept open indefinitely. Also, some WebSocket-related issues have been fixed.
  • Introduced a new tool `passenger-config detach-process` for gracefully detaching an application process from the process pool. Has a similar effect to killing the application process directly with `kill <PID>`, but killing directly may cause the HTTP client to see an error, while using this command guarantees that clients see no errors.
  • Fixed a crash that occurs when an application fails to spawn, but the HTTP client disconnects before the error page is generated. Fixes issue #1028.
  • Fixed a symlink-related security vulnerability.

    Urgency: low
    Scope: local exploit
    Summary: writing files to arbitrary directory by hijacking temp directories
    Affected versions: 4.0.5 and later
    Fixed versions: 4.0.37

    Description: Phusion Passenger creates a "server instance directory" in /tmp during startup, which is a temporary directory that Phusion Passenger uses to store working files. This directory is deleted after Phusion Passenger exits. For various technical reasons, this directory must have a semi-predictable filename. If a local attacker can predict this filename, and precreates a symlink with the same filename that points to an arbitrary directory with mode 755, owner root and group root, then the attacker will succeed in making Phusion Passenger write files and create subdirectories inside that target directory. The following files/subdirectories are created:

    • control_process.pid
    • generation-X, where X is a number.

    If you happen to have a file inside the target directory called `control_process.pid`, then that file’s contents are overwritten. These files and directories are deleted during Phusion Passenger exit. The target directory itself is not deleted, nor are any other contents inside the target directory, although the symlink is.

    Thanks go to Jakub Wilk for discovering this issue.

Installing or upgrading to 4.0.37

Installation and upgrade instructions are available for: OS X, Debian, Ubuntu, Heroku, Ruby gem, Tarball.

Final

Fork us on Github!

Phusion Passenger’s core is open source. Please fork or watch us on Github. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



Building Sinatra with Lotus

Posted 3 months back at Luca Guidi - Home

The beauty of Lotus is its components. Each of them is well designed to achieve one and only one goal. The main advantage of this architecture is that developers can easily use and reuse those frameworks in countless ways.

Lotus::Router accepts anonymous functions as endpoints. This feature can be used to build Sinatra with it.

Initial setup

We need to setup a Gemfile with:

source 'https://rubygems.org'
gem 'lotus-router'

As a second step, we create a Hello World application (app.rb) with Lotus::Router:

require 'rubygems'
require 'bundler/setup'
require 'lotus/router'

Application = Rack::Builder.new do
  app = Lotus::Router.new do
    get '/' do
      [200, {}, ['Hello, World!']]
    end
  end
  run app
end.to_app
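
To boot this, a minimal config.ru can require the file above and hand the app to Rack (file name assumed), then start it with rackup:

# config.ru
require_relative 'app'
run Application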

Return value of the block as response body

You may have noticed a discrepancy between typical Sinatra usage and the example above: Sinatra sets the return value of the endpoint as the body of the response, whereas here we're returning a full Rack response.

Internally, Lotus::Router uses Lotus::Routing::Endpoint to wrap an application’s endpoints. They can be any type of object that responds to #call, and it’s up to us to return a Rack response. In our case we have just a string; if we inherit from that class, we can wrap the body in a proper response:

class Endpoint < Lotus::Routing::Endpoint
  def call(env)
    [200, {}, [super]]
  end
end

The next step is to use this endpoint.

Lotus::Router uses a specific set of rules to understand which endpoint needs to be associated with a given path. For instance, when you write get '/dashboard', to: 'dashboard#index', that :to option is processed and the router will look for a DashboardController::Index class.

Those conventions are implemented by Lotus::Routing::EndpointResolver, which is used as the default resolver. If you want to use a different policy, or customize the way it works, pass your own resolver to the router constructor (:resolver option). We want to use the defaults and only specify the use of our custom endpoint.

require 'rubygems'
require 'bundler/setup'
require 'lotus/router'

class Endpoint < Lotus::Routing::Endpoint
  def call(env)
    [200, {}, [super]]
  end
end

r = Lotus::Routing::EndpointResolver.new(endpoint: Endpoint)

Application = Rack::Builder.new do
  app = Lotus::Router.new(resolver: r) do
    get '/' do
      'Hello, World!'
    end
  end
  run app
end.to_app

Request params

Now that we have mimicked the simplest Sinatra usage, let’s have a look at the next example: request params. Endpoint is agnostic: it’s part of an HTTP router, which is why it passes the complete Rack env to the real endpoint that it wraps. Instead, we want to use only the tokens coming from the URL. This is really simple to do:

require 'rubygems'
require 'bundler/setup'
require 'lotus/router'

class Endpoint < Lotus::Routing::Endpoint
  def call(env)
    [200, {}, [super(params(env))]]
  end

  private
  def params(env)
    env.fetch('router.params')
  end
end

r = Lotus::Routing::EndpointResolver.new(endpoint: Endpoint)

Application = Rack::Builder.new do
  app = Lotus::Router.new(resolver: r) do
    get '/' do
      'Hello, World!'
    end

    get '/greet/:planet' do |params|
      "Hello from the #{ params[:planet] }!"
    end
  end
  run app
end.to_app

A step further

What we’ve done until now is great, but noisy. We want to extract the boilerplate code into a separate file. I’ve prepared a microgem to be used with our Gemfile.

source 'https://rubygems.org'
gem 'lotus-sinatra', git: 'https://gist.github.com/8665228.git'

Now we’re left with just that beautiful DSL.

require 'rubygems'
require 'bundler/setup'
require 'lotus-sinatra'

get '/' do
  'Hello, World!'
end

get '/greet/:planet' do |params|
  "Hello from the #{ params[:planet] }!"
end

Conclusion

This example confirms how valuable the separation between Lotus frameworks is, and that Dependency Injection is a virtue.

To stay updated with the latest releases, to receive code examples, implementation details and announcements, please consider subscribing to the Lotus mailing list.


REPL Driven Development

Posted 3 months back at Jay Fields Thoughts

When I describe my current workflow I use the TLA RDD, which is short for REPL Driven Development. I've been using REPL Driven Development for all of my production work for a while now, and I find it to be the most effective workflow I've ever used. RDD differs greatly from any workflow I've used in the past, and (despite my belief that it's superior) I've often had trouble concisely describing what makes the workflow so productive. This entry is an attempt to describe what I consider RDD to be, and to demonstrate why I find it the most effective way to work.

RDD Cycle

First, I'd like to address the TLA RDD. I use the term RDD because I'm relying on the REPL to drive my development. More specifically, when I'm developing, I create an s-expression that I believe will solve my problem at hand. Once I'm satisfied with my s-expression, I send that s-expression to the REPL for immediate evaluation. The result of sending an s-expression can either be a value that I manually inspect, or it can be a change to a running application. Either way, I'll look at the result, determine if the problem is solved, and repeat the process of crafting an s-expression, sending it to the REPL, and evaluating the result.

If that isn't clear, hopefully the video below demonstrates what I'm talking about.

[Video: http://www.youtube.com/embed/P8SWtYXXOuo]

If you're unfamiliar with RDD, the previous video might leave you wondering: What's so impressive about RDD? To answer that question, I think it's worth making explicit what the video is: an example of a running application that needs to change, a change taking place, and verification that the application runs as desired. The video demonstrates change and verification; what makes RDD so effective to me is what's missing: (a) restarting the application, (b) running something other than the application to verify behavior, and (c) moving out of the source to execute arbitrary code. Eliminating those 3 steps allows me to focus on what's important, writing and running code that will be executed in production.

Feedback

I've found that, while writing software, getting feedback is the single largest time thief. Specifically, there are two types of feedback that I want to get as quickly as possible: (1) Is my application doing what I believe it is? (2) What does this arbitrary code return when executed? I believe the above video demonstrates how RDD can significantly reduce the time needed to answer both of those questions.

In my career I've spent significant time writing applications in C#, Ruby, & Java. While working in C# and Java, if I wanted to make and verify (in the application) any non-trivial change to an application, I would need to stop the application, rebuild/recompile, & restart the application. I found the slowness of this feedback loop to be unacceptable, and wholeheartedly embraced tools such as NUnit and JUnit.

I've never been as enamored with TDD as some of my peers; regardless, I absolutely endorsed it. The Design aspect of TDD was never that enticing to me, but tests did allow me to get feedback at a significantly superior pace. Tests also provide another benefit while working with C# & Java: They're the poorest man's REPL. Need to execute some arbitrary code? Write a test, that you know you're going to immediately delete, and execute away. Of course, tests have other pros and cons. At this moment I'm limiting my discussion around tests to the context of rapid feedback, but I'll address TDD & RDD later in this entry.

Ruby provided a more effective workflow (technically, Rails provided a more effective workflow). Rails applications I worked on were similar to my RDD experience: I was able to make changes to a running application, refresh a webpage and see the result of the new behavior. Ruby also provided a REPL, but I always ran the REPL external to my editor (I knew of no other option). This workflow was the closest, in terms of efficiency, that I've ever felt to what I have with RDD; however, there are some minor differences that do add up to an inferior experience: (a) having to switch out of a source file to execute arbitrary code is an unnecessary nuisance and (b) refreshing a webpage destroys any client-side state that you've built up. I have no idea if Ruby now has editor & REPL integration; if it does, then it's likely on par with the experience I have now.

Semantics

  • It's important to distinguish between two meanings of "REPL" - one is a window that you type forms into for immediate evaluation; the other is the process that sits behind it and which you can interact with from not only REPL windows but also from editor windows, debugger windows, the program's user interface, etc.
  • It's important to distinguish between REPL-based development and REPL-driven development:
    • REPL-based development doesn't impose an order on what you do. It can be used with TDD or without TDD. It can be used with top-down, bottom-up, outside-in and inside-out approaches, and mixtures of them.
    • REPL-driven development seems to be about "noodling in the REPL window" and later moving things across to editor buffers (and so source files) as and when you are happy with things. I think it's fair to say that this is REPL-based development using a series of mini-spikes. I think people are using this with a bottom-up approach, but I suspect it can be used with other approaches too.
-- Simon Katz
I like Simon's description, but I don't believe that we need to break things down to two different TLAs. Quite simply, (sadly) I don't think enough people are developing in this way, and the additional specification causes a bit of confusion among people who aren't familiar with RDD. However, Simon's description is so spot on I felt the need to describe why I'm choosing to ignore his classifications.

RDD & TDD

RDD and TDD are not in direct conflict with each other. As Simon notes above, you can do TDD backed by a REPL. Many popular testing frameworks have editor specific libraries that provide immediate feedback through REPL interaction.

When working on a feature, the short term goal is to have it working in the application as fast as possible. Arbitrary execution, live changes, and only writing what you need are 3 things that can help you complete that short term goal as fast as possible. The video above is the best example I have of how you go from a feature request to software that does what you want in the smallest amount of time. In the video, I only leave the buffer to verify that the application works as intended. If the short term goal was the only goal, RDD without writing tests would likely be the solution. However, we all know that there are many other goals in software. Good design is obviously important. If you think tests give you better design, then you should probably mix both TDD & RDD. Preventing regression is also important, and that can be accomplished by writing tests after you have a working feature that you're satisfied with. Regression tests are great for giving confidence that a feature works as intended and will continue to in the future.

REPL Driven Development doesn't need to replace your current workflow, it can also be used to extend your existing TDD workflow.

Hobo 2.1.0 released!

Posted 3 months back at The Hobo Blog

We’re proud to announce the release of Hobo 2.1.0 with Rails 4 support!

Take a look at the Download and Installation Instructions.

Many of the changes required in upgrading a Hobo 2.0 application are necessitated by the switch from Rails 3.2 to 4.0. Railscasts has a good guide to upgrading to Rails 4.0.

From Hobo’s point of view, you should hardly need to change anything :).

Gemfile

Now Hobo uses “will_paginate_hobo” gem, instead of the git repository “git://github.com/Hobo/will_paginate.git”. This should make it easier to install in systems without Git installed (users have reported problems with Windows and Git).

You also need to add the protected_attributes gem to your Gemfile.

Internal changes

In order to make Hobo compatible with Rails 4, these are the main changes that have been done:

Routing

  • url_for does not accept parameters any more
  • Remove deprecated routes system
  • match is no longer accepted in routes.rb; it has been replaced by “get” and “post” (see the sketch below)
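
A minimal before/after sketch of that routing change (path and controller are illustrative):

# Rails 3 style: rejected by Rails 4 unless an explicit verb is given via the :via option
match '/dashboard' => 'dashboard#index'

# Rails 4 style
get '/dashboard' => 'dashboard#index'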

ActiveRecord

  • Model.find(:all) is deprecated
  • finder.scoped :conditions => conditions has been replaced with finder.where(conditions)
  • raise_on_type_mismatch has been renamed to raise_on_type_mismatch!

Other

  • protected_attributes gem has been added to support the “old” way of protecting attributes
  • Domizio has made Hobo thread safe :)
  • Hobo’s custom will_paginate has been packaged into the hobo_will_paginate gem. This should make it possible to install Hobo without Git (it seems to be a bit hard under Windows).

Phusion Passenger 4.0.36 released

Posted 3 months back at Phusion Corporate Blog


Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, New York Times, AirBnB, Juniper, American Express, etc. are already using it, as well as over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.36 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

  • [Enterprise] Fixed some Mass Deployment bugs.
  • [Enterprise] Fixed a bug that causes an application group to be put into Deployment Error Resistance Mode if rolling restarting fails while deployment error resistance is off. Deployment Error Resistance Mode is now only activated if it’s explicitly turned on.
  • Passenger Standalone now gzips JSON responses.
  • Fixed some cases in which Passenger Standalone did not properly clean up its temporary files.

Installing or upgrading to 4.0.36

Installation and upgrade instructions are available for: OS X, Debian, Ubuntu, Heroku, Ruby gem, Tarball.

Final

Fork us on Github!

Phusion Passenger’s core is open source. Please fork or watch us on Github. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



Episode #434 - January 24th, 2014

Posted 3 months back at Ruby5

Command line fuzzy finding, workers in go, consolidating your docsites, interviewing front-end developers, tracking upcoming ruby conferences, and a long-awaited update to PhantomJS all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Selecta

Selecta is an open source fuzzy text finder for the command line. It is easy to work with and integrate into your existing workflows!
Selecta

Goworker

Have slow Ruby workers? Goworker is compatible with Resque and might process your background tasks much faster than your existing Ruby workers.
Goworker

DevDocs

DevDocs combines multiple API documentations in a fast, organized, and searchable interface.
DevDocs

Front-end Job Interview Questions

A list of helpful front-end related questions you can use to interview potential candidates.
Front-end Job Interview Questions

rubyconferences.org

Wondering what conferences are coming up in the Ruby community? The recently launched rubyconferences.org site has all the details!
rubyconferences.org

PhantomJS Update

PhantomJS got an update that removes those pesky CoreText performance warnings in your log. Brew update today and all that ugliness will go away!
PhantomJS Update