Our iOS, Rails, and Backbone.js Books Are Now Available for Purchase

Posted 8 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Starting today, you can buy any of our books through these links.

Our book offerings currently include:

Each of these books comes in MOBI, EPUB, PDF, and HTML and includes access to the GitHub repository with the source Markdown / LaTeX file and an example application.

For those interested: for the past few months, these books were available exclusively through our subscription learning product, Upcase.

We determined that the books were not a good fit for Upcase, so we have now split them out.

Episode #498 - September 23, 2014

Posted 8 days back at Ruby5

We go Airborne for Ruby 2.1.3 while Eagerly Decorating the skies and Swiftly avoiding the Daemons on this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

Ruby 2.1.3 is released

Last week, MRI Ruby 2.1.3 was released. It’s primarily a bug fix release, but does contain new garbage collection tuning which seemingly drastically reduces memory consumption.
Ruby 2.1.3 is released

Automatic Eager Loading in Rails with Goldiloader

The team from Salsify recently released a gem called Goldiloader. This gem attempts to automatically eager load associated records and avoid n+1 queries.
Automatic Eager Loading in Rails with Goldiloader

Airborne - RSpec-driven API testing

A few days ago a new, RSpec-driven API testing framework called Airborne took off on GitHub. It works with Rack applications and provides useful response header and JSON contents matchers for RSpec.
Airborne - RSpec-driven API testing

Active Record Eager Loading with Query Objects and Decorators

On the thoughtbot blog this week, Joe Ferris wrote about using query objects and decorators to store the data returned from complex queries in ActiveRecord models and use it in your views. Query objects can help you wrap up complex SQL without polluting your models.
Active Record Eager Loading with Query Objects and Decorators

Don’t Daemonize your Daemons

Yesterday, Mike Perham put together a short yet very useful post entitled “Don’t Daemonize Your Daemons!” It was written as a retort to Jake Gordon’s Daemonizing Ruby Processes post from last week, highlighting the fact that most people, including Jake, make daemonizing processes overly difficult.
Don’t Daemonize your Daemons

Swift for Rubyists

If you’re interested in diving into Apple’s new Swift language, we highly recommend the video of JP Simard’s talk on Swift For Rubyists on the realm.io blog. iOS 8 is out, so Swift applications are now allowed in the App Store.
Swift for Rubyists

Validating JSON Schemas with an RSpec Matcher

Posted 9 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

At thoughtbot we’ve been experimenting with using JSON Schema, a widely-used specification for describing the structure of JSON objects, to improve workflows for documenting and validating JSON APIs.

Describing our JSON APIs using the JSON Schema standard allows us to automatically generate and update our HTTP clients using tools such as heroics for Ruby and Schematic for Go, saving loads of time for client developers who are depending on the API. It also allows us to improve test-driven development of our API.

If you’ve worked on a test-driven JSON API written in Ruby before, you’ve probably encountered a request spec that looks like this:

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      current_user = response_body["user"]
      expect(response.status).to eq 200
      expect(current_user["auth_token"]).to eq user.auth_token
      expect(current_user["email"]).to eq user.email
      expect(current_user["first_name"]).to eq user.first_name
      expect(current_user["last_name"]).to eq user.last_name
      expect(current_user["id"]).to eq user.id
      expect(current_user["phone_number"]).to eq user.phone_number
    end
  end

  def response_body
    JSON.parse(response.body)
  end
end

Following the four-phase test pattern, the test above executes a request to the current user endpoint and makes some assertions about the structure and content of the expected response. While this approach has the benefit of ensuring the response object includes the expected values for the specified properties, it is also verbose and cumbersome to maintain.

Wouldn’t it be nice if the test could look more like this?

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      expect(response.status).to eq 200
      expect(response).to match_response_schema("user")
    end
  end
end

Well, with a dash of RSpec and a pinch of JSON Schema, it can!

Leveraging the flexibility of RSpec and JSON Schema

An important feature of JSON Schema is instance validation. Given a JSON object, we want to be able to validate that its structure meets our requirements as defined in the schema. As providers of an HTTP JSON API, our most important JSON instances are in the response body of our HTTP requests.

RSpec provides a DSL for defining custom spec matchers. The json-schema gem’s raison d'être is to provide Ruby with an interface for validating JSON objects against a JSON schema.

Together these tools can be used to create a test-driven process in which changes to the structure of your JSON API drive the implementation of new features.

Creating the custom matcher

First we’ll add json-schema to our Gemfile:

Gemfile

group :test do
  gem "json-schema"
end

Next, we’ll define a custom RSpec matcher that validates the response object in our request spec against a specified JSON schema:

spec/support/api_schema_matcher.rb

RSpec::Matchers.define :match_response_schema do |schema|
  match do |response|
    schema_directory = "#{Dir.pwd}/spec/support/api/schemas"
    schema_path = "#{schema_directory}/#{schema}.json"
    JSON::Validator.validate!(schema_path, response.body, strict: true)
  end
end

We’re making a handful of decisions here: we’re designating spec/support/api/schemas as the directory for our JSON schemas, and we’re implementing a naming convention for our schema files.

JSON::Validator.validate! is provided by the json-schema gem. Passing strict: true to the validator ensures that validation will fail when an object contains properties not defined in the schema.
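To see what strict validation means in isolation, here is a simplified, dependency-free illustration (strictly_valid? is a toy stand-in for demonstration, not the json-schema API): an object fails when it contains properties not declared in the schema.

```ruby
# Toy illustration of strict validation: reject any object that has
# keys the schema does not declare.
def strictly_valid?(schema_properties, object)
  object.keys.all? { |key| schema_properties.include?(key) }
end

PROPS = %w[auth_token email first_name id last_name phone_number]

strictly_valid?(PROPS, { "email" => "a@example.com", "id" => 1 })
# => true
strictly_valid?(PROPS, { "email" => "a@example.com", "admin" => true })
# => false: "admin" is not declared in the schema
```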

Defining the user schema

Finally, we define the user schema using the JSON Schema specification:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties": {
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "email",
        "first_name",
        "id",
        "last_name",
        "phone_number"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

TDD, now with schema validation

Let’s say we need to add a new property, neighborhood_id, to the user response object. The back end for our JSON API is a Rails application using ActiveModel::Serializers.

We start by adding neighborhood_id to the list of required properties in the user schema:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties":
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "created_at",
        "email",
        "first_name",
        "id",
        "last_name",
        "neighborhood_id",
        "phone_number",
        "updated_at"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "neighborhood_id": { "type": "integer" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

Then we run our request spec to confirm that it fails as expected:

Failures:

  1) Fetching a user with valid auth token returns requested user
     Failure/Error: expect(response).to match_response_schema("user")
     JSON::Schema::ValidationError:
       The property '#/user' did not contain a required property of 'neighborhood_id' in schema
       file:///Users/laila/Source/thoughtbot/json-api/spec/support/api/schemas/user.json#

Finished in 0.34306 seconds (files took 3.09 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/requests/api/v1/users_spec.rb:6 # Fetching a user with valid auth token returns requested user

We make the test pass by adding a neighborhood_id attribute in our serializer:

class Api::V1::UserSerializer < ActiveModel::Serializer
  attributes(
    :auth_token,
    :created_at,
    :email,
    :first_name,
    :id,
    :last_name,
    :neighborhood_id,
    :phone_number,
    :updated_at
  )
end
.

Finished in 0.34071 seconds (files took 3.14 seconds to load)
1 example, 0 failures

Top 1 slowest examples (0.29838 seconds, 87.6% of total time):
  Fetching a user with valid auth token returns requested user
    0.29838 seconds ./spec/requests/api/v1/users_spec.rb:6

Hooray!

What’s next?

Tender is mobile friendly!

Posted 9 days back at entp hoth blog - Home

Starting today, if you access a Tender site on mobile, you will get a nice mobile view (at last!). If your site uses custom CSS, you will need to manually activate it: please read the KB article for details.

Let us know how you like it :)

Cheers!

Lighthouse integrates with Raygun.io!

Posted 9 days back at entp hoth blog - Home

Raygun.io (https://raygun.io/) is an error tracking service that helps you build better software, allowing your team to keep an eye on the health of your applications by notifying you of software bugs in real time. Raygun works with every major web and mobile programming language and platform.

raygun.io

They recently added support for Lighthouse and we wrote a KB article to get you started.

So check them out, and start tracking!

Meaningful Exceptions

Posted 10 days back at Luca Guidi - Home

Writing detailed API documentation helps to improve software design.

We already know that explaining a concept to someone leads us to a better grasp of it. This is true for our code too. This process of translation into a natural language forces us to think about a method from an outside perspective. We describe the intent, the input, the output, and how it reacts under unexpected conditions. Put it in black and white, and you will find something to refine.

It happened to me recently.

I was reviewing some changes in lotus-utils when I asked myself: “What if we accidentally pass nil as the argument here?” The answer was easy: NoMethodError, because nil doesn’t respond to a specific method that the implementation invokes.

A minute later, there was already a unit test to cover that case and a new documentation detail to explain it. Solved.

Well, not really. Let’s take a step back first.

First solution

When we design a public API, we are deciding how client code should use our method and what to expect from it. Client code doesn’t know anything about our implementation, and it shouldn’t be affected if we change it.

The technical reason why the code raises that exception is:

arg * 2

'/' * 2 # => "//"
nil * 2 # => NoMethodError

The first solution was to catch that error and to re-raise ArgumentError.
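A minimal sketch of that first solution might look like this (repeat is a hypothetical method name; the post doesn’t show the actual code):

```ruby
# Hypothetical sketch of the first solution: rescue the low-level
# NoMethodError and re-raise it as ArgumentError, so client code sees
# a meaningful error instead of an implementation detail.
def repeat(arg)
  arg * 2
rescue NoMethodError
  raise ArgumentError, "expected a string, got #{arg.inspect}"
end

repeat("/")  # => "//"
# repeat(nil) now raises ArgumentError instead of NoMethodError
```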

Improved solution

During the process of writing this article, I’ve recognized two problems with this proposal.

The first issue is about the implementation. What if we refactor the code in a way that NoMethodError is no longer raised?

2.times.map { arg }.join

2.times.map { '/' }.join # => "//"
2.times.map { nil }.join # => ""

Our new implementation has changed the behavior visible from the outside world. We have broken the software contract between our library and the client code.

The client code expected ArgumentError in the case of nil, but after that modification, this is no longer true.

The other concern is about the semantics of the exception. According to RubyDoc:

“ArgumentError: Raised when the arguments are wrong and there isn’t a more specific Exception class.”

We have a more specific situation here: we expect a string, but we got nil. TypeError probably fits our case better.
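Checking the type up front and raising TypeError decouples the exception from the implementation: however the body is refactored, the contract holds. A sketch, again with a hypothetical repeat method:

```ruby
def repeat(arg)
  # Fail fast with a semantically precise exception: we expect a
  # string-like argument, so anything else is a TypeError.
  raise TypeError, "can't convert #{arg.class} into String" unless arg.respond_to?(:to_str)

  2.times.map { arg }.join
end

repeat("/")  # => "//"
# repeat(nil) raises TypeError, regardless of the implementation inside
```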

Conclusion

Our test suite can be useful to check the correctness of a procedure under a deterministic scenario, but sometimes we write assertions from a narrow point of view.

Explaining the intent with API docs mitigates this problem and helps other people understand our initial idea.

Check whether the semantics of the raised exceptions are coherent with that conceptualization.

To stay updated with the latest releases, and to receive code examples, implementation details, and announcements, please consider subscribing to the Lotus mailing list.

<link href="//cdn-images.mailchimp.com/embedcode/slim-081711.css" rel="stylesheet" type="text/css"/>


A plan by any other name ...

Posted 12 days back at entp hoth blog - Home

… still gets you better, simpler, customer support!

We decided to change the names of our plans. If you are currently on the following plans, don’t fret! Nothing has changed other than the name. All your existing features are still there. If you were on a legacy plan, nothing changes for you at all.

  • Core => Starter
  • Extra => Standard
  • Ultimo => Pro

Let us know if you have any questions at help@tenderapp.com

Episode #497 - September 19th, 2014

Posted 12 days back at Ruby5

Start using Fourchette, roll-out features by the instance, read logs with a little help from your friends, run your own bitcoin node, and say hello to byebug!

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Fourchette App

Deployable version of the Fourchette core
Fourchette App

helioth

Feature-flipping and rollout for your apps with ActiveRecord
helioth

hutils

A collection of command line utilities for working with logfmt
hutils

Toshi

An open source Bitcoin node built to power large scale web applications
Toshi

byebug

Byebug is a simple to use, feature rich debugger for Ruby 2
byebug

ActiveRecord Eager Loading with Query Objects and Decorators

Posted 13 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We recently came across an interesting problem, which was discussed in a previous post, Postgres Window Functions:

We want to get each post’s three most recent comments.

As discussed, you can’t use simple eager loading:

Post.order(created_at: :desc).limit(25).includes(:comments)

This will load every comment for each post. When there are many comments per post, this quickly becomes unacceptable.

Starting Slow

It’s frequently easiest to start with a slow implementation and make it faster when necessary. In many cases, the slower (and possibly simpler) implementation will work just fine, and it’s best to deploy it as-is. Let’s look at a naive implementation of our list of posts and comments:

class PostsController < ApplicationController
  def index
    @posts = Post.order(created_at: :desc).limit(5)
  end
end
class Post < ActiveRecord::Base
  has_many :comments, dependent: :destroy

  def latest_comments
    comments.order(created_at: :desc).limit(3)
  end
end

This will frequently do fine, but causes N+1 queries, which will look like this in your log:

Started GET "/" for 127.0.0.1 at 2014-09-18 11:36:18 -0400
Processing by PostsController#index as HTML
  Post Load (0.4ms)  SELECT "posts".* FROM "posts"
    ORDER BY "posts"."created_at" DESC LIMIT 25
  Comment Load (0.2ms)  SELECT "comments".* FROM "comments"
    WHERE "comments"."post_id" = $1
    ORDER BY "comments"."created_at" DESC
    LIMIT 3  [["post_id", 27]]
  Comment Load (0.3ms)  SELECT "comments".* FROM "comments"
    WHERE "comments"."post_id" = $1
    ORDER BY "comments"."created_at" DESC
    LIMIT 3  [["post_id", 28]]
  Comment Load (0.2ms)  SELECT "comments".* FROM "comments"
    WHERE "comments"."post_id" = $1
    ORDER BY "comments"."created_at" DESC
    LIMIT 3  [["post_id", 29]]
  ...

If you’re using New Relic like we do, you’ll know a slow transaction has an N+1 problem when it shows many queries to the same model or table in the transaction log. Once performance starts to suffer, you’ll want to consolidate those queries.

In the previous post, we described how you could use a Postgres Window Function to find the comments you want in one query:

SELECT * FROM (
  SELECT comments.*, dense_rank() OVER (
    PARTITION BY comments.post_id
    ORDER BY comments.created_at DESC
  ) AS comment_rank
  FROM comments
) AS ranked_comments
WHERE comment_rank < 4;
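To see what the window function computes, here is an in-memory Ruby analogue (toy hashes, not the actual models): partition comments by post_id, order each partition by recency, and keep the first three.

```ruby
comments = [
  { post_id: 1, created_at: 4 }, { post_id: 1, created_at: 3 },
  { post_id: 1, created_at: 2 }, { post_id: 1, created_at: 1 },
  { post_id: 2, created_at: 9 },
]

# PARTITION BY post_id, ORDER BY created_at DESC, keep rank < 4
latest = comments.group_by { |c| c[:post_id] }.each_with_object({}) do |(post_id, cs), cache|
  cache[post_id] = cs.sort_by { |c| -c[:created_at] }.first(3)
end

latest[1].length  # => 3 (the three most recent comments for post 1)
latest[2].length  # => 1
```

The database does this in one pass over the comments table, which is why the query replaces N per-post queries.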

However, how can we plug this query into ActiveRecord such that we can use the data in our views?

It’s actually fairly easy. You need two new objects: a Query Object and a Decorator. Let’s refactor to introduce these objects, and then we’ll plug in our query.

The Query Object

We can perform the Extract Class refactoring and create a Feed model:

class Feed
  def initialize(posts:)
    @posts = posts.order(created_at: :desc).limit(5)
  end

  def posts
    @posts
  end
end
class PostsController < ApplicationController
  def index
    @feed = Feed.new(posts: Post.all)
  end
end

The Decorator

We can use SimpleDelegator to create a quick decorator class for Post:

class PostWithLatestComments < SimpleDelegator
  def latest_comments
    comments.order(created_at: :desc).limit(3)
  end
end
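SimpleDelegator forwards any method it doesn’t define to the wrapped object, which is what makes this decorator so cheap. A standalone illustration (Shouty is a made-up class for demonstration):

```ruby
require "delegate"

class Shouty < SimpleDelegator
  # New behavior layered on top of the wrapped object; every other
  # method call falls through to whatever was passed to .new.
  def shout
    upcase + "!"
  end
end

s = Shouty.new("hello")
s.shout   # => "HELLO!"
s.length  # => 5, forwarded to the underlying String
```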

We can apply this decorator to each Post in the Feed:

class Feed
  def posts
    @posts.map { |post| PostWithLatestComments.new(post) }
  end
end

The SQL Query

At this point, we’ve done nothing except introduce two new classes to our system. However, we’ve given ourselves an opportunity.

We frequently use query objects to wrap up complex SQL without polluting our models. In addition to encapsulating SQL, though, they can also hold context, empowering objects to remember the query from whence they came. We’ll use this property of query objects to plug our SQL into our application.

First, we’ll use the above SQL query to find the comments relevant to our posts:

# feed.rb

def comments
  Comment.
    select("*").
    from(Arel.sql("(#{ranked_comments_query}) AS ranked_comments")).
    where("comment_rank <= 3")
end

def ranked_comments_query
  Comment.where(post_id: @posts.map(&:id)).select(<<-SQL).to_sql
    comments.*,
    dense_rank() OVER (
      PARTITION BY comments.post_id
      ORDER BY comments.created_at DESC
    ) AS comment_rank
  SQL
end

Then, we’ll group those comments by post_id into a Hash:

class Feed
  def initialize(posts:)
    @posts = posts.order(created_at: :desc).limit(5)
    @comment_cache = build_comment_cache
  end

  # ...

  private

  def build_comment_cache
    comments.inject({}) do |cache, comment|
      cache[comment.post_id] ||= []
      cache[comment.post_id] << comment
      cache
    end
  end

  # ...
end

Now, we pass that Hash to our decorator:

# feed.rb

def posts
  @posts.map { |post| PostWithLatestComments.new(post, @comment_cache) }
end
class PostWithLatestComments < SimpleDelegator
  def initialize(post, comments_by_post_id)
    super(post)
    @comments_by_post_id = comments_by_post_id
  end

  def latest_comments
    @comments_by_post_id[id] || []
  end
end

The Result

Our Feed class is now smart enough to perform two SQL queries:

  • One query to posts to find the posts we care about.
  • One query to comments (using Postgres Window Functions) to find the latest three comments for each post.

It then decorates each post, providing the preloaded Hash of comments to the decorator. This allows the decorated posts to find their latest three comments without performing an additional query.

The finished Feed class looks like this:

class Feed
  def initialize(posts:)
    @posts = posts.order(created_at: :desc).limit(5)
    @comment_cache = build_comment_cache
  end

  def posts
    @posts.map { |post| PostWithLatestComments.new(post, @comment_cache) }
  end

  private

  def build_comment_cache
    comments.inject({}) do |cache, comment|
      cache[comment.post_id] ||= []
      cache[comment.post_id] << comment
      cache
    end
  end

  def comments
    Comment.
      select("*").
      from(Arel.sql("(#{ranked_comments_query}) AS ranked_comments")).
      where("comment_rank <= 3")
  end

  def ranked_comments_query
    Comment.where(post_id: @posts.map(&:id)).select(<<-SQL).to_sql
      comments.*,
      dense_rank() OVER (
        PARTITION BY comments.post_id
        ORDER BY comments.created_at DESC
      ) AS comment_rank
    SQL
  end
end

As you can see, most of the logic is concerned with generating that SQL query, and the machinery for plugging the results into our ActiveRecord models is very lightweight.

At this point, requests in our log look something like this:

Started GET "/" for 127.0.0.1 at 2014-09-18 13:53:39 -0400
Processing by PostsController#index as HTML
  Post Load (0.4ms)  SELECT "posts".* FROM "posts"
    ORDER BY "posts"."created_at" DESC
    LIMIT 5
  Comment Load (0.5ms)  SELECT * FROM (
    SELECT comments.*,
      dense_rank() OVER (
        PARTITION BY comments.post_id
        ORDER BY comments.created_at DESC
      ) AS comment_rank
    FROM "comments"
    WHERE "comments"."post_id" IN (154, 153)
    ) AS ranked_comments WHERE (comment_rank <= 3)

You can use this approach for many situations where it’s difficult to use simple eager loading.

What’s Next?

Learn how the query in this post works by reading about Postgres Window Functions.

I'll do Angelina Jolie

Posted 13 days back at Saaien Tist

From: http://cartoonfestival.knokke-heist.be/pagina/iedereen-geniaal

"I'll do Angelina Jolie". Never thought I'd say that phrase while talking to well-known Belgian cartoonists, and actually be taken serious.

Backtrack about one year. We're at the table with the crème-de-la-crème of Belgium's cartoon world (Zaza, Erwin Vanmol, LECTRR, Eva Mouton, ...), in a hotel in Knokke near the coast.  "We" is a gathering of researchers covering genetics, bioinformatics, ethics, and law. The setup: the Knokke-Heist International Cartoon Festival. This very successful festival centers each year around a particular topic. 2013 was "Love is..."; 2014 is about genetics. Hence our presence at the site. On the program for day 1: explaining genetics and everything that gets dragged into it (privacy, etc) to the cartoonists. Day 2: discussion on which messages we should bring at the festival, and a quick pictionary to check if we actually explained the concepts well. (As I was doodling myself at the moment, I briefly got to be a "cartoonist" as well and actually draw one of those :-)
"So what's the thing with Angelina Jolie?", you ask? We figured that she be the topic of part of the cartoonfestival installation (talking about breast cancer, obviously), and I volunteered to help out setting up that section...

<object width="320" height="266" class="BLOGGER-youtube-video" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" data-thumbnail-src="https://ytimg.googleusercontent.com/vi/28k8hQkjKAs/0.jpg"><param name="movie" value="https://youtube.googleapis.com/v/28k8hQkjKAs&amp;source=uds"/><param name="bgcolor" value="#FFFFFF"/><param name="allowFullScreen" value="true"/><embed width="320" height="266" src="https://youtube.googleapis.com/v/28k8hQkjKAs&amp;source=uds" type="application/x-shockwave-flash" allowfullscreen="true"></embed></object>


Fast forward to this late summer. The cartoonfestival is in full swing, and I'm trying to explain the genetic dogma and codon table to a bunch of 8-13 year-olds at the Children's University. I thought it'd be nice to let them muck about with strawberries to get the DNA out, and write their names in secret code (well: just treating each letter as an amino acid...). I was really nervous in the days and weeks before the actual event; kids can be a much harsher audience than university students. Or so I thought; it was quite the opposite: feedback from the children was marvellous and I really enjoyed their enthusiasm. To be repeated... :)

I know this post is way overdue (especially given the fact that the cartoonfestival actually closed last weekend). But with it I hope to resurrect this blog from the comatose state it has been in since I started my current position 4 years ago...




Real World JSON Parsing with Swift

Posted 14 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Last time, we looked at using concepts from Functional Programming and Generics to parse JSON received from a server into a User model. The final result of the JSON parsing looked like this:

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String

  static func create(id: Int)(name: String)(email: String) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONObject(json) >>> { d in
      User.create <^>
        d["id"]    >>> _JSONInt    <*>
        d["name"]  >>> _JSONString <*>
        d["email"] >>> _JSONString
    }
  }
}

This is great, but in the real world the objects we get back from an API will not always be perfect. Sometimes an object will have only a few important keys and the rest can be retrieved later.

For example, when we fetch the current user we want all of the user’s info, but when we fetch a user by their id we don’t want the email for security reasons. To hide the email, the server will only respond with the id and name for all users that are not the current user. To reuse the same User object, it’s more realistic to describe a User like this:

struct User {
  let id: Int
  let name: String
  let email: String?
}

You can see that the user’s email property is now an optional String. If you remember, the <^> (fmap) and <*> (apply) operators ensure that we only get our User struct if the JSON contains all the keys; otherwise, we get .None. If the email returned from the server is .None or nil, the decode function will fail. We can fix this by adding a pure function.

pure is a function that takes a value without context and puts that value into a minimal context. Swift optionals are values in a “there-or-not” context; therefore, pure means .Some(value). The implementation is simply:

func pure<A>(a: A) -> A? {
  return .Some(a)
}

Now we can use it to parse a User who may or may not have an email:

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONObject(json) >>> { d in
      User.create <^>
             d["id"]          >>> _JSONInt     <*>
             d["name"]        >>> _JSONString  <*>
        pure(d["email"]       >>> _JSONString)
    }
  }
}

Without pure, d["email"] >>> _JSONString would return .None if there were no "email" key within d. <*> looks for an optional and when it sees .None, it would stop creating the User and return .None for the whole function. However, when we call pure on the result, if the email was not present we’ll get .Some(.None) and <*> will accept and unwrap the optional then pass .None into the initializer.

More Type Inference

The above code works great, but there is still a lot of syntax we need to write in order to create our User. We can use Swift’s type inference to refactor this code.

We have been using a few functions to parse a JSON AnyObject type into the type that the create function needs. With type inference, we can use the definition of the create function to tell the JSON parsing function what type we’re looking for. Currently, _JSONInt and _JSONString look like this:

func _JSONInt(json: JSON) -> Int? {
  return json as? Int
}

func _JSONString(json: JSON) -> String? {
  return json as? String
}

It’s easy to see that these functions are very similar; in fact, they differ only by the type. Sounds like a use for Generics.

func _JSONParse<A>(json: JSON) -> A? {
  return json as? A
}

Now we can use _JSONParse in our decode function instead of needing a specific parsing function for every JSON type.

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { (d: JSONObject) in
      User.create <^>
             d["id"]          >>> _JSONParse  <*>
             d["name"]        >>> _JSONParse  <*>
        pure(d["email"]       >>> _JSONParse)
    }
  }
}

This works because User.create is a function that takes an Int and it is being applied to d["id"] >>> _JSONParse. The compiler will infer that the generic type within _JSONParse has to be an Int.

You’ll notice that in order to use _JSONParse in place of _JSONObject we had to cast d to a JSONObject so that _JSONParse can infer the type.

Our decoding function is getting better, but there is still a lot of duplication. It would be nice to eliminate all the calls to _JSONParse. These lines are all similar except for the key used to extract the JSON value. We can abstract this code to reduce duplication, and while we’re at it, we can write a similar function for any value that also needs a call to pure.

func extract<A>(json: JSONObject, key: String) -> A? {
  return json[key] >>> _JSONParse
}

func extractPure<A>(json: JSONObject, key: String) -> A?? {
  return pure(json[key] >>> _JSONParse)
}

And now we have:

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { d in
      User.create <^>
        extract(d, "id")        <*>
        extract(d, "name")      <*>
        extractPure(d, "email")
    }
  }
}

Now that we have the extract function that takes a JSONObject as its first parameter, we can remove the type cast on d because it is inferred to be a JSONObject by passing it into extract.

extract and extractPure are a lot to type every time and it could be more readable if it were infix. Let’s create a couple custom operators to do the job for us. We’ll use <| and <|?, which are inspired from Haskell’s popular JSON parsing library, Aeson. Aeson uses .: and .:?, but those are illegal operators in Swift, so we’ll use the <| version instead.

NOTE: ? is illegal to use in a custom operator in Swift 1.0. This is solved in Swift 1.1 which is currently in Beta 2. You can use <|* as an alternative to <|? for now.

infix operator <|  { associativity left precedence 150 }
infix operator <|? { associativity left precedence 150 }

func <|<A>(json: JSONObject, key: String) -> A? {
  return json[key] >>> _JSONParse
}

func <|?<A>(json: JSONObject, key: String) -> A?? {
  return pure(json[key] >>> _JSONParse)
}

Now we can use these operators in our User decoding. We'll also move the operators to the front of each line for better style.

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { d in
      User.create
        <^> d <|  "id"
        <*> d <|  "name"
        <*> d <|? "email"
    }
  }
}

Wow! Using generics and relying on Swift's type inference can really reduce the amount of code we have to write. What's really interesting is how close we can get to a purely functional programming language like Haskell. Using Aeson in Haskell, decoding a User would look like this:

instance FromJSON User where
  parseJSON (Object o) = User
    <$> o .:  "id"
    <*> o .:  "name"
    <*> o .:? "email"
  parseJSON _ = mzero

Conclusion

We’ve come a long way since the first post. Let’s bring back the old way and look at how it compares to what we can do now, excluding the NSData to AnyObject? conversion.

Original Method

extension User {
  static func decode(json: AnyObject) -> User? {
    if let jsonObject = json as? [String:AnyObject] {
      if let id = jsonObject["id"] as AnyObject? as? Int {
        if let name = jsonObject["name"] as AnyObject? as? String {
          if let email = jsonObject["email"] as AnyObject? {
            return User(id: id, name: name, email: email as? String)
          }
        }
      }
    }
    return .None
  }
}

Improved Method

extension User: JSONDecodable {
  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { d in
      User.create
        <^> d <|  "id"
        <*> d <|  "name"
        <*> d <|? "email"
    }
  }
}

The related code can be found on GitHub.

Why Group Texts Must Die

Posted 15 days back at Jake Scruggs

Recently I re-tweeted a thought from Pete Holmes, because I'm in the middle of a communication crisis. I do "knowledge work," so it's considered inappropriate to have a device constantly making little beeps and boops while the person next to me is working on some insane LibXML (you don't wanna know) bug. Programming requires extreme concentration, and distractions are to be avoided.

The rub of it is that I’m “on call” if our software product bursts into flames.  Therefore, I must keep my phone in a mode capable of disturbing me so I don’t eat lunch, play ping-pong, or just code right through the disastrous failure of our software. 

This used to be fine back when text messages were either:
  • Time sensitive
  • Important
  • From one person
In the first two cases the beeps my phone made were appropriate, and in the last case I could simply text back that I was busy if the person didn't take the hint from me ignoring them. However, as we all sadly know, the plague of group texts has completely disrupted how we handle notifications. Now when my mother-in-law, a nice person who likes to share, sends me and 10 other contacts a pic of a lake, I spend the rest of the afternoon getting "Nice!," "Did you take a swim?," "No, it's too cold," "where are you guys?" etc. texts. Every one of those I should probably check to make sure it isn't an important work thing, just in case.

Why not a mass email? Why do I get group texts trying to plan something days or weeks away? Shouldn't that be an email thread? The answer is as obvious as it is sad-making: people have given up on email. For every Inbox Zero zealot, there are a hundred people who've essentially let their inboxes run wild with clutter. Missed important emails because of that clutter? Better move to text messages For Everything. Of course, this just leads to cluttered text messages. Plenty of times I've been furiously searching through my email for someone's reply, only to remember, "Oh, they're one of those text-y people. I'd better scroll through all the various group text threads they've been involved in… Sigh."

A small aside: some people don't have unlimited text messages on their phones. The horror, right? My boss consistently crashes through her text message limit because of "friendly" group messages.

The problem of communication clutter isn’t going away.  Ever. Abandoning email for texts just moves the problem.  I implore you to take charge of your inbox, people.  Getting emails you never read? Unsubscribe or Block! Can’t bring yourself to do that? Well, how about creating a rule that moves your daily…

Hold on, got a text…  yup, a silly friendly chatty one from a friend who never emails anymore.  I really like this guy but such frivolity is a tweet or an email or a Facebook or a Whatever but not a text that demands my attention.

Where was I… Oh yeah: create rules in your email client of choice that automatically move "less often read" messages to a folder for later (read: never) reading. If you're still getting a flood, then you must keep going. Unsubscribing, blocking, and rules are your new watchwords. If you can't handle this fire hose of information now, what do you think is going to happen when everyone can send "animated emojis" on a whim from their Apple Watch (or some other wearable computer thing)?

Your “inbox” is not just your email inbox. It is Every Damn Message you receive in Any form. They all vie for your attention, and giving up on any one communication form only annoys your friends while providing, at best, temporary relief.




Apology:  I’m sorry if you’ve sent me a group text in the past and see this as an attack.  I really do want to hear about your cats, see your kid’s pics, and ponder your stray thoughts… In an appropriate medium.  Does that sound bitchy? 

Use Git Hooks to Automate Necessary but Annoying Tasks

Posted 15 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Certain tasks like updating dependencies or migrating a database must be done after pulling code or checking out a branch. Other tasks such as re-indexing our ctags improve our development experience. Both kinds of tasks are easy to forget to do and are therefore error-prone. To address the problem, we’ve recently added a standard, extensible set of git hooks to our dotfiles in order to automate necessary, but annoying tasks.

Git Hooks

Git has a commonly under-utilized feature: hooks. You can think of a hook as an event that fires before or after various stages of the revision control process. Some hooks of note are:

  • prepare-commit-msg - Fires before the commit message prompt.
  • pre-commit - Fires before a git commit.
  • post-commit - Fires after a git commit.
  • post-checkout - Fires after changing branches.
  • post-merge - Fires after merging branches.
  • pre-push - Fires before code is pushed to a remote.
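Mechanically, a hook is nothing more than an executable file named after its event, living in a repo's .git/hooks directory; the ~/.git_template setup described below simply seeds that directory when git init runs. As a minimal sketch (the hook-demo repo name and log file are illustrative, not part of the dotfiles):

```shell
# Create a throwaway repo to demonstrate where hooks live.
git init -q hook-demo
cd hook-demo

# A hook is an ordinary executable script named after its event.
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
echo "post-checkout fired" >> .git/hook.log
EOF
chmod +x .git/hooks/post-checkout
```

From then on, every checkout in that repo appends a line to .git/hook.log. Note that git skips hook files that are not executable, which is a common gotcha.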

Extending our Hooks

Our dotfiles' convention for extension is to place our custom hooks in {pre,post}-$EVENT files within our ~/.git_template.local/hooks directory. Now, anything we add to those hook files will be automatically executed, running tasks that we normally would forget.

What tasks do you commonly forget?

I forget to re-index my ctags!

Lucky for you, we’ve set up git to re-index your ctags after each git command.
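The exact command in the dotfiles may differ, but a hypothetical hook along these lines would do the job (the --exclude flag and tags-file location here are illustrative assumptions):

```shell
# ~/.git_template.local/hooks/post-checkout (hypothetical sketch)
# Rebuild the tags file in the background so git stays snappy;
# skip silently on machines without ctags installed.
command -v ctags >/dev/null 2>&1 &&
  ctags -R --exclude=.git -f .git/tags . >/dev/null 2>&1 &
```

Writing the tags file inside .git keeps it out of the working tree, so there is nothing extra to add to .gitignore.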

I always forget to run bundle install after switching branches!

Automatically install new gems:

# ~/.git_template.local/hooks/post-checkout

[ -f Gemfile ] && bundle install > /dev/null &

I never remember to run pending migrations!

Automatically run your migrations:

# ~/.git_template.local/hooks/post-checkout

[ -f db/schema.rb ] && bin/rake db:migrate > /dev/null &

I document my API with fdoc, but I forget to generate the pages!

Automatically generate the HTML docs:

# ~/.git_template.local/hooks/post-checkout

bin/fdoc convert ./spec/fixtures --output=./html > /dev/null &

I really like Go’s commitment to a standard code format, but I constantly forget to format my files!

Run go fmt before you commit:

# ~/.git_template.local/hooks/pre-commit

gofiles=$(git diff --cached --name-only --diff-filter=ACM | grep '.go$')
[ -z "$gofiles" ] && exit 0

function checkfmt() {
  unformatted=$(gofmt -l $gofiles)
  [ -z "$unformatted" ] && return 0

  echo >&2 "Go files must be formatted with gofmt. Please run:"
  for fn in $unformatted; do
    echo >&2 "  gofmt -w $PWD/$fn"
  done

  return 1
}

checkfmt || fail=yes

[ -z "$fail" ] || exit 1

exit 0

I want my extensive network of friends to know when I’m merging code!

Send out a Yo every time you merge a branch:

# ~/.git_template.local/hooks/post-merge

curl --data "api_token=$YO_API_TOKEN" https://api.justyo.co/yoall/ > /dev/null &

When we aggressively simplify and automate the tedious parts of the development process, we can focus on what’s important: getting things done.

What’s next?

If you found this useful, you might also enjoy:

Hound Reviews CoffeeScript For Style Violations

Posted 15 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Four months ago, we announced Hound, a hosted service that reviews Ruby code for style violations and comments about them on your GitHub pull requests. Since then, about 3,000 users have signed up for Hound.

Today, we’re pleased to announce that Hound can review CoffeeScript code in addition to Ruby code.

Screenshot of Hound linting CoffeeScript file

Default Behavior

Hound uses CoffeeLint to check CoffeeScript style. This follows our precedent for Ruby, which uses the excellent open source RuboCop library.

By default, Ruby is enabled and CoffeeScript is disabled for your repos. Without explicit configuration in your repo, Hound uses these files to configure each language:

Configuration

You can enable or disable each language, or use your own CoffeeLint or RuboCop config file, by adding a .hound.yml file to your repo.

Example .hound.yml:

ruby:
  enabled: true
  config_file: .rubocop.yml

coffee_script:
  enabled: true
  config_file: config/coffeelint.json

You can place the files anywhere in your repo that you prefer.

To use Hound’s defaults but still control which languages are enabled:

ruby:
  enabled: false

coffee_script:
  enabled: true

Special thanks to Nathan Youngman for helping us design the configuration API.

Let Hound Guard Your Repo

Hound is still very young. We’re eager for your feedback on the service. It is free for open source repos, $9 per month per private personal repo, and $24 per month per private organizational repo.

Try Hound today


Episode #496 - September 16th, 2014

Posted 15 days back at Ruby5

This episode covers an open source admin framework, the Rails protect from forgery method, fast testing, and a new reactive framework.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

Volt

Volt is a reactive framework that runs your Ruby code on both the server and the client via Opal. Instead of syncing data over HTTP, Volt sets up a persistent connection between the client and server. When a client updates some data, the update is automatically propagated to the database and to any other listening clients.
Volt

Fast Tests: Comparing Zeus With Spring on Rails 4.1 and RSpec 3

Justin Gordon recently published an article posing exactly that question, with advice on choosing between Zeus (paired with Parallel Tests) and Spring. Zeus preloads your Rails app so that development tasks like console, server, generate, and specs take less than a second, and incorporating Parallel Tests buys even more speed by running specs in parallel. Spring also preloads your Rails app and keeps it running in the background, so you don't have to boot the test environment each time. Both are great options for speeding up your tests.
Fast Tests: Comparing Zeus With Spring on Rails 4.1 and RSpec 3

Understanding Rails' protect_from_forgery

In this blog post, John Poulin gets into the nitty-gritty details of Rails' protect_from_forgery method. He illustrates each step taken when handling forged requests in Rails 3 and Rails 4, and pinpoints some potential pitfalls as well as how they can be mitigated.
Understanding Rails' protect_from_forgery

Open Sourcing Admin Framework for Ruby on Rails

Upmin has released the open source Upmin Admin framework for creating powerful admin backends with little effort. It uses your models' existing methods to build the admin interface without the need for custom forms, controllers, or actions, and it automatically creates paginated search pages for your existing models, as well as pages to view and update existing records.
Open Sourcing Admin Framework for Ruby on Rails