RSVP Now: The Future of App Deployment

Posted about 17 hours back at Phusion Corporate Blog

Phusion USA Roadtrip 2014

Phusion will be traveling across the US this October to give tech talks on our vision of the future of app deployment. We’ve been working on some pretty exciting stuff, and our friends over at AirBnB and Constant Contact have generously offered to host us at their San Francisco and Waltham offices respectively to talk about this:

Writing an app is one thing; deploying it to a production-ready environment and keeping it online in the face of countless potential scenarios of adversity is an entirely different beast. Not only does it still involve a fair bit of unix-fu, it also involves keeping an eye on the latest software and configuring it properly to combat things like security breaches. This is generally considered tedious and cumbersome work, and it often falls outside developers’ domain of knowledge. Wouldn’t it be great if we didn’t need to jump through as many hoops as we do today, and deployment became more developer-friendly?

This talk will go over the most important steps currently involved in setting up a production environment for a web app, and will propose alternative approaches in the form of new software solutions developed by Phusion. It will focus on app deployment, monitoring and server provisioning, but will also touch on topics such as UI design and UX, as these play an important part in making things more accessible. More specifically, we’ll discuss Docker, Polymer, Node, Rails, Phusion Passenger, Union Station and much more.

RSVP to attend!

Date                            Location                          Details/RSVP
Oct 23rd, 2014, 6:00pm-8:00pm   Constant Contact Waltham Office   Rails Boston Meetup.com
Oct 29th, 2014, 6:00pm-8:00pm   AirBnB SF HQ                      AirBnB Meetups

Get notified about our tech talk recording

Unable to attend? No worries! We’ll also be giving this talk at our friends at Twitter, who have generously offered to record it. We expect to post the recording online sometime in the future. Be sure to follow us on @phusion_nl and/or sign up for our newsletter to stay in the loop on this.

Sign up to be notified about our tech talk recordings, and other Phusion related news. You can unsubscribe any time.



On behalf of the Phusion team, we’re looking forward to meeting up with you next month!


OpenDataCommunities relaunch with design refresh and improved data browsing

Posted about 18 hours back at RicRoberts :

This week we released a refreshed version of OpenDataCommunities, DCLG’s linked open data site. It’s now stylistically more in line with the department’s gov.uk presence; we’ve also added a blog and upgraded the site to the latest version of our PublishMyData platform, making it easier to access and use the data you want.

Some of the main changes are around navigation. The homepage now displays the latest datasets from the data catalogue and the most recent blog posts. We’ve also introduced new top-level navigation across the whole site, so you can quickly select the section most relevant to your visit.

OpenDataCommunities Tabs

The News tab takes you to the site’s new blog where you can browse by article, tags or author. DCLG are keen to explore new ways of communicating with users, so the blog includes a commenting feature and social media links.

The Data tab naturally provides access to the data catalogue - now running on PublishMyData version 2 - which includes a bunch of new features for data users. One of the features we’re most excited about is expert mode, which our CTO, Ric, recently wrote about on the OpenDataCommunities blog.

There are improvements in the Apps section too: we’ve rewritten the deprivation and wellbeing mappers to use OpenStreetMap map tiles instead of Google Maps, and we’ve given some of the other tools, such as the Spreadsheet Builder (previously known as the stats selector), a bit of an update.

Deprivation Mapper

This relaunch helps DCLG to publish and present their data in a more accessible way than ever before (read what they say about it here). As it’s backed by our new version of PublishMyData, it benefits from improved data browsing - particularly for expert data users who want to use the APIs. And updated mappers set the scene for the next stage of the project which includes more map-based visualisations. Watch this space!

Data for the Arts

Posted about 18 hours back at RicRoberts :

Along with Future Everything, Dundee University and a range of arts organisations, we’ve recently agreed to collaborate on a new research project. It aims to help arts organisations make the most of the data stored within their existing networks to improve their services, efficiency, and decision making. It’s funded by Nesta, AHRC and Arts Council England’s Digital R&D fund for the Arts.

So we’ll be bringing our graph database, data science and user-experience skills to the party whilst Dundee University will be offering their expertise on Social Network Analysis. Future Everything are coordinating the project and contributing their long-standing and deep knowledge of what happens when technology meets the arts. Together we’ll work out how to build tools to exploit the kind of data held by our partner arts organisations.

We’re excited to see what we can build together, and also to have the chance to learn from the experiences of other consortia being funded by the programme. Read a bit more on this project at FutureEverything’s site.

Arts API: The First Workshop

Posted about 18 hours back at RicRoberts :

Last week saw the first ArtsAPI project workshop, hosted in Manchester by FutureEverything and attended by ourselves, Dundee University, the ArtsAPI team from FutureEverything, and the project’s arts organisation partners.

It was a good first meeting, with lots of ideas. The point of the project is to help arts organisations make the most of the data stored within their existing networks so they can improve their services, efficiency, and decision making. It was decided that people want to characterise and analyse the social network so they can measure its impact. So, we’ll assess the impact of relationships between organisations by evidencing how influential their network of relationships is.

Arts API Logo

Our role is to design a data model of the social network of arts organisations. On the day, there were exercises to help everyone think about which datasets are available and which are most useful. There was also discussion about which questions the arts organisations want to ask our ArtsAPI - which was great for us because it made people think about which features of the network are most important and how they’re interconnected.

The meeting also helped focus on the big research and technological challenges we’ll face in this project:

  • how do we measure the impact of a social network’s quantifiable characteristics?
  • how do we extract sufficiently detailed and reliable information from the available data sources to populate our data model of that social network?
  • how do we help the arts organisations understand and analyse their network, so they can provide evidence of their impact and find ways to increase it?

We have our work cut out over the next year or so, but the workshop last week was a great start.

Card Sorting

Posted 1 day back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

When we design a product, we choose words and group content to help users use the product. Those decisions form the basis of the product’s “information architecture.” During this process, it’s tempting to project our ideas of who our users are, how they behave, and what they want.

To counter this tendency toward our own biases, we run design exercises such as card sorting.

card-sorting

What is “card sorting”?

Card sorting is a design exercise that guides us toward creating the most coherent information architecture of a product. During a card sorting session, participants are asked to associate two sets of flashcards by grouping them. While the first set of flashcards contains categories, the second set contains sample content.

For example, if these are our categories:

  • Subscribe
  • Unsubscribe
  • Connect

Then our sample content might be:

  • Follow someone
  • Post a new message
  • Send private message
  • Like a post
  • Share a post
  • Delete content
  • Unfriend

We observe how they map one set of information to another to create category-content mappings. When we conduct card sorting, we ask this question:

How do people organize content into categories?

The way participants group and sort the cards will reveal users' mental models and guide the information architecture of the product.

Preparation

Card sorting is a low-cost and time-efficient way to validate assumptions and identify new learnings about our users and product.

Here’s what we need:

  • 30 minutes
  • 3-5 participants
  • Flash cards or Post-it notes

After we’ve scheduled time to meet with our participants, we:

  1. Create category cards (set #1).
  2. Create sample content cards (set #2). Set #2 should have about twice as many cards as Set #1.

card-sorting

Running the exercise

Each participant goes through this exercise one at a time.

  1. Ask the participant to match cards in set #2 (sample content) to corresponding cards in set #1 (categories) based on what makes sense to them. There is no right or wrong answer.
  2. Observe and take notes of the participants' category-content mappings.
  3. At the end of each participant’s session, take a picture of which cards in set #1 were matched to cards in set #2.
  4. Repeat.

card-sorting-gif

Gathering results

In order to learn about our participants' category-content mappings, we plot our collected data onto a spreadsheet. We’re looking for common patterns in the resulting mappings, as these are the things that will influence our product decisions. Here’s how we organize the spreadsheet:

  1. Label the first row with category card names.
  2. Label the first column with sample content names.
  3. Using the photos of each participant’s results, place a tally inside the cell where the category column and sample content row intersect.
  4. Repeat for each participant.

card-sorting

The resulting spreadsheet reveals the frequency of each category-content mapping by participants. We now have our data gathered.
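For illustration, here is what such a spreadsheet might look like using the example categories from earlier, with hypothetical tallies from four participants:

                       Subscribe    Unsubscribe    Connect
Follow someone         III                         I
Send private message                               IIII
Unfriend                            IIII
Delete content         I            II             I

In this made-up data, “Send private message” and “Unfriend” were mapped unanimously, while “Delete content” is spread across all three categories - a distinction we’ll put to use below.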

Deriving outcomes

To determine the best outcomes given our data, we identify the strongest and weakest category-content mappings.

The cells with the most tallies represent strong category-content mappings; cells with the fewest tallies represent weak ones. In my example, I had only four participants, so the greatest number of tallies a cell can have is four (IIII).

card-sorting

The strongest category-content mappings are the cells with the largest number of tallies. There is general consensus about these informational relationships.

For example, all of our participants readily associated “Community” with terms such as “Mainville Middle School 2013-2014” and “thoughtbot SF”. This insight informs how we represent community information in the product.

Now, let’s identify the weakest category-content mappings.

card-sorting

Cells with fewer tallies represent weaker category-content mappings. Another way to identify a weak category-content mapping is to look for rows with a wide distribution of tallies.

Weak category-content mappings deserve our attention because they reveal a lack of consensus about how one type of information relates to another. They highlight cognitive gaps that must be addressed so that our wider user base can understand how to use our product.

Typically, categories with lower tallies could be reconsidered, as they did not reveal themselves to be a significant group of information.

Questions to ask ourselves

Here are some questions to ask in order to learn from the results:

  • What common patterns have been revealed?
  • Are there any unexpected groupings from the participants?
  • Which relationships were discovered?
  • Should I consolidate any of these categories?
  • Do I need to create any new categories?
  • What have I learned about the participants' mental models?
  • How might I create a workflow that matches the participants' mental models?

Next steps

To address uncertainties about users' understanding of a product, try running this low-cost and time-efficient exercise. For assistance running this or other product design exercises, contact us.

I believe...

Posted 1 day back at Saaien Tist


Ryo Sakai reminded me a couple of weeks ago about Simon Sinek's excellent TED talk "Start With Why - How Great Leaders Inspire Action"; which inspired this post... Why do I do what I do?

The way data can be analysed has been automated more and more in the last few decades. Advances in machine learning and statistics make it possible to gain a lot of information from large datasets. But are we starting to rely too much on those algorithms? Several issues seem to pop up more and more.

For one thing, research in algorithm design has enabled many more applications, but at the same time makes these algorithms so complex that they start to operate as black boxes - not only to the end-user who provides the data, but even to the algorithm developer. Another issue with pre-defined algorithms is that having them around prevents us from identifying unexpected patterns: if the algorithm or statistical test is not specifically written to find a certain type of pattern, it will not find it. A third issue: (arbitrary) cutoffs. Many algorithms rely heavily on the user (or even worse: the developer) defining a set of cutoff values. This is true in machine learning as well as statistics. A statistical test returning a p-value of 4.99% is considered "statistically significant", but you'd throw away your data if that p-value were 5.01%. What is intrinsic about 5% that makes you choose between "yes, this is good" and "let's throw our hypothesis out the window"? All in all, much of this comes back to the fragility of using computers (hat tip to Toni for the book by Nassim Taleb): you have to tell them what to do and what to expect. They're not resilient to changes in setting, data, prior knowledge, etc; at least not as much as we are.

So where does this bring us? It's my firm belief that we need to put the human back in the loop of data analysis. Yes, we need statistics. Yes, we need machine learning. But also: yes, we need a human individual to actually make sense of the data and drive the analysis. To make this possible, I focus on visual design, interaction design, and scalability. Visual design because the representation of data in many cases needs improvement to be able to cope with high-dimensional data; interaction design because it's often by "playing" with the data that the user can gain insights; and scalability because it's not trivial to process big data fast enough that we can get interactivity.

Parsing Embedded JSON and Arrays in Swift

Posted 2 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In the previous posts (first post, second post) about parsing JSON in Swift, we saw how to use functional programming concepts and generics to make JSON decoding concise and readable. We left off last time creating a custom operator that allowed us to decode JSON into model objects using infix notation. That implementation looked like this:

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { d in
      User.create
        <^> d <|  "id"
        <*> d <|  "name"
        <*> d <|? "email"
    }
  }
}

We can now parse JSON into our model objects using the <| and <|? operators. The final piece we’re missing here is the ability to get keys from nested JSON objects and the ability to parse arrays of types.

Note: I’m using <|? to stay consistent with the previous blog post but ?s are not allowed in operators until Swift 1.1. You can use <|* for now.

Getting into the Nest

First, let’s look at getting to the data within nested objects. A use case for this could be a Post to a social network. A Post has a text component and a user who authored the Post. The model might look like this:

struct Post {
  let id: Int
  let text: String
  let authorName: String
}

Let’s assume that the JSON we receive from the server will look like this:

{
  "id": 5,
  "text": "This is a post.",
  "author": {
    "id": 1,
    "name": "Cool User"
  }
}

You can see that the author key is referencing a User object. We only want the user’s name from that object so we need to get the name out of the embedded JSON. Our Post decoder starts like this:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author"
    }
  }
}

This won’t work because our create function is telling the <| operator to try to make the value associated with the "author" key a String. However, it is a JSONObject, so this will fail. We know that d <| "author" by itself will return a JSONObject?, so we can use the bind operator to get at the JSONObject inside the optional.

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" >>> { $0 <| "name" }
    }
  }
}

This works, but there are two other issues at play. First, you can see that reaching further into embedded JSON can result in a lot of syntax on one line. More importantly, Swift’s type inference starts to hit its limit: I experienced long build times because the Swift compiler had to work very hard to figure out the types. A quick fix would be to give the closure a parameter type: { (o: JSONObject) in o <| "name" }, but that’s even more syntax. Let’s try to overload our custom operator <| to handle this for us.

A logical next step would be to make the <| operator explicitly accept a JSONObject? optional value instead of the non-optional allowing us to eliminate the bind (>>>) operator.

func <|<A>(object: JSONObject?, key: String) -> A? {
  return object >>> { $0 <| key }
}

Then we use it in our Post decoder like so:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
    }
  }
}

That syntax looks much better; however, Swift has a bug/feature whereby a non-optional value can be passed to a function that takes an optional parameter, and Swift will automatically wrap the value in an optional. This means that our overloaded implementation of <| that takes an optional JSONObject will be confused with its non-optional counterpart, since both can be used in the same situations.

Instead, let’s specify an overloaded version of <| that removes the generic return value and explicitly sets it to JSONObject.

func <|(object: JSONObject, key: String) -> JSONObject {
  return object[key] >>> _JSONParse ?? JSONObject()
} 

We try to parse the value inside the object to a JSONObject and if that fails we return an empty JSONObject to the next part of the decoder. Now the d <| "author" <| "name" syntax works and the compiler isn’t slowed down.

Arrays and Arrays of Models

Now let’s look at how we can parse JSON arrays into a model. We’ll use our Post model and add an array of Strings as the comments on the Post.

struct Post {
  let id: Int
  let text: String
  let authorName: String
  let comments: [String]
}

Our decoding function will then look like this:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [String]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}

This works with no extra coding. Our _JSONParse function is already good enough to cast a JSONArray or [AnyObject] into a [String].
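For reference, the base _JSONParse from the earlier posts is essentially just a generic cast - a minimal sketch (see the previous posts for the exact version):

func _JSONParse<A>(json: JSON) -> A? {
  // A can be inferred as [String], so this cast also
  // converts an underlying [AnyObject] array into [String].
  return json as? A
}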

What if our Comment model was more complex than just a String? Let’s create that.

struct Comment {
  let id: Int
  let text: String
  let authorName: String
}

This is very similar to our original Post model so we know the decoder will look like this:

extension Comment: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Comment {
    return Comment(id: id, text: text, authorName: authorName)
  }

  static func decode(json: JSON) -> Comment? {
    return _JSONParse(json) >>> { d in
      Comment.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
    }
  }
}

Now our Post model needs to use the Comment model.

struct Post {
  let id: Int
  let text: String
  let authorName: String
  let comments: [Comment]
}

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [Comment]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}

Unfortunately, _JSONParse isn’t good enough to take care of this automatically so we need to write another overload for <| to handle the array of models.

func <|<A>(object: JSONObject, key: String) -> [A?]? {
  return object <| key >>> { (array: JSONArray) in array.map { $0 >>> _JSONParse } }
}

First, we extract the JSONArray using the <| operator. Then we map over the array trying to parse the JSON using _JSONParse. Using map, we will get an array of optional types. What we really want is an array of only the types that successfully parsed. We can use the concept of flattening to remove the optional values that are nil.

func flatten<A>(array: [A?]) -> [A] {
  var list: [A] = []
  for item in array {
    if let i = item {
      list.append(i)
    }
  }
  return list
}

Then we add the flatten function to our <| overload:

func <|<A>(object: JSONObject, key: String) -> [A]? {
  return object <| key >>> { (array: JSONArray) in
    array.map { $0 >>> _JSONParse }
  } >>> flatten
}

Now, our array parsing will eliminate values that fail _JSONParse and return .None if the key was not found within the object.

The final step is to be able to decode a model object. For this, we need to define an overloaded function for _JSONParse that knows how to handle models. We can use our JSONDecodable protocol to know that there will be a decode function on the model that knows how to decode the JSON into a model object. Using this we can write a _JSONParse implementation like this:

func _JSONParse<A: JSONDecodable>(json: JSON) -> A? {
  return A.decode(json)
}

Now we can decode a Post that contains an array of Comment objects. However, we’ve introduced a new problem. There are two implementations of the <| operator that are ambiguous: one returns A? and the other returns [A]?, but an array of a type could also be A, so the compiler doesn’t know which implementation of <| to use. We can fix this by making every type that we want to use with the A? version conform to JSONDecodable. This means we will have to make the native Swift types conform as well.

extension String: JSONDecodable {
  static func decode(json: JSON) -> String? {
    return json as? String
  }
}

extension Int: JSONDecodable {
  static func decode(json: JSON) -> Int? {
    return json as? Int
  }
}

Then make the <| implementation that returns A? work only where A conforms to JSONDecodable.

func <|<A: JSONDecodable>(object: JSONObject, key: String) -> A?
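Filling in the body, the constrained overload might look like this - a sketch that assumes the same object[key] lookup and >>> bind used throughout this post:

func <|<A: JSONDecodable>(object: JSONObject, key: String) -> A? {
  // A conforms to JSONDecodable, so _JSONParse resolves to A.decode.
  return object[key] >>> _JSONParse
}

With the constraint in place, the compiler can choose between the A? and [A]? versions unambiguously.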

Conclusion

Through this series of blog posts, we’ve seen how functional programming and generics can be powerful tools in Swift for dealing with optionals and unknown types. We’ve also explored using custom operators to make JSON parsing more readable and concise. As a final look at what we can do, let’s see the Post decoder one last time.

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [Comment]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}
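As a quick usage sketch - assuming json holds a value already deserialized from a server response, as covered in the first post:

if let post = Post.decode(json) {
  // All required keys parsed, including the nested author name
  // and the array of comments.
  println("\(post.authorName): \(post.text)")
}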

We’re excited to announce that we’re releasing an open source library for JSON parsing based on what we’ve learned writing this series. We’re calling it Argo, named after the Greek word for “swift” and the boat of Jason and the Argonauts. Jason’s father was Aeson, which is also the name of the Haskell JSON parsing library that inspired Argo. You can find it on GitHub. We hope you enjoy it as much as we do.

The Bad News

During this part of the JSON parsing I quickly ran up against the limits of the Swift compiler. The larger your model object, the longer the build takes; the Swift compiler has trouble working out all the nested type inference. While Argo works, it can be impractical for large objects. There is work being done on a separate branch to reduce this build time.

Episode #499 - September 26, 2014

Posted 5 days back at Ruby5

Shell Shocked, Factory Girl for frontend tests with Hangar, and upgrading from Rails 3.2 to 4.2

Listen to this episode on Ruby5

Sponsored by NewRelic

New Relic APM identifies many transactions that serve your end users and other systems
NewRelic

Shell Shock

Stephane Chazelas has discovered a vulnerability in Bash affecting almost every version up to and including 4.3.
Shell Shock

Prefer Objects as Method Parameters, Not Class Names

Posted 5 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In an application we worked on, we presented users with multiple-choice questions and then displayed summaries of the answers. Users could see one of several summary types: they could view the percentage of users who selected the correct answer, or see a breakdown of the percentage of users who selected each answer.

Some of these summary classes were simple:

class MostRecentAnswer
  def summary_for(question)
    question.most_recent_answer_text
  end
end

We allowed the user to select which summary to view, so we accepted a summary_type as a parameter. We needed to pass the summarizer around, so we accepted a class name in the parameters and passed that name directly to our model.

class SummariesController < ApplicationController
  def index
    @survey = Survey.find(params[:survey_id])
    @summaries = @survey.summaries_using(params[:summary_type])
  end
end

class Survey < ActiveRecord::Base
  has_many :questions

  def summaries_using(summarizer_type)
    summarizer = summarizer_type.constantize.new
    questions.map do |question|
      summarizer.summary_for(question)
    end
  end
end

This works, but it set us up for trouble later.

The Survey#summaries_using method accepts a class name, which means it can only reference constants instead of objects.

I’ve come to call this “class-oriented programming,” because it results in an over-emphasis on classes. Because code like this can only reference constants, it results in classes which use inheritance instead of composition.

Runtime vs Static State

Some Rails applications live with much of their data trapped in static state. Anything that isn’t a local or instance variable is static state. Here are some examples:

VERSION = 2

cattr_accessor :version
self.version = 2

@@version = 2

We don’t usually talk about “static” methods and attributes in Ruby, but all of the information contained in the above example is static state, because only one reference can exist at one time for the entire program.

This becomes a problem when you want to mix static state and runtime state, because static state is viral: it can only compose other static state.

Runtime State in Rails Applications

In our original example, you would be able to get away with using a class-based solution, because the MostRecentAnswer summarizer doesn’t need any information besides the question to summarize.

Here’s a new challenge: after the summary of each answer, also include the current user’s answer. Such a summarizer could be implemented in a decorator:

class WithUserAnswer
  def initialize(base_summarizer, user)
    @base_summarizer = base_summarizer
    @user = user
  end

  def summary_for(question)
    user_answer = question.answer_text_for(@user)
    base_summary = @base_summarizer.summary_for(question)
    "#{base_summary} (Your answer: #{user_answer})"
  end
end

This won’t work with a class-based solution, though, because the parameters to the initialize method vary for different summarizers. These parameters may have little in common and may be initialized far away from where they’re used, so it doesn’t make sense to pass all of them all of the time.

We can rewrite our example to pass an object instead of a class name:

class SummariesController < ApplicationController
  def index
    @survey = Survey.find(params[:survey_id])
    @summaries = @survey.summaries_using(summarizer)
  end

  private

  def summarizer
    if params[:include_user_answer]
      WithUserAnswer.new(base_summarizer, current_user)
    else
      base_summarizer
    end
  end

  def base_summarizer
    params[:summary_type].constantize.new
  end
end

class Survey < ActiveRecord::Base
  has_many :questions

  def summaries_using(summarizer)
    questions.map do |question|
      summarizer.summary_for(question)
    end
  end
end

Now that Survey accepts a summarizer object instead of a class name, we can pass objects which combine static and runtime state, like the current user.

The controller still uses constantize, because it’s not possible to pass an object as an HTTP parameter. However, by avoiding class names as much as possible, this example has become more flexible.

What’s Next?

You can learn more about factories, composition, decorators and more in Ruby Science.

Security advisory: Phusion Passenger and the CVE-2014-6271 Bash vulnerability

Posted 5 days back at Phusion Corporate Blog

On 24 September 2014, an important security vulnerability for Bash was published. This vulnerability, dubbed “Shellshock” and with identifiers CVE-2014-6271 and CVE-2014-7169, allows remote code execution.

This vulnerability is not caused by Phusion Passenger, but it does affect Phusion Passenger. We strongly advise users to upgrade their systems as soon as possible. Please note that while CVE-2014-6271 has been patched, CVE-2014-7169 has not; a fix is still pending.

Please refer to your operating system vendor’s upgrade instructions.


Maintenance Thursday 25th at 8pm EST

Posted 6 days back at entp hoth blog - Home

Lighthouse will be in maintenance mode tomorrow night at 8pm EST, for about an hour, hopefully less.

This is a bit short notice, but we have to perform some important hardware updates.

As usual, you can contact us at support@lighthouseapp.com if you have any questions or concerns.

Phusion Passenger 4.0.52 released

Posted 7 days back at Phusion Corporate Blog


Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, the New York Times, AirBnB, Juniper and American Express are already using it, as are over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.52 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

Versions 4.0.50 and 4.0.51 were skipped because they were hotfixes for Enterprise customers. The changes in 4.0.50, 4.0.51 and 4.0.52 combined are as follows:

  • Fixed a null termination bug when autodetecting application types.
  • Node.js apps can now also trigger the inverse port binding mechanism by passing '/passenger' as an argument. This was introduced in order to be able to support the Hapi.js framework. Please read this StackOverflow answer for more information regarding Hapi.js support.
  • It is now possible to abort Node.js WebSocket connections upon application restart. Please refer to this page for more information. Closes GH-1200.
  • Passenger Standalone no longer automatically resolves symlinks in its paths.
  • passenger-config system-metrics no longer crashes when the system clock is set to a time in the past. Closes GH-1276.
  • passenger-status, passenger-memory-stats, passenger-install-apache2-module and passenger-install-nginx-module no longer output ANSI color codes by default when STDOUT is not a TTY. Closes GH-487.
  • passenger-install-nginx-module --auto is now all that’s necessary to make it fully non-interactive. It is no longer necessary to provide all the answers through command line parameters. Closes GH-852.
  • Minor contribution by Alessandro Lenzen.
  • Fixed a potential heap corruption bug.
  • Added Union Station support for Rails 4.1.

Installing or upgrading to 4.0.52

Installation and upgrade instructions are available for OS X, Debian, Ubuntu, Heroku, the Ruby gem, and the tarball.

Final

Fork us on Github! Phusion Passenger’s core is open source. Please fork or watch us on Github. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



Our iOS, Rails, and Backbone.js Books Are Now Available for Purchase

Posted 7 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Starting today, you can buy any of our books through these links.

Our book offerings currently include:

Each of these books comes in MOBI, EPUB, PDF, and HTML and includes access to the GitHub repository with the source Markdown / LaTeX file and an example application.

For the interested: for the past few months, these books were available exclusively through our subscription learning product, Upcase.

We determined that the books were not a good fit for Upcase, and so now we have split them out.

Episode #498 - September 23, 2014

Posted 7 days back at Ruby5

We go Airborne for Ruby 2.1.3 while Eagerly Decorating the skies and Swiftly avoiding the Daemons on this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

Ruby 2.1.3 is released

Last week, MRI Ruby 2.1.3 was released. It’s primarily a bug fix release, but it does contain new garbage collection tuning that reportedly reduces memory consumption drastically.
Ruby 2.1.3 is released

Automatic Eager Loading in Rails with Goldiloader

The team from Salsify recently released a gem called Goldiloader. This gem attempts to automatically eager load associated records and avoid n+1 queries.
Automatic Eager Loading in Rails with Goldiloader

Airborne - RSpec-driven API testing

A few days ago a new, RSpec-driven API testing framework called Airborne took off on GitHub. It works with Rack applications and provides useful response header and JSON contents matchers for RSpec.
Airborne - RSpec-driven API testing

Active Record Eager Loading with Query Objects and Decorators

On the thoughtbot blog this week, Joe Ferris wrote about using query objects and decorators to easily store the data returned in ActiveRecord models and use it in your views. Query objects can help you wrap up complex SQL without polluting your models.
Active Record Eager Loading with Query Objects and Decorators

Don’t Daemonize your Daemons

Yesterday, Mike Perham put together a short yet very useful post entitled “Don’t Daemonize Your Daemons!” It was written as a retort to Jake Gordon’s Daemonizing Ruby Processes post last week, highlighting the fact that most people, including Jake, make daemonizing processes overly difficult.
Don’t Daemonize your Daemons

Swift for Rubyists

If you’re interested in diving into Apple’s new Swift language, we highly recommend the video of JP Simard’s talk on Swift For Rubyists on the realm.io blog. iOS 8 is out now, so Swift applications are now allowed in the App Store.
Swift for Rubyists

Validating JSON Schemas with an RSpec Matcher

Posted 8 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

At thoughtbot we’ve been experimenting with using JSON Schema, a widely-used specification for describing the structure of JSON objects, to improve workflows for documenting and validating JSON APIs.

Describing our JSON APIs using the JSON Schema standard allows us to automatically generate and update our HTTP clients using tools such as heroics for Ruby and Schematic for Go, saving loads of time for client developers who are depending on the API. It also allows us to improve test-driven development of our API.

If you’ve worked on a test-driven JSON API written in Ruby before, you’ve probably encountered a request spec that looks like this:

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      current_user = response_body["user"]
      expect(response.status).to eq 200
      expect(current_user["auth_token"]).to eq user.auth_token
      expect(current_user["email"]).to eq user.email
      expect(current_user["first_name"]).to eq user.first_name
      expect(current_user["last_name"]).to eq user.last_name
      expect(current_user["id"]).to eq user.id
      expect(current_user["phone_number"]).to eq user.phone_number
    end
  end

  def response_body
    JSON.parse(response.body)
  end
end

Following the four-phase test pattern, the test above executes a request to the current user endpoint and makes some assertions about the structure and content of the expected response. While this approach has the benefit of ensuring the response object includes the expected values for the specified properties, it is also verbose and cumbersome to maintain.

Wouldn’t it be nice if the test could look more like this?

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      expect(response.status).to eq 200
      expect(response).to match_response_schema("user")
    end
  end
end

Well, with a dash of RSpec and a pinch of JSON Schema, it can!

Leveraging the flexibility of RSpec and JSON Schema

An important feature of JSON Schema is instance validation. Given a JSON object, we want to be able to validate that its structure meets our requirements as defined in the schema. As providers of an HTTP JSON API, our most important JSON instances are in the response body of our HTTP requests.

RSpec provides a DSL for defining custom spec matchers. The json-schema gem’s raison d'être is to provide Ruby with an interface for validating JSON objects against a JSON schema.

Together these tools can be used to create a test-driven process in which changes to the structure of your JSON API drive the implementation of new features.

Creating the custom matcher

First we’ll add json-schema to our Gemfile:

Gemfile

group :test do
  gem "json-schema"
end

Next, we’ll define a custom RSpec matcher that validates the response object in our request spec against a specified JSON schema:

spec/support/api_schema_matcher.rb

RSpec::Matchers.define :match_response_schema do |schema|
  match do |response|
    schema_directory = "#{Dir.pwd}/spec/support/api/schemas"
    schema_path = "#{schema_directory}/#{schema}.json"
    JSON::Validator.validate!(schema_path, response.body, strict: true)
  end
end

We’re making a handful of decisions here: we’re designating spec/support/api/schemas as the directory for our JSON schemas, and we’re also implementing a naming convention for our schema files.

JSON::Validator.validate! is provided by the json-schema gem. Passing strict: true to the validator ensures that validation will fail when an object contains properties not defined in the schema.

Defining the user schema

Finally, we define the user schema using the JSON Schema specification:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties": {
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "email",
        "first_name",
        "id",
        "last_name",
        "phone_number"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

TDD, now with schema validation

Let’s say we need to add a new property, neighborhood_id, to the user response object. The back end for our JSON API is a Rails application using ActiveModel::Serializers.

We start by adding neighborhood_id to the list of required properties in the user schema:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties":
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "created_at",
        "email",
        "first_name",
        "id",
        "last_name",
        "neighborhood_id",
        "phone_number",
        "updated_at"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "neighborhood_id": { "type": "integer" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

Then we run our request spec to confirm that it fails as expected:

Failures:

  1) Fetching a user with valid auth token returns requested user
     Failure/Error: expect(response).to match_response_schema("user")
     JSON::Schema::ValidationError:
       The property '#/user' did not contain a required property of 'neighborhood_id' in schema
       file:///Users/laila/Source/thoughtbot/json-api/spec/support/api/schemas/user.json#

Finished in 0.34306 seconds (files took 3.09 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/requests/api/v1/users_spec.rb:6 # Fetching a user with valid auth token returns requested user

We make the test pass by adding a neighborhood_id attribute in our serializer:

class Api::V1::UserSerializer < ActiveModel::Serializer
  attributes(
    :auth_token,
    :created_at,
    :email,
    :first_name,
    :id,
    :last_name,
    :neighborhood_id,
    :phone_number,
    :updated_at
  )
end
.

Finished in 0.34071 seconds (files took 3.14 seconds to load)
1 example, 0 failures

Top 1 slowest examples (0.29838 seconds, 87.6% of total time):
  Fetching a user with valid auth token returns requested user
    0.29838 seconds ./spec/requests/api/v1/users_spec.rb:6

Hooray!

What’s next?