Episode #434 - January 24th, 2014

Posted 6 months back at Ruby5

Command line fuzzy finding, workers in Go, consolidating your docsites, interviewing front-end developers, tracking upcoming Ruby conferences, and a long-awaited update to PhantomJS, all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

Selecta

Selecta is an open source fuzzy text finder for the command line. It is easy to work with and integrate into your existing workflows!

Goworker

Have slow Ruby workers? Goworker is compatible with Resque and might process your background tasks much faster than your existing Ruby workers.

DevDocs

DevDocs combines multiple API documentations in a fast, organized, and searchable interface.

Front-end Job Interview Questions

A list of helpful front-end related questions you can use to interview potential candidates.

rubyconferences.org

Wondering what conferences are coming up in the Ruby community? The recently launched rubyconferences.org site has all the details!

PhantomJS Update

PhantomJS got an update that removes those pesky CoreText performance warnings in your log. Brew update today and all that ugliness will go away!

Introducing Lotus::Router

Posted 6 months back at Luca Guidi - Home

For me, the first step in the long path of building a web framework was an HTTP router. Because it understands the requests coming from a user, it pays back with immediate gratification: start it, open a browser, and see a result.

My hope was to embark on a short journey and reuse existing libraries as much as possible. But I soon discovered that the biggest problem of Ruby web frameworks is the reusability of their components. Rails uses Journey, which is coupled to the ActionPack code base. Sinatra has its own hardcoded routing system. Plain Rack apps require the developer to fiddle with the low-level details of env.

All those solutions work great for the narrow problem they solve: HTTP routing for a given system. What if I wanted to build a high-level router, not just for a specific framework, but for all Ruby web apps?

That’s where the idea of Lotus::Router came in.

Lotus::Router is an HTTP router for Ruby. It’s fast, lightweight, and compatible with the Rack protocol.

It’s designed to work as standalone software or within the context of a Lotus application, and it provides features such as fixed and partial URL matching, redirects, namespaces, named routes, and RESTful resource(s).

Usage

During the design process of this software I had two main goals in mind: simplicity, and building on well-known ideas. Ease of use is crucial to software adoption, but meeting developers where they already are, with concepts they already know, is just as critical. This is a pattern you will notice often as you discover Lotus: on one hand it leverages well-established concepts, on the other it adds value by bringing in fresh ideas.

require 'rubygems'
require 'lotus-router'

router = Lotus::Router.new do
  get  '/hello', to: ->(env) { [200, {}, ['Hello, World!']] }
  get  '/dashboard',   to: 'dashboard#index'
  get  '/middleware',  to: RackMiddleware
  get  '/rack-app',    to: RackApp.new

  redirect '/legacy', to: '/'

  namespace 'admin' do
    get '/users', to: UsersController::Index
  end

  resource  'identity'
  resources 'users'
end

For those who are unfamiliar with this (I hope none of you), let me explain the basic usage.

We have an HTTP verb as a method, #get in the example. This method is invoked with a string that is the relative URL to match ("/hello") and with an endpoint (the to: option) that tells the router where a request should be routed. Thanks to Ruby’s dynamic nature, an endpoint can be a proc, a string, a class, or an object. Following simple conventions, Lotus::Router is able to resolve that option into a Rack endpoint, which must be provided by your application.

I would like you to notice that the DSL is implemented with a block accepted by the constructor, and that it uses public methods of the object; there is no magic here. I could have written the previous example like this:

router = Lotus::Router.new
router.get  '/', to: ->(env) { [200, {}, ['Hello, World!']] }
# ...

Another important aspect is that we obtain a router object. Instead of being relegated to a secondary role and hidden behind the opaque mechanisms of other frameworks, the router is for the first time promoted to first-class citizenship. This is a pillar of the Lotus architecture: let components emerge. In this way developers can better understand, introspect, and test them.
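
Because the router is a Rack-compatible object, you can exercise it directly; here’s a quick sketch (not from the original post) using Rack::MockRequest:

require 'rack/mock'
require 'lotus-router'

router = Lotus::Router.new do
  get '/hello', to: ->(env) { [200, {}, ['Hello, World!']] }
end

# Drive the router like any other Rack app, no server needed
response = Rack::MockRequest.new(router).get('/hello')
response.status # => 200
response.body   # => "Hello, World!"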

router = Lotus::Router.new(scheme: 'https', host: 'host.com')
router.get '/login', to: 'sessions#new', as: :login

router.path(:login) # => "/login"
router.url(:login)  # => "https://host.com/login"

Imagine how easy it would be, with a system like this, to implement routing helpers.
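
For instance, a tiny hypothetical helper (not part of Lotus) could be built on the named-route API shown above:

class RoutingHelpers
  def initialize(router)
    @router = router
  end

  # Build an anchor tag from a named route
  def link_to(text, route_name)
    %(<a href="#{@router.path(route_name)}">#{text}</a>)
  end
end

helpers = RoutingHelpers.new(router)
helpers.link_to('Sign in', :login) # => '<a href="/login">Sign in</a>'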

This is only a taste of what Lotus::Router can do: please have a look at the README and the API docs for a detailed explanation.

Roadmap

The experiment of releasing a Lotus component on the 23rd of every month is going well. In February it will be the turn of Lotus::Controller.

To stay updated with the latest releases, and to receive code examples, implementation details, and announcements, please consider subscribing to the Lotus mailing list.


ActiveRecord's where.not

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Rails 4.0 introduced a helpful new method for ActiveRecord queries: where.not. It can make clunky queries easier to read.

Usage

This query:

User.where.not(name: 'Gabe')

is effectively the same as this:

User.where('name != ?', 'Gabe')

It's "effectively" the same because where.not has some extra juice: it will fully qualify the column name with the table name, continue to work if the table or column get aliased (during a left outer join clause with includes), and will continue to work if the database implementation is switched.

I've usually seen it used for NOT NULL queries:

# Old and busted
# User.where('name IS NOT NULL')
# New hotness
User.where.not(name: nil)

But it works with arrays too:

# Without `where.not`
# Something.where("name NOT IN ?", User.unverified.pluck(:name))
# With `where.not`
Something.where.not(name: User.unverified.pluck(:name))

That example takes advantage of the fact that ActiveRecord automatically uses IN (or in this case NOT IN) if the value you're querying against is an array.
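
For example, with a literal array (again, the exact SQL varies by Rails version and adapter):

Something.where(name: %w[foo bar]).to_sql
# => ... WHERE "somethings"."name" IN ('foo', 'bar')

Something.where.not(name: %w[foo bar]).to_sql
# => ... WHERE "somethings"."name" NOT IN ('foo', 'bar')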

Complex usage

Here's a more complex example:

class Course < ActiveRecord::Base
  def self.with_no_enrollments_by(student)
    includes(:enrollments).
      references(:enrollments).
      where.not(enrollments: { student_id: student.id })
  end
end

You can ignore the first two lines, which tell ActiveRecord that we're going through the enrollments table (student has_many :courses, through: :enrollments). The method finds courses where the course has no enrollments by the student. It is the complement to student.courses.
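
In other words, assuming the associations described above, usage looks roughly like this:

student = Student.first
student.courses                        # courses the student is enrolled in
Course.with_no_enrollments_by(student) # every other course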

Without where.not, it would look like this:

def with_no_enrollments_by(student)
  includes(:enrollments).
    references(:enrollments).
    where('enrollments.student_id != ?', student.id)
end

I prefer the pure-Ruby where.not version to the SQL-string version because it's easier to read and easier to change later.

What's next?

If you found this post helpful, I recommend our post on null relations or a close reading of the official ActiveRecord docs.

Starting and Stopping Background Services with Homebrew

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I love Homebrew, but sometimes it really gets me down, you know? Especially when I have to deal with launchctl.

launchctl loads and unloads services that start at login. In OS X, these services are represented by files ending with .plist (which stands for "property list"). These plists are usually stored in either ~/Library/LaunchAgents or /Library/LaunchAgents. You load them (i.e. tell them to start at login) with launchctl load $PATH_TO_LIST and unload them with launchctl unload $PATH_TO_LIST. Loading a plist tells the program it represents (e.g. redis) to start at login, while unloading it tells the program not to start at login.

This post-install message from Homebrew may look familiar:

To have launchd start mysql at login:
    ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents
Then to load mysql now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
Or, if you don't want/need launchctl, you can just run:
    mysql.server start

Typing launchctl load and launchctl unload takes too long, and I can never remember where Homebrew plists are. Fortunately, Homebrew includes a lovely interface for managing this without using launchctl or knowing where plists are.

brew services

While it's not publicized, brew services is available on every installation of Homebrew. First, run the ln command that Homebrew tells you about in the post-installation message above:

ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents

For Redis, you'd run:

# `brew info redis` will tell you what to run if you missed it
ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents

And so on. Now you're ready to brew a service:

$ brew services start mysql
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

That bit about "label: " means it just loaded ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist with launchctl load.

Let's say MySQL's acting funky. We can easily restart it:

$ brew services restart mysql
Stopping `mysql`... (might take a while)
==> Successfully stopped `mysql` (label: homebrew.mxcl.mysql)
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

Now let's see everything we've loaded:

$ brew services list
redis      started      442 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.redis.plist
postgresql started      443 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
mongodb    started      444 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mongodb.plist
memcached  started      445 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.memcached.plist
mysql      started    87538 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Note that the list of services includes services you started with launchctl load, not just services you loaded with brew services.

Let's say we uninstalled MySQL and Homebrew didn't remove the plist for some reason (it usually removes it for you). There's a command for you:

$ brew services cleanup
Removing unused plist /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Kachow.

Hidden Homebrew commands

Homebrew ships with a whole bunch of commands that don't show up in brew --help. You can see a list of them in the Homebrew git repo. Each file is named like brew-COMMAND, and you run them with brew command. I recommend brew beer.

What's next?

If you liked this, I recommend reading through Homebrew's Tips and Tricks. You can also try out another Homebrew extension for installing Mac apps: homebrew-cask.

Open Data Scotland: a Linked Data pilot study for the Scottish Government

Posted 6 months back at RicRoberts :


Last month we launched Open Data Scotland - a pilot site built for the Scottish Government to showcase how Linked Open Data can make for smarter, more efficient data use. Accompanying the site is a report (download pdf) which we produced to explain what linked open data is, how to publish it effectively, and its potential uses and benefits to the Scottish public sector.

The site is split into three parts:

We wanted to emphasise the potential of Linked Data to a range of users. So, we’re using datasets with topics that have proven popular in other projects, such as deprivation data, and we’ve targeted each section of the site at a slightly different audience.

One new concept that we introduced in this project is contextual tutorials, aimed at a range of users: from those working with spreadsheets to those interested in more technical Linked Data wizardry. We love them because they give a whole new set of people a friendly way into using the data, introducing a whole new audience to the power of Linked Data.

Something else new to this project is data kits. These kits bridge the gap between the interactive visualisations and the more technical aspects, and they help more advanced users get started working with the data in the site. This gets the right information to the people who want it, in a form that allows them to use it quickly and easily.

We’re really excited about this project which emphasises how interactive, accessible and useful Linked Data can be. Bill introduced the pilot at the Open Data Scotland conference in December. Check it out yourself and let us know what you think.

Brewfile: a Gemfile, but for Homebrew

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Bundler users define dependencies for Ruby applications in a Gemfile and install those dependencies by running bundle install.

Homebrew users can define dependencies for their OS X operating system with a Brewfile and install those dependencies by running brew bundle. Let's write a basic Brewfile:

# Brewfile
install openssl
# a comment
link --force openssl

Note that Homebrew will treat lines that start with # as comments. Every other line will be passed to brew. So this:

install openssl
# a comment
link -f openssl

is run as these commands:

brew install openssl
brew link --force openssl

Usage

I can think of a few places where a Brewfile would be welcome:

  • In dotfiles, either yours or your company's. For example, we use it in our excellent dotfiles repo.
  • A setup script for your app (bundle install && brew bundle)
  • A setup script for a new machine. I often forget to install something (like rbenv-gem-rehash).

It's a neat encapsulation for non-programming-language dependencies like phantomjs.

What's next?

If you found this useful, I recommend reading through the source of the brew bundle command. For more Homebrew tricks, read through our OSX-related posts.

Scrolling DOM elements to the top, a Zepto plugin

Posted 6 months back at mir.aculo.us - Home

There’s bunches of plugins, extensions and techniques to smoothly scroll page elements, but most of them are convoluted messes and probably do more than you need. I like “small and works well”, and it’s a good exercise for those JavaScript and DOM muscles to write a small plugin from time to time.

My goal was to have an animated "scroll to top" for the mobile version of Freckle. Normally the browser takes care of that (tap the status bar to scroll to top), but in a more complex layout the built-in mechanism quickly fails and you’ll have to implement some of the interactions users expect yourself. Specifically, this is for the native app wrapper (Cordova) I use for Freckle’s upcoming mobile app. It’s hooked up so that taps on the status bar invoke a JavaScript method.

During development of this I needed the same thing for arbitrary scroll positions as well, so “scrolltotop” is a bit of a misnomer now. Anyway, here’s the annotated code:

<script src="https://gist.github.com/madrobby/8507960.js"></script>

Often, writing your own specialized plug-in is faster than trying to understand and configure existing code. If you do, share it! :)

Episode #433 - January 17, 2014

Posted 6 months back at Ruby5

ActiveSupport Notifications, RailsBricks, DotEnv, Builder, Decorator, Chain of Responsibility, and null object patterns

Listen to this episode on Ruby5

NewRelic
NewRelic recently posted about what Nonlinear Dynamics Teach Us About App Stability

Instrumenting Your Code With ActiveSupport Notifications
We've been having hack lunches at CustomInk | Tech to level up our Rails knowledge. Find out what we learned about ActiveSupport Notifications

RailsBricks
RailsBricks will set up Bootstrap 3, Font Awesome, Devise, and Kaminari, and build out the basic models and views for those gems

Composable Matchers in RSpec 3.0
One of RSpec 3’s big new features is composable matchers. This feature will help make your tests more powerful with less brittle expectations

DotEnv
One of the tenets of a Twelve-Factor App is to store configuration in env vars. They are easy to change between deploys without changing any code; and unlike config files, there is little chance of them being checked into the code repo accidentally.

Code Show and Tell: PolymorphicFinder
You just need a quick refactor to use the Builder, Decorator, Chain of Responsibility, and null object pattern

We're NASA and We Know It (Mars Curiosity) Song
Thank you for listening to Ruby5. Be sure to tune in every Tuesday and Friday for the latest news in the Ruby and Rails community.

Rails + Angular + Jasmine: A Modern Testing Stack

Posted 6 months back at zerosum dirt(nap) - Home

When I started on my first Angular+Rails project around 12 months ago, there wasn't a lot of guidance around code organization, interop, and testing, and we got a lot of these things wrong. Since then, I've worked on several additional projects using the same tech stack and have had several more chances to screw it up all over again. After a few of these, I feel like I've finally got some conventions in place that work well for better code organization, interop, and testing.

This morning the team over at Localytics (hi Raj!) wrote up a good retrospective on their use of Angular + Rails over the past year, including lessons they learned and ongoing challenges. They touch on several of the same issues that my colleagues and I have run into, and the writeup inspired me to dust off my old busted blog to document some of my own findings.

Testing Your JavaScript Has Never Been Easier

One area that I felt needed some further clarity was testing. In particular, how a Rails-centric application can cleanly and easily integrate tests around Angular frontend logic. Fortunately, once you figure out how to set this up, you'll find that unit testing Angular code in Jasmine -- especially controller and factory code -- is surprisingly easy to do. It's really the first time I've been sufficiently happy with a frontend testing configuration.

To see a working example for yourself and hack around with it, go snag the sample project I pushed up to GitHub. Bundle and run it, and play around with the shockingly awesome todo list application. Because the world really needed another one of those. When you've had enough of that, take a look at the contents of the spec/javascripts directory.

We're using the jasmine-rails test runner with CoffeeScript here, because that's what works for me (sorry Karma). Pay close attention to the spec_helper.coffee, which does much of the dependency injection needed to provide clean and intuitively named interfaces in our example controller spec.

<script src="https://gist.github.com/8480021.js?file="></script>
<noscript>
<html><body>You are being <a href="https://github.com/gist/8480021">redirected</a>.</body></html>
</noscript>

This gives us nice ways to interface with the factories and controllers we're defining, as well as Angular's own ngMock library (super useful for stubbing server-side endpoints), the event loop, and even template compilation for partials and directives. A couple of these are illustrated in the sample controller spec shown here:

<script src="https://gist.github.com/8480013.js?file="></script>
<noscript>
<html><body>You are being <a href="https://github.com/gist/8480013">redirected</a>.</body></html>
</noscript>

Jasmine's syntax should be very familiar to anyone who does RSpec BDD work, and the work we've done in our spec helper really cleans up the beforeEach setup that's required in each individual controller spec. These particular tests make heavy use of ngMock, which you won't always need to use, and the calls to flush() are required to fulfill pending requests, preserving the async nature of the backend but allowing the tests to execute synchronously.

Testing Continuously With Guard

Although the Jasmine web interface is nice, I'm a big fan of using Guard to watch for filesystem events and kick off automated test runs from the command line. By including the guard-jasmine gem and updating our Guardfile, we can continuously test both our server-side RSpec logic and the Jasmine unit tests at the same time, through a single interface:
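
Here is a minimal sketch of such a Guardfile; the watch patterns and options are assumptions, so adjust them for your project:

# Guardfile -- a hypothetical sketch combining RSpec and Jasmine watchers
guard :rspec, cmd: 'bundle exec rspec' do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end

guard :jasmine do
  watch(%r{^spec/javascripts/.+_spec\.coffee$})
  watch(%r{^app/assets/javascripts/(.+)\.coffee$}) { |m| "spec/javascripts/#{m[1]}_spec.coffee" }
end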

One thing I haven't addressed here is directive testing, which can be a bit more difficult. I'll try to address that in a future post, or if you have your own recipes, feel free to link em up in the comments.

Special thanks to Mark Bates for working with me on early versions of this approach, and convincing me that Angular was worth looking at in the first place.

Recursive Macros in Vim

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Macros in vim can be a huge time saver, especially if they apply to a large number of lines. A trick I've been using recently is to use recursive macros to format large chunks of a file.

Let's say we have the following list of thousands of dates:

10/30/2013
11/30/2013
12/30/2013
...

And we want to change each to the following:

10/30/2013 : 10-30-2013
11/30/2013 : 11-30-2013
12/30/2013 : 12-30-2013
...

Macro Recording Time

Let's create the macro:

qqq             #clear out anything that may already be in the q register
qq              #start recording a macro and store it in the q register
y$              #copy to the end of the current line
A               #append at the end of the current line
<Space>:<Space> #add a colon surrounded by spaces
<Escape>        #return to normal mode
p               #paste the date from the buffer
F/              #find the last instance of /
r-              #replace the / with a -
;.              #repeat the last find and replace
^               #go to the front of the line
j               #move down one line
@q              #make the macro recursive by having it invoke itself
q               #stop recording the macro

Now when you run @q vim will run the macro on every line until it finishes while you sit back and relax. I like using recursive macros because the loop will be exited if it fails to execute on a line. This improves the speed of making changes without risking applying it incorrectly throughout the file, provided you write your macros carefully.

What's next?

If you liked this post you should check out our vim screencast series The Art of Vim.

Phusion Passenger 4.0.35 released

Posted 6 months back at Phusion Corporate Blog

Version 4.0.34 has been skipped because it was a non-public release for QA purposes. The changes in 4.0.34 and 4.0.35 combined are:

  • The Node.js loader code now sets the isApplicationLoader attribute on the bootstrapping module. This provides a way for apps and frameworks that check module.parent to detect whether the current file is loaded by Phusion Passenger, or by other software that works in a similar way.

    This change has been introduced to solve a compatibility issue with CompoundJS. CompoundJS users should modify their server.js, and change the following:

    if (!module.parent) {
    

    to:

    if (!module.parent || module.parent.isApplicationLoader) {
    
  • Improved support for Meteor in development mode. Terminating Phusion Passenger now leaves fewer garbage Meteor processes behind.

  • It is now possible to disable the usage of the Ruby native extension by setting the environment variable PASSENGER_USE_RUBY_NATIVE_SUPPORT=0.
  • Fixed incorrect detection of the Apache MPM on Ubuntu 13.10.
  • When using RVM, if you set PassengerRuby/passenger_ruby to the raw Ruby binary instead of the wrapper script, Phusion Passenger will now print an error.
  • Added support for RVM >= 1.25 wrapper scripts.
  • Fixed loading passenger_native_support on Ruby 1.9.2.
  • The Union Station analytics code now works even without native_support.
  • Fixed passenger-install-apache2-module and passenger-install-nginx-module in Homebrew.
  • Binaries are now downloaded from an Amazon S3 mirror if the main binary server is unavailable.
  • And finally, although this isn’t really a change in 4.0.34, it should be noted. In version 4.0.33 we changed the way Phusion Passenger’s own Ruby source files are loaded, in order to fix some Debian and RPM packaging issues. The following doesn’t work anymore:

    require 'phusion_passenger/foo'
    

    Instead, it should become:

    PhusionPassenger.require_passenger_lib 'foo'
    

    However, we overlooked the fact that this change breaks Ruby apps which use our Out-of-Band GC feature, because such apps had to call require 'phusion_passenger/rack/out_of_band_gc'. Unfortunately we’re not able to maintain compatibility without reintroducing the Debian and RPM packaging issues. Users should modify the following:

    require 'phusion_passenger/rack/out_of_band_gc'
    

    to:

    if PhusionPassenger.respond_to?(:require_passenger_lib)
      # Phusion Passenger >= 4.0.33
      PhusionPassenger.require_passenger_lib 'rack/out_of_band_gc'
    else
      # Phusion Passenger < 4.0.33
      require 'phusion_passenger/rack/out_of_band_gc'
    end
    

Installing or upgrading to 4.0.35

  • OS X
  • Debian
  • Ubuntu
  • Heroku
  • Ruby gem
  • Tarball

Compare Commits Between Git Branches

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Working with a lot of git branches can be a bit of a headache. Graph visualisations can get tangled and confusing, especially when they include more than just the branches you care about. Sound familiar? You need git show-branch.

I have a feature branch called stock-information on a project that's hosted on Heroku. I want to compare it to my master branch and to the master branch on my staging remote:

git show-branch stock-information staging/master master

The output can be a little confusing at first, but once you learn how to read it it's a huge time saver:

! [stock-information] WIP: Link to data series
 ! [staging/master] Add a description to Stock
  ! [master] Display Stocks
---
+   [stock-information] WIP: Link to data series
+   [stock-information~1] Create DataSeries for Stocks.
++  [staging/master] Add a description to Stock
++  [staging/master~1] Import external Stock information
+++ [master] Display Stocks

The first three lines are column headings. They show the commit at the tip of each of the branches I specified, with a ! to indicate which column will represent this branch in the lines that follow.

After the --- come the commits. The + characters near the start of the lines indicate which of the branches this commit is present on.

For example, the first commit only has a + in the first column. This lines up with the ! for stock-information in the heading section. So, we know that this commit is on the stock-information branch but not staging/master or master.

The third commit ("Add a description to Stock") has a + in each of the first two columns, which indicates it is present on both stock-information and staging/master.

The output will end with the last commit that is present on all of the specified branches, indicated by a + in each of the leading columns.

What's next?

If you found this useful, you might also enjoy:

Who's using the Internet for social good?

Posted 6 months back at RicRoberts :

Digital Social Innovation is a project we’ve been working on for Nesta which is all about tracking organisations and activities across Europe using the Internet for social good.

You can explore who’s been working on what and with whom via an interactive map, which updates in realtime as more data is added.

digital social map

Any organisation in Europe can sign up to showcase themselves and their activities and, because the projects are linked, you can see at a glance who else is working on them. Each activity has a page describing it and the areas it impacts as well as a lovely map visualisation, showing who’s joining in on it. For example, check out the CitySDK project.

The icing on the cake is that all the data entered via the site is instantly accessible in a Linked Open Data site powered by our PublishMyData platform. So, personal details excepted, anyone and everyone can access anything and everything in there. Personally, this is one of our favourite features of this project; the more that people can access the data, the more it can be used. And getting data used is what we’re all about. Full details of how to access the data programmatically via the APIs can be found here.

digital social data

The information collected in the site is being analysed by our collaborators in the project (who include FutureEverything, Esade, IRI and the Waag Society) to help identify the most important trends and influencers in this area, and so provide policy recommendations to the EU, who are funding the project.

We’re proud to have worked on this and think it’s an interesting and innovative use of Linked Data. Read more about the project on its About page and blog.

We're Hiring a Producer

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We're looking to hire a full-time producer in Boston.

This person will have the following responsibilities:

  • Recording, editing, and writing show notes for the Giant Robots podcast.
  • Recording and editing the Build Phase podcast.
  • Scheduling guests for the Giant Robots podcast.
  • Shooting and editing The Weekly Iteration (a recurring video show for Learn subscribers).
  • Shooting and editing longer, video-based workshops.
  • Managing outsourced editors for larger projects.
  • Managing our studio space and equipment.

The ideal candidate has experience recording and editing both video and audio, but we'll happily consider passionate learners with experience in just one of the two.

This position is full-time, with benefits including weekly catered lunches, health insurance, and unlimited paid time off.

It also has an extremely high degree of autonomy. You'll be given a credit card—if you think we need a piece of equipment, just order it. If you want to try a new way of shooting, or a new tool for editing, go for it. Great candidates would rather be set loose on a problem than told what to do about it. thoughtbot is an organization that embraces change, and we're looking for someone who is always looking to do things better than last time.

To apply, please email resumes@thoughtbot.com.

sed 102: Replace In-Place

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Many people know how to use basic sed:

$ sed 's/hello/bonjour/' greetings.txt
$ echo "hi there" | sed 's/hi/hello'

That'll cover 80% of your sed usage. This post is about the other 20%. Think of it as a followup course after sed 101.

So you can change streams by piping output to sed. What if you want to change the file in-place?

Replacing in-place

sed ships with the -i flag. Let's consult man sed:

-i extension
    Edit files in-place, saving backups with the specified extension.

Let's try it:

$ ls
greetings.txt
$ cat greetings.txt
hello
hi there
$ sed -i .bak 's/hello/bonjour/' greetings.txt
$ ls
greetings.txt
greetings.txt.bak
$ cat greetings.txt
bonjour
hi there
$ cat greetings.txt.bak
hello
hi there

So the original file contents are saved in a new file called [file_name].bak, and the new, changed version is in the original greetings.txt. Now all we have to do is:

$ rm greetings.txt.bak

And we've changed the file in-place. You are now the toast of the office, sung of by bards:

there walks the Unix programmer / they who know of sed -i

Let's get l33t

Wait, there's more in that man entry for sed -i:

If a zero-length extension is given, no backup will be saved.  It is not
recommended to give a zero-length extension when in-place editing files, as
you risk corruption or partial content in situations where disk space is
exhausted, etc.

Zero-length extension, eh? Let's use our original greetings.txt file before we changed it:

$ sed -i '' 's/hello/bonjour/' greetings.txt
$ ls
greetings.txt
$ cat greetings.txt
bonjour
hi there
$ cat greetings.txt.bak
cat: greetings.txt.bak: No such file or directory

The -i '' tells sed to use a zero-length extension for the backup. A zero-length extension means that the backup has the same name as the new file, so no new file is created. It removes the need to run rm after doing an in-place replace.

I haven't run into any disk-space problems with -i ''. If you are worried about the man page's warning, you can use the -i .bak technique I mention in the previous section.

Find and replace in multiple files

We like sed so much that we use it in our replace script. It works like this:

$ replace foo bar **/*.rb

The first argument is the string we're finding. The second is the string with which we're replacing. The third is a pattern matching the list of files within which we want to restrict our search.

Now that you're a sed master, you'll love reading replace's source code.

What's next?

If you found this useful, you might also enjoy:

  • sed by example taught me sed. It's a great resource in an easy-to-follow format.
  • The Grymoire sed guide is also an easy-to-follow guide that starts off easy and dives deep. It's helpful when learning and as a reference.