Getting Started with Sass, Bourbon, and Neat with Yeoman

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Yeoman is a toolchain for front-end development utilizing Grunt and Bower to scaffold, develop, and build webapps.

There are official generators maintained by the Yeoman team such as generator-angular for AngularJS and generator-backbone for Backbone.js.

These generators also provide framework-specific sub-generators, such as yo backbone:model Foo, for scaffolding individual components. With Yeoman, you can spend more time writing code and less time on configuration.

In this post, we will create a basic Yeoman project, install Sass (without Compass) using Grunt, and set up Bourbon and Neat using Bower.

Creating a Yeoman Webapp project

Start by installing Yeoman.

npm install -g yo

Next, install a Yeoman generator. For this example, I'll be using the Yeoman Webapp Generator:

npm install -g generator-webapp

Create a folder for your project (in this case: yeoman-example), change directory to the folder, then run yo webapp:

$ mkdir yeoman-example
$ cd yeoman-example
$ yo webapp

When prompted, make sure "Sass with Compass" and "Bootstrap" are deselected. We will be adding Sass ourselves using the official grunt-contrib-sass plugin.

The setup should look similar to this:

yo-webapp setup screen

Installing grunt-contrib-sass

Make sure you have Sass installed; you can check by running sass -v and confirming that it outputs a version number.

$ gem install sass
$ sass -v
Sass 3.2.14 (Media Mark)

Next, install grunt-contrib-sass using the command:

npm install grunt-contrib-sass --save-dev

In the project's app folder, create a new folder called sass. This is where we will put our Sass files. Move main.css to app/sass and change the extension to .scss:

$ mkdir app/sass
$ mv app/styles/main.css app/sass/main.scss

Installing Bourbon and Neat using Bower

Install Bourbon and Neat using bower install --save:

bower install --save bourbon
bower install --save neat

This downloads and saves the Bourbon and Neat repositories in the app/bower_components directory.
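The --save flag also records both packages as dependencies in the project's bower.json; the result looks roughly like this (the version ranges are illustrative placeholders, not the exact versions Bower will write):

```json
{
  "name": "yeoman-example",
  "dependencies": {
    "bourbon": "~3.1.0",
    "neat": "~1.5.0"
  }
}
```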

In main.scss, import bourbon and neat:

// In `app/sass/main.scss`
@import 'bourbon';
@import 'neat';

// Other imports and styles go here

Configuring Sass with Grunt

Next, we'll need to configure our Gruntfile for compiling .scss files. Open Gruntfile.js and update grunt.initConfig to configure what files to compile with Sass.

We will also add an options hash that puts Bourbon's and Neat's stylesheet directories on the Sass loadPath. With these on the loadPath, Sass can find Bourbon's and Neat's stylesheets in bower_components/ when we use @import 'bourbon'; and @import 'neat';:

grunt.initConfig({
  // ...
  sass: {
    dist: {
      files: [{
        expand: true,
        cwd: '<%= yeoman.app %>/sass',
        src: ['*.scss'],
        dest: '<%= yeoman.app %>/styles',
        ext: '.css'
      }],

      options: {
        loadPath: [
          '<%= yeoman.app %>/bower_components/bourbon/app/assets/stylesheets',
          '<%= yeoman.app %>/bower_components/neat/app/assets/stylesheets'
        ]
      }
    }
  },
  // ...
});

We also want the sass task to execute when we run grunt build. You can achieve this by adding sass to the build task:

grunt.registerTask('build', [
 'clean:dist',
 'useminPrepare',
 'sass',
 // ...
]);

Setting up Auto Compile

When you run grunt serve, Grunt will start a server, watch files, and run tasks based on what files are changed.

In Gruntfile.js, update the watch.styles hash in grunt.initConfig to compile .scss files whenever they are changed:

grunt.initConfig({
  // ...
  watch: {
    styles: {
      files: ['<%= yeoman.app %>/sass/{,*/}*.scss'],
      tasks: ['sass', 'newer:copy:styles', 'autoprefixer']
    }
    // ...
  },
  // ...
});

Wrapping up

Your project is now ready to go with Sass, Bourbon, and Neat!

Understanding how to add and configure Sass with Yeoman lets you use different generators without worrying about whether they come with the right Sass options.

Hawk's Snowy Perch

Posted 6 months back at Mike Clark


Spectacularly beautiful morning with fresh snow, and this red-tailed hawk was enjoying the powder.

Powder Day

Posted 6 months back at Mike Clark


Spectacularly beautiful morning with fresh snow, and this red-tailed hawk was enjoying the powder.

Episode #446 - March 7th, 2014

Posted 6 months back at Ruby5

Running your own CI with Drone and Docker, building web-based RubyMotion apps with Under OS, funding for the Hello Ruby book, rubygems.org operating costs, Rails 4 assets on Heroku, and turning your text on its head with flippit, all in this episode of Ruby5.

Listen to this episode on Ruby5

This episode is sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

Drone & Docker

Hate Jenkins but want to run your own CI server? This blog post from Jean-Philippe Boily will walk you through setting one up with Drone and Docker!

Under OS

Building HTML-based applications for iOS has never been easier thanks to this new platform built on top of RubyMotion.

Hello Ruby Book Funded

The Hello Ruby book project was successfully funded on February 22.

RubyGems.org Costs

Ever wonder how much it costs to run rubygems.org?

Rails 4 Assets on Heroku

This article contains information needed to run the asset pipeline in Rails version 4 and above on Heroku.

Flippit

Tired of all your text being right-side up? The flippit service and gem from Rocketeer Jonathan Jackson make it easier than ever to turn your world upside down!

Printing Ralph

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I have a 3D printer. It's a lot of work, but it's also a lot of fun. It's fun because turning 3D models into actual, tangible objects is just cool. It's work because it requires a lot of tinkering to get right. You need to churn out a lot of small objects to make sure the printer is calibrated correctly. The object should be simple enough that you can measure how much you need to tweak for the next print.

I figured early on in my printing career that I'd print something that people would like to have, even if the print quality wasn't perfect. And what better thing to print, both in terms of calibration and in terms of being neat to have, than Ralph, thoughtbot's lovable mascot?

A few of the Ralphs I've printed

I've made quite a few so far.

Thing is, Ralph started as an image, not a 3D model. But with a few pieces of software (and a 3D printer, of course), we can turn an image like Ralph into a solid chunk of plastic.

I used an Open Source vector drawing program called Inkscape to modify a vector image of Ralph that our designers made. But because I didn't want the eyes and brainwaves to fall over, I needed a backing to hold it all together. Once I modified the vector drawing to close up all the gaps, I had all the pieces ready to make the jump into 3D.

The Third Dimension

Taking the front and back vector images and turning them into a 3D object required another program, OpenSCAD. This is an Open Source 3D modeler that builds objects via code, rather than clicking, pushing, and pulling graphically. I imported the vectors and extruded them on top of one another and ended up with the object you can see above. This is an STL file. There are a number of other ways to do this, like using OpenJSCAD or TinkerCad, or directly via Inkscape.

STL files are the standard format for printable 3D models. GitHub can display them and even perform diffs on them, which is really useful if you ever need it. There are also a number of sites like Thingiverse that contain multitudes of objects ready for the printing. Some even come with OpenSCAD source so you can easily modify them to suit your needs.

Here's what Ralph looks like on GitHub's STL viewer:

<script src="https://embed.github.com/view/3d/jyurek/3d/master/Objects/Ralph/ralph-with-backing.stl"></script>

Cutting it up

Once you have a 3D model in STL format, you need to turn it into instructions for the printer to follow. We give it to a program called a slicer, which compiles the 3D model to printer machine code. The program turns the 3D model into horizontal slices that the printer can use to build the object up, layer by layer. I use a program called Slic3r, which is also Open Source and fairly easy to use. These Ralphs were to be printed two at a time, so I scaled down the model and arranged two on the print surface.

Ralphs getting ready to be sliced

All the physical calculations happen during slicing: how fast the nozzle moves, how it gets to every single spot on the layer, how much plastic to put there, what pattern it's going to use to fill everything in, how thick each layer will be, and so on.

The neat thing about this is that, because it just did all these calculations, it knows how much plastic it needs to build the model. These two Ralphs need 4.7 meters of filament (for a total of 31.2 cubic centimeters of PLA plastic).

This is where the vast majority of software tweaking comes into play, as these variables can make a print look awesome or terrible. Fortunately, I had just recalibrated my printer, so I was pretty optimistic that these would look good. Here are a few pictures of some older Ralph prints I made (on the left of each) compared to a print from this batch (on the right):

The difference between an OK print and a pretty good print

And the end result is a plastic Ralph (or 10) that can sit on your desk and cheer you on while you're furiously coding away. 3D printing is still young, but it's a really fun hobby to get into, especially if you're the kind of person who loves to tinker.

Liftoff 1.0

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Way back in January of 2013, we released Liftoff to help developers quickly configure Xcode projects. We used it heavily internally, but felt like it was only solving part of the problem. So we've improved it, and are proud to announce Liftoff 1.0.

Distribution

Earlier versions of Liftoff were distributed as a Ruby gem, but that added some weird overhead to the tool, since it isn't used for Ruby development. For this reason, Liftoff 1.0 is now distributed through Homebrew. We're adding it to our thoughtbot/formulae tap, and all future updates will be done there.

The RubyGems version will stay up, but is deprecated. If you install the Homebrew version, you should make sure to uninstall the RubyGems version to avoid confusion/potential conflicts.

liftoffrc

The next thing that we were able to improve was the way you configure Liftoff itself. Originally, we used command line flags to enable specific configurations (there was no way to selectively disable configurations). This worked fine for us because we never touched these flags. But it made Liftoff extremely rigid and clumsy for people who wanted to configure their projects differently.

I had opened an issue after an internal conversation about possibly using a config file instead of command line options, and after an awesome contribution from @mokagio, we had a viable solution for configuring projects quickly and easily, without increasing overhead for users out of the box.

The liftoffrc file is written in YAML and works with a three-stage fallback system on a per-key basis. The lookup order is:

  1. Local (./.liftoffrc)
  2. User (~/.liftoffrc)
  3. Default (<liftoff installation location>/defaults/liftoffrc)

If a key isn't defined at one level, it will fall back to the next level. So you can safely override individual keys without changing the default behavior, or build your own set of defaults at the User level and override those options at the Local level.
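As a concrete (hypothetical) example, a user-level ~/.liftoffrc might override just two keys, with everything else falling through to the defaults:

```yaml
# ~/.liftoffrc -- user-level overrides; all other keys
# fall through to the default liftoffrc
author: Gordon Fontenot
company: thoughtbot
```

A project-level ./.liftoffrc could then override company alone, without touching author or any default key.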

Take a look at the default liftoffrc to see what keys are available for customization.

Project Creation

The largest change in Liftoff 1.0 is that you can now use it to create new projects from scratch, as opposed to only being able to use it for configuring existing projects. Now, when you run liftoff in a directory that doesn't contain a project, you'll get a prompt asking you for the project name, the company name, your name, and the prefix. These values will be used to create a directory structure, populate template files, and configure the new project. You can see what the default directory/group structure will look like in the default liftoffrc.

This becomes especially powerful when you consider that since the keys used to generate the new project are defined in liftoffrc, they are easily overridden for your specific needs. You can even pre-define some defaults for the options collected at the command line to speed up the data entry. For example, I'm setting author inside ~/.liftoffrc so that I don't have to enter my name any time I want to create a new project. I'm also setting company: thoughtbot inside ~/Code/thoughtbot/.liftoffrc and company: Gordon Fontenot inside ~/Code/personal/.liftoffrc. Now, projects I create have sensible defaults based on where I'm creating them.

Additionally, you can completely redefine the project structure based on your personal preference, or your employer's requirements. Again, referring to the default liftoffrc, you can see that the directory structure is a simple dictionary. And since the directory structure is mimicked in the group structure (including linking groups to their directory counterparts, which Xcode doesn't do by default), the group structure will match.

We're also creating .gitkeep files in each directory on disk, which is critical because Xcode is all too happy to delete a directory from disk once it sees there aren't any files left in it. That's a sure-fire way to end up with merge conflicts in your pbxproj file.

Wrapping up

So that's Liftoff 1.0. We've put a lot of work into this release, and it's been a really great addition to our toolbelt so far. If you have ideas on how to make it even better, open an issue, or even better: submit a pull request. If you're ready to check it out for yourself, install it via Homebrew:

brew tap thoughtbot/formulae
brew install liftoff

What's next?

Arduino Sensor Network

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Previously, we looked at creating a Low Power Custom Arduino Sensor Board for use in a sensor network. Now, let's look at writing the software for our sensor network using that custom board. We will revisit the bathroom occupancy sensor as an example.

Low Power Using Interrupts

Last time, we looked at optimizing power by sleeping the Arduino and waking it only when a door's state changed. To do this, we used the pin change interrupt on pins 2 and 3. Unfortunately, we can only monitor changes on those pins when the processor is in idle mode, which doesn't offer maximum power savings. It would be nice to put the processor into its highest power-savings mode: deep sleep. In deep sleep, those pins can still use interrupts, but instead of interrupting on change, they interrupt on a single state, HIGH or LOW. To use this type of interrupt, we would have to remove the interrupt after it fires and add it back, triggered on the opposite state, every time the processor woke. This is doable, but there is a simpler solution.

The Watchdog Timer (WDT) is a timer that runs on microcontrollers as a safety feature. Its purpose is to notify the processor if a fault or exception occurs. Say, for instance, that you have code in the main execution loop that you know takes no more than 100ms to execute. You could set the WDT to time out at just over 100ms, then reset the WDT at the end of every execution loop. If the WDT interrupt ever fires, you know the code took longer than expected to execute, and you can handle the failure accordingly in the interrupt callback.

We are going to use the WDT a little differently than its intended use. If we set the WDT to interrupt every second, we can put the processor into deep sleep and it will wake at one-second intervals. Every time it wakes up, we can check the doors and transmit their state if it has changed. This may not be ideal for maximum power savings, but it doesn't cost much more power and it gives us a more general platform for sensing and reporting anything. Our Arduino will wake up every second, check some sensors, and report their state if it changed. Using this model, the sensor board can be more than just a bathroom door detector: it could be placed around the office and report temperature, humidity, brightness, motion, etc.

Let's create a new Arduino project and set up the WDT.

// Import the interrupt library
#include <avr/interrupt.h>

volatile int __watch_dog_timer_flag = 1;

// Define WDT interrupt callback
ISR(WDT_vect)
{
  __watch_dog_timer_flag = 1;
}

void setup()
{
  // Disable processor reset on WDT time-out
  MCUSR &= ~(1<<WDRF);

  // Tell WDT we're going to change its prescaler
  WDTCSR |= (1<<WDCE);

  // Set prescaler to 1 second
  WDTCSR = 1 << WDP1 | 1 << WDP2;

  // Turn on the WDT
  WDTCSR |= (1 << WDIE);
}

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    // do things here ...
  }
}

First, we create a flag variable that we use to know when the interrupt has fired. We use the volatile keyword to let the compiler know that this variable might change at any time; this is important for variables modified in both interrupt callbacks and the main execution loop. Next, we define the interrupt callback using the ISR (Interrupt Service Routine) macro. We tell ISR which interrupt we're handling: WDT_vect is the Watchdog Timer interrupt vector. The only thing we need to do inside the interrupt callback is set the flag.

Next, we set up the WDT using some register bit manipulation. The MCUSR register is the processor's status register; by default, the WDT resets the processor when it times out. We don't want that to happen, so we use the bitwise & operator to clear the WDRF (Watchdog Reset Flag) bit. Then, we configure the WDT control register, WDTCSR. Setting the WDCE bit tells the processor that we are about to change the timer prescaler. Next, we set the prescaler with the WDP1 and WDP2 bits so the WDT times out at around 1 second. Finally, we enable the WDT interrupt by setting the WDIE bit. You can find out more about these registers in the datasheet. In the execution loop, we check whether the flag is set, meaning the WDT has fired; if so, we reset the flag and execute the application-specific code.

Revisiting the bathroom occupancy detector: the sensor board in charge of monitoring the downstairs bathrooms has to sense the input from 2 reed switches on the doors. We will use pins D2 and D3 for the reed switches. We also want to activate the internal pull-up resistors, which connect each pin to power through a resistor inside the chip. This gives our door pins a default HIGH state when a door is not closed.

byte leftDoorStatus = 0;
byte rightDoorStatus = 0;

void setup()
{
  // WDT init ...

  pinMode(2, INPUT);
  digitalWrite(2, HIGH);
  
  pinMode(3, INPUT);
  digitalWrite(3, HIGH);
}

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    byte left = digitalRead(2);
  
    if (leftDoorStatus != left) {
      leftDoorStatus = left;
    }

    byte right = digitalRead(3);
    
    if (rightDoorStatus != right) {
      rightDoorStatus = right;
    }
  }
}

Here, we added two global status variables. Then, we set the pins 2 and 3 as INPUT and turn on their internal pull-up resistors using digitalWrite(x, HIGH);. In the loop function, we check and compare the doors' status with the global status. If the status has changed we set the global variable. Now, we can use the nRF24 board to communicate these changes to the hub.

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>

// ...

void setup()
{
  // ...

  Mirf.csnPin = 10;
  Mirf.cePin = 9;
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  Mirf.setRADDR((byte *)"bath1");
  Mirf.payload = 32;
  Mirf.config();
}

Make sure to include the proper libraries. We can download the Mirf library and place it into our Arduino libraries folder. Set up the Mirf library by setting csnPin and cePin (pins 10 and 9, respectively), telling it to use the hardware SPI, setting the address to bath1, and the payload size to 32 bytes. Now, in the execution loop, we can transmit the data when a status has changed.

const String rightDoorID = "E1MLhY2yhH";
const String leftDoorID = "bEOr5qhMHY";

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    byte left = digitalRead(2);
  
    if (leftDoorStatus != left) {
      leftDoorStatus = left;
      sendDataWithIDAndStatus(leftDoorID, leftDoorStatus);
    }

    byte right = digitalRead(3);
    
    if (rightDoorStatus != right) {
      rightDoorStatus = right;
      sendDataWithIDAndStatus(rightDoorID, rightDoorStatus);
    }
  }
}

void sendDataWithIDAndStatus(String id, byte status)
{
  byte doorStatus[12];
  id.getBytes(doorStatus, 11);
  doorStatus[11] = status;

  Mirf.setTADDR((byte *)"tbhub");
  Mirf.send(doorStatus);
  while(Mirf.isSending()) ;
  Mirf.powerDown();

}

First, we add two IDs for our door sensors. These IDs correspond to their respective records in the cloud storage service we are using to store the data (more on this later). When a status changes, we call sendDataWithIDAndStatus(id, status), which combines the ID and status of the door into a byte array and uses Mirf to transmit the array to the hub, tbhub. We wait for the transmission to finish and then tell the Mirf board to power down.

The last thing we have to do is sleep the processor after the application code has executed.

#include <avr/power.h>
#include <avr/sleep.h>

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    // Application code ...

    enterSleepMode();
  }
}

void enterSleepMode()
{
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_enable();
  sleep_mode();

  sleep_disable();
  power_all_enable();
}

Call enterSleepMode() after our application code in the main loop. This function sets and enables the sleep mode then tells the processor to enter sleep mode with sleep_mode(). When the processor wakes up from the interrupt, code execution begins where it left off, disabling sleep and turning on power for all peripherals.

A Library to Simplify

We have provided an Arduino library that we can use to make this much simpler. Add the thoughtbot directory into the Arduino libraries directory and restart the Arduino software.

The library provides the TBClient class and a wrapper file TBWrapper.

The TBClient class abstracts the communication to the hub. Initialize a client by calling TBClient client((byte *)"cname", 32);. This will initialize the Mirf software. The first parameter is the name of the client device; this name is used to receive transmissions meant just for this board. It's very important that this name be exactly 5 characters long, or the wireless library won't work. The second parameter is the size of the transmission payload in bytes. The max is 32 bytes, which we use here even though we might not fill all of it. TBClient also provides a sendData(byte *address, byte *data) function for transmitting. It takes the 5-character address of the device to transmit to and the byte array of data to transmit.

TBWrapper is a file that wraps the standard Arduino setup() and loop() functions to set up the WDT and put the processor in deep sleep. If we wanted custom sleep and interrupt logic other than what we did above, we could remove this file; keeping it simplifies the code so that we can concern ourselves only with our application. With TBWrapper, use clientSetup() and clientLoop() instead of setup() and loop(), respectively. Inside clientSetup(), we can set up any pins or modules we need for our sensing application. clientLoop() is executed about every second, when the processor comes out of sleep; in it, we should check our sensors and transmit their data if any have changed.

To use this library, create a new file with the Arduino software. In the menu, under Sketch select Import Library... and pick thoughtbot. Also import the Mirf and SPI libraries. The final code after refactoring the above code to use the libraries will look like this:

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>
#include <MirfSpiDriver.h>

#include <TBClient.h>
#include <TBWrapper.h>

const String rightDoorID = "E1MLhY2yhH";
const String leftDoorID = "bEOr5qhMHY";

TBClient client((byte *) "bath1", 32);

byte leftDoorStatus = 0;
byte rightDoorStatus = 0;

void clientSetup()
{
  pinMode(2, INPUT);
  digitalWrite(2, HIGH);
  
  pinMode(3, INPUT);
  digitalWrite(3, HIGH);
}

void clientLoop()
{
  byte left = digitalRead(2);
  
  if (leftDoorStatus != left) {
    leftDoorStatus = left;
    sendDataWithIDAndStatus(leftDoorID, leftDoorStatus);
  }

  byte right = digitalRead(3);
  
  if (rightDoorStatus != right) {
    rightDoorStatus = right;
    sendDataWithIDAndStatus(rightDoorID, rightDoorStatus);
  }
}

void sendDataWithIDAndStatus(String id, byte status)
{
  byte doorStatus[12];
  id.getBytes(doorStatus, 11);
  doorStatus[11] = status;
  client.sendData((byte *)"tbhub", (byte *)doorStatus);
}

The Hub

The hub, our Arduino Yún, also has an nRF24 board attached and receives the transmissions. It posts the sensor data to an internet service so we can access that data from anywhere. We decided to use Parse as the internet service because of its ease of use and the large data capacity of its free tier.

Let's look at how we can receive data from our sensor board and post it to the cloud.

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>
#include <MirfSpiDriver.h>

#include <Bridge.h>
#include <Process.h>

void setup()
{
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  
  Mirf.setRADDR((byte *) "tbhub");
  Mirf.payload = 32;
  
  Mirf.config();
  
  Bridge.begin();
}

Here, we are setting up the Mirf library by giving it the name of our device, tbhub, and the payload size, 32 bytes. The Bridge.begin(); call is setting up the Arduino to be able to talk to the on-board Linux computer. Now we can monitor for received data in the loop() function.

void loop()
{
  if (Mirf.dataReady()) {
    byte data[32];
    Mirf.getData((byte *) &data);
    String id = String((char *)data);
    sendData(id, data[11]);
  }
}

When we receive data, we extract the sensor ID from the first bytes of the payload and send it, along with the status byte, to the Parse API.

void sendData(String id, byte value)
{
  Process curl;
  curl.begin("curl");
  curl.addParameter("-k");
  curl.addParameter("-X");
  curl.addParameter("POST");
  curl.addParameter("-H");
  curl.addParameter("X-Parse-Application-Id:YOUR-APPLICATION-ID");
  curl.addParameter("-H");
  curl.addParameter("X-Parse-REST-API-Key:YOUR-PARSE-API-KEY");
  curl.addParameter("-H");
  curl.addParameter("Content-Type:application/json");
  curl.addParameter("-d");
  
  String data = "{\"sensor\":{\"__type\":\"Pointer\",\"className\":\"Sensor\",\"objectId\":\"";
  data += id;
  data += "\"},\"value\":";
  data += value;
  data += "}";
  
  curl.addParameter(data);
  curl.addParameter("https://api.parse.com/1/classes/SensorValue");
  curl.run();
}
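For illustration, with the left door's ID and a status of 1, the body assembled above would expand to the following (pretty-printed here; the sketch sends it as a single line):

```json
{
  "sensor": {
    "__type": "Pointer",
    "className": "Sensor",
    "objectId": "bEOr5qhMHY"
  },
  "value": 1
}
```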

Process is a class available on the Arduino Yún that sends a command to the Linux computer for execution. Unfortunately, the string parameter in the addParameter(String) method must not contain any spaces, leaving the code looking messy and repetitive. We are using curl to POST the new sensor status to a Parse object called SensorValue. The string identifiers for each door on the sensor board correspond to a Sensor object on Parse. Above, we are creating a new SensorValue object in Parse that points to the appropriate Sensor object.

This code and the code for the client can be found in the GitHub repository.

Conclusion

Now we have the code to make our sensor board run, and with it we can start sensing and reporting anything we can imagine. The hardware and software are all open source, so build a sensor network at your office or home and report back to us with your awesome creations!

Episode #445 – March 4th, 2014

Posted 6 months back at Ruby5

It's pattern mania this week: interactors, adapters, and component-based architectures. Omniref allows us to take a step back to look at dependencies between popular Ruby libraries, and we learn about RubyMotion gotchas for Rails developers.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

Interactors

Yesterday, the team over at Grouper released the first part of a blog series they call Rails, the Missing Parts. In part one, they talk about using Interactors in your Rails application to untangle your ActiveRecord objects and business rules from your controllers. Interactors have the benefit of encapsulating your business rules and model interactions in one, more easily testable place. They also have the side benefit of allowing you to compose new service objects with others to build more intricate interactions. I think it’s interesting to mention that David Heinemeier Hansson jumped into the Hacker News discussion to point out that this is a good practice only when it needs to be done: it’s overkill to do it always, but if you’ve got a sign-up form or something that manages multiple models, then maybe it makes sense.

Component-based Architecture in Ruby and Rails

Speaking of Interactors and service classes, there is a talk from Stephan Hagemann at MountainWest RubyConf 2013 that is a great overview of component-based architectures in Ruby and Rails. He shows with simple examples how you can extract self-contained business logic into modules, gems, engines, etc. He doesn’t actually use these as external gems; his central point seems to be that it’s easier to think about modules, even if you don’t fully extract them, when they have their own namespace. I tend to agree with him: clear naming tends to make it easier to see the edges of a class’s responsibility. As he demonstrates, the fact that a Rails app defines no namespaces out of the box sort of encourages a hodgepodge mentality where responsibilities are mixed and it’s not clear what’s in charge of what, exactly. Stephan shows how to create the gem structure without needing to run gem build or actually publish the gem itself; instead it all stays within the Rails app. So he gets the benefits of a distinct interface and he can add the gem to the Gemfile using a local path. Ditto for mountable Rails engines.

Reflecting on RubyMotion Experiences

Last week, Jordan Maguire put together an article on his experiences using RubyMotion, reflecting on The Frontier Group’s 3,000 or so collective hours of using it. It’s one part of what may become a series on how to work with RubyMotion from the perspective of a Ruby on Rails developer. He touches on quite a lot, but I appreciated “don’t think of controllers in Rails when you’re working with controllers in Cocoa Touch,” “state and persistence are drastically different in a client application,” and, most amusingly, the observation that “Obj-C looks like the syntax was derived at random from a bag of broken glass, barbed wire, and salt.” Even though you’re working in Ruby, at the end of the day you’re building Objective-C applications. As such, you should know Objective-C at least well enough to be able to convert Objective-C code to RubyMotion.
Reflecting on RubyMotion Experiences

Reading Rails: The Adapter Pattern

Last week Adam Sanderson wrote up a blog post about how adapters are used in the MultiJSON gem, ActiveRecord, and even the DateTime and Time classes. Quite a few people will find inspiration looking at ActiveRecord's AbstractAdapter: it contains the basic database functionality, while the MysqlAdapter, for instance, inherits from it and adds behavior specific to MySQL databases, and the chain continues for every database down to PostgreSQL. These patterns are very handy when building an adapter for an external API, for instance, and they give you the ability to make a testing adapter that makes no network calls. Sounds like a fun read. The last example in the post is the way Rails (through ActiveSupport) basically patches DateTime to play nicely with the Time class by adding a consistent #to_i method to it. As with any foray into the Rails source code, you're likely to pick up a nifty trick or discover some impressive hacks along the way.
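The shape of the pattern can be sketched in a few lines of Ruby (class and method names here are illustrative, not ActiveRecord's actual internals):

```ruby
# The abstract adapter defines the shared interface.
class AbstractAdapter
  def execute(query)
    raise NotImplementedError, "subclasses must implement #execute"
  end
end

# A concrete adapter layers database-specific behavior on top.
class MysqlAdapter < AbstractAdapter
  def execute(query)
    # The MySQL-specific wire protocol would go here.
    "mysql: #{query}"
  end
end

# A test adapter satisfies the same interface with no network calls,
# simply recording what it was asked to do.
class TestAdapter < AbstractAdapter
  attr_reader :queries

  def initialize
    @queries = []
  end

  def execute(query)
    @queries << query
    :ok
  end
end
```

Callers only depend on `#execute`, so swapping the test adapter in for the real one requires no changes elsewhere.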
Reading Rails: The Adapter Pattern

What's Relevant in the Ruby Universe?

Last month, Omniref released a major update to their Ruby source code indexing system, adding cross-library reference inference and inline documentation from included modules, among other things. Omniref is a bit like a Ruby documentation and source code search engine that spans Rubygems. It was created by Tim Robertson and Montana Low, and you can think of it a bit like the Google of Ruby code, but with more focused and intelligent search results. Because the context is strictly Ruby and Rubygems, they can cross-link and show related libraries, dependent libraries, syntax highlighting, documentation, and more. It's pretty amazing that they can inline function documentation across Rubygems (for example, how ActiveModel provides to_key for ActiveRecord objects), showing the original function and its documentation.
What's Relevant in the Ruby Universe?

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

Context, Tooling and the Beginner Programmer

Posted 6 months back at Ruby flew too

Renée De Voursney talking at the AU Ruby Conf about the trials and tribulations of learning Ruby.

<iframe src="http://player.vimeo.com/video/61087286" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>

Renée De Voursney - Teaching Ruby for fun and profit from Ruby Australia on Vimeo.

She talks about context and how there are so many disparate moving parts to get to grips with before one can "become" part of the Ruby community. Gaining a basic understanding of all the moving parts that encompass not only the Ruby language itself, but the social norms of RSpec, Git, Gems, Github, RVM, *nix, Macs, TDD and various command line tools, is too much of a hurdle for many people to jump.

The biggest problem for a complete novice trying to get into programming is always establishing some sort of feedback loop that gives them the justification to carry on. I'm a great believer in learning by debugging, but at the same time, giving the novice quick wins is important. Get them up and running quickly from nothing (and I mean nothing: no tools installed on their machine yet) to "hello world" in ten minutes.

It's a difficult task. People have a variety of operating systems, from various flavours of Windows through Linux boxes and OSX machines. Providing a generic one-size-fits-all setup is nigh on impossible. Fragmentation sets in. People blog about their frustrations and put up tutorials that only work on their uniquely configured environments. They try to find work-arounds for annoyances, from not having admin rights on their computers to circumventing firewall rules that seem to get in the way of any sort of gem installation. It isn't long before someone who just wanted to take Ruby or Rails for a quick ride is hitting their head against a brick wall. And the brick wall is usually just tooling; it isn't even the code.

Frosted In

Posted 6 months back at Mike Clark

Frosted In

In Like A Lion

Posted 6 months back at Mike Clark

Who Dat

Dense fog, high winds, cold temps, and big heavy snowflakes. Hello, March 1st.

Episode #444 – February 28th, 2014

Posted 6 months back at Ruby5

ActiveRecord heatmaps, the Atom editor, Ruby gotchas, and Ruby Tempfiles, with guest hosts Karle Durante and Ken Collins.

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic recently posted about Optimizing Your Global Digital Marketing with New Relic
New Relic

Thermometer

The Thermometer gem helps you build heat maps of your ActiveRecord associations.
Thermometer

Atom Editor

GitHub has released the Atom editor, a hackable text editor for the 21st century.
Atom Editor

Rails 4.1 starter app with OmniAuth

Daniel Kehoe has released an example application showing how to set up authentication using OmniAuth with Rails 4.1
Rails 4.1 starter app with OmniAuth

Ruby Gotchas that will come back to haunt you

Karol Sarnacki wrote a blog post listing popular Ruby gotchas and curiosities that developers should be aware of.
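To give a flavor of the genre, here are two classic gotchas (illustrative picks, not necessarily from Karol's list):

```ruby
# Gotcha 1: `or` binds more loosely than `=`, so this assigns first.
x = false or true
# x is false, because the line parses as (x = false) or true.

# Gotcha 2: a default argument that references a shared hash exposes
# that hash to mutation by every caller.
DEFAULTS = { retries: 3 }

def config(opts = DEFAULTS)
  opts
end

config[:retries] = 0 # silently mutates the shared DEFAULTS hash!
```

In boolean expressions, prefer `||`/`&&` over `or`/`and`, and dup or freeze shared defaults to avoid surprises like these.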
Ruby Gotchas that will come back to haunt you

Make Remote Files Local with Ruby Tempfile

We live in the age of remote resources. It's pretty rare to store uploaded files on the same machine as your server process; file storage these days is almost completely remote. Using file storage services like S3 is awesome, but not having your files accessible locally can complicate file-oriented operations.
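A minimal sketch of the idea, assuming the remote resource is handed to us as an IO (in real code that IO would come from open-uri or an S3 client; the helper name is made up for illustration):

```ruby
require "tempfile"

# Copy a remote stream into a local Tempfile so that file-oriented
# tools (image processing, checksums, shelling out) get a real path,
# then clean the file up as soon as the block returns.
def with_local_copy(io, name = "remote-resource")
  tempfile = Tempfile.new(name)
  tempfile.binmode
  IO.copy_stream(io, tempfile)
  tempfile.rewind
  yield tempfile
ensure
  tempfile.close
  tempfile.unlink # delete the local copy immediately
end
```

Usage would look like `URI.open(url) { |remote| with_local_copy(remote) { |file| process(file.path) } }`.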
Make Remote Files Local with Ruby Tempfile

SICP Wasn’t Written for You

Posted 6 months back at Jake Scruggs

The list of software luminaries who sing the praises of “Structure and Interpretation of Computer Programs” (referred to as SICP) is so long that you might think only a crazy person would take issue with it. However, to ignore SICP’s problems and continue to blindly recommend it seems just as crazy.

SICP was the textbook for MIT’s introductory programming class and was a bit of a departure from other intro to computer science textbooks at the time. Wikipedia sums it up nicely: “Before SICP, the introductory courses were almost always filled with learning the details of some programming language, while SICP focuses on finding general patterns from specific problems and building software tools that embody each pattern.” Which sounds awesome, but does essentially say that abstract principles will be introduced before the nuts and bolts of a language. If you think about that for a minute, you may see where the problems will be.

When I was training to be a teacher I took a bunch of education courses.  I got good grades but when I got into the classroom to actually teach I flailed around just trying to keep the class under control and mostly forgot to apply the principles I had learned.  The knowledge was in my head, but it floated, disconnected, from anything in particular.  When I learned these ideas I had no teaching experience, and so, nowhere to place these abstract principles.

SICP’s first chapter explains the basic form of Scheme (a Lisp), some basic operators (+, -, *, /, etc.), defining/calling a function, different ways a compiler might evaluate code, and conditionals, all over the course of a few short pages. That’s a bit much to swallow all at once, especially the comparative evaluation stuff, but that should be easily sorted out with some examples. Right? Well, that’s not really SICP’s thing. SICP will give you a few trivial examples and then toss you right into the deep end. The first 2 problems for the reader are pretty easy, but it’s the 3rd that will let you know what yer in for: “Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.” Which seems pretty easy until you realize there are no variables. You’ll need to figure out an algorithm that can take 3 numbers and, without any intermediate state storage, return the 2 biggest numbers in such a way that you can sum their squares. I’ll be real honest here: after about 30 min of trying to do this (I have zero functional background, so I’m a complete novice here) I gave up and tracked down the answer online. Of course the answer was simple and concise and made me feel like a chump. Which is fine, but not really what I was expecting in the first chapter, let alone the 3rd problem of the entire book.
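For the curious, here is the flavor of that exercise translated into Ruby (the book uses Scheme; this is only an illustration of solving it with conditionals and function calls alone, no intermediate variables):

```ruby
def square(x)
  x * x
end

def sum_of_squares(x, y)
  square(x) + square(y)
end

# Pick the two larger of three numbers without storing anything:
# whichever argument is the smallest gets excluded by the conditionals.
def sum_of_squares_of_two_larger(a, b, c)
  if a <= b && a <= c
    sum_of_squares(b, c)
  elsif b <= a && b <= c
    sum_of_squares(a, c)
  else
    sum_of_squares(a, b)
  end
end
```

For example, `sum_of_squares_of_two_larger(1, 2, 3)` excludes the 1 and returns 2² + 3² = 13.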

But that’s what SICP is all about: challenging problems. The rest of the chapter introduces Newton’s method for square/cube roots and lexical scoping, just for fun. Chapter 2 covers recursion vs. iteration in terms of execution speed, resource usage, and transforming from one to the other. Logarithmic, linear, and exponential growth are dealt with in a few paragraphs, and then we’re off to exponentiation, greatest common divisors, primality, and implementing Fermat’s Little Theorem for probabilistic prime determination. My favorite question from chapter 2 asks the reader to formulate an inductive proof that Fib(n) is the closest integer to ((golden ratio)^n)/√5.

Which brings me to another criticism of SICP: it assumes a familiarity with math that most people just don’t have. A first-year MIT student would probably be swimming in math classes, so the book assumes that knowledge on the reader’s part. Abstract programming principles can be very difficult to find examples for, so I’m sympathetic to the plight of the authors, but when you just go straight at math, you’re explaining an abstract thing with another abstract thing.

There’s a certain sort of person who gets excited by complicated abstract but internally consistent logic with no real connection to the concrete.  In my experience as a physics teacher, these students do exist but are very rare. Most people need a bit of connection to something tangible in order to have the ideas connect in their brain.

What then is my point about SICP? Simply that its explanations are overly terse and its problems are large steps past what little is explained. In light of those things, I have recommendations for those who attempt to work through it.

  • If you intend to do every problem, realize that this will take a LONG time and involve a bunch of research.
  • Set a time-box for how long you’re going to spend on a problem before you go look up the answer.  If you’ve spent enough time trying to solve a problem you will still value the answer enough to remember it. 30 min is a good number.  Increase or decrease as your sanity allows.
  • If you feel like something hasn’t been explained:  You’re probably right.  After you find the answer, a close re-reading will reveal a cryptic sentence that you now realize was trying to tell you something. This will infuriate you and is perfectly normal.
  • Work through the book with a group.  This will hopefully allow you to commiserate about how lost you are and get some help.  If there’s someone in there that loves this book and thinks everything is explained perfectly, ignore them.  If they subtly imply that you’re stupid for not getting it:  Leave the group.  You don’t need that static in your life.
  • Do not feel bad about not knowing all this math stuff:  Remember that this book was written for students who would be surrounded by math at the time they read it.
  • Consider learning Lisp before starting this book.  The really important concepts in the book come easier if you’re not also learning Lisp at the same time.

Form Filling is Formulaic

Posted 6 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

You probably have more than a few tests that look a bit like this:

fill_in ...
fill_in ...
fill_in ...
fill_in ...
select ...
choose ...
click_on 'Submit'

Filling out a form with Capybara can be very tedious. With Formulaic, we aim to make the process less repetitive and more fun.

fill_form(:user, name: 'Caleb', age: 24, city: 'Boston')
fill_form(:user, FactoryGirl.attributes_for(:user).slice(:name, :age, :city))

Literate Example

# The main entry point for Formulaic is the `fill_form` method.
fill_form(
  # Symbol representing the name of the class the form represents
  :dependent,

  # Pass a hash of attributes to be filled. Works great with
  # `FactoryGirl.attributes_for(:dependent)`.
  {
    # The attribute to set and the value. In this case, Formulaic will
    # `fill_in` the "Name" input with "My dependent".
    name: 'My dependent',

    # If the value of an attribute is a hash, Formulaic will look up
    # translations for the correct model. This is
    # `t('simple_form.labels.profile.zip_code')`.
    profile: { zip_code: '12345' },

    # Works with dates, too!
    date_of_birth: 8.years.ago,

    # When passed an array, it will `check` each of the elements.
    ethnicity: [Profile::ETHNICITY_OPTIONS.first],
  }
)

# Formulaic provides a simple way to look up the translation for the
# submit helper for your model and action. The default is `:create`, so
# you can leave that off.
click_on submit(:dependent, :create)

Formulaic uses I18n conventions to find the text of labels and assumes that you are using SimpleForm.

We hope that you enjoy using Formulaic as much as we do, and as always we encourage you to report any problems you might have and to contribute your improvements!

Redis Scripting with MRuby

Posted 6 months back at Luca Guidi - Home

MRuby is a lightweight Ruby. It was created by Matz with the purpose of having an embeddable version of the language. Even though it has only just reached version 1.0, the hype around MRuby hasn’t been high. However, there are already projects targeting Nginx, Go, iOS, V8, and even Arduino.

Its direct competitor in this huge market is Lua, a lightweight scripting language. Since version 2.6.0, Redis has offered scripting capabilities via Lua.

# redis-cli
> eval "return 5" 0
(integer) 5

Today is Redis’s 5th birthday, and I’d like to celebrate this event by embedding my favorite language.

Hello, MRuby

MRuby ships with an interpreter (mruby) that executes code via a VM; this usage is equivalent to the well-known Ruby interpreter, ruby. MRuby can also generate bytecode from a script via the mrbc binary.

What’s important for our purposes are the C bindings. Let’s write a Hello World program.

We need a *NIX OS, gcc, and bison. I’ve extracted the MRuby code into ~/Code/mruby and built it with make.

#include <mruby.h>
#include <mruby/compile.h>

int main(void) {
  mrb_state *mrb = mrb_open();
  char code[] = "p 'hello world!'";

  mrb_load_string(mrb, code);
  mrb_close(mrb); /* release the interpreter state */
  return 0;
}

The compiler needs to know where the headers and libraries are:

gcc -I/Users/luca/Code/mruby/include hello_world.c \
  /Users/luca/Code/mruby/build/host/lib/libmruby.a \
  -lm -o hello_world

This is a really basic example; we don’t have any control over the context where this code is executed. We can parse the code and wrap it in a Proc.

#include <mruby.h>
#include <mruby/compile.h>
#include <mruby/proc.h>

int main(int argc, const char * argv[]) {
  mrb_state *mrb = mrb_open();
  mrbc_context *cxt;
  mrb_value val;
  struct mrb_parser_state *ps;
  struct RProc *proc;

  char code[] = "1 + 1";

  cxt = mrbc_context_new(mrb);
  ps = mrb_parse_string(mrb, code, cxt);
  proc = mrb_generate_code(mrb, ps);
  mrb_pool_close(ps->pool);

  val = mrb_run(mrb, proc, mrb_top_self(mrb));
  mrb_p(mrb, val);

  mrbc_context_free(mrb, cxt);
  mrb_close(mrb);
  return 0;
}

Hello, Redis

First, we need to make Redis depend on the MRuby libraries. We extract the language’s source code under deps/mruby and then hook into the deps/Makefile mechanisms:

mruby: .make-prerequisites
       @printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
       cd mruby && $(MAKE)

see the commit

During startup, Redis initializes its features. We add our own mrScriptingInit(), where we initialize the interpreter and assign it to server.mrb.

/* src/mruby-scripting.c */
void mrScriptingInit(void) {
  mrb_state *mrb = mrb_open();
  server.mrb = mrb;
}

see the commit

Then we can add another command, REVAL, with the same syntax as EVAL, but in our case MRuby will be in charge of executing it.

/* src/redis.c */
{"reval",mrEvalCommand,-3,"s",0,zunionInterGetKeys,0,0,0,0,0},

The mrEvalCommand function is responsible for handling that command. It’s similar to the Hello World above; the only difference is that the code comes from the Redis client’s arguments (c->argv[1]->ptr).

/* src/mruby-scripting.c */
void mrEvalCommand(redisClient *c) {
  mrb_state *mrb = server.mrb;

  struct mrb_parser_state *ps;
  struct RProc *proc;
  mrbc_context *cxt;
  mrb_value val;

  cxt = mrbc_context_new(mrb);
  ps = mrb_parse_string(mrb, c->argv[1]->ptr, cxt);
  proc = mrb_generate_code(mrb, ps);
  mrb_pool_close(ps->pool);

  val = mrb_run(mrb, proc, mrb_top_self(mrb));
  mrAddReply(c, mrb, val);

  mrbc_context_free(mrb, cxt);
}

see the commit

Now we can compile the server and start it.

make && src/redis-server

From another shell, start the CLI.

src/redis-cli
> reval "2 + 3" 0
"5"

This was the first part of this implementation. In a future article, I’ll cover how to access Redis data within the MRuby context.

For the time being, feel free to play with my fork.