Arduino Sensor Network

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Previously, we looked at creating a Low Power Custom Arduino Sensor Board for use in a sensor network. Now, let's look at writing the software for our sensor network using that custom board. We will revisit the bathroom occupancy sensor as an example.

Low Power Using Interrupts

Last time we looked at optimizing power by sleeping the Arduino and waking it only when a door's state changed. To do this, we used the pin change interrupt on pins 2 and 3. Unfortunately, we can only monitor change on those pins when the processor is in idle mode, which doesn't offer maximum power savings. It would be nice to put the processor into its highest power-savings mode: deep sleep. In deep sleep, those pins can still fire interrupts, but they trigger on a single level, HIGH or LOW, instead of on change. To use this type of interrupt we would have to remove the interrupt after it fires and re-add it, triggered on the opposite level, every time the processor woke. This is doable, but there is a simpler solution.

The Watchdog Timer (WDT) is a timer that runs on microcontrollers as a safety feature. Its purpose is to notify the processor if a fault or exception occurs. Say, for instance, that you have code in the main execution loop that you know takes no more than 100 ms to execute. You could set the WDT to interrupt just over 100 ms. Then, at the end of every execution loop, you reset the WDT. Now if the WDT interrupt ever fires, you know the WDT timed out, meaning the code took longer than expected to execute. You can then handle the failure accordingly in the interrupt callback.

We are going to use the WDT a little differently than its intended use. If we set the WDT to interrupt every second, we can put the processor into deep sleep and it will wake at one-second intervals. Every time it wakes, we can check the doors and transmit their state if it has changed. This may not be ideal for optimal power savings, but it doesn't cost much more power and it gives us a more general platform for sensing and reporting anything. Our Arduino will wake up every second, check some sensors, and report their state if it changed. Using this model, the sensor board can be more than just a bathroom door detector. It could be placed around the office and report temperature, humidity, brightness, motion, etc.

Let's create a new Arduino project and set up the WDT.

// Import the interrupt library
#include <avr/interrupt.h>

volatile int __watch_dog_timer_flag = 1;

// Define WDT interrupt callback
ISR(WDT_vect)
{
  __watch_dog_timer_flag = 1;
}

void setup()
{
  // Disable processor reset on WDT time-out
  MCUSR &= ~(1<<WDRF);

  // Start the timed sequence: WDCE and WDE must be set in the
  // same operation before the prescaler can be changed
  WDTCSR |= (1<<WDCE) | (1<<WDE);

  // Set prescaler to 1 second (WDP2 and WDP1)
  WDTCSR = 1 << WDP1 | 1 << WDP2;

  // Turn on the WDT interrupt
  WDTCSR |= (1 << WDIE);
}

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    // do things here ...
  }
}

First, we create a flag variable that tells us which interrupt fired. The volatile keyword lets the compiler know that this variable may change at any time. This is important for variables modified both in interrupt callbacks and in the main execution loop. Next, we define the interrupt callback using the ISR (Interrupt Service Routine) macro, telling it which interrupt vector we're handling; WDT_vect is the Watchdog Timer interrupt vector. The only thing we need to do inside the interrupt callback is set the flag.

Next, we set up the WDT using some register bit manipulation. MCUSR is the processor's status register; we use the bitwise & operator to clear its WDRF (Watchdog Reset Flag) bit so the WDT won't reset the processor when it times out, which we don't want. Then, we configure the WDT control register, WDTCSR. Setting the WDCE bit (together with WDE, as the datasheet's timed sequence requires) tells the processor that we are about to change the timer prescaler. We then set the prescaler with the WDP1 and WDP2 bits so the WDT times out at roughly one second, and enable the WDT interrupt by setting the WDIE bit. You can find out more about these registers in the datasheet. Finally, in the execution loop, we check whether the flag is set, meaning the WDT has fired. If it has, we reset the flag and execute the application-specific code.

Revisiting the bathroom occupancy detector, the sensor board in charge of monitoring the downstairs bathrooms has to sense the input from two reed switches on the doors. We will use pins D2 and D3 for the reed switches. We also want to activate the internal pull-up resistor, which connects a resistor between the pin and power inside the chip. This gives each door pin a default HIGH state when its switch is open.

byte leftDoorStatus = 0;
byte rightDoorStatus = 0;

void setup()
{
  // WDT init ...

  pinMode(2, INPUT);
  digitalWrite(2, HIGH); // enable the internal pull-up on D2

  pinMode(3, INPUT);
  digitalWrite(3, HIGH); // enable the internal pull-up on D3
}

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    byte left = digitalRead(2);
  
    if (leftDoorStatus != left) {
      leftDoorStatus = left;
    }

    byte right = digitalRead(3);
    
    if (rightDoorStatus != right) {
      rightDoorStatus = right;
    }
  }
}

Here, we added two global status variables. Then, we set pins 2 and 3 as INPUT and turn on their internal pull-up resistors using digitalWrite(x, HIGH);. In the loop function, we compare each door's reading with its global status; if it has changed, we update the global variable. Now, we can use the nRF24 board to communicate these changes to the hub.

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>

// ...

void setup()
{
  // ...

  Mirf.csnPin = 10;
  Mirf.cePin = 9;
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  Mirf.setRADDR((byte *)"bath1");
  Mirf.payload = 32;
  Mirf.config();
}

Make sure to include the proper libraries. We can download the Mirf library and place it into our Arduino libraries folder. Set up the Mirf by setting csnPin and cePin (pins 10 and 9, respectively), telling it to use the hardware SPI, setting the address to bath1, and the payload to 32 bytes. Now, in the execution loop, we can transmit the data when a status has changed.

const String rightDoorID = "E1MLhY2yhH";
const String leftDoorID = "bEOr5qhMHY";

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    byte left = digitalRead(2);
  
    if (leftDoorStatus != left) {
      leftDoorStatus = left;
      sendDataWithIDAndStatus(leftDoorID, leftDoorStatus);
    }

    byte right = digitalRead(3);
    
    if (rightDoorStatus != right) {
      rightDoorStatus = right;
      sendDataWithIDAndStatus(rightDoorID, rightDoorStatus);
    }
  }
}

void sendDataWithIDAndStatus(String id, byte status)
{
  // Zero-filled, so the full 32-byte Mirf payload is defined
  byte doorStatus[32] = {0};
  id.getBytes(doorStatus, 11);
  doorStatus[11] = status;

  Mirf.setTADDR((byte *)"tbhub");
  Mirf.send(doorStatus);
  while(Mirf.isSending()) ;
  Mirf.powerDown();
}

First, we add two IDs for our door sensors. These IDs correspond to their respective IDs in the cloud storage service we are using to store the data (more on this later). When a status has changed, we call sendDataWithIDAndStatus(id, status), which packs the ID and status of the door into a byte array and uses Mirf to transmit the array to the hub, tbhub. We wait for the transmission to finish and then tell the nRF24 board to sleep.

The last thing we have to do is put the processor to sleep after the application code has executed.

#include <avr/power.h>
#include <avr/sleep.h>

void loop()
{
  if (__watch_dog_timer_flag == 1) {
    __watch_dog_timer_flag = 0;
    
    // Application code ...

    enterSleepMode();
  }
}

void enterSleepMode()
{
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_enable();
  sleep_mode();

  sleep_disable();
  power_all_enable();
}

We call enterSleepMode() after our application code in the main loop. This function selects and enables the sleep mode, then puts the processor to sleep with sleep_mode(). When the interrupt wakes the processor, code execution resumes where it left off: we disable sleep and turn power back on for all peripherals.

A Library to Simplify

We have provided an Arduino library that we can use to make this much simpler. Add the thoughtbot directory into the Arduino libraries directory and restart the Arduino software.

The library provides the TBClient class and a wrapper file TBWrapper.

The TBClient class abstracts the communication with the hub. Initialize a client by calling TBClient client((byte *)"cname", 32);. This initializes the Mirf software. The first parameter is the name of the client device, used to receive transmissions meant just for this board. It's very important that this name be exactly 5 characters long or the wireless library won't work. The second parameter is the size of the transmission payload in bytes; the maximum is 32 bytes, which is what we set above even though we might not use it all. TBClient also provides a sendData(byte *address, byte *data) function for transmitting. It takes the 5-character address of the device to transmit to and the byte array of data to transmit.

TBWrapper is a file that wraps the standard Arduino setup() and loop() functions to set up the WDT and put the processor into deep sleep. If we wanted custom sleep and interrupt logic other than what we did above, we could remove this file; keeping it simplifies the code so that we can concern ourselves only with our application. With TBWrapper, use clientSetup() and clientLoop() instead of setup() and loop(), respectively. Inside clientSetup(), we can set up any pins or modules we need for our sensing application. clientLoop() is executed about every second, when the processor comes out of sleep; in it, we should check our sensors and transmit their data if any have changed.

To use this library, create a new file with the Arduino software. In the menu, under Sketch, select Import Library... and pick thoughtbot. Also import the Mirf and SPI libraries. After refactoring the code above to use the libraries, the final result looks like this:

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>
#include <MirfSpiDriver.h>

#include <TBClient.h>
#include <TBWrapper.h>

const String rightDoorID = "E1MLhY2yhH";
const String leftDoorID = "bEOr5qhMHY";

TBClient client((byte *) "bath1", 32);

byte leftDoorStatus = 0;
byte rightDoorStatus = 0;

void clientSetup()
{
  pinMode(2, INPUT);
  digitalWrite(2, HIGH);
  
  pinMode(3, INPUT);
  digitalWrite(3, HIGH);
}

void clientLoop()
{
  byte left = digitalRead(2);
  
  if (leftDoorStatus != left) {
    leftDoorStatus = left;
    sendDataWithIDAndStatus(leftDoorID, leftDoorStatus);
  }

  byte right = digitalRead(3);
  
  if (rightDoorStatus != right) {
    rightDoorStatus = right;
    sendDataWithIDAndStatus(rightDoorID, rightDoorStatus);
  }
}

void sendDataWithIDAndStatus(String id, byte status)
{
  // Zero-filled, so the full 32-byte Mirf payload is defined
  byte doorStatus[32] = {0};
  id.getBytes(doorStatus, 11);
  doorStatus[11] = status;
  client.sendData((byte *)"tbhub", (byte *)doorStatus);
}

The Hub

The hub, our Arduino Yún, also has an nRF24 board attached and receives the transmissions. It will post the sensor data to an internet service so we can access that data from anywhere. We decided to use Parse as the internet service because of its ease of use and the large data capacity of its free tier.

Let's look at how we can receive data from our sensor board and post it to the cloud.

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfHardwareSpiDriver.h>
#include <MirfSpiDriver.h>

#include <Bridge.h>
#include <Process.h>

void setup()
{
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  
  Mirf.setRADDR((byte *) "tbhub");
  Mirf.payload = 32;
  
  Mirf.config();
  
  Bridge.begin();
}

Here, we set up the Mirf library by giving it the name of our device, tbhub, and the payload size, 32 bytes. The Bridge.begin() call lets the Arduino sketch talk to the Yún's on-board Linux computer. Now we can watch for received data in the loop() function.

void loop()
{
  if (Mirf.dataReady()) {
    byte data[32];
    Mirf.getData((byte *) &data);
    String id = String((char *)data);
    sendData(id, data[11]);
  }
}

When we receive data, we extract the sensor ID from the leading bytes of the payload and send it, along with the status byte, to the Parse API.

void sendData(String id, byte value)
{
  Process curl;
  curl.begin("curl");
  curl.addParameter("-k");
  curl.addParameter("-X");
  curl.addParameter("POST");
  curl.addParameter("-H");
  curl.addParameter("X-Parse-Application-Id:YOUR-APPLICATION-ID");
  curl.addParameter("-H");
  curl.addParameter("X-Parse-REST-API-Key:YOUR-PARSE-API-KEY");
  curl.addParameter("-H");
  curl.addParameter("Content-Type:application/json");
  curl.addParameter("-d");
  
  String data = "{\"sensor\":{\"__type\":\"Pointer\",\"className\":\"Sensor\",\"objectId\":\"";
  data += id;
  data += "\"},\"value\":";
  data += value;
  data += "}";
  
  curl.addParameter(data);
  curl.addParameter("https://api.parse.com/1/classes/SensorValue");
  curl.run();
}

Process is a class available on the Arduino Yún that sends a command to the Linux computer for execution. Unfortunately, the string passed to the addParameter(String) method must not contain any spaces, which leaves the code looking messy and repetitive. We are using curl to POST the new sensor status to a Parse class called SensorValue. The string identifier for each door on the sensor board corresponds to a Sensor object on Parse. Above, we create a new SensorValue object in Parse that points to the appropriate Sensor object.

This code and the code for the client can be found in the GitHub repository.

Conclusion

Now we have the code to make our sensor board run, and with that we can start sensing and reporting anything we can imagine. The hardware and software is all open source, so make a sensor network at your office or home and report back to us with your awesome creations!

Episode #445 – March 4th, 2014

Posted 5 months back at Ruby5

It's pattern mania this week, with interactors, adapters, and component-based architectures. Omniref lets us take a step back to look at dependencies between popular Ruby libraries, and we learn about RubyMotion gotchas for Rails developers.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

Interactors

Yesterday, the team over at Grouper released the first part of a blog series they call Rails, the Missing Parts. In part one, they talk about using Interactors in your Rails application to detangle your ActiveRecord objects and business rules from your controllers. Interactors have the benefit of encapsulating your business rules and model interactions in one, more easily testable place. And they have the side benefit of allowing you to compose service objects with others to build more intricate interactions. It's interesting to note that David Heinemeier Hansson jumped into the Hacker News discussion to point out that this is a good practice only when it's actually needed. It's overkill to do it always, but if you've got a sign-up form or something else that manages multiple models, then maybe it makes sense.

Component-based Architecture in Ruby and Rails

Speaking of Interactors and service classes, there is a talk from Stephan Hagemann at MountainWest RubyConf 2013 that is a great overview of component-based architectures in Ruby and Rails. He shows with simple examples how you can extract self-contained business logic into modules, gems, engines, etc. He doesn't actually use these as external gems; his central point seems to be that modules are easier to think about, even if you don't fully extract them, when they have their own namespace. I tend to agree with him: clear naming tends to make it easier to see the edges of a class's responsibility. As he demonstrates, the fact that a Rails app defines no namespaces out of the box encourages a hodgepodge mentality where responsibilities are mixed and it's not clear what's in charge of what, exactly. Stephan shows how to create the gem structure without needing to run gem build or actually publish the gem; instead, it all stays within the Rails app. So he gets the benefits of a distinct interface and can add the gem to the Gemfile using a local path. Ditto for mountable Rails engines.

Reflecting on RubyMotion Experiences

Last week, Jordan Maguire put together an article on his experiences using RubyMotion, reflecting on The Frontier Group's 3000 or so collective hours of using it. It's one part in what may become a series on how to work with RubyMotion from the perspective of a Ruby on Rails developer. He touches on quite a lot, but I appreciated "don't think of controllers in Rails when you're working with controllers in Cocoa Touch," "state and persistence are drastically different in a client application," and, most amusingly, the observation that "Obj-C looks like the syntax was derived at random from a bag of broken glass, barbed wire, and salt." Even though you're working in Ruby, at the end of the day you're building Objective-C applications. As such, you should know Objective-C at least well enough to be able to convert Objective-C code to RubyMotion.

Reading Rails: The Adapter Pattern

Last week, Adam Sanderson wrote up a blog post about how adapters are used in the MultiJSON gem, ActiveRecord, and even the DateTime and Time classes. Quite a few people will find inspiration looking at ActiveRecord's AbstractAdapter. It contains the basic database functionality, while the MysqlAdapter, for instance, inherits from it and adds behavior specific to MySQL databases, and the chain goes on all the way down to PostgreSQL. These patterns are very handy when building an adapter for external APIs, for instance; not to mention they give you the ability to make a testing adapter that makes no network calls. Sounds like a fun read. The last example in the post is the way Rails (through ActiveSupport) basically patches DateTime to play nice with the Time class by adding a consistent #to_i method to it. As with any foray into the Rails source code, you're likely to pick up some nifty trick or discover some impressive hacks along the way.

What's Relevant in the Ruby Universe?

Last month, Omniref released a major update to their Ruby source code indexing system, adding cross library reference inference and inline documentation from included modules, among other things. Omniref is a bit like a Ruby documentation and source code search engine that spans across Rubygems. It was created by Tim Robertson and Montana Low and you can think of it a bit like the Google of Ruby code, but more focused and intelligent on the search results. Because the context is strictly Ruby and Rubygems, they can cross link, show related libraries, dependent libraries, syntax highlighting, documentation, and more. It’s pretty amazing that they can inline function documentation between Rubygems (for example how ActiveModel provides to_key for ActiveRecord objects), showing the original function and documentation.

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:


Context, Tooling and the Beginner Programmer

Posted 5 months back at Ruby flew too

Renée De Voursney talking at the AU Ruby Conf about the trials and tribulations of learning Ruby.

<iframe src="http://player.vimeo.com/video/61087286" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>

Renée De Voursney - Teaching Ruby for fun and profit from Ruby Australia on Vimeo.

She talks about context and how there are so many disparate moving parts to get to grips with before one can "become" part of the Ruby community. Gaining a basic understanding of all the moving parts that encompass not only the Ruby language itself, but the social norms of RSpec, Git, Gems, Github, RVM, *nix, Macs, TDD and various command line tools, is too much of a hurdle for many people to jump.

The biggest problem for a complete novice trying to get into programming is always finding some sort of feedback loop that will give them the justification to carry on. I'm a great believer in learning by debugging, but at the same time, giving the novice quick wins is important. Get them up and running quickly from nothing (and I mean nothing: no tools installed on their machine yet) to "hello world" in ten minutes.

It's a difficult task. People have a variety of operating systems, from various flavours of Windows through Linux boxes and OSX machines. Providing a generic one-size-fits-all setup is nigh on impossible. Fragmentation sets in. People blog about their frustration and put up tutorials that work on their uniquely configured environments. They try to find work-arounds for annoyances ranging from not having admin rights on their computers to firewall rules that seem to get in the way of any sort of gem installation. It isn't long before someone who just wanted to take Ruby or Rails for a quick ride is hitting their head against a brick wall. And the brick wall is usually just tooling; it isn't even the code.

Frosted In

Posted 5 months back at Mike Clark

Frosted In

In Like A Lion

Posted 5 months back at Mike Clark

Who Dat

Dense fog, high winds, cold temps, and big heavy snowflakes. Hello, March 1st.

Episode #444 – February 28th, 2014

Posted 5 months back at Ruby5

ActiveRecord Heatmaps, Atom Editor, Ruby Gotchas and Ruby Tempfiles. Guest hosts Karle Durante and Ken Collins

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic recently posted about Optimizing Your Global Digital Marketing with New Relic

Thermometer

The Thermometer gem helps you build heat maps of your activerecord associations

Atom Editor

Github has released the atom editor. A hackable text editor for the 21st Century

Rails 4.1 starter app with OmniAuth

Daniel Kehoe has released an example application showing how to set up authentication using OmniAuth with Rails 4.1

Ruby Gotchas that will come back to haunt you

Karol Sarnacki wrote a blog listing popular Ruby gotchas and curiosities that developers should be aware of.

Make Remote Files Local with Ruby Tempfile

We live in the age of remote resources. It's pretty rare to store uploaded files on the same machine as your server process; file storage these days is almost completely remote. Using file storage services like S3 is awesome, but not having your files accessible locally can complicate file-oriented operations.

SICP Wasn’t Written for You

Posted 5 months back at Jake Scruggs

The number of software luminaries who sing the praises of “Structure and Interpretation of Computer Programs” (referred to as SICP) is such a long list that you might think only a crazy person would take issue with it. However, to ignore SICP’s problems and continue to blindly recommend it seems just as crazy.

SICP was the textbook for MIT's introductory programming class and was a bit of a departure from other intro to computer science textbooks at the time. Wikipedia sums it up nicely: “Before SICP, the introductory courses were almost always filled with learning the details of some programming language, while SICP focuses on finding general patterns from specific problems and building software tools that embody each pattern.” Which sounds awesome, but does essentially say that abstract principles will be introduced before the nuts and bolts of a language. If you think about that for a minute, you may see where the problems will be.

When I was training to be a teacher I took a bunch of education courses.  I got good grades but when I got into the classroom to actually teach I flailed around just trying to keep the class under control and mostly forgot to apply the principles I had learned.  The knowledge was in my head, but it floated, disconnected, from anything in particular.  When I learned these ideas I had no teaching experience, and so, nowhere to place these abstract principles.

SICP’s first chapter explains the basic form of Scheme (a Lisp), some basic operators (+, -, *, /, etc.), defining/calling a function, different ways a compiler might evaluate code, and conditionals over the course of a few short pages. That’s a bit much to swallow all at once, especially the comparative evaluation stuff, but that should be easily sorted out with some examples. Right? Well, that’s not really SICP’s thing. SICP will give you a few trivial examples and then toss you right into the deep end. The first two problems for the reader are pretty easy, but it’s the 3rd that will let you know what yer in for: “Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.” Which seems pretty easy until you realize there are no variables. You’ll need to figure out an algorithm that can take 3 numbers and, without any intermediate state storage, return the 2 biggest numbers in such a way that you can sum their squares. I’ll be real honest here: after about 30 minutes of trying to do this (I have zero functional background, so I’m a complete novice here) I gave up and tracked down the answer online. Of course the answer was simple and concise and made me feel like a chump. Which is fine, but not really what I was expecting in the first chapter, let alone the 3rd problem of the entire book.

But that’s what SICP is all about -- challenging problems. The rest of the chapter introduces Newton’s method for square/cube roots and lexical scoping just for fun. Chapter 2 is recursion vs iteration in terms of execution speed, resource usage, and transforming from one to the other. Logarithmic, linear, and exponential growth are dealt with in a few paragraphs and then we’re off to Exponentiation, Greatest Common Divisors, Primality, and implementing Fermat's Little Theorem for probabilistic prime determination. My favorite question from chapter 2 asks the reader to formulate an inductive proof that Fib(n) is the closest integer to ((golden ratio)^n)/√5.

Which brings me to another criticism of SICP:  It assumes a familiarity with math that most people just don’t have. A first year MIT student would probably be swimming in math classes so the book assumes that knowledge on the readers part.  Abstract programming principles can be very difficult to find examples for so I’m sympathetic to the plight of the authors, but when you just go straight at math you’re explaining an abstract thing with another abstract thing.

There’s a certain sort of person who gets excited by complicated abstract but internally consistent logic with no real connection to the concrete.  In my experience as a physics teacher, these students do exist but are very rare. Most people need a bit of connection to something tangible in order to have the ideas connect in their brain.

What then is my point about SICP?  Simply that its explanations are overly terse and its problems are large steps past what little is explained.  In light of those things I have recommendations for those who attempt to work through it.

  • If you intend to do every problem, realize that this will take a LONG time and involve a bunch of research.
  • Set a time-box for how long you’re going to spend on a problem before you go look up the answer.  If you’ve spent enough time trying to solve a problem you will still value the answer enough to remember it. 30 min is a good number.  Increase or decrease as your sanity allows.
  • If you feel like something hasn’t been explained:  You’re probably right.  After you find the answer, a close re-reading will reveal a cryptic sentence that you now realize was trying to tell you something. This will infuriate you and is perfectly normal.
  • Work through the book with a group.  This will hopefully allow you to commiserate about how lost you are and get some help.  If there’s someone in there that loves this book and thinks everything is explained perfectly, ignore them.  If they subtly imply that you’re stupid for not getting it:  Leave the group.  You don’t need that static in your life.
  • Do not feel bad about not knowing all this math stuff:  Remember that this book was written for students who would be surrounded by math at the time they read it.
  • Consider learning Lisp before starting this book. The really important concepts in the book come easier if you’re not also learning Lisp at the same time.

Form Filling is Formulaic

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

You probably have more than a few tests that look a bit like this:

fill_in ...
fill_in ...
fill_in ...
fill_in ...
select ...
choose ...
click_on 'Submit'

Filling out a form with Capybara can be very tedious. With Formulaic, we aim to make the process less repetitive and more fun.

fill_form(:user, name: 'Caleb', age: 24, city: 'Boston')
fill_form(:user, FactoryGirl.attributes_for(:user).slice(:name, :age, :city))

Literate Example

# The main entry point for Formulaic is the `fill_form` method.
fill_form(
  # Symbol representing the name of the class the form represents
  :dependent,

  # Pass a hash of attributes to be filled. Works great with
  # `FactoryGirl.attributes_for(:dependent)`.
  {
    # The attribute to set and the value. In this case, Formulaic will
    # `fill_in` the "Name" input with "My dependent".
    name: 'My dependent',

    # If the value of an attribute is a hash, Formulaic will look up
    # translations for the correct model. This is
    # `t('simple_form.labels.profile.zip_code')`.
    profile: { zip_code: '12345' },

    # Works with dates, too!
    date_of_birth: 8.years.ago,

    # When passed an array, it will `check` each of the elements.
    ethnicity: [Profile::ETHNICITY_OPTIONS.first],
  }
)

# Formulaic provides a simple way to look up the translation for the
# submit helper for your model and action. The default is `:create`, so
# you can leave that off.
click_on submit(:dependent, :create)

Formulaic uses I18n conventions to find the text of labels and assumes that you are using SimpleForm.

We hope that you enjoy using Formulaic as much as we do, and as always we encourage you to report any problems you might have and to contribute your improvements!

Redis Scripting with MRuby

Posted 5 months back at Luca Guidi - Home

MRuby is a lightweight Ruby. It was created by Matz with the purpose of having an embeddable version of the language. Even though it has just reached version 1.0, the hype around MRuby hasn’t been high. However, there are already projects targeting Nginx, Go, iOS, V8, and even Arduino.

The direct competitor in this huge market is Lua, a lightweight scripting language. Since version 2.6.0, Redis has offered scripting capabilities based on Lua.

# redis-cli
> eval "return 5" 0
(integer) 5

Today is Redis’ 5th birthday, and I’d like to celebrate this event by embedding my favorite language.

Hello, MRuby

MRuby ships with an interpreter (mruby) that executes code via a VM, equivalent to the well-known Ruby interpreter ruby. MRuby can also compile a script to bytecode, via the mrbc binary.

What's important for our purpose are the C bindings. Let's write a Hello World program.

We need a *NIX OS, gcc and bison. I’ve extracted the MRuby code into ~/Code/mruby and built it with make.

#include <mruby.h>
#include <mruby/compile.h>

int main(void) {
  mrb_state *mrb = mrb_open();
  char code[] = "p 'hello world!'";

  mrb_load_string(mrb, code);
  mrb_close(mrb); /* release the interpreter state */
  return 0;
}

The compiler needs to know where the headers and the libs are:

gcc -I/Users/luca/Code/mruby/include hello_world.c \
  /Users/luca/Code/mruby/build/host/lib/libmruby.a \
  -lm -o hello_world

This is a really basic example; we don't have any control over the context where this code is executed. We can parse the code and wrap it in a Proc.

#include <mruby.h>
#include <mruby/proc.h>

int main(int argc, const char * argv[]) {
  mrb_state *mrb = mrb_open();
  mrbc_context *cxt;
  mrb_value val;
  struct mrb_parser_state *ps;
  struct RProc *proc;

  char code[] = "1 + 1";

  cxt = mrbc_context_new(mrb);
  ps = mrb_parse_string(mrb, code, cxt);
  proc = mrb_generate_code(mrb, ps);
  mrb_pool_close(ps->pool);

  val = mrb_run(mrb, proc, mrb_top_self(mrb));
  mrb_p(mrb, val);

  mrbc_context_free(mrb, cxt);
  mrb_close(mrb); /* release the interpreter state */
  return 0;
}

Hello, Redis

First of all, we need to make Redis depend on the MRuby libraries. We extract the language source code under deps/mruby and then hook into the deps/Makefile mechanism:

mruby: .make-prerequisites
       @printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
       cd mruby && $(MAKE)

see the commit

During startup, Redis initializes its features. We add our own mrScriptingInit(), where we initialize the interpreter and assign it to server.mrb.

// src/mruby-scripting.c
void mrScriptingInit(void) {
  mrb_state *mrb = mrb_open();
  server.mrb = mrb;
}

see the commit

Then we can add another command, REVAL, with the same syntax as EVAL, but in this case MRuby will be in charge of executing it.

// src/redis.c
{"reval",mrEvalCommand,-3,"s",0,zunionInterGetKeys,0,0,0,0,0},

The mrEvalCommand function is responsible for handling that command. It's similar to the Hello World above; the only difference is that the code arrives as an argument from the Redis client (c->argv[1]->ptr).

// src/mruby-scripting.c
void mrEvalCommand(redisClient *c) {
  mrb_state *mrb = server.mrb;

  struct mrb_parser_state *ps;
  struct RProc *proc;
  mrbc_context *cxt;
  mrb_value val;

  cxt = mrbc_context_new(mrb);
  ps = mrb_parse_string(mrb, c->argv[1]->ptr, cxt);
  proc = mrb_generate_code(mrb, ps);
  mrb_pool_close(ps->pool);

  val = mrb_run(mrb, proc, mrb_top_self(mrb));
  mrAddReply(c, mrb, val);

  mrbc_context_free(mrb, cxt);
}

see the commit

Now we can compile the server and start it.

make && src/redis-server

From another shell, start the CLI.

src/redis-cli
> reval "2 + 3" 0
"5"

This was the first part of this implementation. In a future article, I’ll cover how to access Redis data within the MRuby context.

For the time being, feel free to play with my fork.

A GIF is worth a thousand screenshots

Posted 5 months back at opensoul.org - Home

Did you know that GIFs have productive uses? Yep, deal with it! Lately, I have been attaching GIF screencasts to pull requests that involve user interface changes, as a way to clearly demonstrate the changes.

It's an extremely effective way to communicate what has changed. These screencasts have other uses too, such as showing how to use a feature or demonstrating a bug.

There are a lot of ways to make a GIF screencast, but LICEcap is the best tool I have found. It is not the prettiest piece of software that I have ever laid eyes on, but it just works and makes fantastic GIFs. The GIF above is 26 seconds long and great quality, yet only 500 KB.

Episode #443 – February 25th, 2014

Posted 5 months back at Ruby5

In this episode we cover new Rubies and RSpec, Ruby's Demise, AdequateRecord, and a Ruby Heroes reminder.

Listen to this episode on Ruby5

Sponsored by TopRubyJobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

RSpec 2.99 and 3.0 beta 2

Late last week, Myron Marston and the RSpec team released versions 3.0.0.beta2 and 2.99.0.beta2.

Ruby is Legal (2.1.1)

Our Ruby is all grown up. Yesterday was Ruby's 21st birthday. To celebrate, the team released version 2.1.1 along with patch releases for 2.0.0 and 1.9.3.

Rumors of Ruby’s Demise

Avdi Grimm wrote a blog post about the 'Rumors of Ruby's Demise', where he talks about the hype around other languages, especially ones with built-in support for concurrency like Erlang or Scala, and how some people in the community see that as a threat to Ruby.

AdequateRecord

Last week Aaron Patterson released a fork of ActiveRecord that can handle twice as many requests per second.

Ruby Heroes

Please take a moment to nominate someone who has significantly contributed to our community this past year for a Ruby Hero Award. The awards will be given at RailsConf in Chicago.

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:


EmberJS with a Separate Rails API

Posted 5 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We just wrapped up a large client project using EmberJS and we learned a few things that are interesting to share.

Ember made this project easier. There are times that a JavaScript framework is unnecessary and there are times that it makes the code much cleaner. This was the latter.

Split Development

We built our API and our JavaScript application as two completely separate applications. We had one repo that held a very basic Rails application with Ember on top and another repo that held the API built in Rails.

Rails instead of Yeoman, Grunt, Brunch, etc

There are a lot of front-end development tools that will allow you to build an EmberJS application using CoffeeScript, Sass, and the other tools that we like to use on projects. After evaluating them, we settled on using a basic Rails application instead, primarily for simplicity. The project had a short timeline and we didn't want to worry about another tool that we were not familiar with. In the future I would love to try building an Ember UI with a front-end tool such as Tapas with Ember, but we had no complaints about using Rails in this case and it made our stack a bit simpler.

For Ember in our Rails app we used the ember-rails gem. It provides a basic folder structure for your Ember application inside the app/assets/javascripts directory. The directory structure is similar to a Rails application as you can see below.

controllers/
helpers/
components/
models/
routes/
templates/
templates/components
views/

The one thing that is strange when using the gem for a UI-only application is that your app/ directory in Rails is basically unused, except for app/assets/javascripts/ where all the actual work happens. Another project, EmberAppkitRails, solves this issue by putting the app/ directory into the asset pipeline. This is an interesting idea, but the gem is pre-1.0 so the API could change.

Ember-rails also provides configuration variables for using the development or production build of Ember depending on your current environment. This is nice because your Ember debug information is automatically removed in production.
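A sketch of that configuration (verify the exact option name against the ember-rails README for your version of the gem):

```ruby
# config/environments/production.rb (sketch; assumes the ember-rails
# `variant` setting, and `YourApp` is a placeholder application name)
YourApp::Application.configure do
  # Use the minified production build of Ember, which strips
  # development-mode assertions and debug output.
  config.ember.variant = :production
end
```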

Fixtures in Development

To allow rapid development, we built the UI in Ember using only fixtures in Ember Data. This let us very quickly build out complex interactions without worrying about the API being in place, which was a huge help in moving fast; we backfilled the API later. Being able to change property names without worrying about migrations or outside API changes was very efficient. An Ember Data fixture is a simple JSON object that you can quickly modify to your needs. It also handles has-many and belongs-to references using the IDs of other records.

App.User.FIXTURES = [
  {
    id: 1
    email: 'user@example.com'
    posts: [1, 3]
  }
  {
    id: 2
    email: 'secondUser@example.com'
    posts: [2,4]
  }
]

App.Post.FIXTURES = [
  {
    id: 1
    title: 'The Art of Vim'
    user: 1
  }
  {
    id: 2
    title: '15 Minute Blog in EmberJS'
    user: 2
  }
]

There are downsides to this approach. The first one is the backfill process. We waited too long in the project to connect our API to Ember and ran into issues.

The other problem is that with the applications in two separate repos, you can't easily have a full integration test. To do this you would need to run both applications on the same machine simultaneously, and you would also need some way to make sure that both applications were on the same revision for the test.

We decided to test the apps separately and trust that the API is what we said it was. This can be frustrating, but in our case it didn't turn out to be a real problem. Once you have wired your API to the UI, you should never change your UI without also changing the API. This was enforced in code review only.

CoffeeScript

I love CoffeeScript, and as a company we have embraced it for all our projects. Ember is no exception. CoffeeScript made our Ember application more readable and made working with objects easier. The only thing that is odd is the syntax for a computed property, but that is a minor issue and we quickly adjusted to seeing it as normal.

fullName: (->
  "#{@get('firstName')} #{@get('lastName')}"
).property('firstName', 'lastName')

Fast Tests!

By removing the API from the UI application, we were able to write feature specs entirely in CoffeeScript. This was a huge benefit to the overall success of the project. We could test every interaction in our app precisely, without the overhead normally associated with those types of feature specs. The specs only had to deal with JavaScript, so they were really fast: a full rake for our UI application took 32.77 seconds, including starting the Rails environment, for a suite of 71 examples, most of which were feature specs.

Testing in General

We found Ember to be very easy to test in general. Most things break down to Ember.Object, and it was easy to grab a controller in a test and verify that a property works as expected. Because we wanted to use Mocha with Chai BDD matchers instead of QUnit, the initial test setup was a bit complex, but after using Konacha with a Mocha adapter it was smooth sailing. The extra setup time for Mocha over QUnit was definitely worth it; the syntax has a much more readable format.

describe 'AggregateStatsController', ->
  describe 'summed properties', ->
    # Declare here so the `it` blocks below share the same variable
    controller = null

    beforeEach ->
      stats = []
      stats.push Ember.Object.create
        clicks: 2
        cost: 1.99
      stats.push Ember.Object.create
        clicks: 4
        cost: 2.00

      model = Ember.Object.create(dailyStats: stats)

      controller = App.__container__.lookup('controller:aggregate_stat')
      controller.set('model', model)

    it 'will sum the number of clicks in the model', ->
      expect(controller.get('clicks')).to.equal(6)

    it 'will sum the cost in the model', ->
      expect(controller.get('cost')).to.equal(3.99)

Feature specs were also very easy to handle. Ember has built-in integration test helpers that worked for most of our needs, and we used jQuery to augment them in our expectations. The specs were fast enough that we could test small details in the interface that we might otherwise have been tempted to omit. Being able to test all the UI interactions gave us a lot of faith in our codebase.

describe 'Navigating SEM Campaigns', ->
  before ->
    App.DailyStat.FIXTURES = [
      {
        id: 1
        clicks: 11
      }
      {
        id: 2
        clicks: 10
      }
    ]

    App.SemCampaign.FIXTURES = [
      {
        id: 1
        name: 'Opening Campaign'
        status: 'active'
        dailyStats: [1]
      }
      {
        id: 2
        name: 'Final Sale'
        status: 'active'
        dailyStats: [2]
      }
    ]


  it 'shows the daily stats information for campaign', ->
    visit('/').then ->
      clickLink('SEM Campaigns').then ->
        expect(pageTitleText()).to.equal('SEM Campaigns')
        expect(pageHasCampaignWithTitle('Opening Campaign')).to.be.true
        expect(statusFor('Opening Campaign')).to.equal('icon-active')
        expect(clicksFor('Opening Campaign')).to.equal('11')
        expect(pageHasCampaignWithTitle('Final Sale')).to.be.true

Naming your tests

Konacha and Teaspoon both have the downside of not showing a filename when a spec fails. This caused us a lot of pain when we first started so we decided on the convention of using the first describe docstring as the name of the file. In the case above our file would be named navigating_sem_campaigns_spec.js.coffee. This worked out great and made it much easier to find a failing spec.

Overall

Ember is far more stable than I would have imagined, given that 1.0 was released just six months ago. If you have a project that is highly interactive and requires a lot of data binding, I recommend trying it out. The Ember community has been incredibly helpful on Stack Overflow, their forums, and their IRC channel.

Spam improvements

Posted 5 months back at entp hoth blog - Home

Howdy,

Today we deployed a number of improvements to our spam engine. As a result you should see a big decrease in false positives (real discussions incorrectly marked as spam). It is possible that you will also notice a slight increase in false negatives (real spam not caught). As you continue marking those as spam, the situation should rapidly improve.

We also added a new option: you can now skip spam checking for a particular category. This is useful when you receive emails from an automated source (a form, etc.) that contain a lot of (sometimes malformed) HTML which may trigger some of our spam rules.

You will find this setting when editing a category:

Don't check for spam

As usual, if you have problems with spam on your site, please contact us. This includes:

  • Emptying a very large spam folder
  • Too many false positives
  • Too many false negatives

Thanks!

Of Late

Posted 5 months back at RailsTips.org - Home

A lot has changed over the years. I now do a lot more than just Rails, and having RailsTips as my domain seems to mentally put me in a corner.

As such, I have revived johnnunemaker.com. While I may still post a Rails topic here once in a while, I'll be posting on a much wider variety of topics over there.

In fact, I just published my first post of any length, titled Analytics at GitHub. Head on over and give it a read.

Introducing Lotus::Controller

Posted 5 months back at Luca Guidi - Home

Lotus development is going well. The experiment of open sourcing one framework per month is sustainable. I have the time to clean up the code, write good documentation, and deliver great solutions.

This month, I’m proud to announce Lotus::Controller.

It’s a small but powerful and fast framework. It works standalone or with Lotus::Router and it implements the Rack protocol.

Actions

The core of Lotus::Controller is the action. An action is an HTTP endpoint. This is the biggest difference from other frameworks, which use huge classes as controllers. Think of Rails, where a single controller is responsible for many actions and holds too much information. Lotus is simple: one class per action.

require 'rubygems'
require 'lotus/controller'

class Show
  include Lotus::Action

  def call(params)
    @article = Article.find params[:id]
  end
end

With this design I wanted to solve some annoying problems.

An action is an object whose ownership belongs to its author. She or he should be free to build their own hierarchy between classes. Lotus offers Ruby modules to include, instead of superclasses to inherit from.

Smaller classes are highly cohesive components, where the instance variables have a strong relationship with each other. This level of isolation prevents accidental data leaks and means fewer moving parts.

A tiny one-method API makes Lotus::Controller straightforward to use. Its argument (params) makes it easy to integrate with existing Rack applications, and it automatically returns a serialized Rack response.

A side benefit of this architecture is taking control of how an action is instantiated.

require 'rubygems'
require 'lotus/controller'

class Show
  include Lotus::Action

  def initialize(repository = Article)
    @repository = repository
  end

  def call(params)
    @article = @repository.find params[:id]
  end
end

action   = Show.new(MemoryArticleRepository)
response = action.call({ id: 23 })

assert_equal response[0], 200

In the example above we define Article as the default repository, but during testing we use a stub. In this way we can avoid hairy setup steps for our tests and avoid hitting the database. Also notice that we're not simulating HTTP requests, but only calling the method that we want to examine. Imagine how fast a unit test like this can be.
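The MemoryArticleRepository stub is left undefined above; a minimal hand-rolled version (hypothetical, just for this example) only needs to respond to .find:

```ruby
# A minimal in-memory stand-in for the Article repository used in the
# test above; anything responding to .find with the right shape will do.
Article = Struct.new(:id)

class MemoryArticleRepository
  RECORDS = { 23 => Article.new(23) }

  def self.find(id)
    RECORDS.fetch(id)
  end
end

article = MemoryArticleRepository.find(23)
article.id # => 23
```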

Exposures

Instance variables represent the internal state of an object. From an outside perspective we don't know what that state is. The simplest and recommended way to get this information is to ask for it. This mechanism is called encapsulation, and it's one of the pillars of Object Oriented Programming.

The instance variables of an action are necessary for building the body of an HTTP response. While we're creating that result from inside an action, we can access this information directly. External objects can retrieve it with getters, which are defined with a simple DSL: #expose.

require 'rubygems'
require 'lotus/controller'

class Show
  include Lotus::Action

  expose :article

  def call(params)
    @article = Article.find params[:id]
  end
end

action = Show.new
action.call({ id: 23 })

assert_equal 23, action.article.id

puts action.exposures
  # => { article: <Article:0x007f965c1d0318 @id=23> }

No Rendering, Please

Lotus::Controller helps to build pure HTTP endpoints; rendering belongs to other layers of MVC. It provides a private setter for the body of the response.

require 'rubygems'
require 'lotus/controller'

class Show
  include Lotus::Action

  def call(params)
    self.body = 'Hello, World!'
  end
end

Views and presenters can manipulate the body of the returned response.

require 'rubygems'
require 'lotus/controller'

class Show
  include Lotus::Action

  expose :article

  def call(params)
    @article = Article.find params[:id]
  end
end

action      = Show.new
response    = action.call({ id: 23 })
response[2] = ArticlePresenter.new(action.article).render

Other features

Lotus::Controller offers a set of powerful features: callbacks, automatic management of exceptions and MIME types, plus support for redirects, cookies, and sessions. They are explained in detail in the README and the API documentation.
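To give a taste of the callback idea, here is a plain-Ruby sketch of the mechanism (not Lotus's actual implementation; the class names, halt! helper, and response shapes are invented for illustration — see the README for the real API):

```ruby
# A toy before-callback mechanism, mimicking how a one-method action
# could run checks before its body executes.
class Action
  def self.before(&block)
    callbacks << block
  end

  def self.callbacks
    @callbacks ||= []
  end

  def call(params)
    # Run each registered callback in the instance's context
    self.class.callbacks.each { |cb| instance_exec(params, &cb) }
    halted? ? [401, {}, ['Unauthorized']] : handle(params)
  end

  private

  def halt!
    @halted = true
  end

  def halted?
    @halted
  end
end

class Show < Action
  before { |params| halt! unless params[:token] == 'secret' }

  def handle(params)
    [200, {}, ["Article #{params[:id]}"]]
  end
end

Show.new.call(token: 'secret', id: 23) # => [200, {}, ["Article 23"]]
Show.new.call(id: 23)                  # => [401, {}, ["Unauthorized"]]
```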

Roadmap

On March 23rd I will release Lotus::View.

To stay updated with the latest releases, and to receive code examples, implementation details, and announcements, please consider subscribing to the Lotus mailing list.
