Writing Stupid Simple Unit Tests

In this post, we'll cover the basics of unit tests and walk through the process of determining what types of tests you should write for your code. We'll also write a few tests to show just how simple most unit tests are.

The aim here is to make tests a bit more familiar and less daunting and mysterious. Unit tests are good. I'm by no means an expert on software testing. I'm just someone that's discovered the joy and freedom that writing tests provides.

If you are an expert on software testing and notice I'm wrong, please tell me!


What are unit tests?

If you’re not familiar with unit tests, they’re essentially software written to test discrete units of functionality in a system.

Unit tests test the expected behavior of your software as it exists (or existed) at a specific point in time. They make sure that everything works as expected based on the rules established for the software at the time it was written.

If those rules change -- and your software along with them -- your tests will fail. When they fail, you'll gain insight into how the change affects the rest of the system.

For example, if a change is made in code for your site’s user profile management, you'll want to make sure that the change does not affect the behavior of your site’s payment processing code.

Tests can save time and money by acting as the first line of defense in catching problems before they result in a late night emergency or catastrophic failure. When things go wrong, they can also serve as the first response team. You can run your unit tests to see if any rules/expectations have been broken by some recent change, either in your code or some external API dependency, and fix the bug introduced by the change.

Sometimes the tests become outdated. Again, the tests will fail, letting you know that you’re no longer testing for the right thing. When this happens, you should update your tests to make sure they are in line with the new rules for your software.

Getting started

Unit tests are rarely difficult to write or run if you’re using a testing framework (e.g., PHPUnit).

You can write your tests before you write your business code if Test-driven Development (TDD) is your thing, or you can write them after.

It doesn’t matter much when you write them as long as you have them.

The brilliant thing is that almost all tests you write will be stupid simple.

Tests usually assert some condition is true (or false, depending on what outcome you expect).

On top of that, most testing frameworks have convenience functions built in for performing these types of tests and several others, including testing the expected output of a function (i.e., what the function prints to the screen).

The three test functions we've used most frequently when writing tests for Largo are assertEquals, assertTrue and expectOutputString.
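Stripped of framework sugar, those helpers boil down to plain comparisons and output capture. Here's a rough illustration in plain PHP (this is not the framework's actual implementation, just the idea behind each helper):

```php
<?php
// assertEquals boils down to comparing an expected and an actual value.
assert("Hello world!" === "Hello " . "world!");

// assertTrue boils down to checking that a condition holds.
assert(strlen("Hello world!") > 0);

// expectOutputString boils down to capturing echoed output and comparing it.
ob_start();
echo "Hello world!";
$output = ob_get_clean();
assert($output === "Hello world!");
```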

Our stupid simple example plugin

So, let’s say that we want to write tests for a WordPress plugin. Our plugin simply prints the message “Hello world!” in the top corner of the WordPress dashboard (sound familiar?).

This is the extent of the code:

<?php
/**
 * @package Hello_World
 * @version 0.1
 *
 * Plugin Name: Hello World!
 * Plugin URI: https://gist.github.com/rnagle/d561bd58504a644e9657
 * Description: Just a simple WordPress plugin that prints "Hello world!" in the top corner of the WordPress dashboard.
 * Author: Ryan Nagle
 * Version: 0.1
 */

function hello_world() {
	return "Hello world!";
}

function hello_world_markup() {
	echo "<p id='hello-world'>" . hello_world() . "</p>";
}
add_action('admin_notices', 'hello_world_markup');

function hello_world_css() {
	$x = is_rtl() ? 'left' : 'right';

	echo "
	<style type='text/css'>
	#hello-world {
		float: $x;
		padding-$x: 15px;
		padding-top: 5px;
		margin: 0;
		font-size: 11px;
	}
	</style>";
}
add_action('admin_head', 'hello_world_css');

Couple of things to note:

  1. We have one function -- hello_world -- that returns a string.
  2. Two other functions -- hello_world_markup and hello_world_css -- that echo strings but have no return statements. This means we'll have to check the output of these functions rather than the return value to properly test them.
  3. Note that hello_world_markup relies on hello_world to provide the “Hello world!” message.
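Point 2 matters because you can't assert on a return value that doesn't exist. A small sketch of the difference (shout() is an invented stand-in for hello_world_markup()):

```php
<?php
// Stand-in for a function like hello_world_markup(): echoes, returns nothing.
function shout() {
	echo "<p id='hello-world'>Hello world!</p>";
}

// Buffer the output so nothing actually prints during the check.
ob_start();
$returned = shout();
$output = ob_get_clean();

// The return value is useless for testing...
assert($returned === null);

// ...but the echoed output can be captured and inspected like any value.
assert($output === "<p id='hello-world'>Hello world!</p>");
```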

So, how do we test this plugin? Well, if you don’t have your test framework installed and configured, you’ll want to get that ready and raring. The details are beyond the scope of what we'll cover here, but for an overview of setting up unit tests for WordPress plugins and themes, see Will Haynes’ Unit Testing Themes and Plugins in WordPress.

The INN Nerds also have a collection of deploy tools to make developing WordPress sites and writing tests much easier. You can read about using the deploy tools for testing themes and plugins here: Updates to INN’s Deploy Tools.

Disclaimer: getting unit tests working with WordPress can be daunting at first. It’s a lot to digest and may take some time to set up if you’re coming to this cold, so be patient. It’ll be worth it.

If you’re really dedicated to the cause and need a hand, reach out to us.

How do I write a unit test?

With that out of the way, let’s design some unit tests for our “Hello World!” plugin.

The first thing we’ll want to do is enumerate the features of the code which we need to test.

There are many ways you can approach this. My preference is to create a single test file per file in my source code and have my tests directory mirror the structure of my source directory.

Each test file has tests for each function (or member functions of classes) in the corresponding source file.

So, the directory structure for our plugin would look like this (assuming the plugin lives in a directory named hello-world):

hello-world/
    hello-world.php
    tests/
        test-hello-world.php

We’ll be doing our work in test-hello-world.php where we’ll set up the skeleton of our test case, which is as simple as extending WP_UnitTestCase:


class HelloWorldTest extends WP_UnitTestCase {}

We can then stub out test functions for each function:

class HelloWorldTest extends WP_UnitTestCase {
  function test_hello_world() {
    // Test hello_world()
  }

  function test_hello_world_markup() {
    // Test hello_world_markup()
  }

  function test_hello_world_css() {
    // Test hello_world_css()
  }
}
Now, let's look at each function and consider what we're testing:

1. For hello_world, we want to verify that the string returned is "Hello world!":

function test_hello_world() {
  $this->assertEquals("Hello world!", hello_world());
}

Easy enough. Now if for some reason someone changes the return value to "Hello World!" -- capitalization be damned -- the test will fail.

I can hear you now, "This is stupid, no one writes code like this." Yes, the example is stupid and that's the point. Don't get caught up focusing on the wrong thing.

It makes no difference how complex the function is, the only concern is verifying that the return value or the outcome is what is expected.

So, if instead you're testing some_made_up_function which returns an object, you may want to verify that it is actually a PHP Object:

$this->assertEquals(gettype(some_made_up_function()), "object");

Or that the object has a specific member attribute:

$test_obj = some_made_up_function();
$this->assertTrue(property_exists($test_obj, 'name_of_attribute_goes_here'));

2. For hello_world_markup, we want to verify that the function prints "<p id='hello-world'>Hello world!</p>":

function test_hello_world_markup() {
  $this->expectOutputString("<p id='hello-world'>Hello world!</p>");
  hello_world_markup();
}

Notice that we're expecting "Hello world!" to be part of the output. This might not be a good thing. If the return value of hello_world changes, this test will fail, too.

For the sake of example, let's say we only care about testing the markup and not the message. We can take this test a step further and decouple the two so that we're only testing the markup by changing it to read:

function test_hello_world_markup() {
  hello_world_markup();
  $result = ob_get_contents();

  $this->assertTrue((bool) preg_match("/^<p\s+id='hello-world'>.*<\/p>$/", $result));
}

Essentially what we're saying is, we don't care what the message is as long as it is wrapped in a <p id='hello-world'> tag.
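You can convince yourself the pattern behaves that way by running it against a few sample strings outside the test framework (the sample strings here are made up):

```php
<?php
// Sanity-check the markup pattern against sample strings.
$pattern = "/^<p\s+id='hello-world'>.*<\/p>$/";

// Any message inside the right wrapper matches...
assert(1 === preg_match($pattern, "<p id='hello-world'>Goodbye world!</p>"));

// ...but the wrong tag, or a missing id, does not.
assert(0 === preg_match($pattern, "<p>Hello world!</p>"));
assert(0 === preg_match($pattern, "<div id='hello-world'>Hello world!</div>"));
```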

Simple, right?

3. For hello_world_css, we want to verify that the function prints the CSS rules for our "Hello World!" markup:

function test_hello_world_css() {
  hello_world_css();
  $result = ob_get_contents();

  // Make sure there are style tags being printed. Duh.
  $this->assertTrue((bool) preg_match("/<style.*?>.*<\/style>/s", $result));

  // Make sure we're using the right selector
  $this->assertTrue((bool) strpos($result, '#hello-world'));
}

And with that, we're done! You can see the entirety of test-hello-world.php here.

When to write tests

As mentioned earlier, you may want to write your tests first. This is called Test-driven Development.

Writing your tests first has lots of awesome benefits. When you write tests first you are forced to think about the design of your system and how best to structure the software so that it is actually testable. It’s a good practice and will help increase the orthogonality of your code.

However, it's never too late to start writing tests. When you write tests for existing code, you'll find places where things need to be refactored (sometimes completely rewritten or redesigned) to clean up the spaghetti code.

If you really can't afford the time it takes to write tests for all your code (or aren't being allotted the time to do so), you might still be able to lobby for time.

It's extremely important that you can verify your code works, especially in cases where you're using code to make an assertion about the world. For example, if you're writing code that processes a data set which will later be used in a story or investigation, it's crucial that your code produces results that are precise and correct.

The other case where you should absolutely be writing tests is for critical customer interactions. Saving sensitive information or processing payment information are things that you don't want to get wrong.

Updates to INN’s Deploy Tools

We've made a lot of changes and added a bunch of new commands to our deploy tools since the last time we checked in.

Most notably, we've added commands to help scaffold and run unit tests for your plugins AND your themes. Behind the scenes, our scaffolding and tests work as described by Will Haynes in this post.

Other changes include migration to Fabric's new-style task declaration and updated documentation.

Unit testing tools

To scaffold/set up tests for your theme:

$ fab dev wp.tests.setup:largo-dev

This command:

  • Drops and recreates your testing database (which should always be throw-away, ephemeral)
  • Copies essential test files (e.g. phpunit.xml and a tests directory with test-sample.php and bootstrap.php files) to your theme or plugin directory if they are not already present
  • Installs the WordPress test framework to /tmp/wordpress-tests-lib/includes
  • Configures a wp-tests-config.php file and places it in /tmp/wordpress-tests-lib

With those files in place, you can run tests for your theme or plugin:

$ fab dev wp.tests.run:largo-dev

Example output:

(Screenshot: example output from a test run.)

Namespaces and aliases

The other big change we made was moving all commands to Fabric's new-style task declaration. Using new-style tasks gives us all the benefits described in Fabric's documentation on the subject.

While the deploy tools are not taking advantage of every feature of new-style tasks, they are better organized thanks to namespacing and aliases.

Here's some example output from fab -l:

(Screenshot: output of fab -l, listing available tasks.)

Here you can see each module (and submodules) along with defined tasks. Tasks use dot-syntax, which should feel more explicit and intuitive if you're familiar with Python (which is what Fabric is built on).

Also note that the command wp.migrations.s2m is an alias for the much-longer, sometimes-harder-to-remember command wp.migrations.single_to_multisite.

I'm very lazy, so anytime I can save some mental energy and type a bit less, I consider it a win. This is as true for commands I use very frequently as it is for commands I use once in a great while.


We also expanded the deploy tools' documentation to include all commands. You can find the documentation on Github.


Remember, the deploy tools were built with our situation in mind. We deploy to WP Engine, so some stuff is very WP Engine specific.

For example, the wp.fetch_sql_dump command only knows where WP Engine stores recent SQL dumps. The wp.maintenance.start and stop commands assume you are deploying to a host that uses Apache and that you have a .htaccess file in the root of your FTP server.

With that said, much of the toolkit will work with any host provided you have (S)FTP access, including the push-button deployment rig I wrote about previously.

If you are using or thinking about using the deploy tools, have questions or suggestions, let us know.

Stealing the NPR App Template for Fun and (Non-)Profit

What we learned building an election app for INN members with the NPR visuals app template

Last week, just in time for the election, our team at the Investigative News Network (INN) launched Power Players — a state-by-state exploration of campaign finance and top political donors across the country. The project is a collaboration between thirteen INN member organizations who did their own reporting and analysis of the data we provided to them. To support this reporting, our team built a national app with easy to embed components.

As this was one of the first editorial projects we built as a team, we decided to start things off on a solid foundation by creating an app template — something that contains a library of components we might want to reuse for future projects and allows us to create those projects more easily.

Fortunately for us, NPR’s Visuals team has generously open sourced their app template, which we used as the foundation for our own. We shamelessly stole NPR’s code and set up an INN-specific template by following the steps outlined in Tyler Fisher’s excellent post “How to Setup the NPR App Template for You and Your News Org.”

It was a (mostly) painless experience but we learned some things along the way and wanted to share our experience to help others who might tackle this process in the future.

Why use NPR’s app template?

For one, we didn’t want to build our own toolkit for deploying news apps from the ground up.

When someone (or some team) builds and open sources a tool that addresses a problem you’re facing, using it will almost certainly save you time, money, blood, sweat, tears and heartbreak. Do yourself a favor and try really hard to seek out and use stuff that other smart people have already built.

The other motivating factor was that we’d never had the chance to use NPR’s app template and a number of us had been curious. Since this was INN’s first news app and we had a short amount of time to get something up and running, we thought this might make the perfect opportunity to take it for a test drive.

Setting up the app template for INN

Tyler acknowledges in his post that the process “seems like a lot.” But we discovered it’s just a matter of knowing where to find all the NPR bits so you can make the template your own.

In fact, it took just seven commits to completely scrub NPR specific stuff from the app template.

For your reference, here is our fork of the NPR app template and the code for the Power Players app itself.

Building with the app template

The structure of the Power Players app

Our project concept was relatively simple. There are a total of five views and their corresponding templates.

An example of an individual power player embed.

They consist of:

One of the goals for us was to create a resource that any of our member organizations could use to bolster their coverage of the elections — either by hosting information for an entire state, or including individual power player cards in articles covering campaign finance and the election.

Thus the embeddable versions of the state and power player pages. These are essentially the same as the normal templates, with a simple full-width layout and simplified INN branding (a credit link at the bottom of each embed).

The Kentucky Center for Investigative Reporting is one INN member making great use of the app. Here is a complete list of all the member organizations participating in the project and the stories they've published (so far).

The entire project is responsive (thanks to pym.js, yet another NPR project), made to look great no matter what size container the embeddable elements get placed in.

Snags and snafus

In working with the NPR app template we encountered some things that aren’t well documented (yet).

CSS and JS pseudo-tags

Tyler covers this briefly in another blog post on the app template.

What wasn’t clear at first was how the JS and CSS pseudo-tags interact across templates.

We ran into a problem where, in separate templates, we “pushed” or queued different sets of javascript files to be rendered. In both templates, we were passing the same file name to the render function, which resulted in the second file overwriting the first when deploying.

Here’s what NOT to do:

{# Template file no. 1 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/underscore.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_1.js') }}
{{ JS.render('js/app-footer.min.js') }}
{# Template file no. 2 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_2.js') }}
{{ JS.render('js/app-footer.min.js') }}

Once you realize that JS.render outputs a file, the contents of which are determined by preceding calls to JS.push, you realize that having different calls to JS.push before rendering to the same file just won’t work.

In this case, if template 2 is rendered after template 1, “js/app-footer.min.js” will be missing “underscore.js”, potentially breaking functionality in template 1.

Do this instead:

{# Template file no. 1 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/underscore.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_1.js') }}
{{ JS.render('js/app-footer-1.min.js') }}
{# Template file no. 2 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_2.js') }}
{{ JS.render('js/app-footer-2.min.js') }}

By making the filename passed to JS.render unique to each template, we can be sure we’re not clobbering any javascript files.

Flask’s url_for function and your project’s path prefix

Another issue we encountered was that the app template, using Flask’s default url_for function, doesn’t take into consideration your project’s path. That is, when you deploy your app to S3, it is meant to live at something like http://apps.yourdomainname.org/project-slug/ whereas the development server uses something like http://localhost:8000/ without the project slug.

For example:

<a href="{{ url_for('some_view') }}">Hey, a link to a page</a>

Renders as:

<a href="/some-view/">Hey, a link to a page</a>

What we want when deploying to staging or production is a URL that includes the project slug:

<a href="/project-slug/some-view/">Hey, a link to a page</a>

To remedy this, we created an app_template_url_for function to replace Flask’s standard url_for. The app_template_url_for figures out the current target environment (i.e. development, staging or production) and inserts the project slug as necessary.

View the source code here and here.

Another change we made to INN’s version of the app template is modifying the Flask app’s static_folder:

app = Flask(__name__, static_folder='www/assets')

View this change in context here.

What this does is allow you to use url_for to build urls for static assets kept in www/assets.

<link rel="shortcut icon" href="{{ url_for("static", filename="icons/favicon.ico") }}" />

This provides the flexibility to include assets outside of the CSS and JS pseudo-tag framework if you find yourself with the need or desire to do so.

Working with Member Organizations

The Power Players project presented other challenges, too, because of the number and diversity of INN member newsrooms participating.

First, the data needed to be analyzed and populated in the app template for about a dozen states. Campaign finance data is notoriously dirty (especially when trying to look at individual donors, as we were), plus we were combining data from different sources, so the cleaning and analyzing stage took longer than expected. Some member organizations also wanted more custom analysis, such as looking at local data only rather than statewide data as most members were. This forced us to make some compromises in the design of the app like using the state outline for California but labelling it “San Diego.”

A project such as this also takes a great deal of communication. INN’s director of data services, Denise Malan, held a kick-off conference call with members and stayed in contact with them as she was analyzing the data and the tech team was designing and building the app. We also provided members with instructions on how to add photos and customized content to the app while it was still a work in progress.

As is the case with all our editorial collaborations, INN members can publish stories according to their own timetables, and several ran their stories before the app was finished because we were working right up to the deadline on Election Day. Others have yet to publish because they choose to wait for updated data with post-election numbers.

Ideally, we would love for all members participating in a project to use our work, and we learned some valuable lessons in the process of building our first editorial app.

In future projects, we would like to have mockups of the news app ready as early as possible so our members are able to visualize what we will be providing and how this will fit with their reporting. We also want to provide a firmer deadline for launching an app so members can plan their publishing dates accordingly and leave time to implement the stuff we build. We’ll also do a better job providing the partners with documentation to make it as easy as possible for them to adopt our work.


We learned a lot in the process of building our first app, both about the NPR app template and also what it takes to manage a complex project working with so many partner organizations.

Will we use our fork of the NPR app template for everything? Probably not. We’ll continue to experiment and try out different things before settling on our default set of tools and templates. For projects where it’s a good fit or where we need to deploy something quick and easy, we definitely plan to use it as a solid starting point in building future apps.

Since this is our first app and we’re still learning, we’d love to hear your thoughts and feedback. You can find our team on Twitter @INNnerds or send us email at nerds@inn.org.

How We Make Remote Work Work

The INN nerds are a distributed team, which means we work in different locations and time zones and sometimes in our pajamas. Working from home full time also means we have to think intentionally about communication, structuring our days and setting boundaries around our work.

Here are a few of the tips and resources we've found helpful as we learn how to work effectively as a remote team.

Use Your Words

Remote work requires varsity-level communication. Not only do you need to keep your colleagues informed about what you're working on, but you also need to speak up about challenges and frustrations. Lacking the ambient awareness of a shared physical space, your coworkers can't see if you're struggling or confused, or if you have bruised feelings because of a miscommunication. It's on you to proactively reach out and address things sooner rather than later.

To help facilitate this sort of openness, we start every day with a short standup meeting using Google Hangouts. This allows us to review what we're working on, set priorities and address obstacles. Our weekly team meeting gives us space to discuss the bigger picture stuff, review current projects and set priorities for the following week.

We use HipChat, Google Hangouts and other tools to stay in touch throughout the day. (See the full list of our favorite tools below.)


Remote work offers the flexibility of setting your own schedule, but it also means you can feel like you're working 24/7. We think it's important to set boundaries around our work each day. Work reasonable hours. Don't send or expect responses to non-emergency email after hours. When working in different time zones, don't feel bad about reminding a team member that a late afternoon meeting in their time zone might be well into the evening for you. Taking time to not work makes our working time more productive.

During the day, it's all too easy to get sucked into our screens and not blink for hours on end. To counteract this sort of faux-productivity, we take a lot of walks and other short breaks away from our screens. Snacks and coffee are important, too.


Building habits and routines can help prevent feelings of disconnect or isolation — and feelings of wanting to stay in bed and catch up on The Vampire Diaries. The advice is almost rote at this point, but for good reason: Have a dedicated space in your home where you "go to work." Take a shower. Put on pants.

Sometimes a change of scenery can help reset your focus. Take advantage of coworking spaces in your town or the tried-and-true coffee shop work session.

And most importantly: If something isn't working, or you start to struggle with lack of direction or motivation, speak up. You may work by yourself but you're not alone.

Face Time

While we're primarily remote workers, we like to see each other's faces in person a few times a year. There are some things that are just easier to do when we're all in the same room. These IRL meetups are essential for keeping us connected as a team. We tackle long-term planning and major projects, and build camaraderie over good meals and music and conversation.

Resources and Tools

  • There are great remote work tips in this Source article by Christopher Groskopf
  • Helpful tips on remote productivity
  • The books on remote work
  • HipChat: We use this as our group chat tool and always-on back channel
  • GitHub: For versioning and hosting our open source projects
  • Bitbucket: Versioning and hosting for Largo child themes for all the member sites we host. This allows us to add devs at member organizations to a repo just for their child theme so they can commit changes to a theme for us to review and push to production.
  • JIRA: For project management, planning sprints and iteration cycles, time tracking, and service desk features
  • Bee: Combines issues and tickets from JIRA, GitHub and others into a streamlined interface. Also offers time tracking and task prioritization.
  • Screenhero: Remote pair programming software
  • Google Hangouts: For meetings and daily scrum
  • Dropbox: For file sharing
  • 1password: For password management (synced to everyone's computers/devices using Dropbox)

Also, make sure to check out our growing list of tools and services (including many of the above) that offer discounts to nonprofits.

Real Life

We can spout best practices and advice all day, but the real life application can vary drastically depending on barometric pressure, how early you woke up to a crying baby (or puking cat), and countless other variables. One of the joys of remote work is that it makes room for the realities of life. As a team, we're choosing to trust each other and extend enough flexibility that a work day doesn't have to be an immutable cage.

[Full disclosure, I wrote this post in pajamas, curled up on my couch under a blanket.]

To illuminate more of the real-life applications of remote work advice, we'll soon be launching an occasional series of interviews with remote workers — starting with our own team — that will explore how different people make remote work work: what their set-up looks like, how they structure their days, and what they do in the face of frustrations or flagging motivation. We hope to collect honest portrayals of our modern working life and learn from each other in the process. Watch for more in this space soon.

Want to share your remote work experiences? Get in touch at nerds@inn.org.

Batch Processing Data With WordPress via HTTP

For a recent project, we found ourselves in need of a way to verify the integrity of an entire site’s worth of posts after a mistake in the initial migration meant some of the posts in the database were truncated/cut-off.

Our options were limited. Since our host (WP Engine) doesn’t offer shell access and we can’t connect directly to our database, we would have to write a script to accomplish this via HTTP requests.

We wanted something that was at least semi-unattended. We’d also need to be careful to avoid exhausting PHP’s memory limit or running up against any PHP or Apache timeout settings.

Given the constraints, processing of posts would have to happen over several HTTP requests.

Enter batchProcessHelper, a class that provides ways to accomplish all of the above.

How it works

You can find all the details on usage at the project’s Github page.

batchProcessHelper is meant to be extended. There are two methods you must override: load_data and process_item.

The first method, load_data, is responsible for accessing the data you plan to process and returning an array of serializable data that will be the data queue. The array can consist of objects or associative arrays or be something as simple as a list of IDs or range of numbers. This is entirely up to you.

The only constraint is that load_data must return a data structure that can be serialized for storage in the database as a WordPress transient. This will be your data queue.
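A quick way to sanity-check that whatever load_data returns will survive transient storage is to round-trip it through PHP's serializer, since that's how structured transient values get stored (the sample queue data here is invented):

```php
<?php
// Hypothetical queue data: the kind of array load_data might return.
$queue = array(
	array('user_login' => 'jdoe',   'user_email' => 'jdoe@example.org'),
	array('user_login' => 'asmith', 'user_email' => 'asmith@example.org'),
);

// Structured transient values are serialized before storage,
// so the round trip must preserve the structure exactly.
$restored = unserialize(serialize($queue));
assert($restored === $queue);
```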

The second method, process_item, is where you put the business code to process individual items in the queue.

This method should return true if it succeeds in processing an item, false if it fails.

How to use it

Here’s a super simple example of a class extending batchProcessHelper. You can see and learn how to run the full example on the project Github page.


class userImportProcess extends batchProcessHelper {

    function load_data() {
        return csv_to_array(ABSPATH . 'example_data.csv');
    }

    function process_item($item) {
        $this->log('Processing item: ' . var_export($item, true));
        return true;
    }
}
This example doesn’t actually do anything. In load_data, we simply load the example data from csv, converting it to an array of associative arrays.

The process_item method logs each of the associative arrays to the batchProcessHelper debug/error log.

However, this code could easily be modified to create a new user for each row in example_data.csv.

function process_item($item) {
    $result = wp_insert_user($item);
    if (!is_wp_error($result)) {
        $this->log('Successfully created new user: ' . $item['user_email']);
        return true;
    }
    return false;
}

Instantiating and running

So, we’ve defined our userImportProcess class, perhaps in a file named userImportProcess.php. How do we run it?

$process = new userImportProcess(array(
    'blog_id' => 99,
    'batch_size' => 10,
    'batch_identifier' => 'User Import',
    'log_file' => '/path/to/your/file.log'
));


The only requirement is a batch_identifier — a name for your process.

Optionally specify the batch_size. If you don’t, batch_size defaults to 10.

Also, optionally specify a log_file — a path where debug and error messages will be sent. If you don’t specify log_file, batchProcessHelper will try to write one in /tmp.

If you want your process to run in the context of a specific blog, specify a blog_id as well.

Finally, call the process method to start.

Where do I put this stuff?

You’ll need to create a new directory in the root of your WordPress install. It doesn’t matter what the directory is called; it just has to be in the root of your WordPress install.

Let’s say you create a directory named wp-scripts. You’ll want to put batchProcessHelper.php and userImportProcess.php in that directory.

Also, if you’re loading data from the filesystem (as in the example above and on the project’s Github page), you’ll want to place the data file in the appropriate location.


Visit userImportProcess.php in your browser.

If all goes well, you should see a message along the lines of:

“Finished processing 10 items. There are 390 items remaining. Processing next batch momentarily…”

At this point, the page will refresh automatically, kicking off work for the next batch.

Once all batches are done, you’ll see the message:

“Finished processing all items.”

It should all go something like this:


If you have a multisite install, you'll need a Super Admin account to run code that extends batchProcessHelper. If you have a standalone install, you'll need an account with Administrator privileges.

Also, if you plan on keeping your wp-scripts directory around for any length of time, you should consider allowing only a select few IP addresses to access the directory. Refer to the project's README for more information.

Unit Testing Themes and Plugins in WordPress


Over the past few weeks, we’ve been investigating the best way to incorporate a WordPress testing framework into our development process. With few developers out there writing tests for their plugins (and even fewer testing themes), we want to share our tribulations as we figure the process out.

Fortunately, WordPress ships with and supports a unit testing framework. Unfortunately, this framework is designed mainly for development on WordPress core and not directly for WordPress plugin and theme developers. However, with enough prodding, it is possible to write your own tests on top of this library.

Step one: Installing PHPUnit

The PHPUnit library provides a base framework for writing tests in PHP. WordPress develops its own testing platform on top of this library.

Assuming you’re on a *nix system, installing PHPUnit is as simple as:

wget https://phar.phpunit.de/phpunit.phar
chmod +x phpunit.phar
mv phpunit.phar /usr/local/bin/phpunit

See the PHPUnit installation notes for installing the framework on Windows or including it with composer.

Step two: Pull in the WordPress framework

Before we go any further, I should note that wp-cli ships with a handy command that does a lot of this grunt work for you. However, it currently supports tests for plugins only (not themes). If that’s all you need, it’s the fastest way to get started, but the steps below should give you a better understanding of what’s happening behind the scenes and give you a setup that can be customized for your specific development environment.

WordPress bundles its testing framework in its core development tools svn repo. We can download just the tools we need by checking out the includes directory. It’s up to you where you keep your testing library; for the sake of this tutorial, we’ll assume you check it out into your root WordPress directory.

cd <wproot>
svn co http://develop.svn.wordpress.org/trunk/tests/phpunit/includes/

You’ll need a second WordPress config file named wp-tests-config.php as defined here:


Configure the database details as you would a normal wp-config.php. It’s important to use a secondary database, as WordPress drops and recreates tables as part of its tests. In addition, ensure that ABSPATH is pointed to the correct WordPress install to use during testing.

Step three: Hook it all together

What we do now is create a phpunit.xml file. This file is what PHPUnit uses to load the testing environment and run its tests. It doesn’t matter where you create this file, but for the sake of this tutorial we’ll add it to the WordPress root directory (the same place we downloaded the WordPress test framework).

			<directory prefix="test-" suffix=".php">./tests/</directory>
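For context, that <directory> element belongs inside PHPUnit's testsuite configuration. A minimal phpunit.xml along these lines should work; the bootstrap path assumes the includes directory was checked out into the same WordPress root as described above, so adjust the paths for your layout:

```xml
<phpunit bootstrap="includes/bootstrap.php">
	<testsuites>
		<testsuite name="default">
			<directory prefix="test-" suffix=".php">./tests/</directory>
		</testsuite>
	</testsuites>
</phpunit>
```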

Now, if you run PHPUnit from the command line —

cd <wproot>
phpunit

— you should see the test suite output on the command line.


If errors are thrown, check that your paths all point to the correct place.

Step four: Building tests

What we’ve got so far is the framework that ships with WordPress. Our phpunit.xml file specifies a bootstrap.php file to load and install WordPress, then looks in ./tests/ for files matching test-*.php to run as tests. Since we don’t have any tests yet, none are executed.

This base configuration is only running tests on WordPress core functions. In order to test our own plugin and theme functions, we must create our own bootstrap.php file to specify which theme and plugins to activate before installing WordPress. Create a new file called ./tests/bootstrap.php. In this file, include the following:


require_once dirname( dirname( __FILE__ ) ) . '/includes/functions.php';

function _manually_load_environment() {
	// Add your theme …

	// Update array with plugins to include ...
	$plugins_to_active = array(
		// ...
	);

	update_option( 'active_plugins', $plugins_to_active );
}

tests_add_filter( 'muplugins_loaded', '_manually_load_environment' );

require dirname( dirname( __FILE__ ) ) . '/includes/bootstrap.php';

Update your phpunit.xml file to include this new bootstrap file, instead of the WordPress default.

Now you have everything you need to write unit tests on your theme. Create a new file named ./tests/test-sample.php and write your first test:

class SampleTest extends WP_UnitTestCase {
	function testSample() {
		$this->assertTrue( 'Your Theme' == wp_get_theme() );
		$this->assertTrue( is_plugin_active( 'your-plugin/your-plugin.php' ) );
	}
}

Running phpunit now, you should see the following:


That’s it! You’re now able to write your own test functions to test your code.

Final words

It’s important to note that while this is one way to organize your testing code, it might not be the best for your development environment. You might, for example, prefer to keep your test suite (./tests/) in the root of a theme or plugin directory, so that it is included in version control. Or you might want to keep tests completely outside of the WordPress directory.

Diving deeper, you might want to activate and deactivate plugins directly in the testing classes to test what happens when plugins aren’t loaded. You might want to maintain multiple test suites for different plugins or themes.

Lastly, now that you are set up to write unit tests, you might be asking how you should go about writing a test. We’ll cover some unit testing best practices over the coming weeks as we write our own tests for our codebase. For now, take a look at some test classes that WordPress has written to get a feel for how tests are built.

How To Hold An Event Using Google Hangouts That Anyone Can Attend

After scrambling to find a way to set up a public Google Hangout for our last Book Club meeting, I thought it might be helpful to share my initial frustrations and how I finally managed to set it up.

First, I tried creating a new Google Hangout via Google Plus, inviting the public and saving the URL to return to later. Turns out that Hangouts created this way are ephemeral. After about five minutes with no attendees, the Hangout ends.

So the second thing I tried was adding an event to my Google calendar, since I knew that calendar events with video calls (Google Hangouts) attached are persistent (we use the same Google Hangouts link for our scrum every day). Unfortunately, I found that I was unable to invite the general public to a Google Calendar event.

The third thing I tried, after stumbling upon this documentation, was creating a Google Plus event. Not only was I able to create a public Google Hangout with a persistent link, but I also got a place where people can RSVP and/or find more info about the event.

From what we've been able to tell, you're still limited to ten attendees on video at any one time and 100 concurrent users total in the chat, but this at least gives you a link you can share with anyone in advance of the event and a place where people can RSVP.

I created a quick how-to for this so that I don’t have to spend time head-scratching next time I need to schedule an event. You can find the complete how-to in the INN docs repo on Github.

Hopefully, you find it helpful, too.

Showing How We Work

We're excited to announce the release of a new collection of documents that show how our team works.

In putting this collection together, we wanted to go beyond a style guide (also important, and we're working on that, too) to try to explain as much as possible about the process and values that inform our work: What makes our team unique, how we recruit and hire new team members, our internal process, and how outside members or clients interface with our process.

Opening up our process has a number of benefits for us as a team and, we hope, for others, as well.

Codifying existing processes in one place makes them easier to reference and helps keep the team on the same page. It allows new hires to get up to speed faster and gives prospective employees insight into how we work, our mission and values, and whether working with us would be a good fit.

It also helps external partners, like INN members and our consulting clients, learn how to work with us most effectively.

Above all, we hope that collecting this information in one place will be useful to other organizations who are building and managing similar teams.

This is especially important for us because INN's mission includes a strong educational component and we want to do everything we can to help our members make smart technology decisions.

By showing not only our process, mission and values, but also expanding the scope of this collection to include things like job descriptions, how we run meetings and the tools we use to work effectively as a remote team, we are attempting to create a sort of "missing manual" for running a news apps and technology team.

We hope, over time, to make this manual even more comprehensive as we refine our process and our thinking. And we hope that providing this model will make it a little easier on organizations and managers traveling down this road in the future.

We're grateful to the teams that have come before us who have written and released documentation that served as a source of inspiration for various parts of our team docs, particularly:

- ProPublica's News App and Data Style Guides
- The NPR Visuals Team's app template, coding best practices and manifesto
- Guides and process docs from The Chicago Tribune's News Apps Team
- MinnPost's style guide

This is a work in progress and we plan on updating frequently so we'd really value your feedback and contributions. Feel free to contribute to the project on GitHub or send us suggestions to help us improve our existing docs (or to propose new sections you'd like to see us add).

How To Migrate A Standalone WordPress Blog To A Multisite Install

Sigh. Database migrations — often nerve-wracking, always tedious, never a task I’ve been fond of. However, as you know, they’re often necessary. For us, they come up frequently.

One of the benefits available to INN members is the option to have their Largo-powered WordPress website hosted on our platform, largoproject.org. As such, we often find we need to migrate their standalone blog to our multisite rig.

To ease the process, I wrote a single_to_multisite_migration command that’s included in our deploy tools repo.

Note: if you’re using WP Engine, our deploy tools include a bunch of helpful goodies. You can read more about them here.


This guide assumes that:

  • You have mysql installed on your computer
  • You’re using INN’s deploy tools with your multisite project
  • The standalone blog uses the standard "wp_" WordPress database prefix

What it takes to migrate a standalone blog

My teammate Adam Schweigert put together this gist that describes the changes required to prepare a standalone blog’s database tables for a new home amongst other blogs in a multisite install.

The process involves renaming all of the blog’s tables from wp_[tablename] to wp_[newblogid]_[tablename]. For example, wp_posts might become wp_53_posts.

This is true for all tables except for wp_users and wp_usermeta. More on this later.
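The renaming step can be sketched in a few lines of Python (table names and blog ID are illustrative; the real command works against the SQL dump itself):

```python
# Global tables shared across the whole network; these are never renamed.
SHARED_TABLES = {"wp_users", "wp_usermeta"}

def rename_statements(tables, new_blog_id):
    # Build RENAME TABLE statements mapping wp_[tablename]
    # to wp_[new_blog_id]_[tablename], skipping the shared tables.
    statements = []
    for table in tables:
        if table in SHARED_TABLES or not table.startswith("wp_"):
            continue
        new_name = "wp_%d_%s" % (new_blog_id, table[len("wp_"):])
        statements.append("RENAME TABLE %s TO %s;" % (table, new_name))
    return statements
```

For example, with a new blog ID of 53, wp_posts maps to wp_53_posts while wp_users is left alone.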

This brings up the question of where the value for new_blog_id comes from. The easiest way to get this value is to create a new blog in your multisite install. By doing this, you’re creating a skeleton of database tables that your standalone blog will fill in.

Step one: create a new blog in your multisite install

After creating a new blog, find it in your Network > Sites list. Hover over the site’s link and you’ll see a URL in your status bar.

(Screenshot: the site link’s URL, which includes “id=53”)

The “id=53” is the part you want. Make note of the site ID as you’ll need it later.

Step two: retrieving a database dump and running the single_to_multisite_migration command

Let’s look at the signature of the single_to_multisite_migration command:

def single_to_multisite_migration(name=None, new_blog_id=None, ftp_host=None, ftp_user=None, ftp_pass=None):

The name and new_blog_id parameters are required. The name will be used when creating a database in your local mysql server. This is where we’ll load the single blog’s database dump. It doesn’t matter much what name is, but it should conform to mysql’s rules for identifiers.

The new_blog_id is the ID that you made note of earlier.

If you’re using WP Engine to host the standalone blog, deploy tools can retrieve a recent database dump for you automatically. For this to work, you’ll need to provide your FTP credentials when running single_to_multisite_migration.

Here’s an example:

$ fab single_to_multisite_migration:blogname,53,myinstallname.wpengine.com,ftpusername,ftppassword

If you’re not using WP Engine, you’ll need to get a database dump by some other means. Once you have it, place it in the root directory of your multisite project repo. The single_to_multisite_migration command expects a mysql.sql file in this location, so you may need to rename your dump file to meet this expectation.

After you have the mysql.sql dump in the root of your multisite project repo:

$ fab single_to_multisite_migration:blogname,53
Example of the output from the single_to_multisite_migration command

Step three: wait… rejoice! Your multisite_migration.sql file is ready!

Depending how big your standalone blog’s database is, it may take a while for the command to finish.

Message indicating the single_to_multisite_migration command finished properly.

Step four: apply the multisite_migration.sql file to your multisite database

I leave it up to you to decide how best to apply the migration file to your database. You may be able to import the SQL using phpMyAdmin, or, if you’re using WP Engine, you might contact their lovely support staff and ask them to apply it for you. Be clear that you DO NOT want to drop all tables in your multisite database before importing the multisite_migration.sql file.

Aside from renaming the standalone blog’s tables, what does single_to_multisite_migration do?

Great question. Here’s the short list:

  • Finds the maximum user ID in your multisite database and uses that value to offset the IDs of users in your standalone blog’s wp_users table so that they can be inserted into the multisite database without duplicate primary key errors.
  • Finds the maximum user meta ID in your multisite database and uses that value to offset the umeta_id values in your standalone blog’s wp_usermeta table so that user meta can be inserted into the multisite database without duplicate primary key errors.
  • Retains the siteurl and home values in your multisite “skeleton” site’s wp_[new_blog_id]_options table. This means that you won’t have to (re)set your new multisite blog’s site url and home url values after applying the multisite_migration.sql file.
  • Looks through all of the standalone blog's posts to find "wp-content/uploads/" and replace it with "wp-content/blogs.dir/[new_blog_id]/files/" to help with migrating uploads to the multisite install.
  • Uses REPLACE, UPDATE and some tricky subqueries to insert or update rows. This means you can apply the multisite_migration.sql file to your multisite database and avoid duplicate wp_users and wp_usermeta entries. Helpful if you need to run several incremental migrations to get all of a blog’s content before making the official transition to the multisite install.
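The ID-offset trick in the first two bullets boils down to the following (a simplified sketch; the real command does this in SQL against the dump):

```python
def offset_ids(standalone_ids, multisite_max_id):
    # Map each standalone user ID to an ID guaranteed not to collide
    # with any existing multisite primary key: shift everything past
    # the multisite's current maximum.
    return {old: old + multisite_max_id for old in standalone_ids}
```

So if the multisite’s highest user ID is 100, standalone users 1, 2 and 5 become 101, 102 and 105, and every foreign key referencing them (post authors, user meta) is updated to match.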

Migrating uploads

The other thing you'll need to do is move your standalone blog's uploads directory. The single_to_multisite_migration command doesn't do this for you. You'll have to manually move the contents of the standalone blog's "wp-content/uploads/" directory to the multisite install's "wp-content/blogs.dir/[new_blog_id]/files/" directory.

Test your migrations thoroughly before deploying.

We’ve tested and used this method several times with great success. It works well for us.

That said, remember to thoroughly test migrations before applying them to any production environment.

Break your local development site or your staging server first and be happy about it! Then fix any mistakes and you'll be ready to apply to your production database.

Responsive Tables

A few weeks back, I spent time researching how best to build responsive tables.

Our requirements were pretty simple. We wanted the ability to load data from Google Drive, the ability to embed the tables using an iframe and a dead simple way of handling (re)publication.

Of course, the other essential requirement for responsive tables is that they work well on a variety of devices. In my opinion, the best solutions to this problem transform your table into an easily scrollable list of data on smaller devices.

To handle the transformation of the table on smaller screens, I started by using jQuery DataTables and writing my own code, but came across Tablesaw.js, which already solved the problem.

For the other requirements, I settled on Tabletop.js to handle loading the data, Pym.js for embedding via iframe and a tiny render script, written in Python, to make generating a ready-to-deploy responsive table quick and easy.

How it works

Once you’ve set up a spreadsheet in Google Drive and grabbed the spreadsheet’s public key (described in the Tabletop.js documentation), clone the responsive-tables repository.

First, you’ll want to install requirements, optionally creating a virtualenv:

$ mkvirtualenv tables
$ pip install -r requirements.txt

If you’re not familiar with Python, pip and virtualenv, read more about them here.


Next, you’ll create a config file for your table. You can get started by copying the example:

$ cp config-example.json config.json

Edit config.json, filling in each of the blanks with your information.

The ua_code field is your Google Analytics UA code in the format “UA-XXXXXXXX-X”. If you leave it blank or remove it, Google Analytics will be disabled.

The columns field determines which parts of the data from Google Drive should be included in the table. The data structure defining the columns is an array of arrays. Each array within the array represents a column and contains two items.

The first is the column identifier. This is the lowercase column title stripped of punctuation. For example, if your column title is “Must be 501(c)(3)?” your column identifier would be “mustbe501c3.”

The second is how you’d like the column title to appear in your rendered table. You can stick with the title you used in Google Drive or change it entirely. For example, you could use “Must be 501(c)(3)?” or instead choose “Need 501(c)(3).”
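The identifier rule ("the lowercase column title stripped of punctuation") can be expressed in one line. Here's a hypothetical Python version of that rule, for illustration only (the actual transformation happens on the Google Drive side):

```python
import re

def column_identifier(title):
    # Lowercase the title, then strip everything that isn't a letter
    # or a digit (spaces and punctuation both disappear).
    return re.sub(r"[^a-z0-9]", "", title.lower())

print(column_identifier("Must be 501(c)(3)?"))  # mustbe501c3
```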

The config.json file for the example looks like:

{
  "ua_code": "UA-20502211-1",
  "title": "Discounts for nonprofits",
  "key": "10yccwbMYeIHdcRQazaNOaHSkpoSa1SUJEtWBfWPsgx0",
  "columns": [
    ["service", "Service"],
    ["whatisit", "What is it?"],
    ["whatsthediscount", "What's the discount?"],
    ["mustbe501c3", "Must be 501(c)(3)?"],
    ["moreinfo", "More info"]
  ]
}

Rendering your table

Once you’ve created a config file, you can render the table by running:

$ ./render.py

This will create a build directory adjacent to render.py and populate it with all of the files required to deploy the table. Note that if a build directory exists, anything within it will be deleted when calling render.py.

Once that’s finished, upload the contents of the build directory to your host of choice.

Other notes

The table’s breakpoint is set to 60em (or 960px). There is currently no simple way to change this value, which is why the README states this rig is best suited for tables with 5-7 columns. We may work out a simple way to adjust this in the future.

Also note, the file assets/responsiveTable.js contains some code that identifies long urls in your table’s data, converts them to actual anchor tags and uses css to truncate their text to avoid ruining the table’s lovely responsiveness. You can adjust the width at which the text is truncated in style.css:

#data a.ellipsis-link {
  display: inline-block;
  max-width: 180px; /* adjust the width of truncated urls */
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
You can see an example of a table rendered with our rig here: http://nerds.inn.org/wp-content/uploads/static/discounts/

And an example of that same table embedded via iframe: http://nerds.inn.org/discounts/

View and fork the code here: https://github.com/INN/responsive-tables