A Helpful Command For Searching Git Submodules

Sometimes you need to search through a git repository containing sixty directories. Each directory is a separate git submodule, meaning that they're fully-fleshed-out git repositories in their own right, unconcerned about their siblings or parent repositories.

I was filing a pull request affecting a CSS file in one of the repositories, so I needed to check all the other repositories. They're WordPress child themes; the pull request is to their parent. Past changes to the parent have conflicted with CSS in some of the other repositories.

The easiest way to check for clashing styles was to search for selectors in each theme, and the easiest way to do that is with a tool like grep.

git-grep is purpose-built for searching git repositories, and it can limit matches by filename with a pathspec (for example, git grep '.is-video' -- '*.css'), but it doesn't descend into submodules; the --recurse-submodules flag only arrived in git 2.12.
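The pathspec side is easy to demo in a throwaway repository (the file names and selector here are invented for illustration):

```shell
# Build a scratch repo with one CSS file and one non-CSS file.
mkdir -p /tmp/gitgrep-demo
cd /tmp/gitgrep-demo
git init -q
printf '.is-video { display: none; }\n' > style.css
printf '.is-video is mentioned here too\n' > notes.txt
git add style.css notes.txt

# The pathspec after -- limits matches to CSS files only.
out=$(git grep ".is-video" -- "*.css")
echo "$out"
```

Only style.css shows up in the results, even though notes.txt also contains the string, because the pathspec filters it out.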

ack and ag are popular grep replacements, but I'm not familiar with either.

Here's my solution:

#!/bin/bash

if [ -z "$1" ] || [ -z "$2" ]; then
	echo "Usage: deepgrep <filetype> <string to search for>"
	exit 1
fi

find . -type f -name "*.$1" -print0 | xargs -0 grep --color=auto "$2"

Save the file as 'deepgrep' somewhere in your $PATH and make it executable with chmod +x deepgrep. Mine's in a folder in my dotfiles.

With deepgrep css .is-video I was able to see that there were indeed no conflicts, and that it was safe to submit the pull request.
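Here's the pipeline inside deepgrep exercised against a scratch directory (the paths and selector are invented for the demo):

```shell
# Two fake theme directories, one containing the selector we care about.
mkdir -p /tmp/deepgrep-demo/theme-a /tmp/deepgrep-demo/theme-b
printf '.is-video { display: block; }\n' > /tmp/deepgrep-demo/theme-a/style.css
printf '.headline { font-weight: bold; }\n' > /tmp/deepgrep-demo/theme-b/style.css
cd /tmp/deepgrep-demo

# The same find | xargs | grep pipeline the script wraps; -print0 and -0
# keep filenames with spaces intact across the pipe.
out=$(find . -type f -name "*.css" -print0 | xargs -0 grep ".is-video")
echo "$out"
```

Because grep receives multiple files, each match is prefixed with the file it came from, which is exactly what you want when scanning sixty themes at once.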

If you don't want to bother with saving the file, you could set an alias in your terminal, or run this in the terminal directly:

find . -type f -name "*.css" -print0 | xargs -0 grep --color=auto "The string you want to search for"

If you prefer ag, try ag -U -G '\.<filetype>$' '<string to search for>', where -G filters filenames by regex and -U tells ag to ignore VCS ignore files.

Happy searching!

What You Don’t Know Can’t Hurt You…Unless You Don’t Ask

We were talking with a respected INN member during the Nerds’ open office hours last week. While asking a question about how to do something on his site, he said a couple of times that he doesn’t know much about website coding. But it struck me that he clearly does know a lot; he just didn’t know the answer to this particular question.

I have seen this behavior in many other people, and also in myself. When talking with people we believe know much more than us about a given topic, we sometimes minimize our knowledge up front.

I suspect we do this because we have learned from past experience that people sometimes use their status as experts to belittle us. This kind of behavior is common, especially in technical fields. Saying “I don’t know much” is a smart strategy if we suspect the expert will act like a jerk in response to our question. For many of us it's a defense reflex.

I can safely say that none of the INN Nerds will ever treat you this way. We welcome questions from all members and constituents from any level of technical knowledge, and it’s in our DNA to not act like jerks.

Not acting like a jerk is also hard-coded in the INN technology team manifesto, which outlines how and why we work. We hold ourselves accountable to this, and you should, too. Here are a few excerpts:

  • We’ll invest in education: creating curriculum and training for members, investing in our apprentices/students, and pursuing continuing education opportunities for ourselves.
  • We will be open to new tools and processes, resisting the stale comfort of “this is how we’ve always done it.”
  • We won't use snark or pedantry to exclude people from conversations.
  • We’ll never judge you or shame you for not knowing something.
  • We won’t feign surprise or jump into conversations with Well, actually...
  • In emergencies, we will send pie.

Because news technology is changing so rapidly, there are many reasons for each of us to feel we don’t know as much as we should. The pace of change is also precisely why we should ask many questions, even at the risk of exposing what we don’t know. Our guest during office hours did exactly that, and deserves to have his question (and his many other contributions as a professional) treated with respect. We will always do that.

When it comes to the web and digital technology, each of us is somewhere on the learning curve. The value of a community like the one we’ve got is that we can help each other gain the knowledge we need to improve and sustain our work. At a time like this, we should make extra efforts to communicate and collaborate.

So please use the Largo Help Desk for any site problems or requests, email us at nerds@inn.org for anything general, and sign up any time for open office hours. We’ll never shame you for not knowing something, and might even have some dumb questions ourselves.

An Apprenticeship Assessed

It’s been an exciting ten months working alongside the Nerds at INN. I’ve learned a lot since I started and had the opportunity to work on a lot of exciting projects in the process.

In particular, I built out two WordPress plugins designed to interface with Google’s tools for publishers. The first, DoubleClick for WordPress, serves DoubleClick for Publishers ads without making you deal with the underlying ad codes. I later added built-in targeting, making it easier for a small organization to sell and manage inventory without touching code.

The second, an Analytic Bridge for WordPress, connects with Google Analytics to pull and cache metric data for use. It’s currently used to run the popular posts widget on member site Nonprofit Quarterly.

I also did some early work integrating unit tests into our development process and built another WordPress plugin to serve quizzes in post sidebars. More recently, I rewrote a significant amount of our Link Roundups plugin (formerly known as Argo Links) to get it ready to submit to the WordPress plugin directory.

Between these projects I spent some time learning the nuts and bolts of Largo and worked on a number of bug fixes and new features for the most recent releases. I also worked on a handful of help desk tickets and consulting-related projects as the team got busier.

In the process I learned an invaluable amount about WordPress best practices, especially when creating plugins and building extendable themes. I also learned a significant amount about Google’s APIs, which offer an incredible amount of functionality, but not without a lot of time investment.

When I started my apprenticeship, our daily scrum hangout consisted of only myself, Adam and Ryan. There are at least eight faces in our morning meeting now, an incredible indication of how quickly the team I’m leaving today has grown.

As I start a full-time job building healthcare software for Epic Systems in July, I have every bit of confidence this team will continue the fantastic work happening now and deliver some truly powerful tools to non-profit newsrooms of all kinds.

I won’t be joining those morning scrum meetings anymore, but I hope to still contribute a pull request or two now and then. You can follow me on Twitter @willhaynes, or send me an email at haynes24@gmail.com. I’m always happy to talk.

Data Visualization and Photo Resources For Your Visual Journalism Toolkit

Improving the visuals on your site can make a dramatic difference in how your stories are received and how they spread, but we find that many INN members do not have photographers on staff or the budget to invest in complex data-driven news applications and visualizations.

One of our goals with Largo, the open source WordPress framework we've developed for INN members, is to make it much easier for members to have websites that look as good and function as well as the best for-profit and larger non-profit publishers.

This summer we're planning to make it easier to tell stories visually in Largo. As part of that process, we wanted to survey the tools members are already using, in the hopes that we can identify some best practices and develop tools, resources and training to make it as easy as possible to integrate them with your website.

Our goal is to help journalists use data and other visual elements to enhance their investigations and storytelling.

A quick side-note: if you use Largo and/or are interested in helping us to figure out how to improve the framework and build tools to support your data and visual storytelling, we're putting together a working group to help us define the work we need to do to make your lives easier. Please drop us a line if you're willing to help us out this summer as we work through this process.

Here are some of the tools we've found so far that may end up as part of the toolkit that we recommend to our members and other journalists.

For quick, basic plots

Check out Visualizer, a WordPress plugin, and Datawrapper, an open-source tool that provides code to embed visualizations in your posts. Both tools offer the basic visualization types (pie chart, bar graph, scatterplot, map, etc.), which you can create by importing a CSV or Excel file. Both are also easy to use, even if you're not very tech savvy, and they incorporate some nice default design patterns.

Choose Visualizer if you would like to work with a WordPress plugin or Datawrapper if you don’t mind working with embedded code.

Another great tool for simple charts is ChartBuilder, an open source tool developed by Quartz. It lets you create simple charts and then either copy the HTML for the chart or export it as an image to use in your stories.

For the data ninja

If you have complex data or want to showcase your data in ways other than simple graphs and charts, spend some time with tools such as StoryMapJS, TimelineJS, Vis, or Kumu, all of which are already compatible with WordPress/Largo via an embed code (usually an iframe) within your stories.

As their names imply, StoryMapJS and TimelineJS help you create maps and timelines, respectively, to illustrate your data through space and time. Both were developed by the Knight Lab at Northwestern University so they're both designed with journalism applications in mind.

For network and relationship data, there are Vis and Kumu. Both tools are interactive, flexible and easy to use, and Vis was designed particularly with journalists in mind.

For the code-savvy

Many of our members do not have an in-house developer, but if you have team members interested in learning to code and using one of the hottest data visualization tools today, you might want to try Wp-D3, a WordPress plugin for d3.js.

With D3 you can create any type of visualization you can possibly imagine and make it interactive, too. You might also want to check out NVD3, which also has a WordPress plugin; its developers were inspired by D3 to create reusable visualizations.

Recommendations from members

We also heard from some INN member organizations about a few other data visualization tools they like to use.

Canva.com is a great, free way to make infographics. I've used it to create a graphic on a health care report card; it took about 10 minutes. I'm playing with it to make a customized NCHN template for when I have data like bar charts or graphs (make the bar chart background transparent in Photoshop and drop it on the template background). (Rose Hoban, North Carolina Health News)

Another one that's extremely easy to use for infographics is http://infogr.am. They have many templates to choose from. It's also free, but to remove their branding and attach your logo you have to upgrade and pay a small monthly fee. (Jeremy Chapman, Montana Center for Investigative Reporting)

http://piktochart.com/ has more flexibility than Infogr.am and the professional account is just $40 a year for nonprofits. I like Infogr.am for straightforward graphics; Piktochart for everything else. (Pam Dempsey, Midwest Center for Investigative Reporting)

Photo resources

It can also be difficult to find photos to use if you don't have a photographer on your staff or freelance budget to create original photography to go with your stories. Here are some photo resources members recommended if you need to find photos that are free to use.

Getty Images allows many of its photos to be embedded. (Jason Alcorn, Investigate West) They've got gorgeous photos. It doesn't work all the time, but when it does it's a great money saver. (Diane Schemo, 100 Reporters)

Another good source for stock photography is Free Images (formerly Stock.xchng), and you can search Creative Commons images from various sources here: http://search.creativecommons.org/ (Trevor Aaronson, Florida Center for Investigative Reporting)

I love using U.S. government images, which almost never have copyright or licensing requirements. The portal I go through is here, which will also get you to state photo archive pages.  You'll find subject and agency links there. (Naomi Schalit, Pine Tree Watchdog)

The Library of Congress has a nice collection of digital images you can browse, useful for historical photos or #TBT. Most of the images are free to use, but check the copyright information to be sure. (Pam Dempsey, Midwest Center for Investigative Reporting)

Many Flickr users share their photos under a Creative Commons license, which allows anyone to use those photos under certain conditions (attribution required, no derivative works, and non-commercial use only are the main three). Use Flickr's advanced search to find only Creative Commons-licensed photos, then check individual photos for any of those conditions. No pre-clearance from photographers required. (Jason Alcorn, Investigate West)

Flickr also has a site called "The Commons," which includes a repository of public photography from all over the world. (Luis Gomez, INN)

Your suggestions

We hope you found some new tools and resources in the list above to help with your work. Let us know if you have other tools or resources for visual journalism that you would recommend to other INN members!

How to debug email-related theme and plugin functionality in WordPress

Let's say you're having trouble with a WordPress theme or plugin that uses wp_mail. How can you inspect the email that wp_mail composes or verify that it is actually sending (or at least attempting to send)?

Well, one simple way is to tell WordPress to use Python's smtpd DebuggingServer to "send" email.

The DebuggingServer doesn't actually send email, so don't go checking your inbox. It's only meant to show you the email that would be sent, headers included, if it were an actual SMTP server.

Note that this guide assumes you're debugging wp_mail issues during local development.

Let's get started.

Set up the smtpd DebuggingServer

If you have Python installed (it comes by default with Mac OS X and most Linux distributions), this is the one-liner you can use to get the debugging mail server running. (Note: the smtpd module was removed in Python 3.12; on newer Pythons, the third-party aiosmtpd package offers an equivalent.) From the command line, run:

$ python -m smtpd -n -c DebuggingServer localhost:1025

So that you don't have to remember that command, you can add an alias to your shell profile (e.g., ~/.profile), making it super easy to run the debugging mail server at a moment's notice.

To do this, open your shell profile in your favorite text editor and add the following line:

alias mailserve='python -m smtpd -n -c DebuggingServer localhost:1025'

Save your shell profile and source it in your shell to make sure the new mailserve alias is available:

$ source ~/.profile

Note: ~/.profile is probably the most common shell profile location. If you don't have this file, you can create one by running:

$ touch ~/.profile

Keep in mind that you might already have a shell profile for your specific shell, such as ~/.bashrc for bash or ~/.zshrc for zsh. If you have a ~/.bashrc or ~/.zshrc, you can add the mailserve alias there instead.
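If you want to rehearse the whole dance without touching your real dotfiles, you can run it against a throwaway profile (the /tmp path here is just for illustration):

```shell
# Simulate the profile edit with a throwaway file instead of ~/.profile.
profile=/tmp/demo-profile
echo "alias mailserve='python -m smtpd -n -c DebuggingServer localhost:1025'" > "$profile"

# Source it, then print the stored definition back to confirm the alias exists.
. "$profile"
alias mailserve
```

The final command echoes the alias definition back at you, confirming the profile was sourced correctly.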

Once you have the mailserve alias defined and your profile sourced, running the server is as simple as:

$ mailserve

Note: there won't be any output from running this command initially. The debugging server runs, waiting for an application to connect and attempt to send a message.
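To see the full round trip, you need something to connect and send a message. On Python 3.12+ the smtpd module is gone, so the sketch below stands up a tiny print-only SMTP server of its own and then mails it with smtplib. Everything here (addresses, port selection, message text) is made up for the demo and is not part of the original setup:

```shell
# A self-contained SMTP round trip: a hand-rolled debugging server that
# prints messages instead of sending them, plus an smtplib client.
out=$(python3 - <<'EOF'
import smtplib, socket, threading

def debugging_server(sock, captured):
    conn, _ = sock.accept()
    reader = conn.makefile("rb")
    conn.sendall(b"220 localhost debugging server\r\n")
    in_data, body = False, []
    for raw in reader:
        line = raw.rstrip(b"\r\n")
        if in_data:
            if line == b".":                      # end-of-message marker
                captured.append(b"\n".join(body).decode())
                conn.sendall(b"250 Ok\r\n")
                in_data = False
            else:
                body.append(line)
        elif line.upper().startswith((b"EHLO", b"HELO")):
            conn.sendall(b"250 localhost\r\n")
        elif line.upper().startswith(b"DATA"):
            conn.sendall(b"354 End data with <CR><LF>.<CR><LF>\r\n")
            in_data = True
        elif line.upper().startswith(b"QUIT"):
            conn.sendall(b"221 Bye\r\n")
            break
        else:                                     # MAIL FROM, RCPT TO, etc.
            conn.sendall(b"250 Ok\r\n")
    conn.close()

sock = socket.socket()
sock.bind(("localhost", 0))                       # grab any free port
sock.listen(1)
port = sock.getsockname()[1]
captured = []
t = threading.Thread(target=debugging_server, args=(sock, captured))
t.start()

msg = "Subject: wp_mail test\r\n\r\nThis is the test email body."
with smtplib.SMTP("localhost", port) as client:
    client.sendmail("ryan@example.org", ["editor@example.org"], msg)
t.join()
sock.close()
print("---------- MESSAGE FOLLOWS ----------")
print(captured[0])
EOF
)
echo "$out"
```

The "message" never leaves the machine; it's simply echoed back to your terminal, which is the same behavior the smtpd DebuggingServer gives you.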

Tell WordPress to send mail via the DebuggingServer

Now, in your WordPress theme or plugin, you can add some debugging code that will tell WordPress to send email via the debugging server you have running.

To accomplish this, add the following code to your theme's functions.php or to your plugin's main file:

function phpmailer_debug_settings($phpmailer) {
    $phpmailer->isSMTP(); // make sure PHPMailer actually uses SMTP rather than PHP's mail()
    $phpmailer->Host = 'localhost';
    $phpmailer->Port = 1025;
}
add_action('phpmailer_init', 'phpmailer_debug_settings');

This code changes the configuration of the $phpmailer object used by wp_mail, telling it to use the SMTP server on localhost, port number 1025. If you look back at the Python command used to fire up the debugging mail server, you'll see the $phpmailer settings correspond to the arguments passed in that command:

$ python -m smtpd -n -c DebuggingServer localhost:1025

Once you have the debugging mail server running and the code above included in your theme/plugin, you can try sending mail with WordPress and see the entire message contents, SMTP headers, etc. in your shell. Here's some example output:

vagrant@precise64:~$ mailserve
---------- MESSAGE FOLLOWS ----------
Date: Thu, 12 Mar 2015 16:21:54 +0000
Return-Path: <ryan@inn.org>
To: "\"Investigative News Network\"" <webmaster@investigativenewsnetwork.org>
From: Ryan <ryan@inn.org>
Cc: info@investigativenewsnetwork.org
Subject: [INN Website Contact] This is a test email subject line
Message-ID: <e538a998dbba308e2e6437a0b3ca4a50@vagrant.dev>
X-Priority: 3
X-Mailer: PHPMailer 5.2.7 (https://github.com/PHPMailer/PHPMailer/)
X-Mailer: WP Clean-Contact (vagrant.dev)
MIME-Version: 1.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is the test email body.

------------ END MESSAGE ------------

Why do I need this?

This can be helpful if you're trying to track down missing parts of an email (e.g., hey, where'd my "from" address go?) or need to verify the contents or formatting of an email that your theme/plugin sends to users.

Keep in mind that, although this post describes how to use the Python smtpd DebuggingServer with WordPress, you can also use this guide with other applications as long as you can configure said applications to connect to the DebuggingServer.

New Plugin — Easier DoubleClick Ads for WordPress

This week we’re happy to announce a new WordPress plugin to make serving DoubleClick ads easy.

In the past we’ve used (and will continue to use) Automattic’s Ad Code Manager to control ad tags. That plugin comes with a plethora of options, including support for different providers. While it’s fine for some users, for most of our applications it’s downright unwieldy.

We designed our plugin to be lighter-weight and to work out of the box with as little setup as possible. The minimum needed to start serving ads is to define your network code and place a widget in a sidebar. Then you set options for the DFP identifier and size of the ad spot and that’s all there is to it.

Responsive ads

It’s easy to serve different-size creatives at different breakpoints. For example, in a leaderboard spot, you can add one widget that displays 300x50 creatives on mobile and 728x90 creatives on tablets and desktops. Only one will be loaded and counted as viewed in DFP, depending on the breakpoint.


Targeting

This plugin sends information about the page to DFP to make targeting specific sections and pages of a site easy.

  1. inURL: Target a piece of the page path. (e.g. targeting the string /dvd would match example.com/dvds/, example.com/dvds/page1 and example.com/dvds/page2)
  2. URLIs: Target the entire page path. (e.g. targeting the string /books will match only example.com/books and not example.com/books/page1)
  3. Domain: Target based on the domain. (e.g. run different advertising on staging.example.com and example.com)
  4. Query: Target a query var. (e.g. target the URL example.com/?movie=12 with the targeting string movie:12)
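The difference between the first two rules is substring versus whole-path matching, which a tiny shell sketch (invented paths; this is not plugin code) makes concrete:

```shell
path="/dvds/page1"

# inURL-style matching: the target just has to appear somewhere in the path.
inurl="no match"
case "$path" in
  *"/dvd"*) inurl="match" ;;
esac

# URLIs-style matching: the target must equal the entire path.
if [ "$path" = "/books" ]; then
  urlis="match"
else
  urlis="no match"
fi

echo "inURL /dvd: $inurl"
echo "URLIs /books: $urlis"
```

Here /dvd matches because /dvds/page1 contains it, while /books fails because the whole path must be identical.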

See the documentation for more information on how to set these values up in DFP. We’ll be adding targeting criteria for post category, post type, template type and other criteria specific to WordPress in the future.


Hard-coded placements

If you don’t maintain widget areas in your theme, you can hard-code breakpoints and place ad units directly in your template files.

Defining breakpoints is simple. We suggest doing this in your theme’s functions.php, since most users will not know the pixel values for the theme's breakpoints.

function ad_setup() {

   global $DoubleClick;

   // Optionally define the network code directly in functions.php.
   // $DoubleClick->networkCode = "xxxxxxx";

   /* Define Breakpoints */
   $DoubleClick->register_breakpoint('phone', array('minWidth'=> 0,'maxWidth'=>720));
   $DoubleClick->register_breakpoint('tablet', array('minWidth'=>760,'maxWidth'=>1040));
   $DoubleClick->register_breakpoint('desktop', array('minWidth'=>1040,'maxWidth'=>1220));
   $DoubleClick->register_breakpoint('xl', array('minWidth'=>1220,'maxWidth'=>9999));

}
add_action('dfw_setup', 'ad_setup');

To place an ad in a given location, just call place_ad on the global $DoubleClick object from your template. For example (the identifier here is hypothetical; see the plugin documentation for the full signature):

   // Places a 728x90 leaderboard ad for all breakpoints but phone.
   $DoubleClick->place_ad('leaderboard', '728x90', array('tablet', 'desktop', 'xl'));

That’s it. We’re excited to see how you use it! Download the plugin from Github here.

Design Feedback Tools for Remote Teams

Good design doesn't happen in a vacuum. As a remote team, though, we don't get to sketch around a table together, see each other's whiteboards, or have other daily opportunities for in-person collaboration.

Instead, we mostly share screenshots in group chat and, sadly, a lot of ideas don't even make it that far. It can feel like a high barrier to entry to post something online for the team to review — just by uploading, it becomes *Important* even if it's a simple sketch.

But if we're not able to share work in progress, we miss out on the value and ideas of our teammates and can end up working toward disparate goals. I want to dismantle barriers and make feedback and conversation about design a regular, fun part of our team process. It's essential to share half-baked designs, interface sketches, and unpolished ideas — even more so because we don't inhabit the same physical space.

Everybody agrees our products and apps will be better for it, but like all things with remote work, it takes an intentional commitment. You have to build even casual feedback into your workflow. With that in mind, I've been testing a few design tools meant to help facilitate asynchronous design feedback and communication. Here are my notes and thoughts on the three products we've tried so far.

Red Pen


Overall, Red Pen was the fastest and most intuitive tool. This was also the service that everyone on the team actually tried and used successfully — the other two had much lower participation. This, more than anything else, is an indicator of its value. If nobody uses it, it's useless.


Pros:

  • It's easy to share and comment on designs without having to create an account (plus the workflow for account creation is smart).
  • Easy to navigate through designs using keyboard.
  • Simple and fast commenting. All our team members contributed with ease.
  • Tells you who has seen a comment (e.g., "read by meredith") and a few other nice interface features like that.
  • Retains all versions of a design.
  • Browser and email notifications tell you when there are unread comments.


Cons:

  • When we tested it there was no way to customize notification settings — some of us got email updates, some of us didn't, and it wasn't clear why. While the notifications were fairly intuitive, it would be nice to be able to adjust preferences.
  • No "view all comments" option, yet. They say they're working on this feature. Without it, there's no way to get an aggragate view of all feedback for a project.
  • No way to see all versions of a design at once.
  • There doesn't seem to be a way to easily download files (not a huge deal for us).
  • You can only upload PNG files.

Not seeing all the comments is actually a pretty big deal for me. As the lead designer, I want to be able to take all the feedback, consolidate and translate it into tasks (which live as tickets in GitHub or JIRA). Red Pen would work better for quick feedback on sketches and design ideas, less so for long conversations or contentious feature decisions.

Red Pen is also the most expensive of the tools we tested. I sent them a couple of emails about nonprofit rates and haven't heard back.



InVision

InVision is like the Photoshop of design feedback tools. It can do a lot of different things, and feels a bit bloated as a product (when looking solely for design feedback, at least). But they have put a lot of thought into the design and functionality of their suite of tools, and you can tell that this was created by and for designers.


Pros:

  • You can draw/sketch on designs and toggle comments on and off.
  • Notification options can be set at a user level and changed with each comment.
  • You can build clickable prototypes using wireframe images.
  • Ability to upload many file types (or at least a lot of them), with vector handling. There is also a separate repo for assets.
  • There is a conference call feature for live design walkthroughs. We tested this recently with wireframes for a new site and it worked well.
  • The project history page has rich data — I'm not sure how practical any of it is, but it was fun to see.


Cons:

  • Conversations are harder to access (a few clicks to see full thread).
  • Inviting people to comment takes a few more steps, and the sign up process is not intuitive.
  • Navigating between designs within a project, and between different projects, takes quite a bit of menu searching and clicking.

This is not a lightweight product, and while there are a lot of fun features, our team didn't consistently use — or even try — most of them. If we're attempting to cultivate a lower barrier to entry for feedback, this is not the tool I would choose.

InVision does offer nonprofit discounts for the more expensive payment tiers, and has been responsive and helpful when I've reached out.



Conjure

Conjure fell somewhere in between InVision and Red Pen for me. It wasn't as feature heavy as InVision, but wasn't as fast or intuitive as Red Pen. There are a lot of nice elements, but it was the least used by our team during testing.


Pros:

  • A nice way of highlighting particular areas of a design to comment on (drag to select).
  • Pro level is currently free during beta.
  • You are able to approve a project when the feedback period has ended.


Cons:

  • There's a separate menu you have to click to see the full thread of a comment. You can't see responses to a primary comment on the design itself.
  • Adding collaborators is more complicated than other tools we tried.
  • Navigating between projects and designs is clunky.

Overall it comes down to what our team will actually use. InVision has so many great features, but it also feels needlessly complicated for the purposes of fast feedback. We don't need every single customization option when looking for quick opinions on a design direction. Red Pen, on the other hand, had the most intuitive interface and was the product everyone actually used while testing. It is opinionated in its simplicity and that works to its advantage here.

Despite the higher price and some interface limitations, Red Pen will likely be what we use for sharing sketches and mockups. As with so many things, the right tool is the one that people will use.

For clickable prototypes and more formal design presentations and walkthroughs, I will continue to use InVision. To me it feels more like a prototyping and client-services tool than a home for internal feedback. (For a detailed comparison chart of other prototyping tools, check out http://prototypingtools.co.)

Tool       Pricing
Red Pen    $30/month for 10 projects
Conjure    $25/month for unlimited projects (currently free in beta)
InVision   $22/month for unlimited projects (one designer)

Updates to INN’s Deploy Tools

We've made a lot of changes and added a bunch of new commands to our deploy tools since the last time we checked in.

Most notably, we've added commands to help scaffold and run unit tests for your plugins AND your themes. Behind the scenes, our scaffolding and tests work as described by Will Haynes in this post.

Other changes include migration to Fabric's new-style task declaration and updated documentation.

Unit testing tools

To scaffold/set up tests for your theme:

$ fab dev wp.tests.setup:largo-dev

This command:

  • Drops and recreates your testing database (which should always be throwaway and ephemeral)
  • Copies essential test files (e.g. phpunit.xml and a tests directory with test-sample.php and bootstrap.php files) to your theme or plugin directory if they are not already present
  • Installs the WordPress test framework to /tmp/wordpress-tests-lib/includes
  • Configures a wp-tests-config.php file and places it in /tmp/wordpress-tests-lib

With those files in place, you can run tests for your theme or plugin:

$ fab dev wp.tests.run:largo-dev

Example output:

[Screenshot: PHPUnit output from running the theme's tests]

Namespaces and aliases

The other big change we made was moving all commands to Fabric's new-style task declaration. Using new-style tasks gives us all the benefits described in Fabric's documentation on the subject.

While the deploy tools are not taking advantage of every feature of new-style tasks, they are better organized thanks to namespacing and aliases.

Here's some example output from fab -l:

[Screenshot: output of fab -l listing available tasks]

Here you can see each module (and submodules) along with defined tasks. Tasks use dot-syntax, which should feel more explicit and intuitive if you're familiar with Python (which is what Fabric is built on).

Also note that the command wp.migrations.s2m is an alias for the much-longer, sometimes-harder-to-remember command wp.migrations.single_to_multisite.

I'm very lazy, so anytime I can save some mental energy and type a bit less, I consider it a win. This is as true for commands I use very frequently as it is for commands I use once in a great while.


Documentation

We also expanded the deploy tools' documentation to include all commands. You can find the documentation on Github.


Caveats

Remember, the deploy tools were built with our situation in mind. We deploy to WP Engine, so some stuff is very WP Engine specific.

For example, the wp.fetch_sql_dump command only knows where WP Engine stores recent SQL dumps. The wp.maintenance.start and stop commands assume you are deploying to a host that uses Apache and that you have a .htaccess file in the root of your FTP server.

With that said, much of the toolkit will work with any host provided you have (S)FTP access, including the push-button deployment rig I wrote about previously.

If you are using or thinking about using the deploy tools, or have questions or suggestions, let us know.

Stealing the NPR App Template for Fun and (Non-)Profit

What we learned building an election app for INN members with the NPR visuals app template

Last week, just in time for the election, our team at the Investigative News Network (INN) launched Power Players — a state-by-state exploration of campaign finance and top political donors across the country. The project is a collaboration between thirteen INN member organizations who did their own reporting and analysis of the data we provided to them. To support this reporting, our team built a national app with easy-to-embed components.

As this was one of the first editorial projects we built as a team, we decided to start things off on a solid foundation by creating an app template — something that contains a library of components we might want to reuse for future projects and allows us to create those projects more easily.

Fortunately for us, NPR’s Visuals team has generously open sourced their app template, which we used as the foundation for our own. We shamelessly stole NPR’s code and set up an INN-specific template by following the steps outlined in Tyler Fisher’s excellent post “How to Setup the NPR App Template for You and Your News Org.”

It was a (mostly) painless experience but we learned some things along the way and wanted to share our experience to help others who might tackle this process in the future.

Why use NPR’s app template?

For one, we didn’t want to build our own toolkit for deploying news apps from the ground up.

When someone (or some team) builds and open sources a tool that addresses a problem you’re facing, using it will almost certainly save you time, money, blood, sweat, tears and heartbreak. Do yourself a favor and try really hard to seek out and use stuff that other smart people have already built.

The other motivating factor was that we'd never had the chance to use NPR's app template, and a number of us were curious. Since this was INN's first news app and we had a short amount of time to get something up and running, it seemed like the perfect opportunity to take it for a test drive.

Setting up the app template for INN

Tyler acknowledges in his post that the process “seems like a lot.” But we discovered it’s just a matter of knowing where to find all the NPR bits so you can make the template your own.

In fact, it took just seven commits to completely scrub NPR-specific stuff from the app template.

For your reference, here is our fork of the NPR app template and the code for the Power Players app itself.

Building with the app template

The structure of the Power Players app

Our project concept was relatively simple. There are a total of five views and their corresponding templates.

An example of an individual power player embed.

They consist of:

One of the goals for us was to create a resource that any of our member organizations could use to bolster their coverage of the elections — either by hosting information for an entire state, or including individual power player cards in articles covering campaign finance and the election.

Thus the embeddable versions of the state and power player pages. These are essentially the same as the normal templates, with a simple full-width layout and simplified INN branding (a credit link at the bottom of each embed).

The Kentucky Center for Investigative Reporting is one INN member making great use of the app. Here is a complete list of all the member organizations participating in the project and the stories they've published (so far).

The entire project is responsive (thanks to pym.js, yet another NPR project), made to look great no matter what size container the embeddable elements get placed in.

Snags and snafus

In working with the NPR app template we encountered some things that aren’t well documented (yet).

CSS and JS pseudo-tags

Tyler covers this briefly in another blog post on the app template.

What wasn’t clear at first was how the JS and CSS pseudo-tags interact across templates.

We ran into a problem where, in separate templates, we "pushed" or queued different sets of JavaScript files to be rendered. In both templates, we passed the same filename to the render function, which resulted in the second file overwriting the first on deploy.

Here’s what NOT to do:

{# Template file no. 1 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/underscore.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_1.js') }}
{{ JS.render('js/app-footer.min.js') }}
{# Template file no. 2 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_2.js') }}
{{ JS.render('js/app-footer.min.js') }}

Once you realize that JS.render outputs a file whose contents are determined by the preceding calls to JS.push, it becomes clear that pushing different sets of files before rendering to the same filename just won't work.

In this case, if template 2 is rendered after template 1, “js/app-footer.min.js” will be missing “underscore.js”, potentially breaking functionality in template 1.

Do this instead:

{# Template file no. 1 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/underscore.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_1.js') }}
{{ JS.render('js/app-footer-1.min.js') }}
{# Template file no. 2 #}
{{ JS.push('js/lib/jquery.js') }}
{{ JS.push('js/lib/bootstrap.js') }}
{{ JS.push('js/lib/template_2.js') }}
{{ JS.render('js/app-footer-2.min.js') }}

By making the filename passed to JS.render unique to each template, we can be sure we're not clobbering any JavaScript files.

Flask’s url_for function and your project’s path prefix

Another issue we encountered was that the app template, using Flask’s default url_for function, doesn’t take into consideration your project’s path. That is, when you deploy your app to S3, it is meant to live at something like http://apps.yourdomainname.org/project-slug/ whereas the development server uses something like http://localhost:8000/ without the project slug.

For example:

<a href="{{ url_for('some_view') }}">Hey, a link to a page</a>

Renders as:

<a href="/some-view/">Hey, a link to a page</a>

What we want when deploying to staging or production is a URL that includes the project slug:

<a href="/project-slug/some-view/">Hey, a link to a page</a>

To remedy this, we created an app_template_url_for function to replace Flask’s standard url_for. The app_template_url_for figures out the current target environment (i.e. development, staging or production) and inserts the project slug as necessary.

View the source code here and here.
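As a rough sketch of the idea (not INN's actual implementation — the real function reads the deployment target from the app's configuration, and the names here are illustrative), the slug-prefixing logic boils down to something like this:

```python
def prefix_project_slug(path, deployment_target, project_slug):
    """Prepend the project slug to a path generated by url_for when
    targeting staging or production; leave it alone in development.

    Illustrative sketch only -- the environment names and parameters
    are assumptions, not INN's actual code.
    """
    if deployment_target in ('staging', 'production'):
        return '/' + project_slug + path
    return path
```

In the app template itself, a wrapper like this would be registered with the Jinja environment in place of Flask's url_for, so templates don't need to change.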

Another change we made to INN’s version of the app template is modifying the Flask app’s static_folder:

app = Flask(__name__, static_folder='www/assets')

View this change in context here.

This allows you to use url_for to build URLs for static assets kept in www/assets.

<link rel="shortcut icon" href="{{ url_for('static', filename='icons/favicon.ico') }}" />

This provides the flexibility to include assets outside of the CSS and JS pseudo-tag framework if you find yourself with the need or desire to do so.

Working with Member Organizations

The Power Players project presented other challenges, too, because of the number and diversity of INN member newsrooms participating.

First, the data needed to be analyzed and populated in the app template for about a dozen states. Campaign finance data is notoriously dirty (especially when trying to look at individual donors, as we were), plus we were combining data from different sources, so the cleaning and analyzing stage took longer than expected. Some member organizations also wanted more custom analysis, such as looking at local data only rather than statewide data, as most members were. This forced us to make some compromises in the design of the app, like using the state outline for California but labelling it "San Diego."

A project such as this also takes a great deal of communication. INN’s director of data services, Denise Malan, held a kick-off conference call with members and stayed in contact with them as she was analyzing the data and the tech team was designing and building the app. We also provided members with instructions on how to add photos and customized content to the app while it was still a work in progress.

As is the case with all our editorial collaborations, INN members can publish stories according to their own timetables, and several ran their stories before the app was finished because we were working right up to the deadline on Election Day. Others have yet to publish because they choose to wait for updated data with post-election numbers.

Ideally, we would love for all members participating in a project to use our work, and we learned some valuable lessons in the process of building our first editorial app.

In future projects, we would like to have mockups of the news app ready as early as possible so our members are able to visualize what we will be providing and how this will fit with their reporting. We also want to provide a firmer deadline for launching an app so members can plan their publishing dates accordingly and leave time to implement the stuff we build. We’ll also do a better job providing the partners with documentation to make it as easy as possible for them to adopt our work.


We learned a lot in the process of building our first app, both about the NPR app template and also what it takes to manage a complex project working with so many partner organizations.

Will we use our fork of the NPR app template for everything? Probably not. We’ll continue to experiment and try out different things before settling on our default set of tools and templates. For projects where it’s a good fit or where we need to deploy something quick and easy, we definitely plan to use it as a solid starting point in building future apps.

Since this is our first app and we’re still learning, we’d love to hear your thoughts and feedback. You can find our team on Twitter @INNnerds or send us email at nerds@inn.org.

Batch Processing Data With WordPress via HTTP

For a recent project, we found ourselves in need of a way to verify the integrity of an entire site’s worth of posts after a mistake in the initial migration meant some of the posts in the database were truncated/cut-off.

Our options were limited. Since our host (WP Engine) doesn't offer shell access and we can't connect directly to our database, we would have to write a script to accomplish this via HTTP requests.

We wanted something that was at least semi-unattended. We’d also need to be careful to avoid exhausting PHP’s memory limit or running up against any PHP or Apache timeout settings.

Given the constraints, processing of posts would have to happen over several HTTP requests.
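Stripped of the WordPress specifics, the pattern is: persist a queue, consume a fixed-size batch per request, and repeat until the queue is empty. A minimal sketch in Python of that core loop (batchProcessHelper itself is PHP; the names here are illustrative, not the class's actual API):

```python
def process_batch(queue, process_item, batch_size=10):
    """Process up to batch_size items from the front of the queue.

    Returns (number of items that succeeded, remaining queue). In the
    request-per-batch pattern described above, each HTTP request would
    run one call like this and persist `remaining` between requests.
    """
    batch, remaining = queue[:batch_size], queue[batch_size:]
    succeeded = sum(1 for item in batch if process_item(item))
    return succeeded, remaining
```

Capping the work done per request is what keeps each run safely under PHP's memory limit and the server's timeout settings.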

Enter batchProcessHelper, a class that provides a way to accomplish all of the above.

How it works

You can find all the details on usage at the project’s Github page.

batchProcessHelper is meant to be extended. There are two methods you must override: load_data and process_item.

The first method, load_data, is responsible for accessing the data you plan to process and returning an array of serializable data that will be the data queue. The array can consist of objects or associative arrays or be something as simple as a list of IDs or range of numbers. This is entirely up to you.

The only constraint is that load_data must return a data structure that can be serialized for storage in the database as a WordPress transient. This will be your data queue.

The second method, process_item, is where you put the business code to process individual items in the queue.

This method should return true if it succeeds in processing an item, false if it fails.

How to use it

Here’s a super simple example of a class extending batchProcessHelper. You can see and learn how to run the full example on the project Github page.


class userImportProcess extends batchProcessHelper {

    function load_data() {
        return csv_to_array(ABSPATH . 'example_data.csv');
    }

    function process_item($item) {
        $this->log('Processing item: ' . var_export($item, true));
        return true;
    }
}

This example doesn’t actually do anything. In load_data, we simply load the example data from a CSV file, converting it to an array of associative arrays.

The process_item method logs each of the associative arrays to the batchProcessHelper debug/error log.

However, this code could easily be modified to create a new user for each row in example_data.csv.

function process_item($item) {
    $result = wp_insert_user($item);
    if (!is_wp_error($result)) {
        $this->log('Successfully created new user: ' . $item['user_email']);
        return true;
    }
    return false;
}

Instantiating and running

So, we’ve defined our userImportProcess class, perhaps in a file named userImportProcess.php. How do we run it?

$process = new userImportProcess(array(
    'blog_id' => 99,
    'batch_size' => 10,
    'batch_identifier' => 'User Import',
    'log_file' => '/path/to/your/file.log'
));

$process->process();


The only requirement is a batch_identifier — a name for your process.

Optionally specify the batch_size. If you don’t, batch_size defaults to 10.

Also, optionally specify a log_file — a path where debug and error messages will be sent. If you don’t specify log_file, batchProcessHelper will try to write one in /tmp.

If you want your process to run in the context of a specific blog, specify a blog_id as well.

Finally, call the process method to start.

Where do I put this stuff?

You’ll need to create a new directory in the root of your WordPress install. It doesn’t matter what it’s called; it just has to be in the root of your WordPress install.

Let’s say you create a directory named wp-scripts. You’ll want to put batchProcessHelper.php and userImportProcess.php in that directory.

Also, if you’re loading data from the filesystem (as in the example above and on the project’s Github page), you’ll want to place the data file in the appropriate location.


Visit userImportProcess.php in your browser.

If all goes well, you should see a message along the lines of:

“Finished processing 10 items. There are 390 items remaining. Processing next batch momentarily…”

At this point, the page will refresh automatically, kicking off work for the next batch.

Once all batches are done, you’ll see the message:

“Finished processing all items.”

If you have a multisite install, you'll need a Super Admin account to run code that extends batchProcessHelper. If you have a standalone install, you'll need an account with Administrator privileges.

Also, if you plan on keeping your wp-scripts directory around for any length of time, you should consider allowing only a select few IP addresses to access the directory. Refer to the project's README for more information.