Building A WordPress Donation Form Using Gravity Forms, MailChimp and Stripe

Please note: The following article contains outdated information and has since been superseded by an up-to-date guide.

Here at INN, we've built a lot of donation forms for our members. Over the last year, we've created a standard set of forms and integrations that you may find useful as a starting point for building your own donation forms.

Note that this is a loooooong post; if it seems overwhelming, we'd encourage you to stick with it and experiment with these tools.

If you get stuck or just want to hire us to help, we're also happy to work with you to get you up and running. Now that we've standardized a lot of this work, we're often able to get a site using our Largo WordPress platform up and running with a donation form that looks and works great in just a few hours!

Get in touch if you'd like to discuss that option.

Here we go!

We start with WordPress and Gravity Forms, and add the following plugins:

We typically use Stripe to process payments because the fees are lower and it offers more flexibility, but Gravity Forms has add-ons for PayPal and several other payment gateways if you'd prefer to use one of those.

Stripe charges 2.9% of the transaction total, plus a $0.30 flat fee per transaction (with no monthly or annual fee). We use the subtotal merge tag plugin to create a checkbox that gives users the option of paying the fees so that their entire donation amount goes to the organization they're choosing to support.
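
The gross-up arithmetic behind that checkbox is straightforward. Here's a minimal sketch in Python, assuming Stripe's standard U.S. pricing of 2.9% plus $0.30 (the function names are ours for illustration, not part of any plugin):

```python
def gross_up(donation, pct_fee=0.029, flat_fee=0.30):
    """Amount to charge so the organization nets `donation`
    after Stripe takes its percentage and flat fees."""
    return round((donation + flat_fee) / (1 - pct_fee), 2)

def stripe_fee(charge, pct_fee=0.029, flat_fee=0.30):
    """Stripe's cut of a given charge."""
    return round(charge * pct_fee + flat_fee, 2)

# A donor covering the fees on a $20 gift is charged $20.91;
# Stripe keeps $0.91 and the organization nets the full $20.00.
charge = gross_up(20.00)
```

Note the division: simply adding 2.9% + $0.30 on top wouldn't be enough, because Stripe's percentage applies to the grossed-up total, not the original donation.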

In addition to collecting money, donation forms are an excellent place to ask donors if they want to become newsletter subscribers. We add a checkbox to our default donation form and use conditional logic with the Gravity Forms MailChimp Add-On to automatically add donors to an organization's mailing list.

Gravity Forms also allows you to automate the sending of receipts and donation acknowledgements, and to send notifications via email or Slack when a new donation is received.

For our sites, we've created some boilerplate code containing default styles and a clean WordPress template for use on these donation pages. We've also included importable form templates, if you'd like to use our form designs.

Does this sound like something you want to use? Read on - the full set of instructions is below.


First, get the following things:

  • Gravity Forms Developer license in order to download two of the plugins from Gravity Forms. A developer license costs $199/year and will allow you to use the plugin and any add-ons on as many sites as you would like. For INN members, we pay for a developer key for all sites we host. If you want to use our group license, just contact our support team and we'll help you get that set up.
  • Stripe API key for handling purchases. (Stripe charges a transaction fee for most accounts; following this instruction set will help you cover that transaction fee)
  • MailChimp API key for sending a newsletter to your subscribers. (MailChimp's pricing varies based on mail volume)
  • An SSL certificate (if you want to accept payments directly on your site). For sites we host, these are available for free through our hosting company (via Let's Encrypt). If you'd like us to request a certificate on your behalf, just open a support ticket and we can take care of this for you. For sites using other hosting you'll need to check with your hosting provider to see what options are available.
  • (optional) a reCAPTCHA site key, to help prevent spam submissions

Once you have your credentials in hand, install these plugins:

All of these plugins must be installed manually, because they're not available in the official plugin repository. If you have FTP/SFTP access to your site, you can upload them to your site's plugins directory using an SFTP client or similar program.

You can also install them from the WordPress dashboard by going to Plugins > Add New. Click on the Upload Plugin link next to the page title and you can upload the zip file of the plugin you'd like to add.

Once you've installed the plugins, you'll also need to activate them. To do this, go to the Plugins screen, find the newly uploaded plugin in the list, and click Activate.

For sites we host, we'll install and activate the plugins for you. If you'd like these plugins enabled on your site, just contact our support team and we'd be happy to assist.


Each add-on has a configuration section in the WordPress dashboard, at Forms > Settings.

Gravity Forms

You'll need to enter your Gravity Forms support license key in the Gravity Forms plugin settings.

Gravity Forms Stripe Add-On

The Stripe add-on has four keys that you'll need to enter:

  • Test Secret Key
  • Test Publishable Key
  • Live Secret Key
  • Live Publishable Key

You can create/access these from the API Keys tab of the Account Settings page in Stripe.

There's also a set of radio buttons for switching your site from using the Test API to the Live API. Note that you'll need to switch this both here on the settings page and in the Stripe dashboard itself. If you're using Stripe for the first time, note that you'll also need to have provided your bank information before Stripe will allow you to switch to Live mode.

Typically, when you're setting up a new form, you'll use the Test API to run some test transactions using their testing credit cards to make sure everything works before switching your form/site to Live mode.

Follow the instructions on the Stripe add-on settings page to add your site's Gravity Forms callback hook to your Stripe account. If you are testing the form on a staging site, note that the callback URL should be your live site or else the callbacks will not work properly (this is something to also double check on your site before making the new form live).

Gravity Forms MailChimp Add-On

If you're going to use your donation form to collect email addresses for your MailChimp newsletter, you need to add your MailChimp API key.

You can get a MailChimp API key by logging in to MailChimp and going to Account > Extras > API Keys. There, click on Create A Key and give it a name (something like "Gravity Forms" should suffice) and then copy the newly created key and add it to the Gravity Forms MailChimp Add-On settings in the field provided.
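
Behind the scenes, the add-on talks to MailChimp's REST API on your behalf. As a rough sketch of how that works (the key and list ID below are placeholders, and the URL construction is our own illustration, not the add-on's code), MailChimp identifies a list member by the MD5 hash of the lowercased email address:

```python
import hashlib

API_KEY = "your-api-key-us6"  # placeholder; the datacenter ("us6") follows the final dash
LIST_ID = "abc123def4"        # placeholder list ID

def subscriber_hash(email):
    """MailChimp identifies a list member by the MD5 hash
    of the lowercased email address."""
    return hashlib.md5(email.lower().encode("utf-8")).hexdigest()

def member_url(email):
    """API endpoint for a single member of the list."""
    dc = API_KEY.rsplit("-", 1)[-1]
    return "https://{}.api.mailchimp.com/3.0/lists/{}/members/{}".format(
        dc, LIST_ID, subscriber_hash(email))
```

This is also why the datacenter suffix on your key matters: requests go to that datacenter's subdomain, so a key pasted incompletely will fail even if it looks plausible.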

Create your donation form

We've created a couple of sample forms based on the forms we've developed for a few of our members/clients. We recommend importing one of these forms first so that you can examine how things are put together before attempting to create a form of your own. You might even find that one of the example forms will work well for you with very little modification!

Create form by importing

Gravity Forms allows you to import forms from JSON files, so we've provided two example donation forms that you can download and import to get started.

To import a form, save the importable .json file from either of the above links, then upload it following Gravity Forms' import instructions.

Once you've imported a form, find the form in the list of forms and click "Edit." From here you'll see the various form fields and how they're configured. Note that each of these examples uses conditional logic to show/hide certain form fields depending on the options a donor selects in the form. For more on how conditional logic works in Gravity Forms, see the relevant section of their documentation.

If you like how one of the example forms works, you can move on to setting up notifications, confirmations and the MailChimp and Stripe feeds to ensure they are set up correctly with your site's information.

Refer to the Gravity Forms documentation if you have specific questions.

Create a form from scratch

After you've imported one of the forms above and examined it to see how it's put together, you may want to try your hand at creating a form that's more tailored to the individual needs of your site.

A few general best practices for donation forms:

  • Include a default donation amount (we find that $20 is a good default for one-time donations and $10 is a better default for recurring donations) but make it easy for people to specify their own amount.
  • Provide the option to make the donation a recurring donation on a monthly or yearly basis. Both Gravity Forms and PayPal have options to set up recurring donations and this is a great way to save yourself a lot of time and effort in trying to get donors to renew their membership in the future.
  • If you're a nonprofit, you'll need a way to send a tax receipt to the donor: email addresses and postal mailing addresses both work. Personalized notes are best, particularly for first-time donors or those who have consistently supported you, but Gravity Forms also allows you to automate the sending of receipts (see the section below on setting up notifications).
  • Request as little information about a donor as possible. You want to make it as easy as possible for people to give you money and asking for too much information or having a very long form will cause many donors to give up and never complete their donation. Make sure you are only collecting information that you actually intend to use, and wherever possible, explain how that information will be used and why you're collecting it.
  • Collect the user's email address for confirmation and thank-you emails related to this transaction, but do not add them to your mailing list without their explicit permission. In most of our forms, we do this by adding a checkbox (usually checked by default) to opt a donor into receiving further messages from the organization.
  • In general, single page donation forms are best unless the form is very long. With Stripe, you can add the necessary credit card fields at the bottom of the form directly before the submit button. With PayPal, you'll have to send a donor off to PayPal to complete the transaction. This is one of the reasons we prefer Stripe. Note that if you handle the credit card information directly on your site, you will need to have an SSL certificate that is configured correctly to serve your donation page over https.
Set up notifications

You'll probably want to receive notifications from your website when someone submits a response to the form. To do this, find your form in the list of forms, hover over it, and then click "Settings" when that text appears.

Under the "Settings" tab are at least three tabs: "Form Settings," "Confirmations," and "Notifications" are the default ones.

Under Confirmations you can set what happens when a donor submits the donation form. Gravity Forms gives you a couple options here but we typically prefer sending people to a thank you page after the form is successfully submitted. If you don't want to create a thank you page, you can also just choose to display some confirmation text.

If you want to send users to a thank you page, you first need to create a WordPress page to send them to. Here's an example on INN's website. Once the page is created, go back to the settings for the form you're working on and click on "Confirmations". Now, select the "Page" option, choose your page from the drop-down list of pages, and save the confirmation:

A screenshot of the Gravity Forms Confirmation settings, showing the choice of a "Page" confirmation with the "Thank you!" page chosen.

Under the Notifications tab, you can set which email notifications are sent when a form is submitted. Typically, you'll want to send an email to one or more members of your staff to let you know a new donation has been received and also send an email to the donor with the details of their donation.

In the INN example form, two notification emails are sent on submission:

The first one is sent to the site administrator, with a subject of {form_title} and message body of {all_fields}. These form fields surrounded by curly brackets are Gravity Forms' merge tags, and allow you to put information from the form into the email. Note that you can send notifications to multiple email addresses by adding each address separated by commas in the "send to email" field.

The second confirmation is sent to the email address provided by the donor in the form, thanking them for their donation and again using the {all_fields} merge tag so that the donor has a receipt for their donation. Note that you can use conditional logic here if you'd like to send different confirmation messages to different segments of donors based on the information they submitted. So, for example, you could send a different confirmation to one-time vs. recurring donors, or an automated response to low-dollar donors but send yourself a notification for large-dollar donors so you can send them a personalized note.

Set up the link to Stripe

The Stripe connection for each form is configured in the form's settings area, in a "Stripe" tab that appears underneath the "Notifications" tab. The Stripe settings area allows you to create different "Feeds" for this form.

A "Feed" is a type of transaction and a single form can have multiple types of transactions. In the INN example, there are three feeds: a monthly recurring donation, a yearly recurring donation, and a one-time transaction. These donation types are respectively categorized by Stripe as "subscription," "subscription," and "products and services."

You don't have to have multiple feeds. Sometimes all you want to do is create a one-time donation form. If that's the case, you can skip ahead to the "One Time Donation" example below, and ignore the "Conditional Logic" box.

When creating a donation form with multiple donation types, add a "Radio Buttons" element to let the donor choose between "Per Month," "Per Year," and "One Time".

A screenshot of a selection box for choosing between "Per Month", "Per Year", and "One Time"
An example form element with radio buttons.

Then, in the Settings area for this form, create three Stripe feeds, matching the donation types.

A screenshot of the settings page for a Monthly Recurring Stripe feed, showing the options as they are set when the INN form is imported.
A screenshot of the settings page for a One Time Donation Stripe feed, showing the options as they are set when the INN form is imported.

If you have multiple feeds set up (because you have multiple types of donation) make sure to check the "Conditional Logic" box and set it to only process that feed if the value of the "Donation Type" form element you chose is set to the radio button label that corresponds with this donation feed.

The payment amount should be "Form Total" so that you capture the processing fee, if that option is enabled.

You may also want to send over other information to Stripe so that you have other metadata associated with the "customer" record that will be created when a payment is processed. The "metadata" section here allows you to provide the name(s) of the fields as you want them to appear in Stripe and then map those to the form fields submitted on your site.

Once you have this set up, run some test transactions. Make sure the Gravity Forms Stripe Add-On is in "Test" mode by going to Forms > Settings > Stripe and checking the "Test" option for the API. Make sure your API keys are entered, then press the "Update Settings" button. Then grab some testing credit card numbers from the Stripe website and test every option on the form, confirming in the Stripe dashboard that the test transactions completed successfully.
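
If you'd rather sanity-check your test keys outside of WordPress first, you can hit Stripe's charges endpoint directly. This is an illustrative sketch, not part of Gravity Forms: the secret key is a placeholder, and "tok_visa" is one of Stripe's built-in test card tokens, so no real card is ever charged.

```python
import urllib.parse
import urllib.request

TEST_SECRET_KEY = "sk_test_placeholder"  # your Test Secret Key goes here

def dollars_to_cents(amount):
    """Stripe amounts are integers in the smallest currency unit (cents)."""
    return int(round(float(amount) * 100))

def charge_payload(amount, token="tok_visa"):
    """Form-encoded body for POST https://api.stripe.com/v1/charges."""
    return urllib.parse.urlencode({
        "amount": dollars_to_cents(amount),
        "currency": "usd",
        "source": token,
        "description": "Donation form test transaction",
    })

def run_test_charge(amount):
    """Send one test charge to Stripe (requires a valid test key)."""
    req = urllib.request.Request(
        "https://api.stripe.com/v1/charges",
        data=charge_payload(amount).encode(),
        headers={"Authorization": "Bearer " + TEST_SECRET_KEY},
    )
    return urllib.request.urlopen(req).read()
```

The cents conversion is worth noting even if you never run this script: a $20.00 donation arrives at Stripe as the integer 2000, which is how it will appear in the dashboard's raw logs.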

Set up the link to MailChimp

Most of the sites we work with want to also give new donors the opportunity to sign up for one or more mailing lists. We use the Gravity Forms MailChimp add-on to enable this option.

Before you set up a MailChimp feed in Gravity Forms, you'll need to have already created the mailing list you want to add people to and also make sure the fields you want to send over are configured correctly in MailChimp.

Once you have the Gravity Forms MailChimp Add-On installed and enabled, a MailChimp tab will appear in the form's settings section. Here, you can configure which of the submitted form fields match up to MailChimp's fields in its database for subscribers.

There is a checkbox to enable double opt-in, so that the donor receives an email to confirm that they would like to sign up to receive your newsletter, but it's commonly assumed that by checking the "Sign me up" checkbox, your donor has consented to receiving your newsletter.

At the very bottom of the MailChimp settings tab for the form is a checkbox labeled "Conditional Logic." Check this box to show the conditional configuration options. We're going to change the conditions so that the MailChimp integration only sends the email address to MailChimp when a box is checked.

A gif showing how to change the conditional logic settings.
Choose the field name from the left box, the "is" equivalency option from the middle box, and the text of the checkbox from the right box.

Setting up styles for forms

The forms that you've set up above will work as-is, but they're not particularly pretty. Gravity Forms' form HTML is complex and contains some weird opinions. The stylesheets that Gravity Forms includes are similar: weirdly scoped, curiously specific, and often a bit perplexing.

There are several ways to dequeue Gravity Forms' default styles; Zack Katz has an excellent blog post detailing the ways to dequeue them.

However, because Gravity Forms' styles are at least partially useful, we keep them enqueued and modify them. Here are the modifications we make:

These example styles are scoped to the form IDs and they also depend on a wrapper class that can be added to your form settings but they may not be immediately applicable to your form if the form ID or CSS class is different. We hope that these are at least useful as a starting point.

Good luck!

We hope this was a helpful introduction to setting up donation forms using WordPress, Gravity Forms, MailChimp and Stripe.

Again, if you found all of that overwhelming and would like to hire us to help, just reach out and we'd be happy to help get you up and running!

Twitter Removed Counts From Share Buttons, Here’s What You Can Do About It

We've been getting lots of questions about the disappearance of the numerical count of tweets on story pages. For sites using the tweet button provided by Twitter, here's what that looked like until November 20th:

old Tweet button with counter

On November 20th, the tweet count disappeared and it's not coming back. Why? Twitter shut down that feature.

In truth, the value of this particular feature was always rather limited. It was an overly simplistic metric that showed how many people clicked the Tweet button, but didn't include a count of retweets, likes, or replies which can be much more important in measuring reach and impact of any given story. As Twitter explained in their announcement of the changes:

The Tweet button counts the number of Tweets that have been Tweeted with the exact URL specified in the button. This count does not reflect the impact on Twitter of conversation about your content — it doesn’t count replies, quote Tweets, variants of your URLs, nor does it reflect the fact that some people Tweeting these URLs might have many more followers than others.

In our own work, we have also been trying to reduce the number of third-party scripts that are loaded on any given page in the interest of improving load time and protecting users' privacy.

That said, we know that understanding the reach and impact of stories on social media is increasingly important to the publishers we work with, so here are some ways of digging into Twitter analytics that will give you a much better picture than a simple count of how many times a story has been tweeted.

Better Ways to Measure Impact on Twitter

Twitter search

Copy and paste the URL of a story page into the search box on Twitter, and you can see who tweeted the story, when they tweeted it, and how many likes and retweets each tweet got. Twitter search now also lets you filter results to see "top" tweets or a "live" stream of all tweets for a particular search.
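
If you check several stories regularly, the search URL is easy to build programmatically. A small sketch (the "f" filter parameter is our assumption about how Twitter distinguishes "top" from "live" results):

```python
from urllib.parse import urlencode

def twitter_search_url(story_url, live=False):
    """Twitter search results for every tweet linking to `story_url`.
    Pass live=True for the real-time stream instead of "top" tweets."""
    params = {"q": story_url}
    if live:
        params["f"] = "tweets"  # assumption: the filter Twitter uses for "live" results
    return "https://twitter.com/search?" + urlencode(params)
```

Because the URL is the search term, any tracking parameters appended to shared links will fragment your results; search for the canonical URL.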

For each account that tweeted the story, you can then dig a bit deeper to discover how many followers the account has, how many of those followers you know and whether this is someone you might want to reach out to as you try to build a more engaged base of readers.

If you find someone consistently tweeting your stories, you might want to follow them back, add them to a Twitter list, invite them to subscribe to your newsletter or attend an event or just take a minute to say thanks.

Here's an example of such a Twitter search for a recent story on Frontline.

Topsy Social Search

Topsy provides similar functionality in several languages (again, just copy and paste the URL for your story into their search box). If you really just want a numerical count of tweets it gives you that up front, but it also lets you dive deeper to get real insight into your story's reach and impact. Here's a search for tweets and retweets about the same story from Frontline.

Google Analytics

A tweet about your story is nice, but it's even nicer when people who see the tweet click through to your story page. Google Analytics gives you this kind of data and much more.

For an easy overview of all incoming traffic to your site from Twitter, click Acquisition in the Google Analytics reporting sidebar, then on Social -> Network Referrals. You'll probably see Facebook on top, followed by Twitter, Reddit, etc. Click on Twitter and you'll see a list of shared urls from your website. You can see the number of sessions and pageviews for each URL, and importantly the average session duration which tells you something about how people actually engaged with your story and site.

You can drill down much further by tinkering with the various secondary dimension options to see the geographical location of your page visitors, how many used mobile or desktop browsers and many other dimensions too numerous to cover here.

If you want to look up social network referrals for a specific story, click on Behavior in the Google Analytics reporting sidebar, then Site Content -> All Pages. In the search box, paste in the story URL but only include the part of the URL after your site domain name.

Google Analytics data

For example, if the full URL to your story is:

Paste this into the search box:


Hit enter and you'll see the number of pageviews and other traffic data for that story. Click on secondary dimension, and in the dropdown select Social Network. You'll see how many pageviews etc. came from Facebook, Twitter, and any other social sources.

This is Work but It's Important

The above methods give you tons more useful information than the now-defunct simple numerical count. No question some of this is more work, but it can really pay off.

If you know who is reading and sharing your content, you have a chance to more deeply engage with them. And if you know what kind of traffic is coming to which stories from where, you might be able to discern how to better reach different audiences.

It takes time and good judgement to work effectively with the rich data available through these tools, and it can be difficult to fit all this into your other work.

But at the end of the day, it's a lot more useful than a Tweet button.

What are you using to measure your reach and impact on Twitter? Leave a comment and let us know what's worked well for you.

Improve Your Website’s Performance With These Photo Optimization Tips

Much has been written lately about slow page loading times on news websites. People are increasingly consuming news on mobile devices, often with limited bandwidth.

Earlier this year, Google announced that they now use "mobile-friendliness" as a ranking signal in mobile search results and even adding an extra second or two of load time has been shown to increase abandonment rates on websites.

Sites that aren't optimizing for performance on all devices and connection speeds are limiting their own audience growth. Every time someone can't find your site or they're too impatient to wait for a page to load, you're losing a potential reader.

Fortunately, the INN Nerds aren't content to just complain about it, we're here to help fix it!

Let's Start with Photos

The average web page now weighs in at just under 2 MB, and images are the main culprit. Photos on the web are essential elements of storytelling and connecting with your audience. But if your photos aren’t optimized, they can also weigh down your web pages and make them slow to load. To improve the overall performance of your website, photo optimization is a great place to start.

What is Photo Optimization?

Photo optimization involves compressing the file size of a photo using a tool like Adobe Photoshop. We want the highest quality photo with the smallest possible file size. Too much compression can impair the quality of the image. Too little compression can result in a large photo file size which slows the performance of our web page. Optimization is finding the right balance between quality and file size.

Consider these two images:

Photo of Delicate Arch
Not Optimized. Width: 1200px, Height: 800px, File Size: 939 Kilobytes
Delicate Arch in Arches National Park
Optimized. Width: 1200px, Height: 800px, File Size: 107 Kilobytes

The second photo has a file size of less than 12 percent of the first. You can probably see a slight degradation in the photo quality. But most people would not notice the difference between these two on a web page.

On the web we should never use any photo with a file size like 939 Kilobytes. This will slow the loading of the page, especially on slower connections and mobile devices. We want to keep website photos under 100 KB if we can, and much lower for smaller images. For example, here’s the same photo reduced in dimensions:

Delicate Arch in Arches National Park
Not Optimized. Width: 300px, Height: 200px, File Size: 192 Kilobytes
Delicate Arch in Arches National Park
Optimized. Width: 300px, Height: 200px, File Size: 14 Kilobytes

The file size of the second photo is less than 10 percent of the first image, yet most people would see no difference in photo quality. If you have a web page displaying a number of similar-sized images, for example a gallery page or a series of stories with thumbnail images, smaller photo file sizes can add up to a huge reduction in page loading time.

How to Optimize Photos in Photoshop

Best practice for optimization is to start with the highest-quality source photo, then resize and compress it for the web. Start by cropping and resizing the photo for the space it will fill on your web page. If the photo will be displayed in a sidebar widget that’s 300px wide, there’s no reason to upload a photo wider than 300px for that space. Reducing the size of the photo by itself will reduce its file size.

After the photo is cropped and sized, in the File menu go to Export -> Save for Web:

Save for Web dialogue box in Photoshop

Here you can select which photo format to export (always use JPEG for photos), and how much compression to apply. Medium is often the optimum setting, but this is a judgement call. If you don’t see a preview of both the Original photo and the JPEG export, click the 2-Up tab at the top. Now you can try different compression settings and see a preview of the results, including the file size:

Optimized image in Save for Web dialogue in Photoshop

Once you're happy with the image quality and file size reduction, click Save to create your web-optimized photo. This will not affect your original image, which should be archived for possible use in the future.

More Tutorials on Photoshop's Save for Web

You can of course find lots of great Photoshop tutorials online.

Here’s a video that explains how to use Save for Web in Photoshop.

Here’s another really good tutorial on Photoshop’s Save For Web that walks through the process.

Tip: If you like keyboard shortcuts, in Photoshop you can launch Save for Web like this:

  • Command + Shift + Option + s (Mac)
  • Control + Shift + Alt + s (Windows)

Optimizing Photos without Photoshop

If you don’t use Photoshop, there are any number of other tools for optimizing website images, including free online services that let you drag and drop a source photo and download a compressed version of the image. These services usually don’t offer cropping or resizing tools and don’t let you adjust the amount of compression. In our tests, Photoshop does a better job of balancing photo quality and file size, but if you already have a photo sized correctly for your website, they’ll do in a pinch.

If you're comfortable using the command line, there are a number of tools available to you for optimizing different image types.
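
For example, here's a rough equivalent of the resize-then-compress workflow using Python's Pillow imaging library (an assumption on our part: Pillow must be installed, and the width and quality values are starting points to experiment with, not recommendations from Photoshop):

```python
from io import BytesIO
from PIL import Image

def optimize_for_web(src_path, max_width=1200, quality=60):
    """Resize a photo to at most `max_width` pixels wide, then
    re-encode it as a compressed JPEG. Returns the JPEG bytes;
    the original file on disk is left untouched."""
    img = Image.open(src_path).convert("RGB")
    if img.width > max_width:
        # Scale the height to preserve the aspect ratio
        new_height = round(img.height * max_width / img.width)
        img = img.resize((max_width, new_height), Image.LANCZOS)
    buf = BytesIO()
    img.save(buf, "JPEG", quality=quality, optimize=True)
    return buf.getvalue()
```

As in Photoshop, lower quality values mean smaller files; compare the output at a few settings before committing to one.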

Your Photo Workflow

If you've produced photos for print, you know it's important to maintain the highest quality photo throughout the process. But with today's cameras, the highest quality photo is likely to be 5000 pixels wide, and more than 20 Megabytes in file size. Such a photo is great for print, but a problem on the web.

Best practice is to safely store the original photo files in their highest resolution, for the day when you need to resize or reuse them in another context. Use the original photos to crop, size, and export for the web, then keep the originals safe for future use.

Help Improve Our Docs

If you have some favorite tips or tricks for dealing with photos online, or would like to suggest other tools and resources, please leave a comment here!

INN Member Website Review: October 2015

In the realm of nonprofit news, the websites of INN members represent the front end of our digital presence and impact. As the newest member of the Products and Technology team — aka the Nerds — I’m working to get acquainted with our members, and a site review seemed a good way to start. It’s also useful every so often to see what we’re collectively doing on the web as a benchmark for future progress.

My review this month of the 100+ INN Member websites shows a very healthy community. I found thousands of examples of insightful reporting, excellent storytelling, and engaging design. As with any sample of 100 websites there are bound to be things we might improve.

I’d like to suggest three priorities we could work on together over the next year:

  1. Responding to the Mobile Challenge
  2. Going Social
  3. What is Good Design?

Responding to the Mobile Challenge

In State of the News Media 2015, Pew Research Center reports that “39 of the top 50 digital news websites have more traffic...coming from mobile devices than from desktop computers.” Yet a significant number of nonprofit news sites that excel in every other way are not optimized for mobile.

Converting a non-responsive website to cross-device friendliness can be very challenging. The solution used to be providing a “mobile” version along with the “desktop” version of the site. But now with so many different types and sizes of devices and displays, the better practice is to publish a single site for all devices using the techniques of Responsive Web Design.

The speed with which mobile devices have become part of our daily lives is unprecedented in the history of technology. In 1995 there were 80 million mobile phone users worldwide. By 2014 the number of mobile phones reached 5.2 billion, including 2 billion smart phones. The number of smart phone users worldwide is projected to reach 4 billion by 2020.

The smart phone is changing the way people engage with media and each other. In a recent Zogby Analytics survey of millennials, 87 percent said “my smart phone never leaves my side.” 78 percent spend more than two hours a day using their smart phone and 68 percent prefer using their phone over a laptop or desktop computer.

But it’s not just younger demographics who are increasingly going mobile. Since 2008 time spent per day with digital media has more than doubled for all U.S. age groups. As highlighted by Mary Meeker in her Internet Trends 2015 report, almost all of this increase is from media consumption on smart phones.

The integration of smart phones with everyday life is rapidly changing the way people discover, consume, and share news. The urgency of addressing any mobile gap can’t be overstated.

Going Social

Social media have become increasingly important for discovery and sharing of content, with nearly half of digital news-consuming adults saying they use Facebook every week to get news about government and politics. But in some cases social media integration on news sites remains problematic, with bloated tracking scripts or missing Open Graph metadata needed for effective engagement.

I suspect many of us are concerned about the intrusiveness of the big social media players. It’s in their interest to make it easy to share our content on their platforms. This helps us reach new audiences and expand our news impact. But we also understand that their business model is predicated on harvesting as much personal information as possible about the people who visit our websites.

Many of the free widgets we embed on our sites make it easy for people to share our content, at the cost of exposing data about their interests and behavior. Social widgets can also slow website performance. The leading social media players and technologies keep changing. In this environment, developing best practices around social media is very challenging.

What is Good Design?

I’ve been a news professional for 28 years, and a web designer for the past 15. I think design without good content is wasted space. Good reporting on a flawed website can have great impact. But good design applied to great content can make a huge difference.

Ideas about what constitutes “good web design” have changed dramatically over the past decade, and will continue to evolve over the next. Fashions aside, we have learned fundamental lessons about what works for website users. We know people don’t like feeling lost or confused. They don’t enjoy struggling past obstacles to simply read a story.

Website designs can inflict many distractions on visitors in an effort to control their attention. Sometimes it’s important to get across, for example, the idea that our organization needs their support. But if we do this in a way that frustrates our users, we’re designing at cross purposes.

Each of us understands this from our own experience. We decide every moment whether to stay on a web page or direct our attention somewhere else. Something is always competing for our attention. As storytellers and designers, our job is to win that competition.

We can help our audiences by providing a distraction-free space to engage with our content. I like the phrase “radical clarity” as an aspiration for our websites, especially story pages. Mobile has forced us to rethink designs that present too much information for a small screen, and we need to carry that thinking over to larger displays as well.

Solving everything now

Building anything of enduring value almost always takes more time than you want it to. The corpus of INN Member websites represents a tremendous amount of work by their creators, and great value to their audiences. As a website builder I know that work is never done.

My hope is that a year from now we can repeat this review and see clear signs of progress, especially in the areas of mobile friendliness, social media optimization, and clarity of design. The INN Nerds will do what we can to help. And I'll be writing with more details and actions we can take to address these priorities in the coming weeks.

A Helpful Command For Searching Git Submodules

Sometimes you need to search through a git repository containing sixty directories. Each directory is a separate git submodule, meaning each is a full-fledged git repository in its own right, unconcerned with its siblings or parent repository.

I was filing a pull request affecting a CSS file in one of the repositories, so I needed to check all the others. They're WordPress child themes, and the pull request was against their parent theme; past changes to the parent have conflicted with CSS in some of the child themes.

The easiest way to check for clashing styles was to search for selectors in each theme, and the easiest way to do that is with a tool like grep.

git-grep is specifically built to search within git repositories, but it doesn't search within submodules, and it doesn't have a way to limit the search by filename or filetype.

ack and ag are popular grep replacements, but I'm not familiar with either.

Here's my solution:

#!/bin/bash

if [ -z "$1" ] || [ -z "$2" ]; then
	echo "Usage: deepgrep <filetype> <string to search for>"
	exit 1
fi

find . -type f -name "*.$1" -print0 | xargs -0 grep --color=auto "$2"

Save the file as 'deepgrep' somewhere in your $PATH and make it executable with chmod +x deepgrep. Mine's in a folder in my dotfiles.

With deepgrep css .is-video I was able to see that there were indeed no conflicts, and that it was safe to submit the pull request.

If you don't want to bother with saving the file, you could set an alias in your terminal, or run this in the terminal directly:

find . -type f -name "*.css" -print0 | xargs -0 grep --color=auto -0 "The string you want to search for"
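If you want to see how this behaves before pointing it at a real project, here's a tiny self-contained demo. The directory, file, and selector names below are all made up for illustration:

```shell
# Work in a scratch directory so we don't touch anything real.
mkdir -p /tmp/deepgrep-demo/demo-theme /tmp/deepgrep-demo/other-theme
cd /tmp/deepgrep-demo
printf '.is-video { display: none; }\n' > demo-theme/style.css
printf '.headline { color: red; }\n' > other-theme/style.css

# Search every .css file below the current directory for the selector.
find . -type f -name "*.css" -print0 | xargs -0 grep --color=auto ".is-video"
# → ./demo-theme/style.css:.is-video { display: none; }
```

Because grep receives more than one file from xargs, it prefixes each match with the file name, which is exactly what you want when hunting for clashes across themes.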

If you prefer ag, try ag -U -G '\.<filetype>$' '<string to search for>' — the -G flag limits the search to file names matching a pattern, and -U tells ag to ignore VCS ignore files.

Happy searching!

What You Don’t Know Can’t Hurt You…Unless You Don’t Ask

We were talking with a respected INN member during the Nerds’ open office hours last week. While asking a question about how to do something on his site, he said a couple of times that he doesn’t know much about website coding. But it struck me that he clearly does know a lot; he just didn’t know the answer to this particular question.

I have seen this behavior in many other people, and also in myself. When talking with people we believe know much more than us about a given topic, we sometimes minimize our knowledge up front.

I suspect we do this because we have learned from past experience that people sometimes use their status as experts to belittle us. This kind of behavior is common, especially in technical fields. Saying “I don’t know much” is a smart strategy if we suspect the expert will act like a jerk in response to our question. For many of us it's a defense reflex.

I can safely say that none of the INN Nerds will ever treat you this way. We welcome questions from all members and constituents from any level of technical knowledge, and it’s in our DNA to not act like jerks.

Not acting like a jerk is also hard-coded in the INN technology team manifesto, which outlines how and why we work. We hold ourselves accountable to this, and you should, too. Here are a few excerpts:

  • We’ll invest in education: creating curriculum and training for members, investing in our apprentices/students, and pursuing continuing education opportunities for ourselves.
  • We will be open to new tools and processes, resisting the stale comfort of “this is how we’ve always done it.”
  • We won't use snark or pedantry to exclude people from conversations.
  • We’ll never judge you or shame you for not knowing something.
  • We won’t feign surprise or jump into conversations with Well, actually...
  • In emergencies, we will send pie.

Because news technology is changing so rapidly, there are many reasons for each of us to feel we don’t know as much as we should. The pace of change is also precisely why we should ask many questions, even at the risk of exposing what we don’t know. Our guest during office hours did exactly that, and deserves to have his question (and his many other contributions as a professional) treated with respect. We will always do that.

When it comes to the web and digital technology, each of us is somewhere on the learning curve. The value of a community like the one we’ve got is that we can help each other gain the knowledge we need to improve and sustain our work. At a time like this, we should make extra efforts to communicate and collaborate.

So please use the Largo Help Desk for any site problems or requests, email us for anything general, and sign up any time for open office hours. We’ll never shame you for not knowing something, and might even have some dumb questions ourselves.

How I Learned to Stop Worrying and Backup the Server

Server system administration is not my primary interest. For most of our services, we prefer to offload the headache of system administration to the fine folks at WP Engine. But we also run web services outside of WordPress, including our Help Desk and Knowledge Base, which run on Jira and Confluence hosted on an EC2 instance from Amazon Web Services.

We gain control over what we can run on the server, but the price is lost sleep, because we have to manage our own backups. EC2 makes it fairly easy to do backups manually: every EC2 instance uses a virtual disk volume from the Elastic Block Store (EBS) service, and an EBS volume can be backed up at the block level (like a full hard disk clone) using the Snapshots feature. But manual backups are not what we want, because they still rely on one of us remembering to do them, which is error-prone and annoying.

Instead, we set up the EC2 instance to snapshot itself automatically every day using cron, and to email us a summary of the task.

Who's behind the camera?

The server may be snapshotting itself, but that requires an AWS user account with permission to manage snapshots. In the identity management section of AWS, called IAM, I added a group called "Snapshotters":

Groups in AWS IAM Console

To this group, I added an inline policy called CreateDeleteSnapshots where I define what permissions are available to users who will belong to this group:

IAM Group Policy - CreateDeleteSnapshots

The actual content of the policy is:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "Stmt1426541828000",
            "Effect": "Allow",
            "Action": [
            "Resource": [

The Action and Resource entries are the most important bits. The Action list is the smallest set of permissions needed to perform snapshots automatically, and the Resource is set to * so the policy works on all of our servers.

I created a user called "snapshots" and added it to the Snapshotters group, so the "snapshots" user has all of those privileges I set up for the group.

AWS IAM User list of groups

Robots Taking Snapshots

It's all well and good to have a user to do the snapshotting, but we need a snapshotting mechanism too. Doc Brown isn't time traveling without the DeLorean. We can create a script to take care of the snapshots for us, and then set it to run periodically using cron.

The AWS CLI tools are a great set of utilities for administering all kinds of AWS services from the command line, including snapshotting. I installed them easily on our Ubuntu server with sudo pip install awscli, which places a single executable at /usr/local/bin/aws.

Breaking down the steps, we need to:

  1. Identify the current EC2 instance ID. One strategy is to use the ec2metadata command that's built into all Ubuntu EC2 servers since Ubuntu 12.04. Here's another way to do it.
  2. Identify the volume ID of the EBS volume attached to this instance and acting as the main drive (i.e. /dev/sda1). We can use the ec2 describe-volumes command here.
  3. Create the snapshot, using the predictably-named ec2 create-snapshot command.
  4. Identify the snapshot IDs for all snapshots associated with this EBS volume and sort them by date. We can use the command ec2 describe-snapshots.
  5. We want to delete all but the 3 most recent snapshots. We can do this using ec2 delete-snapshot.

The meat of the script is:

# How many recent snapshots to retain
NUMBER_OF_SNAPSHOTS_TO_KEEP=3;
export DATE_STR=`date +%y.%m.%d.%I`;
export INSTANCE_ID=`ec2metadata --instance-id`;
# Get the ID of the volume mounted as the root device on this instance
export VOLUME_ID=`/usr/local/bin/aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$INSTANCE_ID Name=attachment.device,Values=/dev/sda1 --query 'Volumes[*].{ID:VolumeId}' | grep ID | awk '{print $2}' | tr -d '"'`
echo "Initiating EBS volume snapshot of volume $VOLUME_ID attached to instance ID $INSTANCE_ID...";
/usr/local/bin/aws ec2 create-snapshot --volume-id $VOLUME_ID --description $VOLUME_ID;
echo "Done.";
echo "Deleting old snapshots...";
# Get any snapshots older than the last $NUMBER_OF_SNAPSHOTS_TO_KEEP
for SNAPSHOT_ID in `/usr/local/bin/aws ec2 describe-snapshots --filters Name=volume-id,Values=$VOLUME_ID --query 'Snapshots[*].{ID:SnapshotId}' | grep ID | head -n -$NUMBER_OF_SNAPSHOTS_TO_KEEP | awk '{print $2}' | tr -d '"'` ; do
    echo "Deleting snapshot $SNAPSHOT_ID...";
    /usr/local/bin/aws ec2 delete-snapshot --snapshot-id $SNAPSHOT_ID;
done
echo "Done.";

In order for this to work in the context of a cron job, we need to set the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION (this last one because our EC2 region, us-west-2, is different from the default of us-east-1) according to this AWS CLI guide. I also need to set the PATH to include /usr/local/bin, the location of the aws command.
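Concretely, the top of the cron-run script (or the crontab itself) can set these up. The access key and secret below are Amazon's well-known documentation placeholders, not real credentials; substitute your own:

```shell
# Credentials and region for the AWS CLI when run from cron.
# Placeholder values from the AWS docs -- substitute your own.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFyEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-west-2"

# cron's default PATH is minimal; make sure the aws executable is found.
export PATH="$PATH:/usr/local/bin"
```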

Scheduling for Peace of Mind

I saved this script in a folder full of cron scripts in our home folder, /home/newsapps/cron/, and made it executable with chmod +x. I scheduled it with cron to run every day at midnight server time, using crontab -e and adding the following lines (this crontab generator helped greatly):

MAILTO=
# EC2 EBS Snapshot -- run once a day
0 0 * * * /home/newsapps/cron/

The extra MAILTO= line is what emails us the output of the script. The only trick to getting that to work is that I had to install a mail server. I innocently installed the mail program with sudo apt-get install mail and in the process installed the postfix mail server, which I then configured with our fully-qualified domain name.

iPhone Screenshot of email from cron job

Et voilà! I can check from my phone that the server backed itself up, and I can go run carefree through a field full of daisies in my dreams.


Contributing to the INN Nerds docs repo using

This guide is for those of us who are not regular Terminal/shell users and those who do not want to or can't use the Github client to make changes to a Github repository.

While this is written specifically for folks that want to contribute to our docs repo, the steps are the same for any repo.

Keep in mind that our docs repo is just a collection of Markdown files, making it an ideal candidate for this approach to contributing.

Since you can't run your code on to test changes, the potential to introduce bugs by editing code directly on seems extraordinarily high, in my opinion. If you have a repo with actual code, I'd suggest learning command-line git or diving into the Github client (for Windows or for Mac). The learning curve is steeper, but the tools are more powerful.

Getting started

First of all, you'll need a account if you don't already have one. It only takes a minute. We'll wait for you.

After you have your account set up, go to the INN docs repository.

From there, you can use the "Fork" button in the top right to create a full copy of the repository under your Github account.

Fork this repo

If you are a member of multiple organizations on Github, you will be asked where you'd like to fork the repository. In this case, I want my fork of the docs repository to live under my personal Github account:

Choose account for fork

After you choose an account, Github creates your new fork and drops you on its homepage. Notice the URL and the message just below the title of the fork indicate that this is a derivative of another repository.

Fork homepage

Step one: complete. Not so bad, right?

Editing a file

Now let's make a simple change to one of the files in our fork of the repository. In this case, we're making a superficial change to the file as an example.

From the homepage of the fork, find the link and click it:

Once the has loaded, look for the edit icon near the top right of the page:


Once you click the edit icon, you are presented with a text editor where you can make changes to


Towards the bottom of the file, I notice that the spacing between the heading Version 0.1 and the list that follows is not consistent with the style of the rest of the document. So, I add a new line after Version 0.1.

Before and after:

Before and after
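In Markdown terms, the fix is simply a blank line between the heading and the list that follows it. The heading level and list items below are invented for illustration:

```markdown
#### Version 0.1

- First changelog item
- Second changelog item
```

Without that blank line, some Markdown renderers run the list straight into the heading, which is the inconsistency being fixed here.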

Next, we must add a message to go along with our change. At the bottom of the page, you'll see an area labeled "Commit changes". In the first input, you should provide a brief but sufficiently descriptive message regarding your change, so that anyone browsing the history of changes made to the repository will have an idea of what changed and when. For example:

Commit message complete

The secondary input is optional. I'm going to skip it here.

Once your message is in place, click "Commit changes".

Submitting a pull request

Great! So far you have your own fork of the repository and you're making changes left and right. But how do we let the maintainers of the original repository know that we have lots of good stuff for them to incorporate in the original?

The answer: submit a pull request.

To do so, go to the homepage of your fork. Towards the top right of the page, find the "Pull request" link and click it.

Pull request

On the page that follows, you'll see something like:

Pull request details

If you've followed these instructions, the pull request page should default to settings that essentially say, "Submit a request to the maintainers of the original repository, asking them to pull in the changes made to my fork of this repository."

We'll gloss over the details of what each of the dropdowns at the top of the page can do for the pull request's settings. Let's stick with Github's sensible defaults for now.

Towards the bottom of the page, you can see the "diff" (i.e., difference) between the original version of the file (left) and the changes made to the file in your fork.

Now, click the big green "Create pull request" button at the top.

You'll be presented with another dialog, asking you to include any relevant details about your changes, consequences it might have for other areas of the repository, why your change is the "right way" to do it, etc.

Pull request details

Once you add a sufficient message, click "Create pull request" (this time, with feeling!).

Hooray! We now have an open pull request on the original repository's page:

Open pull request

Note that the URL and the repository title no longer reflect our Github account or fork of the repository.

What happens next?

When you submit a pull request, a notification is sent to the maintainer(s) of the repository. They can then review your proposed changes and either merge (i.e., pull) your changes back into the original repo or reject them.

In this case, since I am a maintainer for the original repository, the pull request page in the screenshot above displays a "Merge pull request" button. This will not appear if you do not have permission to make changes directly to the original repository.

Congratulations, you're now well on your way to contributing to our docs repo (or any other repo, really). Drop us a line in the comments or @INNnerds with your questions and pro tips for contributing to projects using

OS X Setup for News Apps Development

I have the good fortune to be working with a team that values productivity by providing me with an Apple laptop. OS X works really well for what we do and matches the way my brain works. I like to have the power of Unix under the hood, along with the inspiring design of the Apple operating system signature look and feel.

Starting with a fresh Apple MacBook Pro, delivered Monday morning while I was introducing myself in our daily video scrum, here's what I immediately installed to get to 95% of what I need to contribute to the INN Nerds projects.


Disk Encryption

It's a really good idea to encrypt the hard drive using the FileVault feature, and it's offered by default on a new OS X setup. By default, this uses your iCloud password for encryption. Set your password to something challenging, which you should be doing anyway.


Backups

After you encrypt your drive, it's imperative that you have a regular backup strategy. I worked for 11 years in tech support, helping people recover data from crashed laptop hard drives (including laptops accidentally run over in the car), and recovery was only possible if the drive wasn't encrypted. Even then the odds were pretty bad, but the people who were already backing up didn't skip a beat in getting back to work. Your laptop is not your work; it's just a handy set of tools.

If nothing else, get an external hard drive that you leave at home and set up Time Machine to back up to it. Set a calendar reminder to do this regularly (aim for doing this daily). You could also back up to a Network Attached Storage on your home network, which would work over the wireless connection. Apple sells one called an AirPort Time Capsule.

Any files of shared interest to the company should live in Dropbox; work on them there. For your coding projects, be sure to regularly push your commits or branches to Github/Bitbucket.

System Updates

Install all OS X system software updates, all of them until you go blue in the face. At no time other than right now, with a computer that has nothing interesting or fun running on it yet, is it going to be less annoying to do a series of reboots. Just get it over with; install other software while this is going on, but reboot as often as you need in order to get up to date. Putting this off just leads to pain later.

System Preferences

Look through the OS X System Preferences and make a few choices. Perhaps you have personal preferences about how notifications do/don't appear, what the screensaver looks like, what hot corners do (what happens when you put your mouse cursor in the corner of a screen), or that the "Quack" sound should be used for all alerts. You can always make adjustments later, but you might as well explore what the computer can do. You chose to make news apps because you are curious and want to change things, remember?

The System Preferences window in OS X 10.10 Yosemite.

Web Browsers

Install additional web browsers. Safari is a fine browser, but when writing and testing web applications, it makes no sense to have only one browser installed. I install Google Chrome and Mozilla Firefox. Choose your preferred default browser and make sure you never get nagged again by the others.

FTP Client

You may not need the FTP client immediately, but when you do need to access an FTP server you'll be glad you already have a client. You can use FTP from the command line, but I find that experience to be akin to doing a road trip in a horse and buggy cart. Do yourself a favor, install Cyberduck, and ride in style with air conditioning and power steering.

Text Editor

If you already have a favorite text editor, feel free to skip this section. If you are open to trying new things, or are looking for a recommendation, then I highly recommend you install Sublime Text. It's free to evaluate, beginner-friendly, and using it makes me feel like I'm driving a space ship. I originally installed it for the color schemes, which abound.

I also recommend installing the Package Control for Sublime Text, which then gives you access to a bunch of nifty tools that plug into the editor.

Terminal Emulator

The default terminal emulator that comes with OS X is okay, but I like pretty color schemes, and those seem easier to set up in iTerm2. It also offers more options, it's pretty popular, and it's free.

With either the default terminal emulator or iTerm2, create a new terminal and install some things you'll use there.

Command Line Utilities

Start with the command-line tools you'll need for further terminal goodness. You can install them by running xcode-select --install and following the instructions that appear.

Homebrew is so useful, I wouldn't be surprised if it was included in OS X in the future. It's a command-line package manager with a simple interface that lazy people like me can use. When I want to install a new CLI tool, and I don't want to deal with the tomfoolery of finding all the dependencies and the latest download link, I can usually do it with Homebrew. Get Homebrew with ruby -e "$(curl -fsSL".

Let's install MySQL so we can work with MySQL databases in the future: brew install mysql. Yup, that's it. After the install completes, it prints some instructions about starting the MySQL server automatically or manually.

Later, we'll be using Python for a few projects, and the way to keep Python library versions and environments organized with different projects is to use virtualenv and virtualenvwrapper. To install those, we need to first install the pip Python package manager, and then we'll install the virtualenv packages with pip. Run sudo easy_install pip && sudo pip install virtualenv virtualenvwrapper in your terminal.

If you installed Sublime Text, it's nice to be able to invoke it from the command line, like subl <filename>. We can create a subl alias by using these instructions.

We might as well generate an SSH key now, which we'll eventually use with Bitbucket and Github so we don't have to log in from the command line when pushing to those repositories. Generate a key pair with the ssh-keygen command. The contents of the public key file it creates (~/.ssh/ by default) are what you paste into your Github and Bitbucket account settings.

Since we're on the topic of git, you should globally configure the name and email you'll be using with Github/Bitbucket. Following these instructions from the official Git documentation book, you can do it like this:

git config --global "Nick Bennett"
git config --global ""

Virtual Machines

We use virtual machines to set up simulated environments of the public web servers we ultimately deploy to, mirroring in many ways the setup of that final environment but with greater speed and control. We use the free and open source project VirtualBox as a host for our virtual machines, in conjunction with Vagrant which gives us a scripted way to efficiently create those virtual machines.

For a real example of using VirtualBox and Vagrant, check out our deploy-tools project on Github; we use this for every one of our WordPress site projects. If you are developing a WordPress site, I highly recommend checking this out to smooth your development and deployment process.


Collaboration Tools

Working remotely full-time only works if we're all in constant communication. We use a host of tools for this, most of them browser-based. You can use HipChat in the browser too, but I strongly recommend installing it as a native client. Among other great things, HipChat enables sharing animated GIFs like this one:

Captain Kirk in a rain of Tribbles.

Dropbox is another tool we use that really begs for a native client to be installed. Sharing files through email or instant messaging kinda stinks. Dropbox is like the shared network drive so common on Windows-based networks, only the files actually live on your computer instead of being accessed through some tenuous ethereal connection. This is also why FileVault hard drive encryption is so important.

Along with being able to share files and words, we need to be able to share secrets, aka passwords and logins. We use a 1Password vault, available as a free 30-day trial followed by a $50 purchase. For keeping the keys to our organization's fleet of virtual facilities safe, that's a minor expense.

I will cover personal preferences in a future post or series of posts, when the whole INN team is able to weigh in on their personal software recommendations, what assists each of them in their work, and their work-in-progress configurations.

Other Tools

I mentioned this briefly in the article but I want to point it out again: the other nerds at INN have documented the tools we use for collaboration, starting with HipChat and including several others, that we recommend to others to enable highly distributed and productive teams. This is all a work in progress, as we frequently re-evaluate what best fits our changing needs.

Beyond the collaboration tools, check out this list we maintain of free or discounted tech tools available to non-profit news organizations.

I would also recommend you take a look at How to Setup Your Mac to Develop News Applications Like We Do on NPR Apps' blog, which helped me get started.

Please comment and share your tips for getting started, links to your own guide or others, and share your recommended software and configs.

How to debug email-related theme and plugin functionality in WordPress

Let's say you're having trouble with a WordPress theme or plugin that uses wp_mail. How can you inspect the email that wp_mail composes or verify that it is actually sending (or at least attempting to send)?

Well, one simple way is to tell WordPress to use Python's smtpd DebuggingServer to "send" email.

The DebuggingServer doesn't actually send email, so don't go checking your inbox. It's only meant to show you the email that would be sent, headers included, if it were an actual SMTP server.

Note that this guide assumes you're debugging wp_mail issues during local development.

Let's get started.

Set up the smtpd DebuggingServer

If you have Python installed (comes with Mac OS X and most distributions of Linux by default), this is the one-liner you can use to get the debugging mail server running. From the command line, run:

$ python -m smtpd -n -c DebuggingServer localhost:1025

So that you don't have to remember that command, you can add an alias to your shell profile (e.g., ~/.profile), making it super easy to run the debugging mail server at a moment's notice.

To do this, open your shell profile in your favorite text editor and add the following line:

alias mailserve='python -m smtpd -n -c DebuggingServer localhost:1025'

Save your shell profile and source it in your shell to make sure the new mailserve alias is available:

$ source ~/.profile

Note: ~/.profile is probably the most common shell profile location. If you don't have this file, you can create one by running:

$ touch ~/.profile

Keep in mind that you might already have a shell profile for your specific shell. For example, ~/.bashrc for bash or ~/.zshrc for zsh. If you have a ~/.bashrc or ~/.zshrc, you can try adding the mailserve alias to one of them instead.

Once you have the mailserve alias defined and your profile sourced, running the server is as simple as:

$ mailserve

Note: there won't be any output from running this command initially. The debugging server runs, waiting for an application to connect and attempt to send a message.

Tell WordPress to send mail via the DebuggingServer

Now, in your WordPress theme or plugin, you can add some debugging code that will tell WordPress to send email via the debugging server you have running.

To accomplish this, add the following code to your theme's functions.php or to your plugin's main file:

function phpmailer_debug_settings($phpmailer) {
    $phpmailer->isSMTP(); // make sure PHPMailer uses SMTP rather than PHP's mail()
    $phpmailer->Host = 'localhost';
    $phpmailer->Port = 1025;
}
add_action('phpmailer_init', 'phpmailer_debug_settings');

This code changes the configuration of the $phpmailer object used by wp_mail, telling it to use the SMTP server on localhost, port number 1025. If you look back at the Python command used to fire up the debugging mail server, you'll see the $phpmailer settings correspond to the arguments passed in that command:

$ python -m smtpd -n -c DebuggingServer localhost:1025

Once you have the debugging mail server running and the code above included in your theme/plugin, you can try sending mail with WordPress and see the entire message contents, SMTP headers, etc in your shell. Here's some example output:

vagrant@precise64:~$ mailserve
---------- MESSAGE FOLLOWS ----------
Date: Thu, 12 Mar 2015 16:21:54 +0000
Return-Path: <>
To: "\"Investigative News Network\"" <>
From: Ryan <>
Subject: [INN Website Contact] This is a test email subject line
Message-ID: <>
X-Priority: 3
X-Mailer: PHPMailer 5.2.7 (
X-Mailer: WP Clean-Contact (
MIME-Version: 1.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is the test email body.

------------ END MESSAGE ------------

Why do I need this?

This can be helpful if you're trying to track down missing parts of an email (e.g., hey, where'd my "from" address go?) or need to verify the contents or formatting of an email that your theme/plugin sends to users.

Keep in mind that, although this post describes how to use the Python smtpd DebuggingServer with WordPress, you can also use this guide with other applications as long as you can configure said applications to connect to the DebuggingServer.