Twitter Removed Counts From Share Buttons, Here’s What You Can Do About It

We've been getting lots of questions about the disappearance of the numerical count of tweets on story pages. For sites using the tweet button provided by Twitter, here's what that looked like until November 20th:

old Tweet button with counter

On November 20th, the tweet count disappeared and it's not coming back. Why? Twitter shut down that feature.

In truth, the value of this particular feature was always rather limited. It was an overly simplistic metric that showed how many people clicked the Tweet button, but didn't include a count of retweets, likes, or replies, which can be much more important in measuring the reach and impact of any given story. As Twitter explained in their announcement of the changes:

The Tweet button counts the number of Tweets that have been Tweeted with the exact URL specified in the button. This count does not reflect the impact on Twitter of conversation about your content — it doesn’t count replies, quote Tweets, variants of your URLs, nor does it reflect the fact that some people Tweeting these URLs might have many more followers than others.

In our own work, we have also been trying to reduce the number of third-party scripts that are loaded on any given page in the interest of improving load time and protecting users' privacy.

That said, we know that understanding the reach and impact of stories on social media is increasingly important to the publishers we work with, so here are some ways of digging into Twitter analytics that will give you a much better picture than a simple count of how many times a story has been tweeted.

Better Ways to Measure Impact on Twitter

Twitter search

Copy and paste the URL of a story page into the search box on Twitter, and you can see who tweeted the story, when they tweeted it, and how many likes and retweets each tweet got. Twitter search now also lets you filter results to see "top" tweets or a "live" stream of all tweets for a particular search.

For each account that tweeted the story, you can then dig a bit deeper to discover how many followers the account has, how many of those followers you know and whether this is someone you might want to reach out to as you try to build a more engaged base of readers.

If you find someone consistently tweeting your stories, you might want to follow them back, add them to a Twitter list, invite them to subscribe to your newsletter or attend an event or just take a minute to say thanks.

Here's an example of such a Twitter search for a recent story on Frontline.

Topsy Social Search

Topsy provides similar functionality in several languages (again, just copy and paste the URL for your story into their search box). If you really just want a numerical count of tweets it gives you that up front, but it also lets you dive deeper to get real insight into your story's reach and impact. Here's a search for tweets and retweets about the same story from Frontline.

Google Analytics

A tweet about your story is nice, but it's even nicer when people who see the tweet click through to your story page. Google Analytics gives you this kind of data and much more.

For an easy overview of all incoming traffic to your site from Twitter, click Acquisition in the Google Analytics reporting sidebar, then Social -> Network Referrals. You'll probably see Facebook on top, followed by Twitter, Reddit, etc. Click on Twitter and you'll see a list of shared URLs from your website. You can see the number of sessions and pageviews for each URL, and, importantly, the average session duration, which tells you something about how people actually engaged with your story and site.

You can drill down much further by tinkering with the various secondary dimension options to see the geographical location of your page visitors, how many used mobile or desktop browsers and many other dimensions too numerous to cover here.

If you want to look up social network referrals for a specific story, click on Behavior in the Google Analytics reporting sidebar, then Site Content -> All Pages. In the search box, paste in the story URL but only include the part of the URL after your site domain name.

Google Analytics data

For example, if the full URL to your story is:

http://nerds.inn.org/2015/10/27/inn-member-website-review-october-2015/

Paste this into the search box:

/2015/10/27/inn-member-website-review-october-2015/

Hit enter and you'll see the number of pageviews and other traffic data for that story. Click on secondary dimension, and in the dropdown select Social Network. You'll see how many pageviews etc. came from Facebook, Twitter, and any other social sources.

This is Work but It's Important

The above methods give you tons more useful information than the now-defunct simple numerical count. No question some of this is more work, but it can really pay off.

If you know who is reading and sharing your content, you have a chance to more deeply engage with them. And if you know what kind of traffic is coming to which stories from where, you might be able to discern how to better reach different audiences.

It takes time and good judgement to work effectively with the rich data available through these tools, and it can be difficult to fit all this into your other work.

But at the end of the day, it's a lot more useful than a Tweet button.

What are you using to measure your reach and impact on Twitter? Leave a comment and let us know what's worked well for you.

Improve Your Website’s Performance With These Photo Optimization Tips

Much has been written lately about slow page loading times on news websites. People are increasingly consuming news on mobile devices, often with limited bandwidth.

Earlier this year, Google announced that they now use "mobile-friendliness" as a ranking signal in mobile search results, and even adding an extra second or two of load time has been shown to increase abandonment rates on websites.

Sites that aren't optimizing for performance on all devices and connection speeds are limiting their own audience growth. Every time someone can't find your site or they're too impatient to wait for a page to load, you're losing a potential reader.

Fortunately, the INN Nerds aren't content to just complain about it, we're here to help fix it!

Let's Start with Photos

The average web page now weighs in at just under 2 MB, and images are the main culprit. Photos on the web are essential elements of storytelling and connecting with your audience. But if your photos aren’t optimized, they can also weigh down your web pages and make them slow to load. To improve the overall performance of your website, photo optimization is a great place to start.

What is Photo Optimization

Photo optimization involves compressing the file size of a photo using a tool like Adobe Photoshop. We want the highest quality photo with the smallest possible file size. Too much compression can impair the quality of the image. Too little compression can result in a large photo file size which slows the performance of our web page. Optimization is finding the right balance between quality and file size.

Consider these two images:

Photo of Delicate Arch
Not Optimized. Width: 1200px, Height: 800px, File Size: 939 Kilobytes
Delicate Arch in Arches National Park
Optimized. Width: 1200px, Height: 800px, File Size: 107 Kilobytes

The second photo has a file size of less than 12 percent of the first. You can probably see a slight degradation in the photo quality. But most people would not notice the difference between these two on a web page.

On the web we should never use any photo with a file size like 939 Kilobytes. This will slow the loading of the page, especially on slower connections and mobile devices. We want to keep website photos under 100 KB if we can, and much lower for smaller images. For example, here’s the same photo reduced in dimensions:

Delicate Arch in Arches National Park
Not Optimized. Width: 300px, Height: 200px, File Size: 192 Kilobytes
Delicate Arch in Arches National Park
Optimized. Width: 300px, Height: 200px, File Size: 14 Kilobytes

The file size of the second photo is less than 10 percent of the first image, yet most people would see no difference in photo quality. If you have a web page displaying a number of similar-sized images, for example a gallery page or a series of stories with thumbnail images, smaller photo file sizes can add up to a huge reduction in page loading time.

How to Optimize Photos in Photoshop

Best practice for optimization is to start with the highest-quality source photo, then resize and compress it for the web. Start by cropping and resizing the photo for the space it will fill on your web page. If the photo will be displayed in a sidebar widget that’s 300px wide, there’s no reason to upload a photo wider than 300px for that space. Reducing the size of the photo by itself will reduce its file size.

After the photo is cropped and sized, in the File menu go to Export -> Save for Web:

Save for Web dialogue box in Photoshop

Here you can select which photo format to export (always use JPEG for photos), and how much compression to apply. Medium is often the optimum setting, but this is a judgement call. If you don’t see a preview of both the Original photo and the JPEG export, click the 2-Up tab at the top. Now you can try different compression settings and see a preview of the results, including the file size:

Optimized image in Save for Web dialogue in Photoshop

Once you're happy with the image quality and file size reduction, click Save to create your web-optimized photo. This will not affect your original image, which should be archived for possible use in the future.

More Tutorials on Photoshop's Save for Web

You can of course find lots of great Photoshop tutorials online.

Here’s a video from Lynda.com that explains how to use Save for Web in Photoshop.

Here’s another really good tutorial on Photoshop’s Save For Web that walks through the process.

Tip: If you like keyboard shortcuts, in Photoshop you can launch Save for Web like this:

  • Command + Shift + Option + s (Mac)
  • Control + Shift + Alt + s (Windows)

Optimizing Photos without Photoshop

If you don’t use Photoshop, there are any number of other tools for optimizing website images.

Compressor.io is a free online tool. You can drag and drop a source photo into it, and download a compressed version of the image. Compressor.io doesn’t have any cropping or resizing tools, and you can’t adjust the amount of compression. In our tests, Photoshop does a better job of balancing photo quality and file size. But if you have a photo sized correctly for your website, it’ll do in a pinch.

If you're comfortable using the command line, there are a number of tools available to you for optimizing different image types.
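For example, here's a rough sketch of what that can look like with a few widely used open-source tools (jpegoptim, optipng, and ImageMagick); the filenames and quality settings below are just placeholders, and results will vary by image:

# Install the tools with Homebrew
brew install jpegoptim optipng imagemagick

# Resize a large original down to 1200px wide and recompress it
convert original.jpg -resize 1200x -quality 70 optimized.jpg

# Strip metadata and cap JPEG quality in place
jpegoptim --strip-all --max=80 optimized.jpg

# Losslessly squeeze a PNG
optipng -o2 logo.png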

Your Photo Workflow

If you've produced photos for print, you know it's important to maintain the highest quality photo throughout the process. But with today's cameras, the highest quality photo is likely to be 5000 pixels wide, and more than 20 Megabytes in file size. Such a photo is great for print, but a problem on the web.

Best practice is to store the original photo files safely at their highest resolution, for the day when you need to resize or reuse them in another context. Crop, size, and export copies for the web, and keep the originals for future use.

Help Improve Our Docs

If you have some favorite tips or tricks for dealing with photos online, or would like to suggest other tools and resources, please leave a comment here!

INN Member Website Review: October 2015

In the realm of nonprofit news, the websites of INN members represent the front end of our digital presence and impact. As the newest member of the Products and Technology team — aka the Nerds — I’m working to get acquainted with our members, and a site review seemed a good way to start. It’s also useful every so often to see what we’re collectively doing on the web as a benchmark for future progress.

My review this month of the 100+ INN Member websites shows a very healthy community. I found thousands of examples of insightful reporting, excellent storytelling, and engaging design. As with any sample of 100 websites there are bound to be things we might improve.

I’d like to suggest three priorities we could work on together over the next year:

  1. Responding to the Mobile Challenge
  2. Going Social
  3. What is good design?

Responding to the Mobile Challenge

In State of the News Media 2015, Pew Research Center reports that “39 of the top 50 digital news websites have more traffic...coming from mobile devices than from desktop computers.” Yet a significant number of nonprofit news sites that excel in every other way are not optimized for mobile.

Converting a non-responsive website to cross-device friendliness can be very challenging. The solution used to be providing a “mobile” version along with the “desktop” version of the site. But now with so many different types and sizes of devices and displays, the better practice is to publish a single site for all devices using the techniques of Responsive Web Design.

The speed with which mobile devices have become part of our daily lives is unprecedented in the history of technology. In 1995 there were 80 million mobile phone users worldwide. By 2014 the number of mobile phones reached 5.2 billion, including 2 billion smart phones. The number of smart phone users worldwide is projected to reach 4 billion by 2020.

The smart phone is changing the way people engage with media and each other. In a recent Zogby Analytics survey of millennials, 87 percent said “my smart phone never leaves my side.” 78 percent spend more than two hours a day using their smart phone and 68 percent prefer using their phone over a laptop or desktop computer.

But it’s not just younger demographics who are increasingly going mobile. Since 2008 time spent per day with digital media has more than doubled for all U.S. age groups. As highlighted by Mary Meeker in her Internet Trends 2015 report, almost all of this increase is from media consumption on smart phones.

The integration of smart phones with everyday life is rapidly changing the way people discover, consume, and share news. The urgency of addressing any mobile gap can’t be minimized.

Going Social

Social media have become increasingly important for discovery and sharing of content, with nearly half of digital news-consuming adults saying they use Facebook every week to get news about government and politics. But in some cases social media integration on news sites remains problematic, with bloated tracking scripts or missing Open Graph metadata needed for effective engagement.

I suspect many of us are concerned about the intrusiveness of the big social media players. It’s in their interest to make it easy to share our content on their platforms. This helps us reach new audiences and expand our news impact. But we also understand that their business model is predicated on harvesting as much personal information as possible about the people who visit our websites.

Many of the free widgets we embed on our sites make it easy for people to share our content, at the cost of exposing data about their interests and behavior. Social widgets can also slow website performance. The leading social media players and technologies keep changing. In this environment, developing best practices around social media is very challenging.

What is Good Design?

I’ve been a news professional for 28 years, and a web designer for the past 15. I think design without good content is wasted space. Good reporting on a flawed website can have great impact. But good design applied to great content can make a huge difference.

Ideas about what constitutes “good web design” have changed dramatically over the past decade, and will continue to evolve over the next. Fashions aside, we have learned fundamental lessons about what works for website users. We know people don’t like feeling lost or confused. They don’t enjoy struggling past obstacles to simply read a story.

Website designs can inflict many distractions on visitors in an effort to control their attention. Sometimes it’s important to get across, for example, the idea that our organization needs their support. But if we do this in a way that frustrates our users, we’re designing at cross purposes.

Each of us understands this from our own experience. We decide every moment whether to stay on a web page or direct our attention somewhere else. Something is always competing for our attention. As storytellers and designers, our job is to win that competition.

We can help our audiences by providing a distraction-free space to engage with our content. I like the phrase “radical clarity” as an aspiration for our websites, especially story pages. Mobile has forced us to rethink designs that present too much information for a small screen, and we need to carry that thinking over to larger displays as well.

Solving everything now

Building anything of enduring value almost always takes more time than you want it to. The corpus of INN Member websites represents a tremendous amount of work by their creators, and great value to their audiences. As a website builder I know that work is never done.

My hope is that a year from now we can repeat this review and see clear signs of progress, especially in the areas of mobile friendliness, social media optimization, and clarity of design. The INN Nerds will do what we can to help. And I'll be writing with more details and actions we can take to address these priorities in the coming weeks.

What You Don’t Know Can’t Hurt You…Unless You Don’t Ask

We were talking with a respected INN member during the Nerds’ open office hours last week. While asking a question about how to do something on his site, he said a couple of times that he doesn’t know much about website coding. But it struck me that he clearly does know a lot, he just didn’t know the answer to this particular question.

I have seen this behavior in many other people, and also in myself. When talking with people we believe know much more than us about a given topic, we sometimes minimize our knowledge up front.

I suspect we do this because we have learned from past experience that people sometimes use their status as experts to belittle us. This kind of behavior is common, especially in technical fields. Saying “I don’t know much” is a smart strategy if we suspect the expert will act like a jerk in response to our question. For many of us it's a defense reflex.

I can safely say that none of the INN Nerds will ever treat you this way. We welcome questions from all members and constituents from any level of technical knowledge, and it’s in our DNA to not act like jerks.

Not acting like a jerk is also hard-coded in the INN technology team manifesto, which outlines how and why we work. We hold ourselves accountable to this, and you should, too. Here are a few excerpts:

  • We’ll invest in education: creating curriculum and training for members, investing in our apprentices/students, and pursuing continuing education opportunities for ourselves.
  • We will be open to new tools and processes, resisting the stale comfort of “this is how we’ve always done it.”
  • We won't use snark or pedantry to exclude people from conversations.
  • We’ll never judge you or shame you for not knowing something.
  • We won’t feign surprise or jump into conversations with Well, actually...
  • In emergencies, we will send pie.

Because news technology is changing so rapidly, there are many reasons for each of us to feel we don’t know as much as we should. The pace of change is also precisely why we should ask many questions, even at the risk of exposing what we don’t know. Our guest during office hours did exactly that, and deserves to have his question (and his many other contributions as a professional) treated with respect. We will always do that.

When it comes to the web and digital technology, each of us is somewhere on the learning curve. The value of a community like the one we’ve got is that we can help each other gain the knowledge we need to improve and sustain our work. At a time like this, we should make extra efforts to communicate and collaborate.

So please use the Largo Help Desk for any site problems or requests, email us at nerds@inn.org for anything general, and sign up any time for open office hours. We’ll never shame you for not knowing something, and might even have some dumb questions ourselves.

What Should A Nonprofit News Site Look Like?

I had an interesting Twitter conversation this morning and wanted to collect and share a few more thoughts and some of the side conversations it spawned.

Responding to this tweet by Josh Stearns (of the Dodge Foundation) I recalled some research we've done here at INN (and would love to continue) looking at how nonprofit news sites can better communicate their reliance on member/donor support.

Essentially, we've found that it can be very difficult for nonprofit news sites that rely on donations to distinguish themselves and stand out as distinct from for-profit sites that often rely more on advertising. In fact, some preliminary user testing we've done suggests that if nonprofit news sites have advertising AND donation/membership messaging, visitors are more likely to assume the organization makes its money primarily through advertising. We have not yet done research with enough sites to definitively confirm this finding or to show that there is a resulting drop-off in donations, but it definitely gives us pause.

Communicating "nonprofitness" (or at least that donations are a significant source of revenue) is crucial if nonprofit news organizations need visitors to understand just how important their donations are to the organization's survival.

This exchange spawned a number of interesting side conversations. Steve Katz, the publisher of INN member Mother Jones, brought their creative director, Ivylise Simones, into the conversation:

Mother Jones is in a unique position where they rely not only on donations but also print subscriptions and advertising. While not common to many INN members (most are web-only, a handful have print products, and most don't realize a significant amount of revenue from advertising/sponsorship), this is a situation that IS shared by some other nonprofit publications.

More well-established organizations struggle with adapting to changing conditions without harming the strong brands that they've established over time.

There's also a lot more to being a nonprofit news organization beyond just asking for money and relying on contributions from your visitors. The most successful organizations think about "membership" not as just a financial transaction, but focus also on involving their community in the editorial process, being responsive, getting out into the community and really providing a valuable public service.

And of course there's a lot nonprofit news organizations can learn (and share) from and with the broader nonprofit sector.

Clearly communicating why your work has value and how the community can get involved also matters; these are among the points many donors weigh when deciding whether to give. Does your site make this clear to visitors or do they have to go hunting?

And finally, while the tax status of nonprofit news organizations does distinguish them from their for-profit peers, the tax implications of a donation are far less important to many smaller donors. They care much more about the mission of the organization they're pledging their support to.

Should nonprofit news sites look different than their for-profit counterparts? What would they need to do to clearly communicate this to visitors? Leave a comment and let us know what you think!

Remote Control: Introducing a New Series About Remote Work

The landscape of media work environments is changing, and many organizations now allow for flexible schedules and locations, with employees spending more time in Slack than conference rooms. Along with increased freedom and flexibility, distributed work comes with its share of challenges. Being remote, even part time, requires thinking intentionally about how to communicate, structure our days, and set boundaries around work and life.

Here on the Nerds blog, we've talked about how we make remote work work. We've shared our tools and process in our extensive collection of open-source docs. Today, Source and INN are launching Remote Control, an occasional series of interviews with remote workers that explores how journalists and technologists make remote work work: what their set-ups look like, how they organize their time, and what they do in the face of frustrations. We hope to collect honest portrayals of our modern working life and learn from each other in the process.

We’re kicking off the series with the incomparable Mandy Brown, formerly of Typekit, A Book Apart, and Editorially, and now with Vox Product.

Check out the full interview over at Source. Interested in participating? Get in touch.

OS X Setup for News Apps Development

I have the good fortune to be working with a team that values productivity by providing me with an Apple laptop. OS X works really well for what we do and matches the way my brain works. I like to have the power of Unix under the hood, along with the inspiring design and signature look and feel of the Apple operating system.

Starting with a fresh Apple MacBook Pro, delivered Monday morning while I was introducing myself in our daily video scrum, here's what I immediately installed to get to 95% of what I need to contribute to the INN Nerds projects.

Security

It's a really good idea to encrypt the hard drive using the FileVault feature, which is offered by default on a new OS X setup and, by default, uses your iCloud password for encryption. Set your password to something challenging, which you should be doing anyway.

Backup

After you encrypt your drive, it's imperative that you have a regular backup strategy. I worked for 11 years in tech support, helping people recover data from crashed laptop hard drives (including laptops accidentally run over in the car), and recovery was only possible if the drive wasn't encrypted. The odds of data recovery were still pretty bad, but the people who were already backing up didn't skip a beat in getting back to work. Your laptop is not your work, it's just a handy set of tools.

If nothing else, get an external hard drive that you leave at home and set up Time Machine to back up to it. Set a calendar reminder to do this regularly (aim for daily). You could also back up to a Network Attached Storage device on your home network, which works over the wireless connection; Apple sells one called the AirPort Time Capsule.

Any files you're working on that are of shared interest to the company should live in Dropbox. For your coding projects, be sure to regularly push your commits and branches to Github/Bitbucket.

System Updates

Install all OS X system software updates, all of them until you go blue in the face. At no time other than right now, with a computer that has nothing interesting or fun running on it yet, is it going to be less annoying to do a series of reboots. Just get it over with; install other software while this is going on, but reboot as often as you need in order to get up to date. Putting this off just leads to pain later.

System Preferences

Look through the OS X System Preferences and make a few choices. Perhaps you have personal preferences about how notifications do/don't appear, what the screensaver looks like, what hot corners do (what happens when you put your mouse cursor in the corner of a screen), or that the "Quack" sound should be used for all alerts. You can always make adjustments later, but you might as well explore what the computer can do. You chose to make news apps because you are curious and want to change things, remember?

The System Preferences window in OS X 10.10 Yosemite.

Web Browsers

Install additional web browsers. Safari is a fine browser, but when writing and testing web applications, it makes no sense to have only one browser installed. I install Google Chrome and Mozilla Firefox. Choose your preferred default browser and make sure you never get nagged again by the others.

FTP Client

You may not need the FTP client immediately, but when you do need to access an FTP server you'll be glad you already have a client. You can use FTP from the command line, but I find that experience to be akin to doing a road trip in a horse and buggy cart. Do yourself a favor, install Cyberduck, and ride in style with air conditioning and power steering.

Text Editor

If you already have a favorite text editor, feel free to skip this section. If you are open to trying new things, or are looking for a recommendation, then I highly recommend you install Sublime Text. It's free to evaluate, beginner-friendly, and using it makes me feel like I'm driving a spaceship. I originally installed it for the color schemes, which abound.

I also recommend installing Package Control for Sublime Text, which gives you access to a bunch of nifty tools that plug into the editor.

Terminal Emulator

The default terminal emulator that comes with OS X is okay, but I like pretty color schemes and it seems easier to set those up in iTerm2. It also has more options, is quite popular, and is free.

With either the default terminal emulator or iTerm2, create a new terminal and install some things you'll use there.

Command Line Utilities

Start with Apple's command-line tools, which you'll need for further terminal goodness. Install them by running xcode-select --install and following the instructions that appear.

Homebrew is so useful, I wouldn't be surprised if it was included in OS X in the future. It's a command-line package manager with a simple interface that lazy people like me can use. When I want to install a new CLI tool, and I don't want to deal with the tomfoolery of finding all the dependencies and the latest download link, I can usually do it with Homebrew. Get Homebrew with ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)".

Let's install MySQL so we can work with MySQL databases in the future: brew install mysql. Yup, that's it. After the install completes, Homebrew prints instructions for starting the MySQL server either automatically or manually.
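If you just want to try it out right away, one manual approach looks like this (a sketch; follow whatever instructions your install actually prints):

mysql.server start    # start the server manually
mysql -uroot          # connect as the default root user (no password set yet)
mysql.server stop     # shut the server down when you're done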

Later, we'll be using Python for a few projects, and the way to keep Python library versions and environments organized with different projects is to use virtualenv and virtualenvwrapper. To install those, we need to first install the pip Python package manager, and then we'll install the virtualenv packages with pip. Run sudo easy_install pip && sudo pip install virtualenv virtualenvwrapper in your terminal.
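Once those are installed, day-to-day use looks roughly like this (a sketch; virtualenvwrapper's shell functions need to be loaded in your shell profile first, typically by sourcing /usr/local/bin/virtualenvwrapper.sh, and the project and package names here are just placeholders):

mkvirtualenv my-project    # create (and activate) an isolated Python environment
workon my-project          # re-activate it in a later terminal session
pip install requests       # packages install into the environment, not the system
deactivate                 # drop back to the system Python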

If you installed Sublime Text, it's nice to be able to invoke it from the command line, like subl README.md. We can create a subl alias by using these instructions.
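The gist of those instructions is a symlink along these lines (the exact path inside the app bundle varies by Sublime Text version, so adjust as needed):

ln -s "/Applications/Sublime Text.app/Contents/SharedSupport/bin/subl" /usr/local/bin/subl
subl README.md    # now this opens the file in Sublime Text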

We might as well generate an SSH key now, which we'll eventually use with Bitbucket and Github so we don't have to log in from the command line when pushing to those repositories. Generate an SSH key using the command ssh-keygen. The contents of ~/.ssh/id_rsa.pub are what you paste into your Github and Bitbucket account settings.
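In practice that can look like this (the email address is just a placeholder; accept the default file location when prompted):

ssh-keygen -t rsa -b 4096 -C "you@example.org"    # generate the key pair
pbcopy < ~/.ssh/id_rsa.pub                        # copy the public key to the clipboard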

Since we're on the topic of git, you should globally configure the name and email you'll be using with Github/Bitbucket. Following these instructions from the official Git documentation book, you can do it like this:

git config --global user.name "Nick Bennett"
git config --global user.email nick@inn.org

Virtual Machines

We use virtual machines to set up simulated environments of the public web servers we ultimately deploy to, mirroring in many ways the setup of that final environment but with greater speed and control. We use the free and open source project VirtualBox as a host for our virtual machines, in conjunction with Vagrant which gives us a scripted way to efficiently create those virtual machines.
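To give a feel for the workflow, here's a minimal sketch (this is generic Vagrant usage with a stock Ubuntu box, not INN's actual configuration):

vagrant init ubuntu/trusty64    # write a Vagrantfile for an Ubuntu 14.04 box
vagrant up                      # download the box (on first run) and boot the VM
vagrant ssh                     # shell into the running machine
vagrant halt                    # shut it down when you're finished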

For a real example of using VirtualBox and Vagrant, check out our deploy-tools project on Github; we use this for every one of our WordPress site projects. If you are developing a WordPress site, I highly recommend checking this out to smooth your development and deployment process.

Communications

Working remotely full-time only works if we're all in constant communication. We use a host of tools for this, most of them browser-based. You can use HipChat in the browser too, but I strongly recommend installing the native client. Among the other great things HipChat enables is the ability to share animated GIFs like this one:

Captain Kirk in a rain of Tribbles.

Dropbox is another tool we use that really begs for a native client to be installed. Sharing files through email or instant messaging kinda stinks. Dropbox is like the shared network drive so common to Windows-based networks, only the files actually live on your computer instead of being accessed through some tenuous ethereal connection. This is another reason FileVault hard drive encryption is so important.

Along with being able to share files and words, we need to be able to share secrets, aka passwords and logins. We use 1Password, available as a free 30-day trial followed by a $50 purchase. For keeping the keys safe for our organization's fleet of virtual facilities, that's a minor expense.

I will cover personal preferences in a future post or series of posts, when the whole INN team is able to weigh in on their personal software recommendations, what helps each of them do their work, and their work-in-progress configurations.

Other Tools

I mentioned this briefly in the article but I want to point it out again: the other nerds at INN have documented the tools we use for collaboration, starting with HipChat and including several others, that we recommend for enabling highly distributed and productive teams. This is all a work in progress, as we frequently re-evaluate what best fits our changing needs.

Beyond the collaboration tools, check out this list we maintain of free or discounted tech tools available to non-profit news organizations.

I would also recommend you take a look at How to Setup Your Mac to Develop News Applications Like We Do on NPR Apps' blog, which helped me get started.

Please comment and share your tips for getting started, links to your own guide or others, and share your recommended software and configs.

My Apprenticeship With INN Nerds – Part 2

It’s been a little over four months since I joined the Nerds team at INN as their second software apprentice. With just about four months to go, now is a good opportunity to reflect and set goals before the end catches up.

Apprenticeship

News Quiz/Interactables. For one of my first projects, I wrote a WordPress plugin wrapper around Mother Jones’ news quiz library. During the process we talked about what a framework for registering and maintaining more interactive features in the future would look like. This Winter I hope to build out those ideas.

Unit Tests. Early on, I researched options for building unit testing into our theme development process and wrote about it here. Since then Ryan Nagle has built this into our deploy tools and written up some testing examples. Much of Largo still needs tests written, which I want to make a point to help with.

Analytic Bridge. Born out of a weekend project, I wrote a WordPress plugin that pulls metrics from Google Analytics and relates them to post objects in the WordPress database. As a proof of concept, I adapted this into a Popular Posts widget (a feature highly demanded by members), but the functionality could be used in a variety of applications.

I expect to work out the bugs and extend the features of this plugin in the next couple of weeks, as well as tweak the algorithm to make the popular posts widget ready for prime time.

Fixes/enhancements to Largo. Recently I’ve been picking up some tickets related to getting our Largo project ready for version 0.4. I’m learning a lot about the codebase in the process, and this spring I’ll no doubt have the opportunity to contribute more.

Documentation. Documentation for Largo currently exists in three or four different places. Since the addition of our Support Specialist Meredith Melragon to the team we’ve doubled down on our efforts to keep and update complete documentation for both administrators and users.

Recently I’ve been working with Meredith to add Python-driven Sphinx to our deploy tools. Versioning documentation along with our code, yet publishing it as compiled HTML, means all our contributors can write docs and end users have a single place on the web to find help. One of the biggest unresolved issues is how to incorporate our existing PHP block comments into these docs, a problem I hope to research and solve.


This kind of stuff sound like fun? Looking for something to do this summer? Find out how you could be INN's next software apprentice here.

Writing Stupid Simple Unit Tests

In this post, we'll cover the basics of unit tests and walk through the process of determining what types of tests you should write for your code. We'll also write a few tests to show just how simple most unit tests are.



The aim here is to make tests a bit more familiar and less daunting and mysterious. Unit tests are good. I'm by no means an expert on software testing. I'm just someone that's discovered the joy and freedom that writing tests provides.



If you are an expert on software testing and notice I'm wrong, please tell me!

Onward!

What are unit tests?

If you’re not familiar with unit tests, they’re essentially software written to test discrete units of functionality in a system.

Unit tests test the expected behavior of your software as it exists (or existed) at a specific point in time. They make sure that everything works as expected based on the rules established for the software at the time it was written.

If those rules change -- and your software along with them -- your tests will fail. When they fail, you'll gain insight into how the change affects the rest of the system.

For example, if a change is made in code for your site’s user profile management, you'll want to make sure that the change does not affect the behavior of your site’s payment processing code.

Tests can save time and money by acting as the first line of defense in catching problems before they result in a late night emergency or catastrophic failure. When things go wrong, they can also serve as the first response team. You can run your unit tests to see if any rules/expectations have been broken by some recent change, either in your code or some external API dependency, and fix the bug introduced by the change.

Sometimes the tests become outdated. Again, the tests will fail, letting you know that you’re no longer testing for the right thing. When this happens, you should update your tests to make sure they are in line with the new rules for your software.

Getting started

Unit tests are rarely difficult to write or run if you’re using a testing framework (e.g., phpunit).

You can write your tests before you write your business code if Test-driven Development (TDD) is your thing or you can write them after.

It doesn’t matter much when you write them as long as you have them.

The brilliant thing is that almost all tests you write will be stupid simple.

Tests usually assert some condition is true (or false, depending on what outcome you expect).

On top of that, most testing frameworks have convenience functions built in for performing these types of tests and several others, including testing the expected output of a function (i.e., what the function prints to the screen).

The three test functions we've used most frequently when writing tests for Largo are assertEquals, assertTrue and expectOutputString.

Our stupid simple example plugin

So, let’s say that we want to write tests for a WordPress plugin. Our plugin simply prints the message “Hello world!” in the top corner of the WordPress dashboard (sound familiar?).

This is the extent of the code:

<?php
/**
 * @package Hello_World
 * @version 0.1
 *
 * Plugin Name: Hello World!
 * Plugin URI: https://gist.github.com/rnagle/d561bd58504a644e9657
 * Description: Just a simple WordPress plugin that prints "Hello world!" in the top corner of the WordPress dashboard.
 * Author: Ryan Nagle
 * Version: 0.1
 **/

function hello_world() {
  return "Hello world!";
}

function hello_world_markup() {
	echo "<p id='hello-world'>" . hello_world() . "</p>";
}
add_action('admin_notices', 'hello_world_markup');

function hello_world_css() {
	$x = is_rtl() ? 'left' : 'right';

	echo "
	<style type='text/css'>
	#hello-world {
		float: $x;
		padding-$x: 15px;
		padding-top: 5px;
		margin: 0;
		font-size: 11px;
	}
	</style>
	";
}
add_action('admin_head', 'hello_world_css');

Couple of things to note:

  1. We have one function -- hello_world -- that returns a string.
  2. Two other functions -- hello_world_markup and hello_world_css -- that echo strings but have no return statements. This means we'll have to check the output of these functions rather than the return value to properly test them.
  3. Note that hello_world_markup relies on hello_world to provide the "Hello world!" message.

So, how do we test this plugin? Well, if you don’t have your test framework installed and configured, you’ll want to get that ready and raring. The details are beyond the scope of what we'll cover here, but for an overview of setting up unit tests for WordPress plugins and themes, see Will Haynes’ Unit Testing Themes and Plugins in WordPress.

The INN Nerds also have a collection of deploy tools to make developing WordPress sites and writing tests much easier. You can read about using the deploy tools for testing themes and plugins here: Updates to INN’s Deploy Tools.
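If you use WP-CLI, the broad strokes of the setup look something like this (a hedged sketch: your database credentials, paths, and WordPress version will differ, and the guides linked above cover the details):

# From your WordPress install, generate phpunit.xml, a tests/ directory, and an install script for the plugin
wp scaffold plugin-tests hello-world

# From the plugin directory, install the WordPress test suite into a throwaway database
bash bin/install-wp-tests.sh wordpress_test root '' localhost latest

# Run the tests
phpunit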

Disclaimer: getting unit tests working in WordPress can be daunting at first. It's a lot to digest and may take some time to set up if you're coming to this cold, so be patient. It'll be worth it.

If you’re really dedicated to the cause and need a hand, reach out to us.

How do I write a unit test?

With that out of the way, let’s design some unit tests for our “Hello World!” plugin.

The first thing we’ll want to do is enumerate the features of the code which we need to test.

There are many ways you can approach this. My preference is to create a single test file per file in my source code and have my tests directory mirror the structure of my source directory.

Each test file has tests for each function (or member functions of classes) in the corresponding source file.

So, the directory structure for our plugin would look like:

hello-world/
   hello-world.php
   phpunit.xml
   tests/
       bootstrap.php
       test-hello-world.php

We’ll be doing our work in test-hello-world.php where we’ll set up the skeleton of our test case, which is as simple as extending WP_UnitTestCase:

<?php

class HelloWorldTest extends WP_UnitTestCase {}

We can then stub out test functions for each function:

 class HelloWorldTest extends WP_UnitTestCase {
  function test_hello_world() {
    // Test hello_world()
  }

  function test_hello_world_markup() {
    // Test hello_world_markup()
  }

  function test_hello_world_css() {
    // Test hello_world_css()
  }
} 

Now, let's look at each function and consider what we're testing:

1. For hello_world, we want to verify that the string returned is "Hello World!":

function test_hello_world() {
  $this->assertEquals("Hello World!" == hello_world());
}

Easy enough. Now if for some reason someone changes the return value to "Hello world!" -- capitalization be damned -- the test will fail.

I can hear you now, "This is stupid, no one writes code like this." Yes, the example is stupid and that's the point. Don't get caught up focusing on the wrong thing.

It makes no difference how complex the function is, the only concern is verifying that the return value or the outcome is what is expected.

So, if instead you're testing some_made_up_function which returns an object, you may want to verify that it is actually a PHP Object:

$this->assertEquals(gettype(some_made_up_function()), "object");

Or that the object has a specific member attribute:

$test_obj = some_made_up_function();
$this->assertTrue(property_exists($test_obj, 'name_of_attribute_goes_here'));

2. For hello_world_markup, we want to verify that the function prints "<p id='hello-world'>Hello World!</p>":

function test_hello_world_markup() {
  $this->expectOutputString("<p id='hello-world'>Hello World!</p>");
  hello_world_markup();
}

Notice that we're expecting "Hello World!" to be part of the output. This might not be a good thing. If the return value of hello_world changes, this test will fail, too.

For the sake of example, let's say we only care about testing the markup and not the message. We can take this test a step further and decouple the two so that we're only testing the markup by changing it to read:

function test_hello_world_markup() {
  ob_start();
  hello_world_markup();
  $result = ob_get_contents();
  ob_end_clean();

  $this->assertTrue((bool) preg_match("/^<p\s+id='hello-world'>.*<\/p>$/", $result));
}

Essentially what we're saying is, we don't care what the message is as long as it is wrapped in <p id="hello-world" /> tag.

Simple, right?

3. For hello_world_css, we want to verify that the function prints the CSS rules for our "Hello World!" markup:

function test_hello_world_css() {
  ob_start();
  hello_world_css();
  $result = ob_get_contents();
  ob_end_clean();

  // Make sure there are style tags being printed. Duh.
  $this->assertTrue((bool) preg_match("/<style.*?>.*<\/style>/s", $result));

  // Make sure we're using the right selector
  $this->assertTrue((bool) strpos($result, '#hello-world'));
}

And with that, we're done! You can see the entirety of test-hello-world.php here.

When to write tests

As mentioned earlier, you may want to write your tests first. This is called Test-driven Development.

Writing your tests first has lots of awesome benefits. When you write tests first you are forced to think about the design of your system and how best to structure the software so that it is actually testable. It’s a good practice and will help increase the orthogonality of your code.

However, it's never too late to start writing tests. When you write tests for existing code, you'll find places where things need to be refactored (sometimes completely rewritten or redesigned) to clean up the spaghetti code.

If you really can't afford the time it takes to write tests for all your code (or aren't being allotted the time to do so), you might still be able to lobby for the time to cover the most critical parts.

It's extremely important that you can verify your code works, especially in cases where you're using code to make an assertion about the world. For example, if you're writing code that processes a data set which will later be used in a story or investigation, it's crucial that your code produces results that are precise and correct.

The other case where you should absolutely be writing tests is for critical customer interactions. Saving sensitive information or processing payment information are things that you don't want to get wrong.

Design Feedback Tools for Remote Teams

Good design doesn't happen in a vacuum. As a remote team, though, we don't get to sketch around a table together, see each other's whiteboards, or have other daily opportunities for in-person collaboration.

Instead, we mostly share screenshots in group chat and, sadly, a lot of ideas don't even make it that far. It can feel like a high barrier to entry to post something online for the team to review — just by uploading, it becomes *Important* even if it's a simple sketch.

But if we're not able to share work in progress, we miss out on the value and ideas of our teammates and can end up working toward disparate goals. I want to dismantle barriers and make feedback and conversation about design a regular, fun part of our team process. It's essential to share half-baked designs, interface sketches, and unpolished ideas — even more so because we don't inhabit the same physical space.

Everybody agrees our products and apps will be better for it, but like all things with remote work, it takes an intentional commitment. You have to build even casual feedback into your workflow. With that in mind, I've been testing a few design tools meant to help facilitate asynchronous design feedback and communication. Here are my notes and thoughts on the three products we've tried so far.

Red Pen

Screenshots: Red Pen comment view and adding a comment.

Overall, Red Pen was the fastest and most intuitive tool. This was also the service that everyone on the team actually tried and used successfully — the other two had much lower participation. This, more than anything else, is an indicator of its value. If nobody uses it, it's useless.

Pros:

  • It's easy to share and comment on designs without having to create an account (plus the workflow for account creation is smart).
  • Easy to navigate through designs using keyboard.
  • Simple and fast commenting. All our team members contributed with ease.
  • Tells you who has seen a comment (e.g., "read by meredith") and a few other nice interface features like that.
  • Retains all versions of a design.
  • Browser and email notifications tell you when there are unread comments.

Cons:

  • When we tested it there was no way to customize notification settings — some of us got email updates, some of us didn't, and it wasn't clear why. While the notifications were fairly intuitive, it would be nice to be able to adjust preferences.
  • No "view all comments" option, yet. They say they're working on this feature. Without it, there's no way to get an aggragate view of all feedback for a project.
  • No way to see all versions of a design at once.
  • There doesn't seem to be a way to easily download files (not a huge deal for us).
  • You can only upload png files.

Not seeing all the comments is actually a pretty big deal for me. As the lead designer, I want to be able to take all the feedback, consolidate and translate it into tasks (which live as tickets in GitHub or JIRA). Red Pen would work better for quick feedback on sketches and design ideas, less so for long conversations or contentious feature decisions.

Red Pen is also the most expensive of the tools we tested. I sent them a couple of emails about nonprofit rates and haven't heard back.

InVision

Screenshots: InVision's new comment and all comments views.

InVision is like the Photoshop of design feedback tools. It can do a lot of different things, and feels a bit bloated as a product (when looking solely for design feedback, at least). But they have put a lot of thought into the design and functionality of their suite of tools, and you can tell that this was created by and for designers.

Pros:

  • You can draw/sketch on designs and toggle comments on and off.
  • Notification options can be set at a user level and changed with each comment.
  • You can build clickable prototypes using wireframe images.
  • Ability to upload all the file types (or at least a lot of them), plus vector handling. There is also a separate repo for assets.
  • There is a conference call feature for live design walkthroughs. We tested this recently with wireframes for a new site and it worked well.
  • The project history page has rich data — I'm not sure how practical any of it is, but it was fun to see.

Cons:

  • Conversations are harder to access (a few clicks to see full thread).
  • Inviting people to comment takes a few more steps, and the sign up process is not intuitive.
  • Navigating between designs within a project, and between different projects, takes quite a bit of menu searching and clicking.

This is not a lightweight product, and while there are a lot of fun features, our team didn't consistently use — or even try — most of them. If we're attempting to cultivate a lower barrier to entry for feedback, this is not the tool I would choose.

InVision does offer nonprofit discounts for the more expensive payment tiers, and has been responsive and helpful when I've reached out.

Conjure

Screenshot: Conjure's comment drawer.

Conjure fell somewhere in between InVision and Red Pen for me. It wasn't as feature heavy as InVision, but wasn't as fast or intuitive as Red Pen. There are a lot of nice elements, but it was the least used by our team during testing.

Pros:

  • A nice way of highlighting particular areas of a design to comment on (drag to select).
  • Pro level is currently free during beta.
  • You are able to approve a project when the feedback period has ended.

Cons: 

  • There's a separate menu you have to click to see the full thread of a comment. You can't see responses to a primary comment on the design itself.
  • Adding collaborators is more complicated than other tools we tried.
  • Navigating between projects and designs is clunky.

Overall it comes down to what our team will actually use. InVision has so many great features, but it also feels needlessly complicated for the purposes of fast feedback. We don't need every single customization option when looking for quick opinions on a design direction. Red Pen, on the other hand, had the most intuitive interface and was the product everyone actually used while testing. It is opinionated in its simplicity and that works to its advantage here.

Despite the higher price and some interface limitations, Red Pen will likely be what we use for sharing sketches and mockups. As with so many things, the right tool is the one that people will use.

For clickable prototypes and more formal design presentations and walkthroughs, I will continue to use InVision. To me it feels more like a prototyping and client-services tool than a home for internal feedback. (For a detailed comparison chart of other prototyping tools, check out http://prototypingtools.co.)

Pricing:

  • Red Pen: $30/month for 10 projects
  • Conjure: $25/month for unlimited projects (currently free in beta)
  • InVision: $22/month for unlimited projects (one designer)