Tow Center for Digital Journalism
A Tow/Knight Report

NEWSLYNX: A Tool for Newsroom Impact Measurement

Michael Keller and Brian Abelson

Supported by the Tow Foundation and the John S. and James L. Knight Foundation
Acknowledgements
Over the course of this research project we had help from many people deserving of our gratitude. We would like to thank Emily Bell for not just
giving the project a home, but also for creating a place—the Tow Center—
where projects with magical cat mascots are not simply tolerated but encouraged. We are tremendously grateful to Fergus Pitt, who skillfully guided
our research and provided valuable feedback on each successive version of
our drafts. Thanks to Taylor Owen, who shepherded the project in its initial stages, and Claire Wardle, who joined us toward the end but whose key
questions helped us move to completion. We also thank Lauren Mack, Elizabeth Boylan, and Abigail Ronck for all their help in dotting every i and
crossing every t along this journey.
We are tremendously grateful to Dana Chinn, who gave valuable feedback and
organized the first-ever NewsLynx Users' Summit from which we were able to
plan platform improvements and see what worked and what didn’t. The project
owes a great deal to Lauren Furhmann, whose involvement, feedback, and
support were beyond instrumental to the life of NewsLynx (and yet, who is not
to be trusted in a game of Werewolf). The same goes for Lindsay Green-Barber
and Blair Hickman, with whom we had many conversations over the years
about what they catalogue as impact and, perhaps more importantly, what
impact could be.
The project would not be what it is without the artwork of Clarisa Diaz, who
brought Merlynne Jones to life in myriad forms—including animated GIFs.
We thank Alastair Dant for his feedback on the NewsLynx visual design and
the acute observation that the initial comparison interface didn’t make sense
at all, leading to renewed research and a workable redesign.
We also thank the staff of Charlotte’s Patisserie, who tolerated our laptops
from opening to closing almost every single weekend between July and October, and those employees at numerous other haunts where we passed shorter
work stints. And, as always, we thank our friends and family for their support and understanding.
June 2015
Contents

Executive Summary
    Key Observations and Recommendations

Introduction
    Definitions
    Previous Research
    From Monitoring and Evaluation to Media Impact
    The Rise of Nonprofit Journalism
    The Quantification of Content
    Current Efforts
    Qualitative Projects
    Quantitative Projects
    Internal Newsroom Tools
    Where NewsLynx Fits: Incorporating Qualitative and Quantitative

Research Findings
    Preparatory Research
    What Do Newsrooms Measure and Why?
    What Do Impact Reports Look Like and How Are They Used?
    Challenges

Platform Description
    The Model
    The Impact Framework, as Implemented
    Model Concept: Impact Tags
    Model Concept: Categories
    Model Concept: Levels
    The Combination of Tags, Categories, and Levels
    Other Concepts for Possible Inclusion
    Model Concept: Subject Tags
    NewsLynx Interface
    This Story's Life
    How People Are Reading and Finding It
    Who Tweeted It?

Newsroom Use
    Approval River Usage
    Article Section Usage
    Newsroom-created Taxonomies
    Barriers to Entry

Recommendations and Open Questions
    Impact Work Practices: Recommendations
    Commit Resources
    Integrate with Editorial
    Start Small
    Publish for Both Humans and Machines by Using Standards
    Impact Work Practices: Open Questions
    Building Impact Tools: Recommendations
    Building Impact Tools: Open Questions

Future Paths for NewsLynx
    Platform Improvements
    Platform Challenges
    Paths Forward

Conclusion

Appendix A
    Impact Survey

Appendix B
    Impact Reading List
    Citations
Executive Summary
With the rise of nonprofit, foundation-funded newsrooms, the field of
Monitoring and Evaluation (M&E), which emerged in the international
development community, has gained a strong foothold in journalism. As
nonprofit newsrooms apply for grants and appeal to donors for funding,
they often need to explain in formal reports “how well” their stories
performed—not just in terms of impressive traffic but in qualitative
evaluations of the impact their reporting had on the world: Did it change a
law? Did it move the needle in the conversation? Did it meet the
expectations—however defined—the organization had for it?
Based on survey research and interviews with newsrooms regarding
current impact measurement practices, the researchers designed and built
a new analytics platform called NewsLynx to improve upon existing
methods of displaying quantitative metrics and to add qualitative
information that was previously nonexistent in such tools.
Many newsrooms found current analytics tools insufficient for fully
capturing their output’s performance. They had trouble seeing comparisons
between audience reactions to stories or the effects of their social media and
promotional efforts. While they often had multiple data sources—Google
Analytics, Omniture, etc.—putting these numbers into context was still
difficult.
The NewsLynx Project Implements Three Key Ideas
• NewsLynx seeks to augment metrics with context.
It shows how an article performs in comparison to the average of all a
publication's articles and allows comparisons within subsets—all immigration articles, for example, or within any user-defined category (a brief sketch of this kind of comparison follows this list).
• NewsLynx also provides efficient tools for tracking, categorizing, and assessing indicators of impact aside from audience
reach.
Such impact indicators might be legislative reform or community action. This has previously proved extremely difficult and time-consuming.
NewsLynx’s “Approval River” functionality aims to reduce the effort
associated with managing traditional clip searches and social media
searches, which newsrooms use to monitor impact. Crucially, it allows
users to apply consistent (and therefore comparable) metadata to impact
indicators.
• The NewsLynx developers propose an impact framework that
allows for the fact that real-world impact measures are often
messy and hard to categorize.
NewsLynx implements a framework that offers newsrooms enough structure to categorize “impactful events” across similar boundaries, while also
providing enough freedom for them to create their own impact definitions
to match particular goals. Importantly, the researchers believe that successful, long-term impact measurement can only result from identifying
such organizational goals.
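As a simple illustration of the comparison described in the first idea above, the sketch below computes an article's page views relative to the average of the articles that share its category. The data structure and field names are hypothetical; they are not NewsLynx's actual schema.

```python
# Minimal sketch of comparing an article's metric against the average for
# all articles in the same user-defined category.
# The records and field names below are hypothetical illustrations.
from statistics import mean

articles = [
    {"title": "Visa backlogs leave families in limbo", "category": "immigration", "pageviews": 48_000},
    {"title": "State audit finds detention overcharges", "category": "immigration", "pageviews": 12_500},
    {"title": "City budget passes after marathon session", "category": "local-politics", "pageviews": 9_200},
]

def relative_performance(article, corpus):
    """Return how an article performs versus the mean of its category."""
    peers = [a["pageviews"] for a in corpus if a["category"] == article["category"]]
    return article["pageviews"] / mean(peers)  # 1.0 == exactly average for the category

for a in articles:
    print(f'{a["title"][:40]:40s} {relative_performance(a, articles):.2f}x category average')
```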
Key Observations and Recommendations
• Effective impact measurement must be tied to an organization’s
goals.
No amount of technology can help an organization measure what it
hasn't defined as important. Should a newsroom's reporting seek to
change the narrative around an issue? Does it want to reach certain
stakeholders or affect lasting reform? Only after an organization has understood what it wants to achieve can quantitative and qualitative tools
assess how close the organization is to that goal.
• Both quantitative and qualitative metrics have a place in impact measurement.
While quantitative metrics are often vilified as leading journalism astray
from its true purpose, the researchers found they do help tell the story of
a newsroom's performance. Although this project began with an interest
in giving more visibility to qualitative measurements, its founders repeatedly heard reports from newsrooms that quantitative measurements play
an important role for organizations wanting to tell a long-term story of
audience growth.
• Newsrooms should better tag their articles.
Newsrooms that want to properly understand their own performance
over time should put more care into tagging and cataloging their stories.
These practices can give an organization a better understanding of its
own operations and how much space it devotes to each subject. Tags also
offer staff the ability to perform myriad analyses comparing stories and
packages. Without differentiating and labeling content, it is difficult to
understand patterns in traffic or impact.
• Newsrooms have metrics, but they also still have many questions—
particularly about audience.
As one newsroom put it, “Google Analytics feels both too complicated
and not powerful enough for the questions we want to answer about
readers.” Many existing tools aren't designed to analyze metrics
from readers' perspectives—as in, what did they think about the story?
Did they leave after the fifth graf because they understood the newsy
part and didn't need any more, or was the site design wrong or the
prose too dense? Nor do common tools provide enough insight into the
relationship between the news organization and its audience.
• Custom analytics solutions have recently become more
feasible.
With the continuing maturation of open source analytics pipelines, it is
now possible for news organizations to own their entire analytics stack
and not rely on third-party vendors for the data-collection portion of
their metrics. In other words, the next few years could see newsrooms
access much more diverse offerings, providing faster analysis and greater
detail more relevant to journalism. That being said, these pipelines are
largely for data collection, so most newsrooms will need to design and
implement their own custom interfaces to interpret this data for the
average reporter and editor.
Introduction
The idea for this project began on a blustery spring day on a visit to an
office inside the recesses of The New York Times. The day's agenda was to
better understand the process whereby Glenn Kramon, then the assistant
managing editor for enterprise, helped decide which of the Times's many
stories from the previous year should be nominated for the Pulitzer Prizes.
His desk was littered with books, printouts, envelopes, and handwritten
notes. On the wall hung a proudly framed full-page ad that the Ford Motor
Company had taken out, promising improvements in response to the paper's
investigation into SUV rollover deaths in the late 1990s.1 Sights like this
weren't entirely unusual in a building where you can easily find yourself in
a hallway covered with portraits of award-winning teams and their front
pages. But the scene was striking for the simple fact that here, for lack of
a better description, unfolded a crucial step in how the impact sausage was
made.
The process was this: Whenever a Times investigation was mentioned in
a meaningful way, whether it be a citation by a competing publication, a
complimentary letter from a senator, or an official response by a corporation
or government, a slip of paper would make its way from Kramon's desk into
one of dozens of large manila envelopes that were filled, hand-labeled, and
filed in boxes under his desk. Pulling a seemingly random scrap from one
of the envelopes on the iEconomy series—an explanatory series that would
later win that category's 2013 Pulitzer Prize2 —Kramon’s eyes lit up. It was
a note he had written describing pickup from an unusual source. “I knew
‘iEconomy’ was big when Saturday Night Live spoofed it,”3 he said.
As the conversation progressed to the question of how one might actually measure impact, Kramon reached under his desk again, pulled out an
overloaded envelope, and squeezed it to demonstrate its thickness. “Do you
want to know what impact is? It's this right here.” That was to say: at the
end of the year the stories with the thickest envelopes—the ones that resonated the most with the outside world—were the likeliest candidates for
submission.
How newsrooms conceive of and measure the impact of their work is a
messy, idiosyncratic, and often-rigorous process—based at times on what
simply seems worth remembering during the life of a story; at others, on
strict guidelines of what passes the “impact bar.”
As a result, we conceived of our related research project in two parts:
first as an attempt to understand how and through which processes news
organizations both large and small currently approach the “impact problem” and second, to see if we could develop a better way—through building
a technology platform called NewsLynx—to help those newsrooms fill the
proverbial envelope.
Definitions
Before going any further, we should clarify what we mean by impact. To
start, our base assumption is that journalism can and does have an effect on
the world and it does so without necessarily becoming advocacy.4
Unlike other work on this topic, we don't offer a strict definition of impact. Our research suggests that successful impact measurement can only happen if an organization has identified its institutional goals. From there, it
can begin to measure elements that bring it closer to those goals (e.g., encouraging subject-matter influencers to discuss one's work if the goal is to
improve the credibility of one's reporting). As a result, we created a loose
impact framework as opposed to a strict definition.
This approach offered us two advantages. The first is that it exposed
newsrooms' current thinking about what they consider to be important
events, allowing us to see the existing differences across organizations. The
second is that it served as scaffolding for newsrooms that do not yet have
an articulated understanding of their own goals. We found this was the case
most often with national and international publications and less so with
local, regional, or topic-based ones.
Maintaining standard terminology and vocabulary is a worthwhile goal,
however. It goes without saying that the more newsrooms use common
terms and tools, the more opportunity exists for contextualizing and understanding a project's success. In Chapter 5 of this report, Recommendations
and Open Questions, we discuss those factors that could make standardization possible in the future.
Our view of impact necessarily incorporates both qualitative and quantitative information. Many newsrooms interested in measuring qualitative
events expressed their frustration at the limits of page view-driven decisions. They worried that only high-traffic stories get lauded and, consequently, shape the editorial agenda. In fact, when we started this project
one of our main goals was to build a tool to better highlight qualitative
events. If traffic is the only measure of success, then how can you show the
value of the niche story on an important topic?
Through the course of our research, we also heard examples emphasizing
the value of tracking existing and new quantitative measurements. One such
story that stuck with us came from a mid-size, nonprofit newsroom producing a mix of investigative, political, and culture reporting. It explained how
routine traffic to its stories is orders-of-magnitude higher today as compared
to two years ago. The newsroom uses this information to show readers and
funders that its organization is trending upwards. Taking this data in the
aggregate, and not letting any one number guide strategy, the company is
able to construct a narrative about its editorial reach backed by numbers,
not discrete and varying qualitative events.
Jonathan Stray wrote about the difficulty of qualitative measurement:
“Some events are just too rare to provide reliable comparisons—how many
times last month did your newsroom get a corrupt official fired?”5 Numbers
do fill a valuable role in understanding organizational health, and we think
removing them from the equation eliminates a potentially valuable lens
through which to gauge success.
When we say “the impact of journalism” or “the impact of a newsroom's
work” we should also clarify the limits to what we can reliably study—that
is to say, where we chose to start research for this project. From cultural
commentary to the court reporter on a beat, journalism exists in so many
varieties that it can make the question of journalism's impact seem too
large to tackle.
To narrow the scope of our research, our initial target newsroom was the
small, nonprofit investigative organization.
Two elements informed this choice. We were most interested in journalism that seeks to address something about the world (this is often investigative work) and, therefore, allows newsrooms to more easily state tangible
goals for their projects. For instance, did this illegal practice end? Did the
government increase oversight? Are companies now following the law?
The second reason was organizational. Such newsrooms often look to
grants or benefactors for funding, and these outside groups often require
reports outlining how the organizations have used their money—hopefully
guaranteeing that it was well spent. As a result, impact measurement is not
a foreign concept to these newsrooms, albeit still not an easy one.
We want to stress that these are neither the only nor necessarily the
“best” examples of journalism's impact. For example, looking at how media
coverage can shape discourse is another fascinating and worthwhile area of
study. We do, of course, remain cognizant of other forms of analysis, such
as pre- and post-intervention surveys, which might fold into the NewsLynx
platform in the future as more newsrooms adopt increasingly sophisticated
techniques of impact measurement. Our focus here, however, is on the
current needs and practices of investigative newsrooms.
Previous Research
Although our small, nonprofit variety of investigative newsrooms is on the
forefront of the journalism community in thinking about impact, the concept of impact assessment is by no means new. The international aid and
development communities have been heavily involved in this kind of thinking for years under the name “Monitoring and Evaluation” (M&E).
At the core of this movement is a simple question: How can we know if
our work is having an effect in the world if we can't measure it? This sentiment is perhaps best embodied in the Bill & Melinda Gates Foundation's
annual letter from 2013, in which Bill Gates, summarizing a passage from
William Rosen's book The Most Powerful Idea in the World, wrote, “without feedback from precise measurement . . . invention is ‘doomed to be rare
and erratic.’ With it, invention becomes ‘commonplace.’”6
While we found no definitive history on the rise of M&E within international and non-governmental organizations, as early as 1999 the United
Nations Development Programme (UNDP) began a major overhaul of
how it conducted aid and development interventions, shifting toward an
organization-wide emphasis on a “culture of performance.”7 With this shift,
UNDP began mandating that all operations adopt the methodology of
“results-based management” in which the effectiveness of programs would
be assessed by establishing baselines before an intervention and then periodically collecting data to assess whether the program was working.
In 2000, with the unanimous adoption of the United Nations Millennium
Declaration, a major governing document, and the corresponding Millennium Development Goals (MDGs), M&E moved into the mainstream. At
the heart of the MDGs were eight objectives to address the world's most
intractable problems, including poverty, access to education, gender inequality, disease, and environmental degradation. Each of these eight objectives was associated with clear, measurable outcomes. For instance, in
pursuit of the goal of eradicating extreme poverty, the MDGs pledged to
“halve, between 1990 and 2015, the proportion of people living on less than
$1.25 a day.”8 While the design of the MDGs came under harsh criticism
for (among other reasons) its inability to capture relative versus absolute
progress within aid circles,9 the underlying framework of explicitly stating
goals and preselecting indicators to judge movement toward those goals
soon became the norm.
It was not long until the world of philanthropy followed suit. Over the
course of the following decade, organizations of grantmakers focused on
realms as diverse as African aid,10 human rights,11 and the environment12
began discussing methods for monitoring and communicating the impact
of their work. Reams of toolkits, best practices, and case studies were published on the issue.13
From Monitoring and Evaluation to Media Impact
While philanthropists began adopting the mantle of measurement, they also
became increasingly interested in the importance of media for communicating and amplifying the message of their missions. Here we begin to see how
the M&E framework that aid and development communities established
connects directly with the present topic of measuring media impact.
With the release of high-profile, social-issue documentaries such as Bowling for Columbine (2002) and An Inconvenient Truth (2006), the power
of mass media to steer public debate around a topic became readily apparent. In the years following, prominent funders like the Bill & Melinda
Gates Foundation, the Ford Foundation, and Open Society Foundations
latched onto documentary films as potential means of raising awareness
and prompting action on widespread societal problems. From educational
reform (Waiting For Superman) to schoolyard bullying (Bully) to fracking
(Gasland), many of the most resonant documentary films of the past decade
have received foundation support for their production, distribution, and/or
associated outreach programs. Whether for purposes altruistic, financial,
or both, it is now standard practice for documentary filmmakers to attach
social-issue campaigns to their creative works.
In turn, the foundations that supported these films—influenced by their
concurrent involvement in aid and development interventions—began requiring filmmakers to provide detailed reporting on the impact of their
work. In practice, these reports initially relayed traditional metrics like
viewership, ratings, and box-office returns. Yet, over time, they increasingly adopted more sophisticated social science methodologies, employing
pre- and post-surveys, frame analysis, and monitoring of mass and social
media mentions. BritDoc,14 a foundation established in 2005 to exclusively
support social-issue documentaries, lists over 30 of these impact reports
published since 2008 in its Impact Field Guide & Toolkit.15
Yet a fundamental difference exists between assessing the impact of an
aid intervention versus a documentary film. If your goal is to eradicate
polio—as is one mission of the Gates Foundation—it is (relatively) easy to
measure the effectiveness of your intervention; simply counting the number
of polio cases over time provides a reliable metric of success. If you're concerned about the influence of confounding factors, like simultaneous development initiatives in the same region, you might design a randomized controlled trial to test the varying effectiveness of different vaccines, treatments,
or educational campaigns. Documentary filmmakers, however, seek more
abstract goals like raising awareness, shifting societal norms, or advancing
the art form. While academics have attempted to design randomized studies to isolate the effect the mass media has in driving such outcomes, these
approaches are limited to highly specific interventions and do not address
the need for making comparisons across a variety of contexts.16 Journalism faces many of these same challenges as it increasingly moves toward
business models driven by institutional and philanthropic support.
The Rise of Nonprofit Journalism
The last ten years have seen rapid growth in journalistic organizations built
on these support sources. A 2013 study by the Pew Research Center identified 172 such nonprofit outlets in the United States.17 Of these, over 70
percent were founded after 2008. While mostly nascent, nonprofit news organizations have achieved considerable impact in this short time. In 2010,
ProPublica—founded only three years prior—became both the first nonprofit and exclusively digital news organization to win a Pulitzer Prize for
investigative reporting. Since then, the Center for Public Integrity and
InsideClimate News have also received the prestigious honor. The Philip
Meyer Award, an annual prize for computer-assisted reporting, has awarded
its last three top prizes to nonprofit outlets.
Whether because they are unbound from bureaucratic legacies or banner
ads, nonprofit news organizations have become beacons of innovation in the
industry, regularly exploring and experimenting with new revenue models,
distribution channels, mediums, and methods of reporting. This innovative spirit has captured the attention of serious funders such as the Knight
Foundation, which in 2013 awarded at least 20 grants to 13 such institutions totaling nearly four million dollars (authors' calculations from Knight's
990s),18 not to mention numerous contributions to individuals and organizations to support the broader journalistic community (one of which has been
the Tow Center itself). Some for-profit media outlets are also experimenting
with foundation support. Since establishing its Strategic Media Partnerships program in 2011, the Gates Foundation has supported initiatives at
The Seattle Times,19 The Guardian,20 and Univision.21
As foundations have entered the fray of journalism, they have brought
with them the M&E philosophy inherited from their work with NGOs and
the international development field. In turn, the livelihoods of nonprofit
newsrooms have become increasingly linked to their ability to collect and
report meaningful metrics of impact. Unsurprisingly, the Gates and Knight
Foundations remain at the forefront of this movement. In 2011, Dan Green,
the head of Gates's aforementioned Strategic Media Partnerships program,
convened journalists, editors, social scientists, and media grantees to share
and strategize tools and methodologies for measuring impact. These sessions resulted in the publication of “Deepening Engagement for Lasting
Impact: A Framework for Measuring Media Performance & Results.”22 The
report offers a comprehensive guide for media makers facing the onus of
impact, breaking the process of assessing it into four parts, described below.
Closely following the framework of results-based management, the report
instructs media grantees to set goals, define a target community, measure
engagement, and ultimately, demonstrate impact. And yet, while this four-step process for measuring impact may appear simple enough, difficulty
arises in its implementation. The report suggests the use of custom surveys,
interviews with stakeholders, and analysis of data from disparate sources.
These are tools that even the largest media organizations struggle to utilize correctly, let alone small nonprofit newsrooms. Many of the report's
proposed methodologies—like using Klout for measuring influential audience members—now appear outdated, even three years after publication. In
sum, while comprehensive, the report ultimately did more to confuse and
overwhelm its audience than it did to crystallize a direction forward.
Beyond these issues, the “Deepening Engagement for Lasting Impact”
report had no response to the problem of scale. By this, we mean the challenge of creating tools and methodologies for measuring impact that can
be applied to more than a single project. Many of the organizations we
interviewed for this study struggled with the time and energy required to
properly measure their impact. This effort was made all the more frustrating when different foundations asked for different metrics or to report them
in different formats. To address this issue, the Gates and Knight Foundations made a 3.25-million-dollar grant in 201323 to the USC Annenberg
School of Communication to found the Media Impact Project (MIP).24 At
the core of its mission is the promise of “developing processes and tools
needed to implement media impact measurement frameworks.” This promise
is manifested primarily in the Media Impact Project Measurement System,
which has similar goals to NewsLynx.25 It seeks to weave together content,
web and social media analytics, and qualitative data into a unified framework for application in a multitude of contexts (the authors of this study
have consulted MIP on their work in this domain). While the system has
yet to be released, if successful it could be a significant step forward in scaling media impact measurement. The Media Impact Project differs from
NewsLynx in that it anticipates outsiders devoting resources to studying a
news organization's operations.
The Quantification of Content
While foundations have played a large role in driving the movement toward
measuring media impact, it would be inaccurate to describe this movement
as strictly top-down. Journalists and editors have themselves long seen the
practice of journalism through an increasingly quantitative lens—the page
view being the largest example of this (the metric simply counts the number
of times an article has been opened). Page views have risen to prominence
because they are relatively easy to capture and compare across contexts: a
news organization can quickly ascertain which stories are driving the most
traffic by comparing their number of page views.
As with any metric, once success is measured in its terms, sites optimize
for it. Slide shows, which are designed to generate a page view for each
image, are one outgrowth of metrics dictating content and user experience.
Some media outlets, such as Gawker, even incentivized their writers by
paying them based on the number of page views or monthly unique visitors
their articles generated26 (monthly unique visitors is a derivation of a page
view that accounts for multiple visits by the same readers). Others saw this
shift toward metric-driven decision-making as at odds with quality journalism
and summarized it as “clickbait.”
The pendulum swing in the other direction started around 2012 when
newsroom figures like Greg Linch, an editor at The Washington Post; Aron
Pilhofer, then at The New York Times; and Jonathan Stray, formerly head
of Interactive News at the Associated Press, began writing about27 and
further discussing28 alternative metrics for the newsroom.
That year, Pilhofer arranged for a Knight-Mozilla OpenNews fellow to
spend a year working on this question,29 during which a co-author of this
report, Brian Abelson, looked at ways to tackle alternatives.30
Many analytics companies have also joined this conversation, oftentimes
declaring the “death of the page view” in so doing.31 Responding to skepticism and disdain for the click-driven web, companies like Chartbeat have
begun developing metrics based on the time readers spend with an article,
rather than the number of instances that article was viewed.32 While “time
on page” has long existed within most analytics platforms, its interpretation is difficult since it can be affected by a reader leaving the page open in
another tab. “Attention minutes” seek to address these problems by using
more sophisticated methodologies to track when a reader is actually engaging with content.33 Many large media organizations like ESPN, Upworthy,
and Medium have openly stated that they now prefer attention minutes
over page views when it comes to measuring and reporting the success of
their content.
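For readers unfamiliar with how such measures are assembled, the sketch below shows the general heartbeat-style aggregation that attention-minute metrics rely on: the page reports small pings only while the reader is actively engaged, and those pings are summed afterward. The interval and payload here are our own illustrative assumptions, not Chartbeat's or any vendor's actual implementation.

```python
# Hypothetical sketch of "attention minutes" style aggregation: a client-side
# tracker sends a small ping every few seconds only while the page is visible
# and the reader has recently scrolled, clicked, or moved the mouse.
# The ping interval and payload are illustrative assumptions.
PING_INTERVAL_SECONDS = 5

def attention_minutes(pings):
    """Sum engaged time from heartbeat pings, ignoring idle gaps.

    `pings` is a list of dicts like {"ts": unix_seconds, "active": bool}.
    """
    engaged_seconds = sum(PING_INTERVAL_SECONDS for p in pings if p["active"])
    return engaged_seconds / 60.0

# A reader who left the tab open for an hour but engaged for only ~4 minutes:
pings = [{"ts": 1_000 + i * 5, "active": i < 48} for i in range(720)]
print(f"{attention_minutes(pings):.1f} attention minutes")  # 4.0
```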
However, despite the promise of attention minutes in better aligning
the interests of publishers and advertisers, the metric offers little help for
truly measuring impact. In an online forum MIP hosted to discuss the relative merits of the metric, Jonathan Stray pointedly asked, “journalism is
very much a multi-stakeholder endeavor, so why should we imagine that
a single number can capture all aspects of the activity?” In other words,
the challenge of measuring impact will not be properly addressed by a single metric. We might even argue that negative externalities similar to
those generated by page views will simply take on new forms in a media
landscape dominated by attention minutes. Ultimately, the problem is not
the shortcomings of particular metrics—in many ways metrics have greatly
improved in recent years. The problem of metrics lies in optimizing newsrooms' activities around a single figure above all others. Any metric given
absolute primacy has the power to overemphasize certain areas and deemphasize others. One of the goals of this research is to add comparison points
and context wherever possible to give the most holistic view of the metrics
currently monitored—whether they be quantitative or qualitative indicators.
Current Efforts
Our project is certainly not the first or only effort trying to understand
impact. Interesting initiatives are taking place on both the qualitative and
quantitative side of the equation. Because a comprehensive review is outside the scope of this paper, we've chosen to discuss only a selection of
projects—with an eye toward ones that have the most similarity to NewsLynx. For a more comprehensive list, see the “Impact Reading List” in
Appendix B.
Qualitative Projects
Two projects aimed at the qualitative aspects of impact are the Center for
Investigative Reporting's (CIR) impact tracker and Chalkbeat's tool called
MORI (Measures of Our Reporting's Influence).34
Center for Investigative Reporting
Lindsay Green-Barber, a post-doctoral ACLS Public Fellow brought on
to serve as the organization's first media impact analyst, designed CIR's
Impact Tracker as a simple online form that journalists and editors can
fill out when they believe an investigation has led to a real-world impact.
The form prompts its users to describe what happened, when it happened,
optional links or documents associated with the event, and to which CIR
story it relates. Users then assign the event to one of 17 carefully curated
categories, which represent, in Green-Barber's experience, the full range of
potential outcomes from CIR's work:
• Law change
• Government investigation
• Reader/viewer/listener contact
• Award
• Advocacy organization uses report
• Screenshot of CIR story in media outlet
• Public official refers to report
• Institutional action (firing, reorganization, etc.)
• Change of policy or regulation
• CIR staffer does a public appearance or interview
• Localization of story using CIR data
• Lawsuit filed
• Editorial
• Screening
• Professional organization cites reporting
• Social network share
• Other
The process also classifies impact across three levels of the event's effect:
• Macro: Stories that have a concrete effect on things like legislation,
changes in staffing involving those in power, or allocation of resources to
a subject. Examples might include the prototypical impact event: the
passage of a new law addressing an investigation's findings.
• Meso: Stories that influence the general discourse and awareness around
a subject. Examples of this could mean increased coverage on a topic at
other media outlets or the public organization of a protest.
• Micro: Stories that lead to changes in individuals' behavior or actions.
Examples of this include an individual who writes a letter to his or her
congressman or stops buying products revealed to be harmful.
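To make the shape of this data concrete, here is a hypothetical sketch of the kind of record such a tracker might produce, pairing one of the curated categories with a macro/meso/micro level. The field names are ours, not CIR's actual schema.

```python
# Hypothetical record for a single impact event, combining a curated
# category with a macro/meso/micro level, as described above.
# Field names are illustrative, not CIR's actual schema.
from dataclasses import dataclass
from datetime import date

CATEGORIES = {"Law change", "Government investigation", "Editorial", "Other"}  # subset of the 17
LEVELS = {"macro", "meso", "micro"}

@dataclass
class ImpactEvent:
    story_url: str
    category: str            # one of the curated categories
    level: str               # "macro", "meso", or "micro"
    description: str
    occurred_on: date
    evidence_link: str = ""  # optional link or document

    def __post_init__(self):
        assert self.category in CATEGORIES and self.level in LEVELS

event = ImpactEvent(
    story_url="https://example.org/investigation",
    category="Government investigation",
    level="macro",
    description="State attorney general opens inquiry citing the series.",
    occurred_on=date(2014, 11, 3),
)
```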
Green-Barber has been able to use the data generated to create analyses35
and reports36 on CIR's impact. To date, other organizations, like The Seattle Times, have already started using the Impact Tracker.
Chalkbeat MORI
Chalkbeat, an education-focused publication, centers its impact collection around an open source WordPress plugin called MORI that combines
article-tagging, event-tracking, and goal measurement.
Before an article is published, staff must categorize the story by type—
Analysis, Curation, Enterprise, Quick Hit, etc.—as well as identify the
post's audience—Education Participants, Educational Professionals, General
Public, or Influencers and Decision Makers.
If a story is related to a meaningful offline event, staff users can go to
that article's page in their CMS and add a narrative description and an
impact tag of either “informed action, the actions that readers take based
on our reporting” or “civic deliberation, the conversations readers have
based on our reporting.”
Rather than simply reporting raw metrics, MORI works by first requiring
editors to predefine goals. In turn, all numbers are displayed in the context
of progress rather than performance. This choice was a conscious decision
on the part of Chalkbeat's creators, who were wary of the utility of placing
decontextualized metrics in front of journalists or requiring them to track
the impact of their stories without being clear as to why they were doing it
in the first place.
MORI users can set goals in any of these categories as well: Content
Production (e.g., for the number of stories they've written across certain
focus areas such as teacher evaluations or common core), Content Consumption (e.g., unique visitors, newsletter subscribers), and Engagement (e.g.,
Facebook fans, offline events hosted by the organization, etc.).
While Chalkbeat had initial concerns about whether its journalists would
adopt MORI, its founders were pleasantly surprised by its reception:
Within a week, we were watching conversations unfold in our newsrooms
about whether this or that thing constituted an impact. People were eager
to tally up the results of our stories. Indeed, reporters and editors quickly
began asking how they could sort the data by the stories they had individually produced, a feature we had planned to roll out more slowly.
For more information, their video walkthrough and white paper on the
topic are very much worth review.37
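The sketch below illustrates the design choice described above: metrics displayed as progress toward predefined goals rather than as raw performance figures. The goal names and numbers are hypothetical, not Chalkbeat's.

```python
# Sketch of the "progress rather than performance" display choice:
# every metric is reported against a goal that editors set in advance.
# Goals and figures here are hypothetical.
goals = {
    "stories_on_teacher_evaluations": 40,   # Content Production
    "newsletter_subscribers": 10_000,       # Content Consumption
    "offline_events_hosted": 12,            # Engagement
}
current = {
    "stories_on_teacher_evaluations": 26,
    "newsletter_subscribers": 7_450,
    "offline_events_hosted": 5,
}

for metric, goal in goals.items():
    pct = 100 * current[metric] / goal
    print(f"{metric:35s} {current[metric]:>7,} / {goal:,}  ({pct:.0f}% of goal)")
```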
Quantitative Projects
The area of quantitative measurement is also seeing a number of new initiatives. The largest trend is what Andrew Montalenti from the analytics
company Parse.ly referred to as the “democratization of the data pipeline,”
where open source tools are maturing to the point that running your own
analytics collection is becoming much easier. This development is notable
as it opens the door for direct ownership over analytics information, as
well as a lower barrier to entry for custom solutions. That is to say, if a
company isn't happy with the speed, interface, or flexibility of Google Analytics, it could more easily build its own in-house platform. This task is no
small undertaking to be sure, but new advances bring it within the realm
of possibility. Two projects in this space, Snowplow and Piwik, are worth
mentioning.
Snowplow
Fairly new, Snowplow is an open source project that allows users to record
user events and store the data on their own infrastructure.38 It's the best
example of the open source “data pipeline” and gives users the real-time
speed of something like Chartbeat with the quantity of time-series data
that Google Analytics provides. For most of what Google Analytics records,
users must wait roughly 24 hours for that data to become available.
The Guardian started using Snowplow in early 2015 for the analytics
on its Soulmates and membership pages. As opposed to Google Analytics,
which tends to look at the page view as the atomic unit of consumption,
Snowplow's event-based system makes it easier to track user behavior and
attach metadata to each action, said Dominic Kendrick, a software engineer
at The Guardian. He also appreciates that it provides this data within five
minutes of any user action. “The speed and control you have over what is
recorded is the biggest thing, because innovation is limited by the speed of
the software you implement. If you use a third party, you're limited to that
schedule,” Kendrick said. “Three years ago no one was doing this, but now
you have options.”
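The sketch below illustrates the general event-plus-metadata pattern Kendrick describes: each user action becomes a small JSON record with arbitrary context attached, sent to a self-hosted collector. This is not Snowplow's actual API; the endpoint and field names are hypothetical.

```python
# Generic sketch of an event-based pipeline: each user action is a small
# JSON event with attached metadata, posted to a self-hosted collector.
# This is not Snowplow's API; the endpoint and fields are hypothetical.
import json
import time
from urllib import request

COLLECTOR_URL = "https://collector.example.org/events"  # hypothetical endpoint

def track_event(action, article_id, metadata=None):
    event = {
        "action": action,              # e.g., "page_view", "share", "video_play"
        "article_id": article_id,
        "metadata": metadata or {},    # arbitrary context attached to the action
        "ts": int(time.time()),
    }
    req = request.Request(
        COLLECTOR_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=5) as resp:
        return resp.status

# track_event("page_view", "soulmates-landing", {"membership_tier": "free", "referrer": "newsletter"})
```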
Importantly, Snowplow concerns itself with efficiently recording and storing event-level interactions with a high degree of customization—it
does not come with a visual dashboard out of the box. Advanced users will
see this as a benefit since it means they can create custom visualizations
that answer specific questions their newsrooms might have. For others, however, it might feel as though they're getting a mere bicycle frame—albeit a robust,
free, and versatile one—when what they had in mind was something they
could ride out of the store.
Which system makes practical sense will differ based on the resources an
organization devotes to analytics, but growth and greater adoption
in this area bode well for future iterations of NewsLynx-like systems.
Snowplow's website keeps an updated list of companies currently using the
system.39
Piwik
Similar to Snowplow, Piwik is another open source analytics suite.40 In addition to storing raw data, it also provides multiple dashboard interfaces for
viewing analytics results. The largest implementation of Piwik we are aware
of is at OpenStreetMap (OSM), a kind of Wikipedia for mapping
the world that relies on open source, community-created mapping data.41
Eric Brelsford, a developer at the nonprofit 596 Acres42 and adjunct
lecturer at the Pratt Institute, uses Piwik regularly. “We wanted just what
Google Analytics does but in an open source way,” Brelsford said. “It also
did a great job of importing our raw traffic data from our server logs so we
could see our traffic from even before we had Piwik installed.”
While a number of WordPress plugins exist for CMS integration,43 newsrooms we spoke to had close to no awareness of Piwik's existence. Vendor solutions still dominate the field of analytics, but as mentioned above the
recent maturation and further testing of these open source tools at scale
could change that dynamic in the future.
Internal Newsroom Tools
A number of news organizations have built their own analytics dashboards.
While we won't be able to go over each of them in depth, the links below
(and end citations) provide further detail.
NPR Visuals Team’s “Carebot”
Although our primary focus in this report is on investigative newsrooms,
other organizations with different goals are also experimenting in this space.
Carebot is the NPR Visuals team's effort to capture how its projects,
often human-interest and photography-based, affect its audience. “What
does impact for a team like ours look like?” asked Brian Boyer, editor of
the Visuals team. “We came to the realization that what we create is
empathy—we try and make people care about someone else. Carebot is
finding ways to measure if we made people care or not.”
An open source project, Carebot focuses on user actions44 (e.g., how
many people shared a story or how many liked it on Facebook) but with an
added twist: It divides that number by the total unique visitors for a given
story. This metric allows the team to say things like, “thirty percent of
all people that read this shared it in some way.” Such statements let them
more easily compare articles while controlling for variations in page views or
total traffic.
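The arithmetic behind such statements is straightforward; a short sketch with made-up numbers follows.

```python
# The ratio behind statements like "thirty percent of all people that read
# this shared it in some way": engagement actions divided by unique visitors.
# The numbers are made up for illustration.
def share_rate(shares, likes, unique_visitors):
    """Fraction of readers who shared or liked the story."""
    return (shares + likes) / unique_visitors

print(f"{share_rate(shares=1_800, likes=1_200, unique_visitors=10_000):.0%}")  # 30%
```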
The other part of the project involves adding questions at the end of
a story, such as, “did you love this story?” or “did you learn something
from this story?” If users answer “yes,” they are led to like the story on
Facebook or donate to the station. These questions aim to bring people into
the public radio family and are the result of thinking about user experience
and user flow as a crucial part of the impact question. (Example: After a
reader finishes a story, what should he or she do? And if we can think of
actions we prefer over others, how can we optimize and measure that?)
“There are stories that are going to have great raw numbers because they
are about a celebrity comedian that's going to host The Daily Show and the
controversy about his tweets: that's just going to succeed,” Boyer said. “So
how do you take work that is more to the mission of the organization and
hold it up to say, ‘this thing might not have the page views, but it's doing
the mission.’ Carebot is about ‘how do we prove we're doing the mission?’”
Indeed, mission-driven metrics is a good label for this type of thinking and,
as we'll discuss later, highlights the crucial intersection between successful
impact measurement and stated organizational goals.
NPR’s Analytics Dashboard
Also working at NPR, Melody Kramer and Wright Bryan designed and
built an internal dashboard based specifically on the questions their editors
and reporters had in the course of a news day.45 As we'll discuss further,
their user-centered design led them to frame visualizations in a friendly
and inviting way and served as a great source of inspiration for parts of
NewsLynx. Their platform took shape after hours of interviews with staff,
focusing specifically on their daily decisions and how technology could help
them arrive at smarter decisions faster.
ProPublica
ProPublica is another outlet taking significant strides to measure its impact. In a 2013 report, Richard Tofel, ProPublica's president, outlined the
organization's approach to tracking impact:
ProPublica makes use of multiple internal and external reports in charting
possible impact. The most significant of these is an internal document called
the Tracking Report, which is updated daily . . . The report records each
story published . . . and any prominent reprints or pieces following the work
by others (with most of this data derived from Google Alerts and clipping
services). Beyond this, the Tracking Report also includes each instance of
official actions influenced by the story (such as statements by public officials
or agencies or the announcement of some sort of non-public policy review),
opportunities for change (such as legislative hearings, an administrative
study or the appointment of a commission) and, ultimately, change that has
resulted. These last entries are the crux of the effort. They are recorded
only when ProPublica management believes, usually from the public record,
that reasonable people would be satisfied that a clear causal link exists
between ProPublica’s reporting and the opportunity for change or impact
itself.46
Tofel goes on to explain that ProPublica tracks these outcomes for years
after an article's publication—“possible prosecutions and fines continue to
result from this work long after the reporters involved have moved on to
other work, and ProPublica notes these as they emerge.” In tracking the
impact of its work, ProPublica has also developed sophisticated tools like
PixelPing,47 which allows for measuring traffic to its articles that have been
republished on other sites.
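As a rough illustration of the general tracking-pixel technique (not PixelPing's actual implementation), a republished copy of a story can embed a one-by-one image whose URL identifies the story, and a small server logs each time that image loads. Everything in the sketch below, including the port and query parameter, is a hypothetical assumption.

```python
# Sketch of the general tracking-pixel technique (not PixelPing's actual
# implementation): republished copies embed a 1x1 image whose URL carries
# the story ID, so each load can be logged server-side.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A 1x1 transparent GIF.
PIXEL = base64.b64decode(b"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        story = params.get("story", ["unknown"])[0]
        referrer = self.headers.get("Referer", "direct")
        print(f"republication hit: story={story} from={referrer}")  # log the hit
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8080), PixelHandler).serve_forever()
```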
The Guardian’s Ophan System
The Guardian uses a custom system called Ophan that is tuned to the
needs of editors so that they can quickly see what is being read on the site
and start to explain the why behind those trends.48 The system is quite
detailed and ever-changing. The linked walkthrough, however, is a good
starting point.
The New York Times’s Package Mapper
One typical newsroom-specific problem is how to understand the way readers engage with a package of stories. Editors have ideas about how readers
should navigate between these pages, but few teams are tracking how they
actually do so. To better understand these flows, James Robinson built
Package Mapper to track the multi-story experience.49 Similar to Carebot
and miles away from the simplistic page view, this project looks at user
experience and user flow as a key part of understanding performance.
Where NewsLynx Fits: Incorporating Qualitative and Quantitative
While there are many current efforts to address the challenges of capturing
the quantitative and qualitative impacts of media, there remains a clear
need for a comprehensive platform that incorporates these disparate data
streams while remaining relatively agnostic about which metrics are viewed.
While Chalkbeat's MORI is an attempt to do just this, its designers have
an unusually high level of understanding about their organization's content and goals, which many newsrooms do not share. MORI does exist as
a WordPress plugin, which is likely the best choice if one wants to make
a plugin that the maximum number of newsrooms could use, but requiring CMS integration can hinder adoption in many organizations. MIP's
Measurement System promises a similar platform, but it's still in active
development. We designed and built NewsLynx as an attempt to address
needs in the present. By framing it as a research project conducted in the
open, we hope to share our successes and failures so as to help the media
impact community move forward.
Research Findings
Preparatory Research
Our background research unfolded in two parts by way of:
1. A survey we circulated online. It was announced via our launch blog
post and emailed to newsrooms that the authors assessed as within the
demographic of our target newsroom.50
2. Focused interviews with newsrooms that fit our target demographic (and
some that didn't).
Our survey (Appendix A) looked at six categories and adapted some
questions from a similar survey circulated by Joanna Raczkiewicz at the
Harmony Institute in 2013. Newsrooms agreed to participate anonymously
and only be identified by their size and general characteristics. The survey
focused on the following main areas:
1. Organization profile—what size/type of newsroom?
2. Content streams—what types of stories (cultural, aggregation, investigative, etc.) and publishing schedule?
3. Current quantitative analytics practices
4. Current qualitative analytics practices
5. Institutional challenges and goals
6. Actions—what is this information used for/who are the stakeholders?
The 26 organizations that responded to our survey varied greatly in size
and in their prior experience measuring impact. Some employed a small
team that generated daily reports circulated to staff, while others had no
single person officially charged with the task.
We conducted follow-up interviews, usually an hour to an hour and a half long, from March to July 2014 with newsrooms that had completed the
survey and had indicated interest in using the beta NewsLynx platform.
It was important that they had websites with which the software could
interface, namely an RSS feed and Google Analytics.
Below is a summary of our survey results and more detailed interviews.
Going forward, we'll refer to the target user of NewsLynx as an Impact
Editor, or IE, as shorthand to describe the position tasked with data management.i

i. This isn't an official title and we'll use it to refer to what is sometimes just one person or, alternatively, a small team. In our research, this role varies widely, from full-time positions to a single person who juggles impact and analytics reports with numerous other duties.
What Do Newsrooms Measure and Why?
One of the most surprising sentiments we heard echoed throughout our
research was the importance still placed on quantitative measurements
such as page views. The reasons for this generally fell into two categories:
journalists collected it because donors asked for it, or they measured it
because they did see some utility in growth trends, as explained in the
introduction.
For organizations that rely heavily on syndication (newsrooms that allow
others to copy their articles whole-cloth), “reach” was also a big sticking
point. While techniques for measuring this differ greatly, it is generally
calculated by multiplying the organization's circulation or home page traffic by a varying, unscientifically derived percentage. This practice might
seem blasphemous until one considers Google Analytics—one of the biggest
and most popular platforms developed and maintained by arguably the
largest and most powerful technology company in the world—which only
returns estimates of any given metric. In our experience Google Analytics
returns metric values only in multiples of 12, for example. Google Analytics
can return more precise values with its enterprise product, but that cost is
outside the budget of all but a small number of news organizations, leaving
imprecision as often the norm.
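A sketch of that back-of-the-envelope arithmetic, using entirely hypothetical syndication partners and pass-along percentages:

```python
# The back-of-the-envelope "reach" arithmetic described above: a partner
# outlet's circulation or home-page traffic times an assumed pass-along
# percentage. All figures here are hypothetical.
syndication_partners = {
    "regional-daily": {"circulation": 250_000, "assumed_readership_pct": 0.08},
    "public-radio-site": {"circulation": 90_000, "assumed_readership_pct": 0.15},
}

estimated_reach = sum(
    p["circulation"] * p["assumed_readership_pct"] for p in syndication_partners.values()
)
print(f"Estimated syndication reach: {estimated_reach:,.0f} readers")  # 33,500
```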
Even when organizations acknowledged that both quantitative and qualitative measures were valuable, quantitative measures were still more closely
tied to their business models. “We are working to diversify revenue sources
and need strong metrics to buttress our qualitative measures,” wrote one
growing investigative organization. One broadcast organization summarized
this conundrum of revenue versus mission:
In some ways, audio-listens are the single most important thing we can
track because that drives underwriting and donations. But we are also
mission-driven, so journalism that affects laws and people’s lives and sense
of themselves and their relations to others can be equally important.
Although many organizations use quantitative measures, the lack of
insight they provide is frustrating. Organizations expressed a desire to
find a new metric that could satiate the hunger for quantitative simplicity
while offering useful insight, usually in terms of more information about the
audience's relationship to their articles.
In response to the question, “how could measurement help your business
or content strategy?” one organization wrote: “A qualitative metric we
could present to shareholders showing the ROI of our investment in social
media outreach, our marketing efforts, and our dedication to usability.”
To construct such a metric, you would need to agree on some proxy for
popularity or discussion level on social platforms (likes, shares, mentions,
retweets all come with their own caveats) while taking into account promotional efforts on individual articles. You would then need to segment these
results across devices and, if traffic or behavior patterns differ, be able to
attribute those differences to either your internal efforts or external factors.
This analysis would more realistically be shown in multiple metrics, but
the desire for it reflects the need to understand how audiences are reached,
along with the pressure to explain what, if anything, is having a demonstrable effect.
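As one hypothetical illustration of what constructing such a composite would involve, the sketch below weights a few social proxies into a single per-article score and segments it by device. The weights, field names, and figures are arbitrary assumptions, not a recommended standard.

```python
# One hypothetical way to fold social proxies into a single per-article
# score, segmented by device. The weights and fields are arbitrary
# assumptions for illustration only.
WEIGHTS = {"shares": 3.0, "mentions": 2.0, "likes": 1.0}

def social_score(counts):
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

article_by_device = {
    "desktop": {"shares": 120, "mentions": 40, "likes": 900, "promoted": True},
    "mobile": {"shares": 310, "mentions": 25, "likes": 2_100, "promoted": False},
}

for device, counts in article_by_device.items():
    flag = "promoted" if counts["promoted"] else "organic"
    print(f"{device:8s} {flag:9s} score={social_score(counts):,.0f}")
```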
The desire to know more about how the audience engaged was echoed
elsewhere as well:
From a content perspective, [impact measurement] could help us figure out
where to focus our energies, in theory. Given that we are a cash-strapped,
resource-strapped nonprofit, should we be spending so much time making a piece of radio and then also adding stunning visuals and writing a
compelling text story—or are those just bells and whistles that will get us
minimal ROI? What’s the difference between the users who get our stuff on
the radio, on the web, and via their phones (on our app or other apps), and
are they significantly interested in different kinds of content and do they
have different time constraints? What, if anything, might drive people who
encounter us out of nowhere on social media to explore our other content,
like it, and maybe one day become not just a return visitor but a member?
What messaging and coverage encourages participation in user-generated
stories (and are those things which can actually help us AND serve the public good?) as well as become part of the public radio family? This is just a
start.
The sentiment was echoed at another organization:
We deeply distrust the page view stat and we see other organizations with
more tech resources develop their own fancy metrics such as Medium’s Total Time Reading or Upworthy’s Attention Minutes and can’t help but feel
we’re missing out on essential things about our audience. Google Analytics feels both too complicated and not powerful enough for the questions
we want to answer about readers. It doesn’t help that Google, Facebook,
Twitter, Quantcast, Comscore, and anything else we’ve used never agree on
anything. And of course quantifying impact is tough and while we try very
hard, some recognized external standards, if wise, could be useful.
At other organizations we repeatedly encountered the sentiment that existing analytics platforms are "both too complicated and not powerful enough." By "not powerful enough," users mean that these platforms don't help answer
sophisticated questions that could bolster arguments around, for example,
content strategy. Should a radio station continue putting resources into text
versions of their stories for the web? Are people not scrolling all the way
down the page because the headline and first three grafs were succinctly
written and the reader “got it” or because the story wasn't interesting? Or,
is the website's design—not the journalism—contributing to a high bounce
rate? Many organizations would like answers to complex questions like
these but, for the moment, data in simpler forms is what they are being
asked to report, and technology platforms can't answer these questions out
of the box.
It's important to point out that some newsrooms completely disregard
quantitative metrics or see them as only potentially valuable. As one small
investigative organization wrote: “Our mission is to have impact and improve the public interest. For a while we chased traffic and found it negatively impacted our work, and brought no results.”
The pressure to provide quantitative metrics can also be a bit of a moving target—driven by the shifting tastes of funders or a changing understanding of what constitutes meaningful measurement. In fact, the Media Impact
Project is currently developing a two-sided booklet addressing this very
dynamic—what newsrooms are currently measuring on one side and what
information funders are requesting (or should be requesting) on the other.ii
Nevertheless, many of these responses influenced our decision to keep a
number of quantitative metrics in our system and augment their usefulness
through comparison points and context.
What Do Impact Reports Look Like and How Are They Used?
Responses were incredibly mixed on both of these points. While most impact reports are strictly internal, some organizations such as the Wisconsin
Center for Investigative Journalism and ProPublica publish examples of
their impact on their websites.51 The former also has an ongoing project
called Investigative Reporting + Art for which it has commissioned artists
to create sculptures inspired by the center's reporting.52 The pieces then
travel the state to schools and other public institutions.
Some organizations only produced reports for foundations that fund
them, whereas others produced regular newsletters circulated among the
editorial staff—sometimes including one-on-one emails notifying reporters
of significant events. Here is an example from a mid-sized investigative and
culture publication:
In addition to the qualitative parts of the Board report and grant reports
referenced above, we produce a biweekly internal memo that catalogues the
qualitative successes of the prior two weeks. We break these up into the
following categories:
• Impact: Politicians citing our reporting, law changes, corporate
actions, etc.
• Events: Either that we've organized or at which our people have appeared.
• In the News: A small selection of the highest profile and most interesting links and citations from other outlets.
• Social Love: A small selection of the highest profile and most interesting social media mentions of our reporting.
• Awards: All the awards we've won and been nominated for.

ii. Jason Alcorn and Lauren Fuhrman of InvestigateWest and the Wisconsin Center
for Investigative Journalism, respectively, will look at best practices for impact reporting.
Jessica Clark of Media Impact Funders will address the issue of how foundations could
best interact with newsrooms.
On the more quantitative end is one large-circulation, daily organization
based in South America:
I usually relate different metrics of story performance (like time spent versus
characters) and section performance (which sections get more or less visits
than would be predicted by the amount of stories they publish). Then I
go on to analyze what characteristics underperforming and outperforming
stories have.
Organizations that keep the editorial team regularly informed of impact
events through such reports said that it helps improve morale and keep
the newsroom focused on its mission. As mentioned before, these efforts
are more successful when the organization has articulated its goals and,
consequently, what constitutes important measures of impact for it.
Challenges
Despite advances in understanding what successful impact measurement
could look like, the fact remains that cataloging information will always
take time and expertise, even with custom-built tools. Technology can
solve some of the efficiency problems—many of which we tried to tackle
with NewsLynx—but Impact Editors will still be required to make sense
of the information. Goal-tracking remains an organizational and cultural
challenge, not a technology problem.
As one organization succinctly put it, impact reporting is “time-consuming
and measurements of engagement are still elusive.”
Platform Description
We designed NewsLynx around two sets of tasks where staff found difficulties:
1. Managing an event-tracking workflow while juggling other responsibilities.
2. Understanding what it means for a story to “do well” and what happened to cause that.
NewsLynx has two main interfaces: a workflow-management tool for
collecting and organizing indications of impact (mostly done in a section
called the Approval River) and a section for analyzing stories' impact
where comparisons and related metrics can be seen.
Below is a diagram of the site layout. The next sections will go into
detail about our impact framework and each of the platform's interfaces.
For a more technical walkthrough and code repositories, please see our
GitHub: http://github.com/newslynx.
Recipes: Bots that automatically flag impact indicators from external services.
Approval river: Users manage content coming into the system from recipes; meaningful
impact indicators are then attached to stories as “impact events.”
Subject Tags: Free-text labels to describe the content of stories.
Impact Tags: Free-text labels to describe impact within an impact framework to allow for
grouping and comparisons.
The Model
These diagrams show the concepts that are implemented in the NewsLynx
application. Each of the significant concepts is explained below.
Stories: Published output of the news organization.
Recipes: The output of recipes automatically populates the Impact Events.
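To make these relationships concrete, here is a minimal sketch of how the model's main concepts could be represented in Python, the language of the NewsLynx back end. The field names are illustrative assumptions, not the platform's actual schema.

    # Illustrative sketch of the model concepts; field names are assumptions,
    # not the actual NewsLynx schema.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Story:
        """Published output of the news organization."""
        url: str
        headline: str
        publish_date: datetime
        subject_tags: List[str] = field(default_factory=list)

    @dataclass
    class ImpactEvent:
        """A meaningful impact indicator attached to a story."""
        description: str
        story_url: str
        impact_tags: List[str] = field(default_factory=list)

    @dataclass
    class Recipe:
        """A bot that flags candidate impact indicators from an external service."""
        source: str   # e.g., "twitter-list" or "google-alert"
        query: str    # e.g., a domain name or a keyword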
The Impact Framework, as Implemented
A major goal of our initial research was to investigate the feasibility of
an impact taxonomy—shared terms that could make sense for multiple
organizations.
Taxonomies: On Defining the World
Taxonomies are notoriously difficult to create because real-world data does
not necessarily fall into discrete buckets. Take, for instance, Jorge Luis
Borges's Celestial Emporium of Benevolent Knowledge, which divided animals into the following categories:53
• Belonging to the emperor
• Embalmed
• Tame
• Suckling pigs
• Sirens
• Fabulous ones
• Stray dogs
• Included in the present classification
• Frenzied
• Innumerable
• Drawn with a very fine camelhair brush
• Et cetera
• Having just broken the water pitcher
• That from a long way off look like flies
Although the list is humorous, creating any taxonomy inherits the same absurdity
and measure of futility. In the news context, we face constantly shifting
content types as well as desired outcomes that differ on a per-project basis.
In developing our framework, instead of implementing a strict taxonomy, we
intentionally left the question of what constitutes an impactful event up to
the discretion of the newsroom.
One strategy that gives more comparative power to qualitative taxonomies involves assigning values to each category—making it more like
a quantitative measure. Since it's one of the simplest examples of impact
to visualize, let's see how this type of impact classification would play out
around legislative activity.
For example, let's say that any article that led to a law creation might
earn a rating of 10. An article that gets cited by an influencer (however defined) would earn a 6, and so on. This strategy gets tricky, however, since
one must quantify just about everything. How many points separate a bill
passing, one being proposed, a watered-down bill that passed but didn't fully
solve the problem, and a bill that didn't pass and yet spurred vigorous public debate and changed the narrative around an issue? How would one assign points around different assumptions of causality? If Congress is moved
to investigate an industry, how much of that can you attribute to any one piece
of reporting on the topic that came from any one news organization? Would
the value-scale seek to capture the strength of that causal link?
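For illustration only, the point-value approach described above amounts to something like the following sketch; the 10 and 6 come from the example in the text, the other weights are invented, and every new scenario demands another arbitrary entry.

    # Sketch of a point-value impact scale. The arbitrariness of the weights
    # is precisely the trouble the surrounding discussion identifies.
    IMPACT_POINTS = {
        "law_created": 10,
        "influencer_citation": 6,
        "bill_proposed": 4,        # hypothetical weight
        "public_debate_shift": 3,  # hypothetical weight
    }

    def impact_score(events):
        """Sum the point values of a story's impact events."""
        return sum(IMPACT_POINTS.get(event, 0) for event in events)

    print(impact_score(["law_created", "influencer_citation"]))  # prints 16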
To borrow a term from statistics, we think this type of thinking “overfits” the model to the data. It might perfectly describe one scenario, but
it loses all generalizability to new events or events at other newsrooms. To
borrow another image from Borges, “On Exactitude in Science” describes
the uselessness of a one-to-one mapping between a subject and a frame used
to understand it:
. . . In that Empire, the Art of Cartography attained such Perfection that
the map of a single Province occupied the entirety of a City, and the map
of the Empire, the entirety of a Province. In time, those Unconscionable
Maps no longer satisfied, and the Cartographers Guilds struck a Map of
the Empire whose size was that of the Empire, and which coincided point
for point with it. The following Generations, who were not so fond of the
Study of Cartography as their Forebears had been, saw that that vast map
was Useless, and not without some Pitilessness was it, that they delivered
it up to the Inclemencies of Sun and Winters. In the Deserts of the West,
still today, there are Tattered Ruins of that Map, inhabited by Animals
and Beggars; in all the Land there is no other Relic of the Disciplines of
Geography.54
Model Concept: Impact Tags
For the reasons explored above in the discussion of taxonomies and for the
simple fact that impact measurement must be closely wed to an organization's goals, we chose broad language to define our impact framework. In
our system, each organization is free to make an “impact tag” for any type
of event it finds important. For structure, each tag must have both a category and a level, ideas for which we took inspiration from Chalkbeat and
the Center for Investigative Reporting (CIR), respectively.
Model Concept: Categories
Chalkbeat's MORI system has only two categories of impact. An event
is evidence of “civic deliberation”—did someone talk about it or discuss
the issue in some way?—or an “informed action”—did it, at least in part,
bring something about? This structure was useful not only in avoiding the
impact rabbit hole described above, but also in disambiguating the term
“impact.” For example, some newsrooms call reader reactions or references to their work in legal proceedings “impact.” And while you could
argue that the article “brought about” that citation, the state of affairs
didn't change. We felt this distinction between talk and action was an
important one to standardize.
In addition to these categories, which we renamed “Citation” and
“Change,” we added two more: “Achievement,” which includes articles
that win an award, see record traffic, or are cited more than any other
story (in effect a meta category); and “Other” to maintain the spirit that
NewsLynx is an open research platform and the framework is open to
evolution. If trends develop within the “Other” category the framework
can and should adapt.
Model Concept: Levels
From CIR we borrowed the idea that an impactful event can happen at
different scales. Its tracker uses the terms Micro, Meso, and Macro, as
previously discussed. To make these more understandable to the average
newsroom and to expand on the idea, we ended up with five levels:
• Institutional
• Community
• Individual
• Media
• Internal
The most novel of these levels is Internal, which recognizes that projects can
shift organizational priorities or become models for future work.
The Combination of Tags, Categories, and Levels
Putting these three concepts together, a sample configuration could look
like the following:
Tag name                      Category       Level
Reprint/Pickup                Citation       Media
Localization                  Citation       Media
Influencer mention            Citation       Individual
Editorial                     Citation       Media
Award                         Achievement    Institution
Increased awareness           Change         Community
Staff interview/appearance    Citation       Media
Government investigation      Change         Institution
Internal discussion           Citation       Internal
Policy/regulation change      Change         Institution
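A configuration like the sample above is just data, and could be expressed in a few lines of Python; the field names below are ours, not a prescribed format.

    # The sample configuration above expressed as data (a subset of the rows).
    from collections import defaultdict

    IMPACT_TAGS = [
        {"name": "Reprint/Pickup",           "category": "Citation",    "level": "Media"},
        {"name": "Influencer mention",       "category": "Citation",    "level": "Individual"},
        {"name": "Award",                    "category": "Achievement", "level": "Institution"},
        {"name": "Increased awareness",      "category": "Change",      "level": "Community"},
        {"name": "Policy/regulation change", "category": "Change",      "level": "Institution"},
    ]

    # Grouping by category shows how a newsroom's taxonomy is balanced.
    by_category = defaultdict(list)
    for tag in IMPACT_TAGS:
        by_category[tag["category"]].append(tag["name"])
    print(dict(by_category))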
Other Concepts for Possible Inclusion
Research is always ongoing, and since NewsLynx's launch we
have had discussions about areas of impact that might be worth including
as a part of the default configuration. Kramon, of The New York Times,
uses a similar “Change” category as the most important measure, calling
it “the gold standard for journalism.” He also stressed the importance of
celebrities and humorists commenting on the Times's work as a sign that its
journalism broke out into larger popular culture:
You want everyone from Oprah to Taylor Swift to speak out on a subject
and ideally praise the work of The New York Times. I remember once we
did a story that Rush Limbaugh praised, and we were able to say, “everyone
from Rush Limbaugh to Paul Krugman praised this work.”
Kramon added:
Humor really works, also. If people pick up on it and try and make it funny,
that should be a compliment to the journalistic organization—even if it's
just a local cartoonist. It doesn’t need to be somebody that’s nationally
known.
You could most easily incorporate these ideas with tags in the “Citation”
category at the level of “Media” or “Individual.” For example:
Tag name                   Category    Level
Celebrity commentary       Citation    Individual
Humorist/spoof             Citation    Media
Pop culture appearance     Citation    Media
Model Concept: Subject Tags
The other type of classification we employed is the subject tag, which is an
open tagging system to assign stories to different editorial verticals. This
could also be used to group stories that appear in a series or package. As
we'll discuss in the next section, NewsLynx runs statistics across subject-tag groups so newsrooms can see how different articles or packages perform
against other groups.
NewsLynx Interface
The Approval River
The NewsLynx Approval river is a section where users manage impact
indicators for stories their newsroom publishes. It allows users to create
“recipes” that let them connect to existing clip-search-type services or perform novel searches on social media platforms. The results of these recipe
searches go into a queue where they can be approved or rejected.
The tool is designed to streamline (and perhaps replace) an existing
common workflow for measuring impact where IEs monitor one or more
news-clipping services for mentions, local versions, or republication of their
work (if the organization allows that). Many of the IEs we interviewed expressed difficulty in managing the diversity of clipping services they used,
as well as storing the meaningful hits in one place. In addition, the process's complicated nature—often requiring different login credentials for
each service—took up an inordinate amount of time and raised the barrier
to entry for training someone new on the system.
Out of the box, NewsLynx supports the following recipes:
• Google Alert
• Twitter List
• Twitter User
• Twitter Search
• Facebook Page
• Reddit Search
The Approval River provides easy methods for gathering information
from social platforms. For example, one recipe service NewsLynx includes is
the ability to search a Twitter List for keywords. Let's say, as a part of an
investigation, your organization has identified 25 key influencers or decision-makers and has added their handles to a Twitter List. A recipe could watch
that list and notify you of discussion on the topic, or when anyone on it
shares a URL from your site. This alert would show up in the Approval
river and, if approved, would be assigned to the relevant article with any
other information the IE wishes to add.
A simple way to think of this page is, “if this, then impact.”
With some programming knowledge, anyone can add new recipes; NewsLynx can also be set up to receive emails from different clipping services and process those streams as recipe feeds.
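As a rough sketch of that "if this, then impact" logic, the following assumes tweets from a watched Twitter List have already been fetched into simple dictionaries; the field names are illustrative, not the Twitter API's.

    # Sketch of a recipe: flag tweets from a watched list that link to our site
    # or mention a watched keyword. Tweets are assumed to be pre-fetched dicts.
    SITE_DOMAIN = "example-news.org"   # hypothetical domain

    def candidate_impact_events(tweets, keywords=()):
        """Yield tweets that would land in the Approval River for review."""
        for tweet in tweets:
            links_to_us = any(SITE_DOMAIN in url for url in tweet.get("urls", []))
            mentions_topic = any(k.lower() in tweet.get("text", "").lower() for k in keywords)
            if links_to_us or mentions_topic:
                yield tweet

    tweets = [
        {"user": "@decisionmaker", "text": "Worth a read", "urls": ["http://example-news.org/story"]},
        {"user": "@bystander", "text": "Unrelated chatter", "urls": []},
    ]
    print([t["user"] for t in candidate_impact_events(tweets, keywords=["housing"])])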
Analytics Tools
The analytics section is where we hope users can gain insight into metrics
that interest them and view any information about an article all in one
place.
We designed this section of the platform with two guiding principles in
mind: Make it navigable for the average newsroom user and give context to
numbers and events wherever possible.
On the first point, we organized our data and presentation at the article
level, which is often not the case in platforms such as Google Analytics. We
also labeled our graphs and data visualizations with sentences and questions, such as “who’s sending readers here?” instead of more ascetic labels
like “traffic-referrers.” On this point we were inspired greatly by NPR's
previously mentioned internal metrics dashboard, which proposes these semantic headers as a way to make dashboards more easily approachable for
average newsroom users.55
To understand “how well a story did,” metrics need context. As a result, our other principle was never to show a number in isolation—any value
should always be contextualized with respect to some baseline. In our two
analytics views—the multi-article comparison view and the single-article
detail view—we provide this by always comparing a given metric to a baseline value, such as “average of all articles along this metric.” Users can
easily change this baseline to the average of all articles in a given subject-tag grouping. In other words, "show me how these articles performed as
compared to all politics articles.”
We also do our own novel data collection to view article performance in
the context of newsrooms' promotional activities. For instance, we collect
when a given article appeared on a site's home page, when any main or
staff Twitter accounts tweeted it out, and when it was published to the
organization's Facebook page or pages.
We'll walk through each section to see how we implemented these comparisons and context views in the platform.
Article Comparison View
When users open the Articles screen, they see a list of their top 20 articles
with bullet charts across common Google Analytics and social metrics.iii

iii. In this version, we chose to sort these articles by page views. A proposed idea for
the future would see the entire list of metrics be customizable. We sort by page views
instead of publish date, which would be the other logical choice, because Google
Analytics takes at least a day to populate data. As a result, the dashboard would
always show incomplete data for organizations that publish daily. As the system grows
to support other metrics, this view-starting position could be customizable as well.
Bullet charts are so named because they include a small bar, the bullet,
that can show whether a given metric is above or below a certain
reference point. In this view, users can see which articles are overperforming or under-performing at a glance based on whether the blue bar
extends beyond the bullet or falls short, respectively. Users can change the
comparison point from “all articles” to any group of articles sharing a
subject tag.
To guard against a few high-performing articles skewing the results, the
bullet charts use the 97.5th percentile as the maximum value. In addition,
users can select the median value as the comparison point as opposed to the
average, since the median will be more resilient to outliers skewing results.
Users change the comparison point using the dropdown menus at the top of
the middle column.
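A minimal sketch of that comparison logic, assuming per-article page-view counts have already been collected (the percentile calculation here is a simple approximation):

    # Compare each article against a baseline (mean or median) and cap the
    # chart scale at roughly the 97.5th percentile to blunt outliers.
    import statistics

    def comparison_points(values, use_median=False):
        baseline = statistics.median(values) if use_median else statistics.mean(values)
        ordered = sorted(values)
        chart_max = ordered[min(len(ordered) - 1, int(0.975 * (len(ordered) - 1)))]
        return baseline, chart_max

    pageviews = [1200, 900, 15000, 800, 1100, 950, 1300]
    baseline, chart_max = comparison_points(pageviews, use_median=True)
    print(baseline, chart_max)  # 1100 1300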
The design of this section is meant for small-batch comparisons between
groups of articles. For instance, IEs can compare the seven articles in one
investigation against those from another package, or they could compare
recent articles against historic performance.
Article Detail View
NewsLynx also lets users easily drill-down to the individual article level
to see a timeline of qualitative and quantitative performance, as well as
contextualized detailed metrics about traffic sources and reader behavior.
This view also shows top-level tag information and allows users to manually create an impact event, as well as download an article's data.
The information on this page is divided into three sections:
• This story's life—A time series of page views, Twitter mentions, Facebook shares, time on home page, internal promotion, and other online or offline events created manually or as assigned through the Approval River.
• How people are reading and finding it—A selection of Google Analytics metrics around platform breakdown, internal versus external
traffic, and top referrers. Similar to the comparison view, each metric
includes a customizable comparison metric.
• Who tweeted it?—A comprehensive-as-possible list of accounts that
have tweeted the article sorted by the account's number of followers.
In the interface, these three sections described above are prefaced by the
text, “Tell me about. . . ” in an effort to make their use and functionality
readily apparent to the user.
This Story’s Life
This visualization aims to combine the relevant information for better understanding why a story performed the way it did. We chart page views
over time within the context of a newsroom's internal promotion efforts.
The orange blocks are time on home page, the light blue dots are when the
main or staff Twitter accounts tweeted the story, and the darker blue dots
are when the story appeared on the main Facebook page or pages. Similarly, any events added through the Approval River or created manually
appear on the time series grouped by category. Events that exist across
multiple categories are shown once in each relevant category row.
Any impact events that have been attached to this article appear below in
a list filterable by impact tag, category, level, and date.
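A hedged sketch of how such a chart could be drawn with matplotlib, using synthetic data in place of the platform's real collection:

    # Sketch of a "story's life" chart: page views over time with promotion
    # overlays. All data here is synthetic.
    import matplotlib.pyplot as plt

    hours = list(range(48))
    pageviews = [50 + 400 / (1 + abs(h - 10)) for h in hours]  # synthetic spike
    tweet_hours = [9, 11, 20]       # when staff accounts tweeted the story
    facebook_hours = [12]           # when it was posted to the Facebook page
    homepage_span = (8, 30)         # hours the story spent on the home page

    fig, ax = plt.subplots()
    ax.plot(hours, pageviews, label="page views")
    ax.axvspan(*homepage_span, alpha=0.2, color="orange", label="on home page")
    ax.scatter(tweet_hours, [max(pageviews)] * len(tweet_hours), marker="v", label="tweets")
    ax.scatter(facebook_hours, [max(pageviews)] * len(facebook_hours), marker="s", label="Facebook posts")
    ax.set_xlabel("hours since publication")
    ax.set_ylabel("page views")
    ax.legend()
    plt.show()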
How People Are Reading and Finding It
We consider this one of the most useful pages in the platform. It displays
the metrics in Google Analytics that newsrooms expressed most interest in
or need to report:
• What device people were reading on
• Whether traffic came from external or internal sources
• Who referred traffic
The functionality and design mirror the bullet charts in the comparison
view, allowing users to see these numbers in relationship to all articles or
a specific group of articles. Hovering over the bullet chart's marker will
display the comparison value in a tooltip. The referrer information is particularly useful when IEs are asked to figure out the source of traffic for a
popular story.
Who Tweeted It?
This section shows a list of everyone who tweeted a link to this article as
obtained by querying the Twitter Search API on a regular basis. The list
is sorted by the number of followers the account has, in descending order.iv
If a Twitter Search recipe in the Approval River wasn't set, for example, or
someone previously not on the newsroom's radar tweeted a link, IEs could
look here and create a new event using the button on the top of the page.

iv. Unfortunately, Twitter doesn't guarantee a return of every tweet in its search
results. Only through access to the Twitter firehose would a comprehensive list be
possible.
Users can also export all of this data using the button at the top of the
page. Importantly, the back-end data-collection code is separate from the
front-end interface. As a result, the data collection is completely agnostic
about how it is displayed, allowing a newsroom to design its own custom
views or visualizations of data. What we've produced here, directed by our
research, is a first attempt at giving newsrooms the broad view of their
stories and packages as they relate to one another, as well as easy-to-use,
drill-down capabilities for when they need to explain the narrative behind
how a particular story performed.
Newsroom Use
After four months of development, we launched NewsLynx in October 2014
with roughly six newsrooms. They varied in size from half a dozen people
to a microsite within a large metro daily. Most, but not all, already had
impact workflows that they used to generate reports for either grants or on
an internal reporting schedule. The newsrooms with existing workflows also
tended to be nonprofits. Just as with the newsrooms that responded to our
survey, these organizations agreed to participate anonymously and only be
identified by their size and general characteristics.
In this section, we detail how the participating organizations used NewsLynx and what that can teach us about impact measurement best practices
and future newsroom adoption.
Approval River Usage
Newsrooms mostly used the Approval River as a way to track mentions of
their work by influencers on social media or by other publications. Some
newsrooms leveraged Twitter Lists to a large degree. They set up recipes
with their domain name on “Presidents-HeadsOfGovt,”56 U.S. government
officials,57 or the Justice Department's list of U.S. attorneys.58 They would
typically go through the Approval river once a week and categorize possible
hits.
IEs also used Google Alerts recipes to look for pickups of their stories
by other organizations or mentions of their founders or board of directors.
We weren't able to completely replace existing workflows, however. One
challenge we faced was that some clipping services changed during the development of the platform, and we didn't have the time to fully implement
new service recipes. For example, according to some IEs, the quality of
Google Alerts has decreased in recent months and their organizations now
use other clipping services to track mentions. We discuss improvements to
this problem of shifting technologies in the chapter Future Paths for NewsLynx.
In terms of efficiency, participating newsrooms reported that NewsLynx
helped streamline their clip-search tasks and, in the case of Twitter List
searches, helped them surface elements they wouldn't have otherwise seen.
Article Section Usage
Participants reported that the most useful area of this section was the "How people are reading and finding it" panel on the detailed article view. They said
that it helped them explain to the newsroom “how well an article did,” especially in relationship to a meaningful baseline. This page also provided
links that helped IEs record which sites or accounts had picked up or were
linking to the original article. Similarly, the full tweet list helped participants create events that they would have missed without a recipe set up to
catch them in the Approval River.
Understanding traffic sources was also a main takeaway from the Facebook and Twitter time-series charts. As one IE put it:
When there’s a spike in traffic for any story, it’s super handy to be able to
quickly see a list of where traffic is coming from. For example, I noticed we
had a story that was getting crazy traffic and NewsLynx said it was coming
from Facebook. I could then find the origin post quickly.
This finding was encouraging since Facebook is particularly opaque when it comes to specific activity. Knowing that our novel metric of Facebook shares
over time can provide useful insight is an important takeaway.
One key feature was data export. IEs reported that it saved them a
great deal of time in preparing their grant reports, as they no longer had
to navigate through their analytics platform to copy and paste various
numbers. The organizations' existing data-collection workflows also lacked
comparison metrics, which made NewsLynx data more understandable for
higher-level stakeholders.
Newsroom-created Taxonomies
Taxonomies varied from very general tags to more specific lists. Each newsroom chose its tag names and which category and level each belonged to. Categories and levels were chosen from the predefined list as described in Chapter 2. Below is a sampling of the taxonomies that were developed. They range
from generic to highly detailed.
Example 1: Moderately Customized and Detailed
This middle-of-the-road taxonomy classified the typical citation sources
(e.g., pickup, editorial written) and wider measures of change (e.g., change
in discourse, policy action) into separate groups.
Tag name                        Category       Level
Pickup                          Citation       Media
Influencer mention/promotion    Citation       Individual
Editorial                       Citation       Media
Staff interview/appearance      Citation       Media
Internal discussion             Citation       Internal
Increased awareness             Change         Community
Policy/regulation change        Change         Institution
Award                           Achievement    Institution
Reader reaction                 Other          Individual
Example 2: Detailed
The more detailed taxonomies were compiled from internal surveys and further refined through follow-up interviews with staff. Note that in this example the newsroom has set up tags for events that are particularly pertinent
to its work.
Tag name                                     Category    Level
Social network share                         Citation    Institution
Story mention                                Citation    Media
Reprint                                      Citation    Media
Localization                                 Citation    Media
Editorial                                    Citation    Media
Change of policy/regulation                  Change      Institution
Institutional action                         Change      Institution
Government investigation                     Change      Institution
Law passed                                   Change      Institution
Institutional action                         Change      Institution
Law proposed                                 Change      Institution
Government hearing                           Change      Institution
Lawsuit filed                                Change      Institution
Award                                        Award       Community
Screening                                    Other       Community
Advocacy organization uses report            Other       Community
Professional organization cites reporting    Other       Community
Public official refers to report             Other       Institution
Staff interview/appearance                   Other       Media
Example 3: A Generic Approach
Other newsrooms started with a simple, broad approach and reported that
they would define more categories when they start seeing what kinds of events
happen around their work. One organization that took this approach said
it was primarily concerned with reporting its reach to funders, including
important mentions by the community or influencers.
Tag name              Category    Level
Media pickup          Citation    Institution
Media social share    Citation    Media
Inst. social share    Citation    Institution
Comm. social share    Citation    Community
Indv. social share    Citation    Media
Barriers to Entry
We weren't able to launch NewsLynx at every organization interested in
participating. Besides a limited research staff that kept us from onboarding
more organizations, some weren't able to adopt the platform because they
didn't themselves know all the pieces of their impact puzzle. This problem
took two forms: first, immature analytics offerings and standards for their publishing platforms; and second, a lack of internal consensus
on what should be measured and how.
The first issue is particularly visible at broadcast or combined digital
and broadcast organizations where currently available analytics metrics
are fraught with unknowns. Syncing terrestrial radio listeners with those
who might tune in via webstream, along with those listening via podcast
or a web version of the story, presents a problem whose solution is still in
the process of unfolding. Although web analytics is no hallmark of clarity,
its comparative simplicity gives purely digital operations an advantage in
quantifying their audiences.
The other kind of insufficient clarity is the lack of internal articulation of
what impact is, how often impact reports should be formally produced, if at
all, and who is responsible for producing them. This issue is compounded by
the fact that adopting a new workflow is extremely difficult when it doesn't
show an immediate short-term benefit or when its long-term benefit isn't
clearly communicated. Even if a newsroom is interested in starting to measure impact,
giving employees a list of new tasks they have to monitor can be a hurdle
that is hard to overcome.
This problem is addressed more in Chapter 5, Recommendations and
Open Questions.
Recommendations and Open Questions
Impact Work Practices: Recommendations
Articulate Organizational Goals
In the M&E world, an understanding of one's goals is tied to one's “theory
of change”59 —if journalism does have an effect on the world, how does
that come about? What are the preconditions for that impact to have the
most reach? Is it simply putting out carefully vetted facts, as the Census
Bureau or the Bureau of Labor Statistics does, and letting policy makers and other
parts of civil society analyze possible responses? Or is it about covering an
issue from sharp, newsworthy angles that shift the media narrative and the
news cycle? Are opinions shaped best through cultural commentary or by
surfacing new information, such as Mother Jones's 47 percent story—one of
the few news stories that seemed to show an effect on the polls during the
2012 election?60 And what is your newsroom most
skilled at?
No single right answer exists to these questions, but strategies are certainly needed if an organization wants to understand its progress and
whether it's moving toward or away from its mission.
Commit Resources
If a newsroom wants to take impact measurement seriously, it must commit resources to it. In addition to full-time Impact Editor roles, newsrooms
must take care to catalog their own content in a system of subject matter
tags that make sense. An analyst can only compare, for example, the success of the latest energy multimedia package against past feature projects if
those articles are properly tagged in the system.
Because newsrooms often publish too much content to do this cataloging
and tag standardization post-facto, editors must employ discipline in tagging stories at the time of publishing. Without such standards, analysts
waste a vast amount of time and the newsroom loses vital internal information about its own operation. Apart from the workflow aspect, a newsroom
that doesn't view tagging as a key part of its daily operation inevitably
lacks insight into the long-term operation of its own coverage.
One feature we weren't able to include for this version of NewsLynx
was a “Train Your Lynx” section that would let users train a classifier to
potentially auto-label articles for them. While some of this tagging problem
might be alleviated through better technology like this, it can't solve the
whole puzzle. In other words, even if a computer could find good groupings
among published articles, institutionalizing those computer-determined
buckets is the tail wagging the dog. Similar to setting goals, understanding
content buckets should be a human-made decision, not one outsourced to a
black box.
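As a rough illustration of what a "Train Your Lynx" feature might build on, the sketch below uses scikit-learn (not part of NewsLynx) and a handful of hand-tagged example headlines; any suggested labels would remain suggestions for an editor to accept or reject.

    # Rough sketch of an auto-tagger trained on a few hand-tagged articles.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "city council votes on housing ordinance",
        "school board weighs new curriculum standards",
        "state legislature debates education funding",
        "developers propose downtown apartment complex",
    ]
    train_tags = ["housing", "education", "education", "housing"]

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_texts, train_tags)

    # The prediction is only a suggestion; the content buckets themselves
    # should still be a human decision.
    print(model.predict(["new affordable housing units planned near transit"]))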
Integrate with Editorial
Impact measurement isn't just for grant reports. Some organizations reported they use impact measurement as a way to improve newsroom culture
and breed editorial ideas. Staff morale is fueled by knowing that work, even if
it's just a small blog post, wasn't published into the void but saw audience
interaction. In short, impact is not just a job for an analyst who sits apart
from reporters and editors—it requires outreach, organizational knowledge,
and organizational backing. In the end, impact measurement is about engaging with the audience and understanding how work is received.
Start Small
One of the participating newsrooms only applied NewsLynx to one of its
microsites focused on education. This smaller scope helped focus the impact
goals, as well as the work required to monitor the platform. Starting with
one content vertical or one package can be a good proof of concept and allow for the creation of a manageable workflow, especially if your newsroom
is large or has multiple layers of bureaucracy.
Publish For Both Humans and Machines By Using Standards
There are many benefits to implementing standards for metadata: interoperability, efficiency, and transparency among them. A fair amount of
NewsLynx's code is dedicated to figuring out information that is already
kept in a structured format in the CMS but is not machine-readable when
viewing the published page. NewsLynx scrapes the headline, tries to discern
an accurate publish date (which is surprisingly difficult), and extracts information such as the author or authors. As previously discussed, tagging articles
is a difficult task and one for which NewsLynx currently requires manual input, despite the fact that many CMSs already require articles to be tagged at
publication time, although often not systematically.
If article pages included this information in a structured data format,
third-party tools like NewsLynx could more easily leverage newsroom content. The analytics platform Parse.ly has started requiring its paying clients
to implement the JSON for Linking Data standard (JSON-LD),61 which provides the metadata described above in a common standard. This format is
promising and for a low investment on the part of the newsroom, its inclusion could solve a range of problems involved with measurement and data
standardization and lower the bar for building new tools to gain insights
from an organization's publishing habits.
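To illustrate what structured metadata makes possible, the sketch below pulls the headline, publish date, author, and keywords out of a page's JSON-LD block using only the standard library; the HTML is a toy example.

    # With JSON-LD in the page, a tool like NewsLynx could read metadata
    # directly instead of scraping it. Toy HTML; standard library only.
    import json
    import re

    html = """
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example investigation headline",
      "datePublished": "2015-06-01",
      "author": [{"@type": "Person", "name": "Jane Reporter"}],
      "keywords": ["housing", "investigation"]
    }
    </script>
    """

    block = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
    metadata = json.loads(block.group(1))
    print(metadata["headline"], metadata["datePublished"], metadata["keywords"])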
Impact Work Practices: Open Questions
Is an Impact Standard Possible?
Given the high cost of adopting new workflow practices in news organizations, any drastic change will need to be attached to an institutional
benefit. Multiple newsrooms expressed interest in comparing their metrics
to their competitive set. Parse.ly currently offers a similar paid service for
aggregate traffic figures among participating newsrooms. If a similar scheme could be developed for a NewsLynx-like system for qualitative metrics,
we believe there would be enough incentive for newsrooms to converge on
agreed-upon impact buckets.
The main idea here, however, is that if organizations could more clearly
and quickly see a benefit, it would greatly help standardization and possibly
wider adoption of impact measurement. Being able to see how organizations
in your competitive set are doing compared to your figures would certainly
be an attractive offer.
Whether standards could work, however, is a different question. On this
point, too, we believe they can. Participating newsrooms said the framework of categories and levels worked well for classifying their impact events.
At the very least this type of aggregate information would already work
cross-organization—“what kinds of change events did the top three packages
get?” etc. To go a step further, though, the specific impact tags people created did not differ significantly, giving hope that general equivalencies could
eventually be found.
Impact Culture: Start Top-down or Bottom-up?
We are advocates of building cultures of impact within newsrooms; however,
we've heard conflicting views on whether impact culture is best accomplished through a top-down directive or a bottom-up desire from reporters
and editors. The critics of the top-down approach say that a push from
leadership about the importance of impact will likely fall flat since it takes
significant time and strategy or staff to implement. The people tasked with
this job are usually already time-taxed reporters and editors.
Critics of the bottom-up approach say that without standardization any
uncoordinated effort will mostly result in inconsistent noise—that is to say,
the problem of inter-rater reliability, to borrow a social science term.
From hearing this discussion, we think any new impact effort needs buy-in from both leadership and staff. This comes when there are natural incentives for newsrooms to achieve impact. Both groups need to understand
how including impact in strategy may help the organization and the editorial product. Business interests may include increased cachet with those
audiences attractive to advertisers or the ability to attract funding from
philanthropic sources. Editorially, staff will be more engaged with how the
audience is reacting to its work and how readers could potentially be enlisted as future sources—helping reporters and editors to do their jobs of
telling compelling news stories more easily.
One of the bridges between these two sides, as previously mentioned, is
the bi-weekly report that some newsrooms send to the staff. These reports
both clarify what impact means and give encouragement, as previously
stated, that readers are interacting and engaging with what the newsroom
puts out.
Building Impact Tools: Recommendations
User Experience, User Experience, User Experience
While not a novel lesson, we saw firsthand how important good user experience is in gaining adoption of a platform. While we were successful in
our design goals, we did have a few minor bugs from a technical perspective
that translated into larger usability concerns. For example, some Approval
river alert items would come back even after an IE approved or rejected
them. Although the eventual fix was minor, the initial frustration was not.
As a recommendation, always solicit feedback as often as possible from your
users and take user experience into account when prioritizing fixes.
Separate Data Collection from Interface
We built NewsLynx as two entirely separate code bases. A data-collection
code base, written in Python, handles all ingestion, standardization, and
database population. It serves JSON data via an API.62 The user-facing
website is a NodeJS application that queries this API and returns data
which a front-end Backbone-powered JavaScript client turns into an interface and visualizations. This separation gives us great flexibility to update
the user-facing and data-collection parts of the platform independently. If
someone doesn't like our interface, they have the choice of building their
own and not being locked into one system.
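As a sketch of that separation, a data-collection service only has to expose JSON over HTTP; any interface, ours or a newsroom's own, can then consume it. The example below uses Flask, and the endpoint and fields are hypothetical, not the actual NewsLynx API.

    # Minimal sketch of an API-first design: the data layer serves JSON and
    # knows nothing about how it will be displayed.
    from flask import Flask, jsonify

    app = Flask(__name__)

    ARTICLES = [
        {"url": "http://example-news.org/story", "pageviews": 15000, "impact_events": 3},
    ]

    @app.route("/api/articles")
    def articles():
        return jsonify(ARTICLES)

    if __name__ == "__main__":
        app.run(port=5000)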
Building Impact Tools: Open Questions
To Integrate with the CMS?
Whether this version of NewsLynx was best constructed inside or outside
the CMS was never a real question. In order to work with a variety of
news organizations that used different CMS technologies, we always knew
it would have to be its own platform. That leaves open the question of
whether an organization should build its own internal measurement tool
within its CMS—if indeed it has the engineering capacity.
This question is different for every organization, but we can discuss the
pros and cons. The biggest advantage is that staff members do not need to
sign in to a system other than the one they are used to. This consolidation reduces friction and raises a user's comfort level when adopting a
new technology, which should be a high priority for an organization looking
to start seriously measuring impact.
Another large advantage to living in the CMS is that you avoid having to
follow one or many RSS feeds to ingest every article. You also get information like authors, publish date, and tags for free. Chalkbeat's MORI tool
also requires that every article have an impact goal
assigned to it before publication. That design decision might
not work for every newsroom or every type of article—breaking news needs
to go out as quickly as possible—but it does force staff to be more conscious
of what to expect from each article and keeps monitoring and evaluation
fresh in everyone's minds.
Arguments against integration are also persuasive. Most of the metadata
information can be acquired if the previously mentioned JSON-LD standard is adopted and some effort is put into creating a standard RSS feed.
Moreover, customizing a CMS is a task whose monetary cost, time cost,
and required level of expertise cannot be overestimated. As a result, it's no
small undertaking to design and implement any change to the system. This
reason alone, rightly or wrongly and also depending on the organization, is
most likely enough to outweigh any efficiencies gained by integrating with
the CMS and tip the scale toward building something outside of it.
As a compromise, we recommend designing a modular system, similar
to NewsLynx, as the best balance. We built an API-driven platform that
handles all data collection and standardization completely separate from the
interface and user-facing code base. In this way, a newsroom could integrate
a portion of a NewsLynx-like system into its CMS if it so wished while still
keeping the impact tracking infrastructure in a separate code base. If a
newsroom decides to upgrade or change the CMS, it would have to rebuild
the visualization and interface—also a task not to be underestimated—but
it wouldn't lose the core impact tracking mechanism.
Future Paths for NewsLynx
Platform Improvements
We developed NewsLynx within a short time frame and for specific research
goals. If one were to continue building on its code base or design a similar
system, these are new features and improvements we would recommend.
Approval River Improvements
One difficulty we encountered was supporting the variety of external services that newsrooms use. Some of them even fell out of favor or came into
vogue over the course of our research. Newsrooms have largely abandoned
Google Alerts, for instance, first in favor of mention.net, which in turn was
replaced by clip services such as Vocus and Meltwater. The latter is a paid
service that returns higher-quality clip results and maintains a database of
the circulations of those outlets—useful for determining the reach an article
achieved when placed on a partner's site.
Many of the current recipe sources are most useful for tracking influencers or specific groups of audience members. More external services would
broaden the Approval river's capabilities. For instance, one newsroom expressed an interest in monitoring Facebook groups it might create and
tracking the “quality of discussion.” Phrases such as “let’s take this offline”
or “do you have a suggestion?” could let an organization gauge whether its
actions led to a self-sustaining, helpful community around an issue that its
reporting highlighted. Integrating more offline-monitoring services would be
useful, too, such as Sunlight Foundation's Capitol Words API, which provides a queryable database of what is discussed on the floor of Congress.63
Participating newsrooms said that integrating more metadata around
these citations would be helpful (e.g., aggregating Meltwater's circulation
figures for all pickups of a story). Many IEs currently export the data from
Meltwater and do this aggregation in Excel.
Because new recipe sources will come in and out of fashion, an easy way
to add and remove language-agnostic recipe modules would be ideal.
Individual Form Page
One requested feature we were not able to implement in time was an impact
event form submission page that staff could access without logging into the
system. For example, a reporter who hears of a noteworthy event could
fill out a form, which the Impact Editor could then approve or reject. CIR
already uses a similar workflow with a form powered through Podio.64
More Feedback from the Platform to Users (Meet Users Where They Are)
Similarly, the more the platform could exist as part of an ecosystem and
less as a portal you must log into, the better. This could look like automated emails, a mobile app to submit events, easily forwardable notes that
create events, or push alerts to staff when a story surpasses some threshold
and is “doing well.”
More Article Types
Currently, NewsLynx only supports articles published on a unique URL.
It doesn't distinguish between pages that contain text, video, or audio,
for example. Being able to detect page types and assign specific metric
collections would be a very interesting feature. Embed.ly currently provides
analytics on video viewership behavior (e.g., how far into the video did
most people watch).65 Integrating those types of viewership details into our
existing comparative approach would be a useful addition. As analytics
standards emerge in radio or for digital-broadcast hybrid organizations,
these relationships will need to be formalized and operationalized.
More Article Relationships
NewsLynx users can currently group articles together with subject tags, but
more complicated relationships exist in practice. For instance, subject tags
don't distinguish between topically associated articles and a specific package
of articles that ran in a series. At broadcast organizations, you might have
a digital story that ran as a companion to a TV or radio piece. Relating
these different versions of articles to each other and then adding aggregate
functions to combine the metrics of these two versions of the same story for
reporting purposes would be a very useful feature.
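A minimal sketch of that kind of aggregation, assuming a hypothetical "story_group" field relates the different versions of a story:

    # Combine metrics from related versions of the same story (e.g., a radio
    # piece and its web companion). The "story_group" field is hypothetical.
    from collections import defaultdict

    versions = [
        {"story_group": "pipeline-investigation", "medium": "web",   "pageviews": 12000, "shares": 340},
        {"story_group": "pipeline-investigation", "medium": "radio", "pageviews": 0,     "shares": 55},
    ]

    totals = defaultdict(lambda: {"pageviews": 0, "shares": 0})
    for v in versions:
        totals[v["story_group"]]["pageviews"] += v["pageviews"]
        totals[v["story_group"]]["shares"] += v["shares"]

    print(dict(totals))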
Replace Google Analytics
Working with Google Analytics proved to be extremely frustrating. Each
news organization often had a custom-defined property or view that represented its content. Often, it had multiple properties reflecting different
subdomains that needed merging. Based on the advancements in open
source data pipelines discussed earlier, implementing something like Snowplow for quantitative analysis would make the most sense going forward.
Platform Challenges
If you want to measure something happening on the web, you're necessarily
wrangling a moving target. New social networks might appear; new metrics
might come into vogue, and users will want to see those reflected in their
system. The best one can do when faced with dynamic technology is to
build a system that allows for new modules to be plugged in and out. Future modules, however, will still have to be designed and coded depending
on the latest input.
Another challenge is that many interesting questions are technologically
unknowable with our current state of affairs. For instance, one popular
request was to know why an article was doing well on Facebook. Due to
the nature of that network, however, which is much less open than, say,
Twitter, we simply can't know the cascade of shares that leads to virality.v
Understanding the limits of analytics is an important starting point when
framing the questions one wants to ask about one's content.
Paths Forward
As we have been developing this platform, we have also thought about how
this project and projects like it become sustainable. NewsLynx is certainly
not unique in wrestling with this question since foundation or grant-funded
journalism tools are just as ascendant as foundation-funded newsrooms.
v. One company, CrowdTangle, promises to do this by monitoring a large number of
Facebook pages for an organization’s content. It is a paid service. An open source and
community-maintained repository would be one interesting alternative for democratizing
this kind of insight.
A number of former and current project leaders spoke to us about what
they thought they did well and what they would have done differently.
What follows are the main questions projects like these should be asking
themselves, along with some consensus around how viable the various alternatives may be.
Should You Charge for the Service?
Most people respond to this question by saying either, “you'll never make
money off of newsrooms,” or they take the Bill Cunningham approach:
“You see, if you don't take money, they can't tell you what to do, kid.”66
Neither is completely true.
For the former, whole industries (analytics being one, commenting platforms being another) do make a lot of money off of newsrooms. As for the
latter—that the lack of a price tag grants creative freedom—Miranda Mulligan, digital creative director at National Geographic and former executive
director of Northwestern University's Knight Lab, succinctly put it: “Users
have expectations whether they give you money or not.” While one could
argue about just how much users expect from a free tool versus a paid
one, answers to this relate to issues of value and utility. If the platform
is not useful, people will neither use it nor pay for it. Either way, getting
people to use a platform—even one that has the highest utility of any tool
out there—is still a question of overcoming existing workflows and getting
organizational buy-in from multiple levels of management. Shane Shifflett,
one of the developers of FOIA Machine—a platform for newsrooms to easily
send and track large quantities of Freedom of Information requests—echoed
this sentiment. “Once you have a [platform] set up in the newsroom,” he
said, “you still have to constantly remind people of the benefit, especially if
it's a shared, newsroom-wide benefit.”
Sometimes open source projects don't cost enough for a newsroom to
easily buy them. In other words, they don't charge what newsroom finance departments are built to pay. As Brian Boyer said about his experience with
PANDA—a 2011 Knight News Challenge winner that enables newsrooms
to store and query shared data resources—staff can sometimes run into a
speed bump getting a credit card authorization for the minimal computer
costs required to maintain a PANDA server. "The kind of money [news
organizations] are used to spending is 50,000 dollars, not 300 dollars. They're
used to getting a purchase order, not using their credit card.” Inflating
prices and charging an arbitrary multiple of 10,000 dollars can be a viable
solution for some folks, but in the Venn diagram of people who are comfortable with doing that and the people who pitch open source, civic-minded
technology platforms for journalism, the intersection is small.
On a simpler level, many newsrooms simply don't have the budget for
a new product. Many project leaders we interviewed echoed the sentiment
that, for them, they would want to charge the full value of their system,
or simply make it free. Undercharging, they felt, would alienate too many
potential newsrooms while not providing material benefit to the developers.
Shifflett said this was true for him, emphasizing as well that the team
members already had full-time jobs and didn't see the project as a business
they wanted to start running.
Others indeed felt that they could be more true to their own priorities without the pressure from paying clients—and that can be true if the
project has a solid enough foundation that its utility is not in question and
is relatively stable.
What Is the Value and Who Sees the Benefit? Making It “Kid-tested,
Mother-approved.”
One of the things Boyer would have changed about PANDA was broadening
the idea of who its user was. “We did user-centered design, but I would
have thought more about the managing editor as a user—the person with
the checkbook—along with the reporter as a typical use case,” he said. He
continued:
What we struggled with is there are only a handful of news organizations
that have decided that this is a priority. PANDA has some pretty amazing
tools to create efficiencies and make people work smarter, not harder and
[things] like that. Managing editors as a class, however, aren’t necessarily
thinking about it in these terms yet.
Understanding the attractiveness of a product to different stakeholders is
the crucial takeaway. NewsLynx, for example, could appeal to management,
as it could help keep the organization afloat financially. As we discussed
in the section on how impact standards could eventually be shared across
newsrooms, a multi-speed approach seems absolutely necessary—“reporter-tested, editor-approved,” so to speak.
The Community Question
Simply making a project open source is far from a silver bullet in achieving
sustainability. We would love to see a community of developers building
its own NewsLynx modules and we have done our best to enable that type
of system in the future, but it's important to consider that community-building is a skill in itself and takes a concerted effort. Boyer and Shifflett
both pointed out that if they had to do it over again, bringing more people
on to cultivate and manage relationships with user groups would be key.
And while the word community is so overused in the tech sphere that
it's become the subject of satire,67 what underlies this issue is the simple
economics that, with some exceptions, one to three people cannot develop,
maintain, and steer a technology project of significant size in the long term.
All project leaders with whom we spoke echoed the sentiment that even
with the newest tools, which lower the barrier to entry, technology is hard—
and user-facing technology is even harder.
What we are really discussing with the community question is, “how do
you get people invested in your project?” More often than not simple utility
is not the deciding factor, since, as we've discussed, “useful for whom?” is
its own political question. The solutions to this bind range from the mundane (go speak at conferences) to the novel (Mulligan pointed out a group
of South American developers who created a game to help crowdsource data
for their urban cycling company),68 to the small-scale (we gave NewsLynx
a magical lynx mascot,vi Merlynne Jones, whose image and GIFs populate
our site with some personality). While difficult, the community question is
an expression of this underlying need not just for utility but for ways to fuel
buy-in and enable a network effect (the idea that enough people using a tool
makes it the default choice), which can be aided by anything from traditional promotion efforts to usability, or even an emotional preference for the
interface.
vi. Taxonomically she is an Impcat, a rare breed of lynx adept at measuring impact.
Another phrase for community is “highly-interested newsrooms.” FOIA
Machine is pursuing a strategy of working with a few key organizations
as a way to fine-tune the platform and offer a jumping-off point for wider
adoption. If we can keep working with motivated organizations and refine
the tool to their needs, a community could develop to share tips and ideas
for NewsLynx, and eventually contribute code as well.
Conclusion
After over two years of thinking about and, in part, building impact tools,
we're happy to see a markedly different landscape from when we started.
Ideas that were then hypothetical are now being put into practice. In reviewing some of the older literature while preparing this report, we came
across a 2012 Nieman Lab article by Jonathan Stray that concluded with a
picture of what kind of technology could help guide the way through understanding the messy world of impact. “Ideally, a newsroom would have
an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was
published, plus automatically collected analytics . . . ”69 While rereading this
article, we were happy that NewsLynx attempts to make concrete what was previously hypothetical, and to see how that idea played out in
practice, what needed improvement, and how we can move forward.
With the platform, newsrooms were able to streamline their workflow
and surface insightful elements of impact that would have otherwise been
missed. They could tell stakeholders the story of their journalism's audience
exposure much more quickly and with reliable data to back up those
assessments.
In the larger field of media impact measurement, the amount of experimentation taking place and the fact that the conversation has moved past
the less interesting problems—trying to find the holy grail of a universal
taxonomy being one of them—make this an extremely exciting time for the field. It has never been easier for a newsroom to design its
own analytics geared toward questions it wants answered. And here lies the
next challenge, which was really the challenge all along.
These technological advancements and the democratization of the data
pipeline are most helpful, paradoxically, in that they drive us back to base
assumptions and away from technology. “Tool-wishing”—phrases that start
with “if only we just had a platform to do X”—can blind us to the real
hurdles at play. No tool, no matter how well designed or implemented, can
tell a news organization what impact is or should be. As Stray continued
in his piece, “but nothing so elaborate [as this proposed platform] is necessary
to get started. Every newsroom has some sort of content analytics,
and qualitative effects can be tracked with nothing more than notes in a
spreadsheet.”70
Indeed, the newsrooms that got the most out of NewsLynx were those
that had already started with “notes in a spreadsheet” and previously
worked through the harder problems of deciding what they care about
measuring. In the end, computers are better, faster, and (sometimes) more
reliable notebooks; but, just as is true in the physical world, fancy pens
can't make a writer tell a good story.
Going forward, we see a few trends, or if not yet trends, then helpful
directions:
• Automate more.
We made the Approval river because newsroom staff has better things
to do than search through multiple clipping services and other lists for
hours each week. We still imagine a “human in the loop” system, but the
more of these kinds of services that can be automated to put ready-to-input, structured information into an article's timeline, the better.
• More context in metrics.
By showing numbers in relationship to newsroom or topic averages,
NewsLynx users were able to quickly get a sense of where each article
stood. Efforts, like NPR's Carebot, to contextualize metrics in terms of
“what percentage of people shared this story” are a great way forward in
this vein of experimentation (a brief illustrative sketch follows this list).
• Defining expectations.
Similarly, we have known for years that not all articles are created equal,
nor are they all expected to perform equally. Operationalizing this idea
has been slow-going, however, because it's hard to admit that not every
article will be a star. Developing mission-driven metrics will be crucial to
selling this kind of measurement to management.
• Quantitative metrics aren’t going anywhere.
Numbers will continue to be useful because they provide value for many
organizations. Their emotional utility is not to be underestimated. As
Caitlin Petre recently examined in her Tow report's chapter on the design and use of Chartbeat, even if you're not a traffic-driven site, it feels
great when you hit record figures.71
• Impact measurement needs to know how to market itself to
news organizations.
This concern is smaller at organizations where impact is a part of their
business model. But at larger organizations interested in this field, how
do you convince management to commit resources to something with
generally only mid- to long-term benefits? Folded into this question is
how to approach a wary audience of journalists who view impact measurement as at odds with impartiality. Again, this idea is tied back to an
organization's goals. What are we here to do and how can we measure
that? Impact measurement with no objective can come across as purely
self-congratulatory with no organizational benefit.
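To make the idea of context-relative metrics concrete, here is a minimal, purely illustrative sketch. It is not NewsLynx code, and the article figures are invented; it only shows how an article's pageviews might be compared against its topic average, and how shares might be expressed per hundred pageviews as a rough proxy for "what percentage of people shared this story."

# Purely illustrative: contextualize raw metrics against topic averages.
# The article figures below are hypothetical, not NewsLynx data.
from collections import defaultdict

articles = [
    {"slug": "school-funding-investigation", "topic": "education", "pageviews": 48000, "shares": 1900},
    {"slug": "budget-explainer", "topic": "education", "pageviews": 12000, "shares": 240},
    {"slug": "city-council-recap", "topic": "politics", "pageviews": 7500, "shares": 90},
]

# Average pageviews per topic, so each article can be read in context.
sums = defaultdict(lambda: [0, 0])  # topic -> [total pageviews, article count]
for a in articles:
    sums[a["topic"]][0] += a["pageviews"]
    sums[a["topic"]][1] += 1

for a in articles:
    total, count = sums[a["topic"]]
    topic_avg = total / count
    relative = a["pageviews"] / topic_avg              # 1.0 means typical for its topic
    share_rate = 100.0 * a["shares"] / a["pageviews"]  # shares per 100 pageviews, a rough sharing rate
    print(f"{a['slug']}: {relative:.2f}x topic average, {share_rate:.1f}% share rate")

The same pattern extends to whatever baseline a newsroom chooses, for example comparing an article against others in its expected performance tier rather than against a single average.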
In the future, we think the practices of impact measurement will align with
healthy processes of understanding how one's newsroom operates and,
importantly, why it operates at all. Whatever that why turns out to be,
whether it is purely to inform readers or hold power to account, finding out
what is required to get there should be an instrumental part of an organization's mission and achieving that mission a strong part of the newsroom
culture. We hope that NewsLynx, or future NewsLynx-like systems, can
help organizations year after year to keep filling those impact envelopes.
Appendix A
Impact Survey
This survey* is part of a research project at the Tow Center for Digital
Journalism at Columbia University. The responses will be used to inform
the creation of a platform for tracking the quantitative and qualitative
impact of journalism, with an emphasis on serving the needs of nonprofit,
investigative-oriented outlets. Responses are strictly anonymous and all
analysis will be presented in the aggregate, though unattributed quotes may
be reproduced.
If you would like to test our platform starting Summer 2014, please check
the box below and leave an email address.
If you have any questions or concerns, please reach out to contact@newslynx.org.
* Some questions were adapted from a similar survey put together by
Joanna Raczkiewicz at http://harmonyinstitute.org/.
Organizational Profile
What’s the name of your organization?
What functions do you perform?
(check all that apply)
Editing
Reporting
Social Media
Community Outreach
Fundraising
Programming (code)
Business Development
Board Member
Other:
How many full-time employees work for your news organization?
How many freelancers, contributors, and/or interns?
What is your news organization’s primary source of revenue?
Foundations
Advertisements
Subscriptions / Membership
Donations / Major Gifts
Endowment
Other:
Content Streams
Does your organization produce original content?
Yes
No
Which of the following types of content does your organization
publish or aggregate?
(check all that apply)
Breaking news
Analysis
Explanatory pieces
Long form investigations
Cultural criticism (Film / Theater / Literature / Music / Television, etc.)
Blogs
Opinion
Datasets
Interactive graphics / Interactive databases
Learning aids / resources
Teaching aids / resources
Reader comments / forums
Professional development resources
Syndicated content
Video / Documentary
Radio stories
Podcasts
Other:
Through which channels do you distribute content?
(check all that apply)
Online
Print
Radio
Television
Resyndication
Other:
How often does your organization produce original content?
Hourly
Daily
Weekly
Monthly
Other:
Does your organization tag content with metadata so that it can
be searched, sorted, or otherwise organized?
Yes
No
I don’t know
Are the tags you use aligned with any external metadata standards?
Yes
No
I don’t know
If you answered “yes” to the previous question, please list the
standard(s) used.
Analytic Practices: Quantitative
Are you required by foundations, private funders, management, or a Board of Directors to produce reports on organization analytics?
Yes
No
I don’t know
If so, what kind of metrics do they ask for and how often
must they be reported?
What analytics platforms / tools do you currently use?
(check all that apply)
Google Analytics
Parse.ly
Mixpanel
Chartbeat
WebTrends
Omniture
KISSmetrics
SparkWise
Bit.ly
Twitter Analytics
Facebook Insights
CrimsonHexagon
Topsy
SocialFlow
Klout
Vocus
Mention.net
Google Alerts
Nielsen
Arbitron / Nielsen Audio
Internal (as in homemade)
Other:
Analytic Practices: Qualitative
Do you do anything currently to measure the qualitative
performance of stories?
i.e., metrics about a story other than quantitative ones like pageviews or
social shares.
Yes
No
I don’t know
If so, what?
(please be as detailed as possible and include links to reports or evaluations you've produced)
Who is in charge of monitoring impact at your organization?
A dedicated employee (i.e. ‘Impact Analyst’, ‘Engagement Manager’, etc.)
A primary person who holds other responsibilities as well
A small team of people
Everyone spends some time doing this
We don’t have anyone currently doing this
I don’t know if someone does this
How many hours per week does this person / these people
spend on impact measurement?
Can you provide links to 2–3 stories / projects that you
thought were especially ‘impactful’ and a very brief explanation as to why?
Institutional Challenges / Goals
Are quantitative (pageviews, mentions, likes) or qualitative
measurements (change in law, an interesting citation) more
important?
Quantitative
Qualitative
Why is that measurement more important?
What is the most challenging aspect of measurement for
your organization?
Actions
In what ways could measurement (quantitative and/or qualitative) positively influence your organization’s content or
business strategy?
Who is most interested in the outcome of articles?
Reporters
Editors
Donors
Board of directors / CEO
Everyone equally
Other:
Further Information
Would you be interested in beta testing our impact measurement platform at your news organization?
Yes
No
If so, please provide your email address so we can contact
you with more information.
Appendix B
Impact Reading List
B. Abelson, “HI Score: Towards a New Metric of Influence,” Harmony Institute, 26 June 2012, http://harmony-institute.org/therippleeffect/2012/06/27/hiscore-towards-a-new-metric-of-influence/.
B. Abrash and J. Clark, “Social Justice Documentary: Designing for Impact,”
Center for Social Media, September 2011, http://www.centerforsocialmedia.org/
sites/default/files/documents/pages/designing_for_impact.pdf.
Ad Council, “Overview of Ad Council Research and Evaluation Procedures,”
Harmony Institute, Date unknown, http://www.adcouncil.org/Impact/Research/
Overview-of-Ad-Council-Research-Evaluation-Procedures.
C.W. Anderson et al., “Post-industrial Journalism: Adapting to the Present,”
Tow Center for Digital Journalism, Fall 2012, http://towcenter.org/wp-content/
uploads/2012/11/TOWCenter-Post_Industrial_Journalism.pdf.
D. Barrett and S. Leddy, “Assessing Creative Media’s Social Impact,” Fledgling
Fund, December 2008, http://www.thefledglingfund.org/wp-content/uploads/
2012/11/Impact-Paper.pdf.
J. Blakely, “Research Study Finds That a Film Can Have a Measurable Impact
on Audience Behavior,” Norman Lear Center, 12 February 2012, http://www.
learcenter.org/pdf/FoodInc.pdf.
J. Blakely, “TedX Phoenix—Movies for a Change,” YouTube, 12 February
2012, http://youtu.be/Pb0FZPzzWuk.
D. Bornstein, “Why We Need Solutions Journalism,” Skoll World Forum, 2012,
http://skollworldforum.org/debate-post/why-we-need-solutions-journalism/.
R. Breeze, “Measuring Community Engagement: A Case Study from Chicago
Public Media,” Reynolds Journalism Institute, 1 December 2011, http://www.
rjionline.org/blog/measuring-community-engagement-case-study-chicago-publicmedia.
A. Brock et al., “Room for Improvement: Foundations’ Support of Nonprofit
Performance Assessment,” Center for Effective Philanthropy, 2012, http://www.
effectivephilanthropy.org/assets/pdfs/Room%20for%20Improvement.pdf.
S. Caulkin, “The Rule Is Simple: Be Careful What You Measure,” The Guardian,
9 February 2010, http://www.guardian.co.uk/business/2008/feb/10/
businesscomment1.
D. Chinn et al., “Measuring the Online Impact of Your Information Project,”
Knight Foundation/FSG Social Impact Advisors, 31 May 2011, http://www.
knightfoundation.org/media/uploads/publication_pdfs/Measuring-the-OnlineImpact-of-Information-Projects-092910-FINAL_1.pdf.
J. Clark, “Five Needs and Five Tools for Measuring Media Impact,” PBS
MediaShift, 11 May 2010, http://www.pbs.org/mediashift/2010/05/5-needs-and5-tools-for-measuring-media-impact131.html.
J. Clark and T. Van Slyke, “Investing in Impact,” Center for Social Media, 12
May 2010, http://www.centerforsocialmedia.org/sites/default/files/documents/
pages/Investing_in_Impact.pdf.
Community Wealth Ventures, “How Nonprofit News Ventures Seek Sustainability,” Knight Foundation, October 2011, http://www.knightfoundation.org/media/
uploads/publication_pdfs/13664_KF_NPNews_Overview_10-17-2.pdf.
Channel 4 BritDoc Foundation Evaluation, http://britdoc.org/real_good/
evaluation.
S. Duros, “How Impact Counts for Hyperlocal News, but How to Count It?”
Block By Block, 6 August 2012, http://www.blockbyblock.us/2012/08/06/impactand-what-it-is-for-community-and-hyperlocal-news/.
S. Fisch and R. Truglio eds., G Is for Growing: Thirty Years of Research on
Children and Sesame Street (Mahwah: Lawrence Erlbaum Associates, 2001).
S. Fox, “Why Are We Spending So Much Time ‘Measuring the Impact of Journalism?’” UMass Journalism Profs, 30 March 2012, http://umassjournalismprofs.
wordpress.com/2012/03/30/why-are-we-spending-so-much-time-measuring-theimpact-of-journalism/.
FSG Social Impact Advisors/John S. and James L. Knight Foundation, “Measuring the Online Impact of Your Information Project: A Primer for Practitioners
and Funders,” FSG, 2010, http://www.fsg.org/tabid/191/ArticleId/428/Default.
aspx?srpush=true.
FSG Social Impact Advisors and John S. and James L. Knight Foundation,
“IMPACT: A Practical Guide to Evaluating Community Information Projects,”
Knight Foundation, February 2011, http://www.knightfoundation.org/media/
uploads/publication_pdfs/Impact-a-guide-to-Evaluating_Community_Info_
Projects.pdf.
B. Gates, “My Plan to Fix the World’s Biggest Problems,” Wall Street Journal,
25 January 2013, http://online.wsj.com/article/SB1000142412788732353980457826
1780648285770.html.
Bill & Melinda Gates Foundation, “A Guide to Actionable Measurement,”
2010, http://www.gatesfoundation.org/learning/Documents/guide-to-actionablemeasurement.pdf.
S. Gigli, “What Is ‘Disruptive Metrics’,” InterMedia, 20 March 2013, http:
//www.intermedia.org/2013/03/20/what-is-disruptive-metrics/.
K. E. Gill, “Carnival of Journalism: How to Measure What Matters?” WiredPen, 4 April 2012, http://wiredpen.com/2012/04/04/carnival-of-journalism-howto-measure-what-matters/.
J. Gordon, “See, Say, Feel, Do: Social Media Metrics That Matter,” Fenton,
Date unknown, http://www.fenton.com/resources/see-say-feel-do/.
L. Graves, “Traffic Jam: We’ll Never Agree About Online Audience Size,”
Columbia Journalism Review, 7 September 2010, http://www.cjr.org/reports/
traffic_jam.php?page=all.
L. Graves et al., “Confusion Online: Faulty Metrics and the Future of Digital
Journalism,” Tow Center for Digital Journalism, September 2010, http://www.
journalism.columbia.edu/system/documents/345/original/online_metrics_report.
pdf.
D. Green, “Eyeballs and Impact: Are We Measuring the Right Things If We
Care About Social Progress?” Skoll World Forum, 2012, http://skollworldforum.
org/debate-post/eyeballs-and-impact-are-we-measuring-the-right-things-if-wecare-about-social-progress/.
L. Green-Barber, “3 Investigations, 3 New Laws: See How CIR’s Stories
Gain Macro Impact,” Center for Investigative Reporting, 2014, https://www.
revealnews.org/article-legacy/3-investigations-3-new-laws-see-how-cirs-storiesgain-macro-impact/.
L. Green-Barber, “Creating an Impact Community,” Center for Investigative
Reporting, 2014, http://cironline.org/blog/post/creating-impact-community-6301.
L. Green-Barber, “How Can Journalists Measure the Impact of Their Work?
Notes Toward a Model of Measurement,” Nieman Journalism Lab, 2014, http:
//www.niemanlab.org/2014/03/how-can-journalists-measure-the-impact-of-theirwork-notes-toward-a-model-of-measurement/.
L. Green-Barber, “The Language of Impact: Introducing a Draft Glossary,”
Center for Investigative Reporting, 2014, https://www.revealnews.org/articlelegacy/the-language-of-impact-introducing-a-draft-glossary.
L. Green-Barber, “Measuring Media Impact: 5 Steps to Put You on Track,”
Knight Foundation blog, 2014, http://www.knightfoundation.org/blogs/
knightblog/2015/4/27/measuring-media-impact-5-steps-put-you/.
Harmony Institute, “Waiting for Superman: Entertainment Evaluation Highlights,” May 2011, http://harmony-institute.org/wp-content/uploads/2011/07/
WFS_Highlights_20110701.pdf.
L. Heedy and S. Keen, “SROI for Funders,” New Philanthropy Capital,
September 2010, http://www.thinknpc.org/?attachment_id=815&postparent=4924df.
B. Hickman et al., “Best of MuckReads 2012,” ProPublica, 31 December 2012,
http://www.propublica.org/article/best-of-muckreads-2012.
International Center For Journalists, “An Evaluation of the Knight International Journalism Fellowships,” Date unknown, http://www.knightfoundation.org/
media/uploads/publication_pdfs/Evaluation_of_Knight_ICFJ_Fellowships_
final.pdf.
International Center For Journalists, “Evaluation Field Manual and Tools for
the Knight International Journalism Fellowships,” January 2011, http://issuu.
com/kijf/docs/icfj_knight_international_evaluation_manual.
J. Johnson, “A New Approach to Making Films That Matter,” GOOD, 11
January 2013, http://www.good.is/posts/a-new-approach-to-making-films-thatmatter.
D. Karlan et al., “More Than Good Intentions: How a New Economics Is
Helping to Solve Global Poverty,” Dutton, 2011, http://www.amazon.com/MoreThan-Good-Intentions-Economics/dp/052595189X.
KETC 9, “Facing the Mortgage Crisis: People, Connections, Resources,” Corporation for Public Broadcasting, Spring 2008, http://www.stlmortgagecrisis.org/.
R. Kohavi et al., “Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained,” Microsoft, 2012, http://www.exp-platform.com/
Documents/puzzlingOutcomesInControlledExperiments.pdf.
M. Kramer and J. Kania, “Collective Impact,” Stanford Social Innovation
Review, Winter 2011, http://www.fsg.org/tabid/191/ArticleId/211/Default.aspx?
srpush=true.
N.D. Kristof, “Getting Smart on Aid,” The New York Times, 18 May 2011,
http://www.nytimes.com/2011/05/19/opinion/19kristof.html?_r=1&partner=rssnyt&emc=rss.
G. King et al., “Matching As Nonparametric Preprocessing for Reducing Model
Dependence in Parametric Causal Inference,” Political Analysis, no. 15 (2007):
199–236, http://gking.harvard.edu/files/gking/files/matchp.pdf.
G. Linch, “Quantifying Impact: A Better Metric for Measuring Journalism,”
greglinch.com, 14 January 2012, http://www.greglinch.com/blog/2012/01/14/
quantifying-impact-a-better-metric-for-measuring-journalism/.
M. Lewis and H. Niles, “Measuring Impact: The Art, Science and Mystery
of Nonprofit News,” Investigative Reporting Workshop, 2013, http://irw.s3.
amazonaws.com/uploads/measuring-impact-final-pdf.pdf.
LFA Group: Learning for Action/Bill & Melinda Gates Foundation/John
S. and James L. Knight Foundation, “Deepening Engagement for Lasting Impact:
Measuring Media Performance Results,” Learning for Action, February 2013,
http://dmeforpeace.org/learn/deepening-engagement-lasting-impact-frameworkmeasuring-media-performance-results.
Data Desk, “Complete Guide to the LAFD Data Controversy,” Los Angeles
Times, 12 April 2012 (ongoing), http://timelines.latimes.com/lafd-datacontroversy/.
J. Mayer and R. Stern, “A Resource for Newsrooms: Identifying and Measuring
Audience Engagement Efforts,” Reynolds Journalism Institute, 3 June 2011,
http://www.rjionline.org/sites/default/files/theengagementmetric-fullreportspring2011.pdf.
McKinsey & Company, Social Impact Assessment Portal, http://lsi.mckinsey.
com/.
National Center for Media Engagement (NCME), “Measuring Public Media’s
Impact: Challenges and Opportunities,” March 2013, http://mediaengage.org/
CommunicateImpact/measure3.cfm.
E. Ní Ógáin et al., “Making an Impact: Impact Measurement Among Charities
and Social Enterprises in the UK,” New Philanthropy Capital, October 2012,
http://www.thinknpc.org/publications/making-an-impact/making-an-impact/.
J. Peters, “Some Newspapers, Tracking Readers Online, Shift Coverage,” The
New York Times, 5 September 2010, http://www.nytimes.com/2010/09/06/
business/media/06track.html.
J. Peters, “A Newspaper, and a Legacy, Reordered,” The New York Times, 11
February 2012, http://www.nytimes.com/2012/02/12/business/media/thewashington-post-recast-for-a-digital-future.html?pagewanted=all&_r=0.
A. Pilhofer, “Find the Right Metric for News,” aronpilhofer.com, 25 July 2012,
http://aronpilhofer.com/post/27993980039/the-right-metric-for-news.
J. Priem et al., “The Altmetrics Collection,” Public Library of Science, 2012,
http://www.ploscollections.org/article/info:doi/10.1371/journal.pone.0048753.
C. Ramsay et al., “Misinformation and the 2010 Election: A Study of the US
Electorate,” World Public Opinion, 10 December 2010, http://www.worldpublicopinion.
org/pipa/pdf/dec10/Misinformation_Dec10_rpt.pdf.
J. Reisman et al., “A Handbook of Data Collection Tools: Companion to ‘A
Guide to Measuring Advocacy and Policy,’” Organizational Research Services,
2007, http://www.organizationalresearch.com/publicationsandresources/a_
handbook_of_data_collection_tools.pdf.
M. Rosenblum, “How to Quantify the Impact of Journalism,” New York Video
School, 30 March 2012, http://www.nyvs.com/blog/user/michael/How-To-Quantify-The-Impact-of-Journalism.
M. Salganik, “Experimental Study of Inequality and Unpredictability in an
Artificial Cultural Market,” Science, no. 311 (2006): 854, http://www.sciencemag.
org/content/311/5762/854.short.
J. Search, Beyond the Box Office: New Documentary Valuations (May 2011),
http://www.documentary.org/images/news/2011/AnInconvenientTruth_
BeyondTheBoxOffice.pdf.
Sparkwise, http://sparkwi.se/.
A. Spittle, “Defining New Metrics for Journalism,” andrewspittle.net, 28 April
2012, http://andrewspittle.net/2012/04/28/new-metrics/.
J. Stray, “By the Numbers, American Journalism Failed to Inform Voters,”
jonathanstray.com, 29 December 2010, http://jonathanstray.com/americanjournalism-failed-to-inform-voters.
J. Stray, “Designing Journalism to Be Used,” jonathanstray.com, 26 September
2010, http://jonathanstray.com/designing-journalism-to-be-used.
J. Stray, “Does Journalism Work?” jonathanstray.com, 14 December 2010,
http://jonathanstray.com/does-journalism-work.
J. Stray, “Metrics, Metrics Everywhere: How Do We Measure the Impact of
Journalism?” Nieman Journalism Lab, 17 August 2012, http://www.niemanlab.
org/2012/08/metrics- metrics- everywhere- how- do- we- measure- the- impact- ofjournalism/.
E. A. Stuart, “Matching Methods for Causal Inference: A Review and a Look
Forward,” Statistical Science 25, no. 1 (2010): 1–21, http://biostat.jhsph.edu/
~estuart/Stuart10.StatSci.pdf.
R. J. Tofel, “Non-profit Journalism—Issues Around Impact: A White Paper
from ProPublica,” ProPublica, February 2013, http : / / s3 . amazonaws . com /
propublica/assets/about/LFA_ProPublica-white-paper_2.1.pdf.
TRASI: Tools and Resources for Assessing Social Impact, http://trasicommunity.
ning.com/.
N. Ward/Infomart, “Un-juking the Stats: Measuring Journalism’s Impact on
Society,” Infomart, 17 October 2012, https://www.infomart.com/un-juking-thestats-measuring-journalisms-impact-on-society/.
N. Ward/Infomart, “We Become What We Measure: Developing Impact Metrics for Journalism,” Infomart, 3 October 2012, http://www.infomart.com/2012/
10/03/we-become-what-we-measure-developing-impact-metrics-for-journalism/.
L. Williams, “How Can News Organizations Assess Impact and Engagement?”
Investigative News Network, 2013, http://newstraining.org/2013/09/25/how-cannews-organizations-assess-impact-and-engagement/.
J. M. White, “Bandit Algorithms for Website Optimization: Developing, Deploying, and Debugging,” O’Reilly Media, 2012, http://oreilly.com/shop/product/
0636920027393.html?bB=g.
“WITNESS Performance Dashboard,” December 2009, http://www3.witness.
org/sites/default/files/downloads/witness-dashboard-evaluation-2010.pdf.
E. Zuckerman, “Metrics for Civic Impacts of Journalism,” ethanzuckerman.com, 30 June 2011, http://www.ethanzuckerman.com/blog/2011/06/
30/metrics-for-civic-impacts-of-journalism/.
Citations and Endnotes
1. K. Bradsher, “License to Pollute: A Special Report,” The New York Times,
30 November 1997, http://www.nytimes.com/1997/11/30/business/licensepollute-special-report-light-trucks-increase-profits-but-foul-air-more.html.
2. C. Duhigg et al., “The iEconomy Series,” The New York Times, 2012, http:
//www.nytimes.com/interactive/business/ieconomy.html?_r=1&.
3. A. Nazir Afiq, “Saturday Night Live Pokes Fun at iPhone 5 Tech Pundits,”
Vimeo, 2013, https://vimeo.com/51392953.
4. It’s important to point out that media does not always positively influence the
world, nor do its effects necessarily relate to any intention [see L. Bennett, “Toward a
Theory of Press-State Relations in the United States,” Journal of Communication
40, no. 2 (June, 1990): 103–127, http://onlinelibrary.wiley.com/doi/10.1111/
j.1460-2466.1990.tb02265.x/abstract; H. Gans, Deciding What’s News: A Study of
CBS Evening News, NBC Nightly News, Newsweek, and Time (Chicago:
Northwestern University Press, 1979); E. Herman and N. Chomsky, Manufacturing
Consent: The Political Economy of the Mass Media (New York: Pantheon Books,
1988); and J. Mermin, Debating War and Peace: Media Coverage of U.S. Intervention
in the Post-Vietnam Era (Princeton: Princeton University Press, 1999)].
Research has shown, for instance, that increased exposure to certain media
outlets was associated with fundamental misperceptions about the Iraq War
[R. Lewis et al., “Misperceptions, the Media, and the Iraq War,” Political Science
Quarterly 118, no. 4 (Winter 2003): 569–598, http://onlinelibrary.wiley.com/doi/
10.1002/j.1538-165X.2003.tb00406.x/abstract].
Despite attempts by some papers to correct the errors in their reporting [see, for
example, NYT editors, “The Times and Iraq,” The New York Times, 26 May 2004,
http://nytimes.com/2004/05/26/international/middleeast/26FTE_NOTE.html],
a Harris Interactive Poll in 2008 found that a shockingly high 37 percent of Americans
still believed that Saddam Hussein was manufacturing weapons of mass destruction in
the lead-up to the U.S.-led invasion [Harris Interactive, “Significant Minority Still
Believe that Iraq Had Weapons of Mass Destruction When U.S. Invaded,” 10
November 2008, http://www.harrisinteractive.com/vault/Harris-Interactive-PollResearch-Iraq-2008-11.pdf].
Psychological experiments back these empirical and theoretical findings, reliably
demonstrating how media frames—or lenses through which issues are defined and/or
explicated—influence readers’ perceptions [T. Nelson et al., “Media Framing of a Civil
Liberties Conflict and Its Effect on Tolerance,” The American Political Science
Review 91, no. 3 (September, 1997): 567–583, http://www.uvm.edu/~dguber/
POLS234/articles/nelson.pdf].
5. J. Stray, “Metrics, Metrics Everywhere: How Do We Measure the Impact
of Journalism?” Nieman Journalism Lab, 17 August 2012, http://www.niemanlab.
org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-ofjournalism/.
6. B. Gates, “Why Measurement Matters: 2013 Annual Letter from Bill Gates,”
Bill & Melinda Gates Foundation, 2013, http://www.gatesfoundation.org/whowe-are/resources-and-media/annual-letters-list/annual-letter-2013.
7. United Nations Development Programme Evaluation Office, Handbook on
Monitoring and Evaluating Results (New York: United Nations Development Programme, 2002).
8. United Nations Development Programme, What Will It Take to Achieve the
Millennium Development Goals? An International Assessment (New York: United
Nations Development Programme, 2010).
9. W. Easterly, “How the Millennium Development Goals are Unfair to Africa,”
World Development 27, no. 1 (2009): 26–35, http://dri.fas.nyu.edu/docs/IO/
13016/UnfairtoAfrica.pdf.
10. “Funding Impact: Partnerships, Networks & Collaborations: A Learning Opportunity,” AFRICA Grantmakers’ Affinity Group, 6 August 2014, http://
africagrantmakers.org/resource/funding-impact-partnership-networkscollaborations-learning-opportunity-2009-conference/.
11. RESOURCE ARCHIVE, International Human Rights Funders Group, https:
//ihrfg.org/resource-archive/entry/debate-impact-single-issue-vs-multi-issueorganization.
12. “Measuring the Impact of Environmental Communications,” Environmental Grantmakers Association, 12 March 2015, https://ega.org/events/2015/
measuring-impact-environmental-communications.
13. “Tools and Resources for Assessing Social Impact,” Foundation Center, http:
//trasi.foundationcenter.org/browse_toolkit.php.
14. The BRITDOC Foundation, http://britdoc.org/.
15. BritDoc, The Impact Field Guide & Toolkit (Miami: Ford Foundation, Knight
Foundation et al., Date unknown).
16. D. Green and E. Paluck, “Deference, Dissent, and Dispute Resolution: An
Experimental Intervention Using Mass Media to Change Norms and Behavior in
Rwanda,” American Political Science Review 103, no. 4 (November 2009): 622–
644.
17. A. Mitchell et al., “Nonprofit Journalism: A Growing but Fragile Part of the
U.S. News System,” Pew Research Center, 10 June 2013, http://www.journalism.
org/2013/06/10/nonprofit-journalism/.
18. John S. and James L. Knight Foundation, 2013 990 Return of Private Foundation, http://www.knightfoundation.org/media/uploads/media_pdfs/KNIGHT_
FOUNDATION_990PF_FINAL_2013.pdf.
19. How We Work Grant: New Venture Fund, July 2013, Bill & Melinda Gates
Foundation, http://www.gatesfoundation.org/How-We-Work/Quick-Links/
Grants-Database/Grants/2013/07/OPP1092058.
20. How We Work Grant: Guardian News & Media Ltd, August 2011, Bill &
Melinda Gates Foundation, http://www.gatesfoundation.org/How-We-Work/
Quick-Links/Grants-Database/Grants/2011/08/OPP1034962.
21. How We Work Grant: Univision Communications Inc., June 2014, Bill & Melinda
Gates Foundation, http://www.gatesfoundation.org/How-We-Work/QuickLinks/Grants-Database/Grants/2014/06/OPP1109707.
22. “Deepening Engagement for Lasting Impact: A Framework for Measuring
Media Performance & Results,” Learning for Action, October 2013, http://www.
learningforaction.com/wp/wp-content/uploads/2014/08/Media-MeasurementFramework_Final_08_01_14.pdf.
23. “New Program Funded to Measure Media Impact and Audience Engagement,”
Knight Foundation, 29 April 2013, http://www.knightfoundation.org/pressroom/press-release/new-program-funded-measure-media-impact-and-audien/.
24. “Media Impact Project,” USC Annenberg Norman Lear Center, http://www.
mediaimpactproject.org/.
25. Introducing the Media Impact Project Measurement System, Media Impact
Project, USC Annenberg Norman Lear Center, 2014, http://www.mediaimpactproject.
org/measurement.html.
26. A. Phelps, “I Can’t Stop Reading This Analysis of Gawker’s Editorial Strategy,” Nieman Journalism Lab, 21 March 2012, http://www.niemanlab.org/20
12/03/i-cant-stop-reading-this-analysis-of-gawkers-editorial-strategy/.
27. G. Linch, “Quantifying Impact: A Better Metric for Measuring Journalism,”
greglinch.com, 14 January 2012, http://www.greglinch.com/2012/01/quantifyingimpact-a-better-metric-for-measuring-journalism.html.
28. J. Stray, “Metrics, Metrics Everywhere: How Do We Measure the Impact
of Journalism?” Nieman Journalism Lab, 17 August 2012, http://www.niemanlab.
org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-ofjournalism/.
29. A. Pilhofer, “Finding the Right Metric for News,” aronpilhofer.com, 25 July
2012, http://aronpilhofer.com/post/27993980039/the-right-metric-for-news.
30. B. Abelson, “The Relationship Between Promotion and Performance: Pageviews
Above Replacement,” abelson.nyc, 14 November 2013, http://abelson.nyc/opennews/2013/11/14/Pageviews-above-replacement.html.
31. B. Abelson, “Whither the Pageview Apocalypse?” abelson.nyc, 9 October
2013, http://abelson.nyc/open-news/2013/10/09/Whither-the-pageview_
apocalypse.html.
32. “What is the Attention Web?” Chartbeat, https://chartbeat.com/attentionweb#.VV3haWTBzGc.
33. B. Abelson, “The Relationship Between Promotion and Performance: Pageviews
Above Replacement,” abelson.nyc, 14 November 2013, http://abelson.nyc/opennews/2013/11/14/Pageviews-above-replacement.html.
34. A. Anand, “Introducing MORI—Our Impact Tracker Tool,” Chalkbeat, 2
June 2014, http://chalkbeat.org/2014/06/02/introducing- mori- our- impacttracker-tool/.
35. Lindsay Green-Barber Author Page, Center for Investigative Reporting, http:
//cironline.org/person/lindsay-green-barber.
36. L. Green-Barber, “3 Investigations, 3 New Laws: See How CIR’s Stories Gain
Macro Impact,” Center for Investigative Reporting, 2 October 2014, https://www.
revealnews.org/article-legacy/3-investigations-3-new-laws-see-how-cirs-stories-gain-macro-impact/.
37. “MORI: Chalkbeat’s Impact Tracker Tool,” Chalkbeat, http://chalkbeat.
org/mori/.
38. Snowplow: The Event Analytics Platform, http://snowplowanalytics.com/.
39. Who Uses Snowplow, http://snowplowanalytics.com/product/who-usessnowplow.html.
40. Piwik, http://piwik.org/.
41. OpenStreetMap, https://www.openstreetmap.org/.
42. 596 Acres, http://www.596acres.org/.
43. WordPress.org, Plugin Directory, https://wordpress.org/plugins/search.
php?q=piwik.
44. NPR’s Visual Carebot, Github, https://github.com/nprapps/carebot.
45. J. Lichterman, “Building an Analytics Culture in a Newsroom: How NPR
is Trying to Expand Its Digital Thinking,” Nieman Journalism Lab, 30 April 2014,
http://www.niemanlab.org/2014/04/building-an-analytics-culture-in-a-newsroom-how-npr-is-trying-to-expand-its-digital-thinking/.
46. R. J. Tofel, “Non-profit Journalism—Issues Around Impact: A White Paper from ProPublica,” ProPublica, February 2013, https://s3.amazonaws.com/
propublica/assets/about/LFA_ProPublica-white-paper_2.1.pdf.
47. Pixel Ping on Github, https://documentcloud.github.io/pixel-ping/.
48. J. Lichterman, “Constantly Tweaking: How The Guardian Continues to Develop Its In-house Analytics System,” Nieman Journalism Lab, 29 January 2015,
http://www.niemanlab.org/2015/01/constantly-tweaking-how-the-guardiancontinues-to-develop-its-in-house-analytics-system/.
49. J. Robinson, “Watching the Audience Move: A New York Times Tool Is Helping Direct Traffic from Story to Story,” Nieman Journalism Lab, 28 May 2014,
http://www.niemanlab.org/2014/05/watching- the- audience- move- a- newyork-times-tool-is-helping-direct-traffic-from-story-to-story/.
50. B. Abelson and M. Keller, “Tow Fellows Brian Abelson, Stijn Debrouwere,
and Michael Keller to Study the Impact of Journalism,” Tow Center for Digital
Journalism, 29 April 2014, http://towcenter.org/tow- fellows- brian- abelsonand-michael-keller-to-study-the-impact-of-journalism/.
51. Impact measures from ProPublica and WisconsinWatch.org, respectively, http:
//www.propublica.org/about/impact/ and http://wisconsinwatch.org/about/
impact/.
52. K. Golden, “Center Awarded $35,000 from Knight-supported INNovation
Fund to Translate Investigative Reporting into Art, Explore New Audiences—and
Profit,” WisconsinWatch.org, 23 October 2014, http://wisconsinwatch.org/20
14/10/center-awarded-35000-from-knight-supported-innovation-fund-to-translate-investigative-reporting-into-art-explore-new-audiences-and-profit/.
53. Celestial Emporium of Benevolent Knowledge, Wikipedia, http://en.wikipedia.
org/wiki/Celestial_Emporium_of_Benevolent_Knowledge.
54. Jorge Luis Borges, “Del Rigor en la Ciencia (On Exactitude in Science),”
Los Anales de Buenos Aires 3, no. 3 (March 1946), http://www.sccs.swarthmore.
edu/users/08/bblonder/phys120/docs/borges.pdf.
55. J. Lichterman, “Building an Analytics Culture in a Newsroom: How NPR
is Trying to Expand Its Digital Thinking,” Nieman Journalism Lab, 30 April 2014,
http://www.niemanlab.org/2014/04/building-an-analytics-culture-in-a-newsroom-how-npr-is-trying-to-expand-its-digital-thinking/.
56. Presidents-HeadsOfGovt, @VITweeple, Twitter, https://twitter.com/VITweeple/
lists/presidents-headsofgovt.
57. Presidents-HeadsOfGovt, @VITweeple, Twitter, https://twitter.com/VITweeple/
lists/presidents-headsofgovt.
58. U.S. Attorneys on Twitter, @TheJusticeDept, Twitter, https://twitter.com/
TheJusticeDept/lists/u-s-attorneys-on-twitter.
59. Theory of Change, Wikipedia, https://en.wikipedia.org/wiki/Theory_
of_change.
60. N. Silver, “Sept. 27: The Impact of the 47 Percent,” FiveThirtyEight blog,
The New York Times, 28 September 2012, http://fivethirtyeight.blogs.nytimes.
com/2012/09/28/sept-27-the-impact-of-the-47-percent/?_r=0.
61. JSON for Linking Data, http://json-ld.org/.
62. NewsLynx: Own Your Analytics, http://newslynx.readthedocs.org/en/
latest/.
63. Capitolwords: A Project of the Sunlight Foundation, http://capitolwords.
org/about/.
64. Podio, https://podio.com/.
65. embed.ly, http://embed.ly/analytics.
66. M. Dowd, “Hunting Birds of Paradise,” The New York Times, 5 April 2011,
http://www.nytimes.com/2011/04/06/opinion/06dowd.html?_r=0.
67. XOXO Festival, “Darius Kazemi, Tiny Subversions—XOXO Festival (2014),”
YouTube, 21 October 2014, https://www.youtube.com/watch?v=l_F9jxsfGCw.
68. Bikestorming, http://www.bikestorming.com/.
69. J. Stray, “Metrics, Metrics Everywhere: How Do We Measure the Impact
of Journalism?” Nieman Journalism Lab, 17 August 2012, http://www.niemanlab.
org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-ofjournalism/.
70. J. Stray, “Metrics, Metrics Everywhere: How Do We Measure the Impact
of Journalism?” Nieman Journalism Lab, 17 August 2012, http://www.niemanlab.
org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-ofjournalism/.
71. C. Petre, “The Traffic Factories: Metrics at Chartbeat, Gawker Media, and
The New York Times,” Tow Center for Digital Journalism, 7 May 2015.