Alive in the Swamp
Assessing Digital Innovations in Education
Michael Fullan and Katelyn Donnelly
July 2013
About NewSchools
NewSchools is a non-profit venture philanthropy firm working to
transform public education for low-income children. Through funding
and guidance of entrepreneurial organisations, we aim to make sure
every child receives an excellent education.
About Nesta
Nesta is the UK’s innovation foundation. An independent charity, we
help people and organisations bring great ideas to life. We do this by
providing investments and grants and mobilising research, networks and
skills.
Nesta Operating Company is a registered charity in England and Wales with company number
7706036 and charity number 1144091. Registered as a charity in Scotland number SC042833.
Registered office: 1 Plough Place, London, EC4A 1DE
www.nesta.org.uk
© Nesta 2013.
In November 2012 Nesta published Decoding Learning: The Proof,
Promise and Potential of Digital Education. Using the lens of eight
well-evidenced learning acts, this report explored how educational
technology could achieve impact – provided, of course, that we
stress the pedagogy as much as the technology.
Alive in the Swamp builds on this focus by adding the ambition to
deliver change across a school system, in each and every classroom.
This ambition is the right one, which is why we were thrilled to be
approached by Katelyn Donnelly and Michael Fullan with a request
to publish this report.
In essence, the authors ask one very simple question – ‘what does
good look like?’ They answer the question by providing an Index, a
way of assessing any digital innovation in learning, which stresses
all the elements needed for system impact. For example, in addition
to improving learning outcomes educational technologies must be
delightful to use and easy to implement.
Our hope is that this Index will go on to be debated, refined and
used – not least by Nesta as we continue our efforts to make good
on the promise of technology to improve learning.
Helen Goulden
Executive Director, Nesta’s Innovation Lab.
About the Authors
Michael Fullan
Michael Fullan, O.C. is professor emeritus, OISE, University of Toronto. He has
served as the special adviser to Dalton McGuinty, the premier of Ontario. He works
with systems around the world to accomplish whole system reform in education in
provinces, states and countries. He has received several honorary doctoral degrees.
His award-winning books have been published in many languages. His most recent
publications include: Stratosphere: Integrating technology, pedagogy, and change
knowledge; Professional capital of teachers (with Andy Hargreaves), and Motion
leadership in action.
Visit his website, www.michaelfullan.ca
Katelyn Donnelly
Katelyn Donnelly is an executive director at Pearson where she leads the Affordable
Learning Fund, a venture fund that invests in early–stage companies serving
schools and learners in the developing world. Katelyn is also an active advisor on
Pearson’s global strategy, research and innovation agenda as well as a consultant to
governments on education system transformation and delivery. Katelyn is the co–
author of two recent education system publications: Oceans of Innovation and An
Avalanche is Coming. She serves as a non–executive director and strategic advisor for
several start–up companies across Europe, Asia and Africa. Previously, Katelyn was a
consultant at McKinsey and Company.
Acknowledgments
Thank you to those who have given us feedback and advice.
Thank you to Mark Griffiths, Robbie Coleman and Jason Goodhand for thoughtful
feedback. Thanks also to Carina Wong, Jamie McKee and to the Gates Foundation
for partial support; to the Motion Leadership/Madcap (MLM) team, including David
Devine, Mark Hand, Richard Mozer, Lyn Sharratt, Bill Hogarth and Malcolm Clark; to
our colleagues and friends Sir Michael Barber and Saad Rizvi. Thanks also to Joanne
Quinn, Claudia Cuttress, and Greg Butler. Thank you to Esme Packett for proofreading
and editing. Additional thanks to James Tooley for helpful feedback.
CONTENTS
Foreword: Sir Michael Barber
Preface: Ted Mitchell
1 Explosions Galore
2 The Index
3 Initial Learnings and Findings
4 Navigating the Swamp
Appendix A: What green and red look like
References
Foreword: Sir Michael Barber
In Alive in the Swamp, the authors Michael Fullan and Katelyn Donnelly have made a
real breakthrough for which all of us around the world interested in improving education
systems can be grateful.
For years – ever since the 1970s – we have heard promises that technology is about to
transform the performance of education systems. And we want to believe the promises;
but mostly that is what they have remained. The transformation remains stubbornly five or
ten years in the future but somehow never arrives.
In the last decade we have begun to see some important research on how technology
might lead to transformation. Organisations such as the Innosight Institute in the US, Nesta
in the UK and CET in Israel have enhanced our collective knowledge in ways that help
inform policy. Even so, policymakers have struggled because the nature of the technology
itself changes so fast that they understandably find it hard to choose when and how to
invest and in what.
The breakthrough in Alive in the Swamp is the development of an Index that will be of
practical assistance to those charged with making these kinds of decisions at school, local
and system level. Building on Fullan’s previous work, Stratosphere, the Index sets out the
questions policymakers need to ask themselves not just about the technology at any given
moment but crucially also about how it can be combined with pedagogy and knowledge
about system change. Similarly, the Index should help entrepreneurs and education
technology developers to consider particular features to build into their products to drive
increased learning and achieve systemic impact.
The future will belong not to those who focus on the technology alone but to those who
place it in this wider context and see it as one element of a wider system transformation.
Fullan and Donnelly show how this can be done in a practical way.
Both the authors are well known to me and it has been my pleasure to work with and learn
from them in a variety of contexts. Their different backgrounds and perspectives have
proved the ideal combination for this piece of work and they share one vital characteristic:
they are both among the leading lights of their own generation in thinking about how to
ensure education systems rise to the challenge of the 21st century.
Sir Michael Barber
May 2013
Preface: Ted Mitchell
Alive in the Swamp vividly articulates the key components needed for digital innovations
to be transformational in a practical, easy-to-use tool that has applicability across the
spectrum, from leaders of large school systems to education entrepreneurs. As education
systems across the world continue to struggle with learner engagement, student
achievement and equity, this work is more relevant and necessary than ever before.
Luckily, over the past several years, we have seen a proliferation of disruptive digital
learning innovations, many of them represented in the NewSchools Venture Fund portfolio,
which are dedicated to, and focused on, fundamentally altering existing education
structures and improving learning outcomes. While the existing innovations are exciting
and racing toward a better future, there is still a long way to go to translate innovations
into changed facts for millions of learners. This paper offers practical suggestions on how
entrepreneurs and systems leaders can evaluate innovations to identify the ones with the
most potential for transformation and improve those that need improvement.
Michael Fullan and Katelyn Donnelly’s innovation index is a step forward for the education
field, allowing for easy analysis of any innovation on the basis of technology, pedagogy
and system change, and the interrelationship between the three. Furthermore, the index
puts the focus of digital innovation on evidence of what works for the learner and how to
ensure that change is embedded in the entire system, not just niche projects in a handful of
schools. The index will help us drive forward to an education 2.0 world that allows students
to progress at their own pace, focuses on activity-based assessment of a large suite of
skills and empowers teachers to be mentors, motivators and change agents.
Certainly the potential of digital technology to change the nature of learning is still
emerging but there is a plethora of opportunities, as well as evidence of what works, to
better inform new products and system implementation. Those that use the lessons of the
past and build them into the designs of the future will find success.
Ted Mitchell
CEO, NewSchools Venture Fund
1 Explosions Galore
Two powerful forces are combining to procreate the swamp. One is a relentless ‘push’
factor; the other is a prodigious and exponential ‘pull’ phenomenon. The push factor is
how incredibly boring school has become. We show one graph in Stratosphere from Lee
Jenkins indicating that 95 per cent of students in Kindergarten are enthusiastic about
school, but this steadily declines, bottoming out at 37 per cent in Grade 9. Part and parcel
of this, teachers are increasingly alienated. Satisfaction is plummeting in the US among the
teaching force; and those wanting to leave the profession are approaching one in three.
Students and teachers are psychologically and literally being pushed out of school.
Figure 1: Loss of enthusiasm by grade level
[Line chart: per cent of students who love school, by grade, declining from 95 per cent in Kindergarten to a low of 37 per cent in Grade 9.]
The counterforce is the ‘pull’ of rapidly expanding digital innovations that are the result
of lean and not so lean start–ups, as small–scale entrepreneurs and behemoth businesses
and financiers populate the digital swamp. The result is an exciting but undisciplined
explosion of innovations and opportunities. There will be more development and spread of
digital innovations in 2013 than at any other single time in history. In fact, the NewSchools
Venture Fund found that in 2012 alone, 74 early–stage companies focused on US primary
and secondary demographics received a total of $427 million in funding. The sheer size
of investment and attention is unprecedented for education technology. 2013 promises
to be an even larger year for founding and funding. At the same time that there are new
platforms being built specifically for education, the amount of global digital information
created and shared is growing at an increasing rate and has increased ninefold over the last
five years. The Internet is becoming a powerful access portal and content is increasingly an
open and free commodity. The impact on education and learning, however, is still murky.
Figure 2: Global digital information created and shared, 2005–2015E
[Chart: digital information created and shared (zettabytes) by year, 2005 to 2015E, showing steep growth.]
*1 zettabyte = 1 trillion gigabytes. Source: IDC IVIEW report ‘Extracting value from chaos’, June 2011.
As in all revolutions, opportunities and problems present themselves in equal measure. In
Stratosphere we said that three forces, each of which has had an independent history over
the last 50 years, must come together. One is technology (the PC is almost 50 years old);
the second is pedagogy (which has been around forever, of course, but the science and
art of learning is a recent phenomenon and has indeed been outpaced by technology);
and the third is change knowledge (again a forever proposition but the first studies of
implementation were in the mid 1960s). For revolutionary learning results, we need to
combine how we learn with how to ensure engagement, with how to make change easier.
Our white paper essentially argues that these three powerful forces must be combined to
catapult learning dramatically forward.
Figure 3: The three forces of stratosphere (Fullan, 2013)
[Diagram: three overlapping forces, system change, technology and pedagogy, with the student at the centre.]
Up to this point, technology has not impacted schools. We agree with Diana Laurillard that
technological investments have not been directed at changing the system but only as a
matter of acquisitions. Billions have been invested with little thought to altering the learning
system. There are also potentially destructive uses of technology on learning; we must
beware of distractions, easy entertainment and personalisation to the point of limiting our
exposure to new ideas. We focus not simply on the technology itself but on its use.
In Stratosphere we suggested four criteria that new learning systems must meet:
I. Irresistibly engaging for students and teachers.
II. Elegantly easy to adapt and use.
III. Ubiquitous access to technology 24/7.
IV. Steeped in real life problem solving.
We cite only a few studies here to show that digital innovations are prowling the swamp
like omnivores. We consider this a good thing. The future is racing with technology. The
gist of our report is that pedagogy and change knowledge will have to dramatically step up
their game in order to contribute their essential strengths to the new learning revolution.
Additionally, the complex and dynamic relationship between technology, pedagogy and
change knowledge will need to be developed and nurtured if we are to get ‘whole system
reform’.
Pedagogy, for example, is increasingly being bypassed because of its weak development.
Frustrated funders, entrepreneurs and learners do and will bypass pedagogues when
they find them wanting. But this is dangerous because wise learners will always benefit
from a mentor or active guide. We get a snippet of this from John Hattie’s meta-analysis
of over 1,000 studies in which he assessed the impact of learning practices on student
achievement. One particular cluster calculation revealed the following:
Teacher as facilitator (effect size .17)
Included: smaller class sizes; simulations and gaming; enquiry–based learning – setting
students a question or problem to explore; personalised instruction; problem–based
learning; web–based learning.
Teacher as activator (effect size .60)
Included: reciprocal teaching – where student and teachers are both ‘teachers’ learning
from each other; regular, tailored feedback; teacher–student verbal interaction; meta
cognition – making explicit the thinking process; challenging goals – setting ambitious
and achievable learning goals.
We see that ‘teacher as activator’ (or change agent) has an effect size over three times
greater than ‘teacher as (mere) facilitator’. Several comments are warranted. First, Hattie
was not focusing on technology. Second, we can note that simulations, gaming and
web–based learning do not fare well. Our guess is that their low impact stems from their
being employed with poor pedagogy. In other words, the ‘guide on the side’ is a
poor pedagogue. Third, a lot more has to be done in fleshing out the nature of effective
pedagogy in its own right, as well as how it relates to the use of technology to accelerate
and deepen learning.
For now, we can conclude that teachers as activators or change agents will be part of the
solution. Much more work has to be done to examine what this might mean in practice.
In fact, we are part of some initiatives to delve into what we call the ‘New Pedagogy’
that involves a learning partnership between and among students and teachers with
teacher as change agent – and students in charge of their own learning under the active
guidance of teachers. This work has just begun; and part of our purpose in this paper is to
stimulate further development in exploring the relationship between technology and active
pedagogy and how this integration can be enhanced by using change knowledge that
focuses on whole system reform.
If we consider digital innovations, the field is currently characterised by either weak or
undeveloped pedagogy, or strong technology and pedagogy confined to a small number
of schools; that is, the best examples tend to be small–scale exceptions that are not
representative of the main body of schools. Additionally, there is in general a lack of strong
efficacy evidence demonstrating the impact of the digital innovations on student learning.
Robust academic meta–analysis research, such as that by Steven Higgins et al., shows a
current lack of causal links between the use of technology and student attainment.
Pushing the envelope in the direction of larger–scale reform will require the integration of
technology, pedagogy and system change knowledge, as explored by Tom Vander Ark’s
Getting smart: How digital learning is changing the world, and our own book, Stratosphere.
In a recent report, Vander Ark and Schneider focused on How digital learning contributes
to deeper learning. In all these cases, the press is on for learning skills essential for the
current century in three domains: (1) the cognitive domain (thinking); (2) the intrapersonal
domain (personal skills of drive and responsibility); and (3) the interpersonal domain
(teamwork and other relational skills) – see the National Research Council.
Once more, and understandably, the examples identified by Vander Ark and Schneider are
small–scale exceptions to the mainstream.
We set out, then, to establish an index that would capture the overall dimensions of
technology, pedagogy and what we might call ‘systemness’. We found only one (and
very recent) attempt to take stock of digital innovations in education, Nesta’s report
Decoding Learning: The Proof, Promise and Potential of Digital Education by Luckin et al. The report
helpfully unpacks learning themes around eight dimensions: learning from experts; from
others; through making; through exploring; through enquiry; through practicing; through
assessment; and in and across settings. These distinctions rightly place learning up-front,
but there are too many to grasp in practice and they overlap conceptually.
Further analysis in the report is very useful. Innovations are classified as ‘teacher led’
(n=300 sources) or ‘researcher led’ (n=1,022). Using the criterion of ‘quality of evidence’, the report ended
up with a sample of 86 teacher–led innovations and 124 researcher–led innovations.
Decoding Learning then begins the process of considering how the eight domains can be
connected but it found few instances of linked learning activities in the sample.
We agree with its conclusion that digital innovations have failed in two respects: they have
“put technology above teaching and excitement above evidence” (p.63).
In thinking about how to reverse this situation, a simpler approach might be more
helpful — one that is quite close to the strategising that will be required. There need to
be policies and strategies that will simultaneously i) conceptualise and operationalise the
new pedagogy; ii) assess the quality and usability of specific digital innovations; and iii)
promote systemness. In other words, we considered what might be necessary in order for
technology to go to scale and to produce systemic change.
Section 2 is the result.
2 The Index
In response to the changes in digital technology and a renewed focus from entrepreneurs
and educators on improving learning outcomes, we have developed a comprehensive Index
to be used as an evaluative tool to predict the transformative power of the emerging digital
innovations. The Index allows us to systematically evaluate new companies, products and
school models in the context of all that we’ve seen is necessary for success.
The Index is best applied to innovations that focus on the K-12 demographic in the US – in
the UK, primary through to secondary – and have a school-based application. We designed
the Index and tested it on 12 innovations spanning technology–enabled learning tools like
Khan Academy and Learn Zillion and school–based models such as School of One and
Rocketship Education. We chose innovations that have been publicly mentioned and
supported by premier funding. Appendix A elaborates on the Index in terms of ‘what green
looks like’ and ‘what red looks like’.
Figure 4: Score card: Innovation Index
(Columns: Criteria area; Rating; Rationale summary)

Pedagogy
- Clarity and quality of intended outcome
- Quality of pedagogy and relationship between teacher and learner
- Quality of assessment platform and functioning

System change
- Implementation support
- Value for money
- Whole system change potential

Technology
- Quality of user experience/model design
- Ease of adaptation
- Comprehensiveness and integration

Rating key:
GREEN: Good – likely to succeed and produce transformative outcomes
AMBER GREEN: Mixed – some aspects are solid; a few aspects are lacking full potential
AMBER RED: Problematic – requires substantial attention; some portions are gaps and need improvement
RED: Off track – unlikely to succeed
We believe that student–centered learning requires a blended combination of critical
drivers as laid out in Stratosphere. It will require the interaction of pedagogy, system
change and technology. Technology, for example, can provide feedback, or support
effective feedback for teachers to improve their pedagogy, and the system to monitor
and improve student achievement. Training of system actors will need to focus not just
on the use of the actual technology but on how it can support collaboration and effective
interaction. Each of the three components should be leveraged and interconnected in a
way that produces results for learners and reverberates throughout the system.
The Index breaks each of the three components into three subcomponents for evaluation.
We use the tool by starting at the subcomponent level to get a granular perspective on
each of the key elements and questions and then build up to a synthesised view and big
picture that includes a rationale of judgements, particularly where key elements fall in the
interrelationships between components and subcomponents. We believe this allows for
easier diagnosis of gaps and problems as well as identification of spots of excellence. The
tool should also make evaluation systematic and easy to use for almost any innovation,
providing the widest spread of applicability and reliability.
Each component and subcomponent is given a rating on a four–point scale: green, amber
green, amber red and red. We picked a four–point scale because social science research
has shown that three or five–point scales cause gravitation toward the middle and the
judgements become less powerful. We’ve taken this approach from the work of Michael
Barber and the prime minister’s Delivery Unit in the United Kingdom.
Each colour is an output based on the evaluation of a series of underlying questions. A
green rating is assigned when the innovation is outstanding in fulfilling the best practices
underlying the subcomponent. To achieve a green, the innovation would need to be truly
world class. An amber green rating is assigned when the innovation has many criteria
fulfilled and is on the way to outstanding. An amber green rating may signal that there are
a few areas for further refinement and improvement. An amber red rating is assigned when
the innovation has a few aspects that fulfill the subcomponent but is mostly problematic
and off track for success. A red rating is assigned when the innovation misses most criteria
completely and is off track to make progress.
We recognise that the evaluation system is qualitative. Some subjectivity is thus inevitable
because evaluation is based on human judgement. However, when the Index is applied
in many settings and over a large sample, the users of this framework develop a keener
sense of shared judgements both individually and, crucially, as a community. As the Index
is applied and more and more judgements are shared, subjectivity should decrease. Human
discretion is necessary to achieve applicability and comparability across myriad products
and services.
Thus, in quantitative terms, a given innovation could receive a score of three to 12 for
each of the three components, or nine to 36 for the Index as a whole. To keep the ratings
consistent, under each subcomponent area we have further defined and outlined what
questions should be asked and what green (or good) and red (or bad) look like. The next
section will delve into those details in the order in which they appear in the Index.
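The scoring arithmetic just described can be sketched in code. This is an illustrative sketch only: the report states the resulting ranges (three to 12 per component, nine to 36 overall), while the specific colour-to-number mapping (red = 1 up to green = 4) and the example ratings below are our assumptions, chosen so that the stated ranges fall out.

```python
# Illustrative sketch of the Index's quantitative scoring.
# Assumption: colours map to points red=1, amber red=2, amber green=3, green=4,
# which yields the ranges the report states (3-12 per component, 9-36 overall).

RATING_POINTS = {"red": 1, "amber red": 2, "amber green": 3, "green": 4}

# The three components and their three subcomponents, as set out in the Index.
COMPONENTS = {
    "pedagogy": [
        "clarity and quality of intended outcome",
        "quality of pedagogy and teacher-learner relationship",
        "quality of assessment platform and functioning",
    ],
    "system change": [
        "implementation support",
        "value for money",
        "whole system change potential",
    ],
    "technology": [
        "quality of user experience/model design",
        "ease of adaptation",
        "comprehensiveness and integration",
    ],
}

def score_innovation(ratings):
    """ratings: dict mapping each subcomponent name to a colour rating.
    Returns (per-component subtotals, overall Index total)."""
    subtotals = {
        component: sum(RATING_POINTS[ratings[s]] for s in subcomponents)
        for component, subcomponents in COMPONENTS.items()
    }
    return subtotals, sum(subtotals.values())

# Hypothetical example: strong pedagogy and technology, weak system change.
example = {
    "clarity and quality of intended outcome": "green",
    "quality of pedagogy and teacher-learner relationship": "amber green",
    "quality of assessment platform and functioning": "green",
    "implementation support": "amber red",
    "value for money": "amber red",
    "whole system change potential": "red",
    "quality of user experience/model design": "green",
    "ease of adaptation": "amber green",
    "comprehensiveness and integration": "amber green",
}
subtotals, total = score_innovation(example)
# pedagogy 11/12, system change 5/12, technology 10/12, total 26/36
```

Summing at the subcomponent level first, as here, mirrors the way the Index is meant to be used: granular judgements are made first and then built up into the overall picture.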
Pedagogy
Clarity and quality of intended outcome
The first subcomponent area under pedagogy is clarity and quality of intended outcome.
Here we ask several questions: How clearly are the learning outcomes of the innovation
defined? Are the learning outcomes explicit and defined for the school, the student, the
parents and the school system? Is the clarity of outcome shared by student, teacher,
parent, school and school system? This is an important category because to make
transformative system improvements we need to know, with precision and clarity, what
the learning goals are. Digital technologies that do not align with what is to be learned will
likely not translate into increased attainment.
To achieve a green rating, generally each activity, overall lesson and broad course of study
should have clear, quantified outcome(s). These learning outcomes and goals should be
communicated and shared effectively within the school and, of course, with students,
teachers, parents and the broader system. The innovation should be able to demonstrate
strong benefits for customers and/or students.
In the very best situations, trajectories are produced for key outcomes so that progress
towards goals can be tracked and measured in real time. Where possible, modelling is used
to quantify the impact on key national indicators of student attainment and benchmarked
internationally. We have seen examples of substantial trial results being produced in
education systems like Ontario, where the student and teacher together have been clear
about learning goals and the corresponding success criteria in relation to their learning
outcomes.
In red rated innovations, exercises and modules do not have outcomes identified. When
outcomes are identified, they lack specificity and clarity. It is problematic when there
is insufficient linkage to key leading indicators and a lack of progression trajectories.
Confusion abounds in learning environments where teachers and actors are unaware of the
impact of the innovation and the benefits to students. We have found that when there is no
tracking or monitoring system in place to ensure learning is taking place and adaptations
are made, the innovation is unlikely to succeed in producing outcomes.
Pedagogy itself
The second subcomponent in pedagogy is the quality of the pedagogy itself, defined as
the underlying theory and practice of how best to deliver learning. Our Index asks the
following questions under this component: How refined is the pedagogical underpinning?
Does the pedagogy reflect the latest global research, including the emphasis on enquiry,
constructivism and real–world examples? Are students encouraged to learn through
enquiry? How is the teacher’s role defined? Is the role reflective of the ‘teacher as activator’
relationship? Can teachers effectively manage all the students? How is the learner
engaged? Is there a student–teacher partnership? Is the pedagogy consistent across the
system? Is there a shared understanding among all the teachers involved? Does the model
include an emphasis on the necessary psychological and intellectual processes? Is there
a mechanism to ensure the pedagogy is updated? Can teachers and students provide
defensible evidence of positive links to learning?
While the new pedagogy, particularly in a digital context, is still being defined, we think
that it is important that the innovations contain pedagogy that reflects the most advanced,
evidence–based techniques to date. We find that innovations should have a theory of
learning that is stated explicitly in the technology, model design, and training of teachers.
It is important to note that many teachers struggle to define and use an evidence–based
pedagogy regardless of whether it sits in a digital innovation.
In any case, the model should include a view that the teacher is a change agent in the
classroom. This means problems and questions are placed in real world contexts; the
emphasis is on intellectual risk taking and trial–and–error problem solving; and there is a
healthy partnership between the student and teacher that is built on enquiry and data.
Innovations that achieve a green should take note of Professor John Hattie’s meta-analysis
work on teaching – thus, we should be able to see signs that the teacher’s role is defined
as an activator of learning and his/her job is centred on servicing and pushing deeper the
thoughts from the learner. Teachers should be seen in partnership with students and exhibit
behaviour that shows openness to alternatives – a constant behaviour of seeking evidence
and adaptability and flexibility when new evidence is raised.
Further, for green innovations, students are engaged through enquiry, and learning
is personalised with the goal of unlocking the passion of the learner. Students feel
psychologically supported by teachers who are trained to focus on the personal experience
of the individual student and to help uncover values and motivations.
Red rated innovations may employ a pedagogy that lacks innovation and often relies
on the traditional rote method of ‘tell and test’ or ‘experience and evoke’. In other cases
of poor pedagogy, we have found that a theory of learning or a consistent standard of
teaching is absent, difficult to detect and sometimes totally incoherent.
In these cases, pedagogy is weak and it is not clear how technology can assist and accelerate
learning. There is a tension between the student and teacher based on misaligned incentives
and teaching style. Teachers, or the implementers, are teaching using a method that works
for them and is without evidence base. Students are taught ‘to’; curriculum and school
design are imposed; and students feel unsupported by the school and/or teacher.
This ‘teaching and learning’ subcomponent is clearly one of the most important areas – and
the one in which innovations are either successful or completely fail. Often the pedagogy
simply isn’t focused on or researched; too many evaluations focus on the technology. We
need, instead, to be explicit about and to assess the theory of learning employed.
Quality of assessment platform
The third pedagogy subcomponent is the quality of the assessment platform. Both
summative and formative assessments are vital for engagement, learning and progression.
Here we ask the following questions: What is the quality of the built–in assessment
systems? Is it adaptive? Does it include an optimal amount of detail? Is it clear how the
outcomes will be measured? How does the teacher use the assessment system to motivate
and activate the learner? How does the student use the assessment system to monitor and
motivate his or her own learning?
To achieve a green, the assessment platform should be adaptive and integrated. Ideally, the
system is completely adaptive, interactive and integrated seamlessly into the innovation.
It must be rigorous and accurate and be integral to learner engagement. The assessment
results should leverage the techniques of big data analysis, data visualisation and
international benchmarking of standards. In some of the best cases, the student is unaware
of being assessed. We’ve seen best practice models of these systems in situation–simulator
games and in first–generation models such as Khan Academy’s badge–based learner
progression.
Top assessment systems will continuously reinforce learner engagement. In The Progress
Principle, Teresa Amabile and Steven Kramer show the power of positive progress loops.
They write that small wins on a daily basis produce a meaningful increase in positive
emotions and internal motivation. This is true for both learners and teachers. The best
innovations should embed these loops into the digital technology itself and the way it is
used by teachers.
Assessments that provide questions, collect responses and then feed back their correctness
to the learner are solid traditional models. However, the next generation of assessments
will likely focus on activities which result in a product or performance. In this model, the
assessment system should be able to identify features of student behaviour and make
observations on it, not in terms of binary correctness, but in the form of useful information
on the ways in which the learner has engaged with the activity.
Green assessment platforms should collect dramatically large and ubiquitous samples
of data across users. This data collection should be built into daily activity and used to
drive and inform improvements in the innovation. The analysis of the data into useable,
actionable outputs is critical. Additionally, to ensure continuity at the level of the individual
student, assessments should start with access to the learner’s previous history and data
and not be ignorant of previous activity.
Best in class innovations cover both formative and summative assessments; and the
assessment system should show each stakeholder (student, teacher and parent) an optimal
level of detail and an analysis of performance in real time.
We also know that data and performance feedback only have impact to the extent that
they are used, and used well. Top rated innovations have mechanisms, or a process, for the
teacher to use the assessment system to motivate the learner. The system should guide the
learner to experience incremental daily accomplishments and stimulate a sense of progress
while also appropriately targeting other areas for further refinement.
Innovations that receive a red often lack an assessment system altogether. If there is an
assessment system in place, it may be unclear, lack robustness or be misleading. In poor
assessments, only crude outputs are measured. Teachers who use the assessment system
do not properly interpret the results or merely use it as a simplistic way of grading students.
In red rated innovations, students do not have access to assessment results and are unable
to monitor their own progress. Additionally, poor assessment systems include instances
when assessment is used only to monitor a few broad indicators – and when teachers
‘teach to the test’.
System change
The digital swamp is full of innovative ideas and products that have outcomes in a limited
environment. However, we have yet to see true transformation at scale. In this section we
examine the necessary criteria for an innovation to produce a whole system revolution.
Implementation support
Schools and education systems are frequently bombarded with new strategies and tools
but the key to stickiness is not just the design of the innovation; it is the process by which
it is embedded in the learning environment and the learning day. As whole system reform
experts repeat endlessly, strategy and product design gets you 10 per cent of the way and
the remaining 90 per cent is implementation.
For the implementation support subcategory we use the following questions: What is the
nature of the implementation support provided? What support is provided for technology
functions and what parts are included? Is it inclusive of software, hardware, maintenance,
electricity and connectivity? For how long is the implementation support or servicing in
place? Does the innovation include teacher training and professional development? Are
teacher development goals explicit? Is there appropriate follow–up and mentoring? Is the
support based on a culture of learning, risk–taking and learning from mistakes?
In green cases, the innovative product or service includes full implementation support
that acts in constant partnership and dialogue with the school system, teachers and
students. The technology support is timely and effective on all aspects: software,
hardware, maintenance, electricity and connectivity. Often schools will need physical
infrastructure upgrades, additional server capacity and basic software assistance. The more
comprehensive the implementation support and the more viral and intuitive the use of the
innovation, the more likely it is to stick.
Best in class innovations include a continuous professional learning component for
teachers to ensure the change is embedded in the learning day and that the activity of the
teacher reinforces the innovation and produces the desired outcomes. Collective learning
among teachers and across schools is an especially powerful indicator. The innovation
should position teachers as vital change agents integral to the success of the student,
and show why the professional development is a necessary investment in their career
development and success.
Professional development focuses on the teacher learning how to assess the impact he or
she has on every student; how to provide feedback that assists students’ progress; and how
to master the motivation of students. Professional development is constantly monitored
and refined on an as needed basis. The focus of the professional development should be
on capacity building through rapid learning cycles, fast feedback, continual reflection and
good coaching, particularly as it directly relates to the innovation.
Poor implementation support can cause an innovation to crumble. A red rating is
appropriate when an innovation is dropped into the school without support or when
implementation is left up to the teachers, schools and school system to figure out on their
own. Sometimes there is a lack of focus on implementation because it is assumed the
design of the product is intuitive and doesn’t require training; and sometimes it is because
pressure on funding causes procurement personnel to drop implementation related line
items in an attempt to reduce costs. Other times the innovation may be one among a
variety of innovations that come and go in a sea of what some people call the ‘disease of
initiativitis.’
Professional development involving teams or groups of teachers – what Hargreaves and
Fullan have called ‘professional capital’ – is especially critical. Professional capital is a
function of three components in interaction: human capital (qualifications of individuals),
social capital (the quality of the group in action) and decisional capital (the quality of
personalised decisions being made by individual teachers and teams).
To summarise, there are two areas of support required: one is technical; the other is
pedagogical. There is a real issue of efficacy when there is no technology support or the
support lacks coordination, timeliness and reliability. We have been to many schools
that are not equipped with the necessary (and in many cases simple and inexpensive)
physical infrastructures to support technology. This often leads to unused materials and
wasted class time as teachers scramble to download files properly and handle uncharged
devices.
On pedagogy, study after study has concluded that the impact of digital technology has
been stifled when there is no emphasis on the pedagogy of the application of technology
as used in the classroom. This phenomenon has been recently documented by Steven
Higgins et al. in a large meta–analytical study on digital learning. When teachers are not
taught how to use an innovation, how to adapt to the model, and provided with on–going
support, they revert to their traditional behaviours and practices. And, if professional
development is stacked at initial launch, it risks neglecting the need for continuous
reinforcement and upgrading. Professional development must address both technical
and pedagogical knowledge and skills; and, as we have stressed, this must be a collective
endeavour.
Value for money
For digital innovations to be systemically embedded they must be able to demonstrate a
keen sense of value for money, particularly given the increasing budget constraints of large
public education systems. Schools and systems are under tremendous pressure to manage
and even reduce costs. The product must be priced at a point the system can afford for
the demonstrated value of learning it brings. There is a distinct possibility that digital
innovations in the immediate future may be cheaper, deeper, easier and faster; that is, they
may accelerate learning at a much less expensive cost.
To evaluate value for money we asked the following questions: Are there overall school
cost savings realised by the innovation? Is the product of sufficient value, demonstrated
by learning outcomes, to justify change? How expensive is the product or design change
itself? Are there hidden costs such as infrastructure upgrades? Does the product accelerate
quality learning?
For an innovation to receive a green rating for this subcategory, from a school perspective,
the innovation should, for example, be able to produce twice the learning outcome for
half the cost of previous methods. This four times benchmark may in fact lack ambition.
Perhaps we should expect more. The school and learner are significantly better off with the
innovation and would actively choose to allocate scarce resources towards its purchase.
For an innovation to receive a red rating, the innovation would add excess cost to
the learner and the school without proving real value. As the value in learning is not
demonstrable, it is difficult to tell how the learner will be positively impacted by the
innovation. The innovation would receive a red if the consumer – and perhaps end learner –
would not allocate resources to purchase the innovation. Many of the innovations that have
been heavily subsidised could also be more on the amber side as it is unclear whether they
are truly scalable or sustainable outside of their pilots and without the large–scale support
of philanthropy dollars. For example, School of One has received $70 million from New
York City to develop its unique playlist system to match students’ needs with appropriate
digital modules. Unless this system can be scaled without too much adaptation, it is
unlikely to be a financially viable option for most schools.
Whole system change potential
The last, and perhaps most difficult, subcategory is whole system change potential. Here
we start with the notion of ‘simplexity’ – the idea that a small number of key factors must
be embedded within large groups of people. As described in Stratosphere, for effective whole
system change there must be four elements:
• Motivation for people to engage in change.
• Continuous learning from failure and wrong paths.
• Ability to leverage and learn with the collective group.
• Emphasis on the very large scale.
Within this area we ask the following questions: Does this innovation have the ability to
scale system–wide? Is the scaling plan based on world–leading change knowledge? Will
clusters of schools learn from each other? Will teachers learn from each other? Is capacity
building a central component of the strategy? Are innovations developed and scaled
laterally?
In the best cases, the innovation scales virally to schools throughout the system. The
change and product design is so absorbing, so automatically useful and so easily
embedded that it spreads like wildfire. For green rated innovations there is little central
management support necessary to ensure innovation is maintained. In fact, clusters of
schools learn from each other and continue to build and improve with the innovative
product/service – the type of behaviour we have seen during the Ontario school system
reform.
Another example, documented by Santiago Rincón–Gallardo and Richard Elmore, is
the Learning Community Project in Mexico that used social movement theory to bring
educational change to thousands of schools as a social change. Using a system of training
and supporting tutors, the strategy spread from 30 schools to 6,000 over a period of eight
years, with results to match. Literacy rates for the affected schools increased significantly
relative to their comparators. This strategy provides a best–in–class example of changing the
teacher–student dynamic in going to scale in a low–cost, effective manner. Moreover, the
strategy did not use technology – the point being that even greater, faster, less expensive
results could be obtained with the judicious integration of technology.
In short, green innovation implementation teams have a robust understanding of
‘simplexity’ (small number of key components which are deeply integrated) and stick to
their priorities as they strive for scale.
For a red rating, the innovation is difficult to scale throughout the system. Burdensome
management systems are necessary to maintain, scale and embed the innovation.
Innovation is directed entirely top down; schools are told how to learn new developments.
Similarly, when the innovation is expected to grow bottom up and the system struggles to
adapt, that could be cause for a red judgement.
Another example of poor system change behaviour is when feedback from other schools is
not considered or used. Similarly, if the innovation overburdens the district and/or school
system with too many priorities and distracts it from core goals, that would be cause for a
red rating.
When all three subcategories in system change are examined together – implementation
support; value for money; and system change potential – they provide a comprehensive,
critical means for us to evaluate. Implementation support is vital for effective change to
be embedded in the school or learning delivery system; value for money is critical for the
innovation to be widely adopted; and whole system change potential, in concert, leads us
to a critical judgement on whether the systemic linkages between schools and the system
management will be improved as a result of the implementation.
Finally, we must stress that the changes we are describing represent a fundamental shift
in the culture of schooling on almost every dimension imaginable: the roles of students,
teachers and administrators; the culture of the school in terms of collaborative learning; the
culture of the district and the larger infrastructure, with much greater vertical and horizontal
permeability; the relationships to parents, the community and business, and so on.
Technology
The last category delves into the underlying technology and product model design. We
believe that technology acts as the enabler in an innovation, making learning clearer, faster
and better. These subcategories help us get closer to the way the end learner interacts with
the product.
Quality of user experience/model design
The first subcategory is the quality of user experience/model design. To evaluate this
area we ask the following questions: How is the technology experience for the end user?
Is it easy to use and intuitive? Is it irresistibly engaging and elegantly efficient? Does the
technology incorporate latest design principles for user experience or does it look dated?
On the green side of the spectrum, the technology should be absolutely irresistibly
engaging for the learner. The interface should be well designed, using the most
current formatting and based on data analysis of learner needs and studies of learner
effectiveness. The learner should have little difficulty learning various functions as the tools
are intuitive and questions are answered in an easily navigable ‘help’ menu or section.
In the best innovations, digital tools are participatory, engaging, co–creative, and
collaborative. The innovation might also contain some gamification elements that immerse
the learner and maintain his/her interest. In an ideal world, these would be linked to the
assessment system we described in the pedagogy section. The content of the system should
be accurate, engaging and tailored to learning outcomes. Digital learning tools, if designed
effectively, can powerfully promote deeper learning by expanding access and options and
personalising skill building.
On the red side, the technology lacks engagement for the learner. The user experience
and design elements feel heavily dated. In the worst cases there are frequent stop points,
downloads and interruptions. The learner and user cannot find the applications and tools
easily. The modules feel clunky and not fully integrated.
Ease of adaptation
The second subcategory in technology is ease of adaptation. This classification addresses
the ease and speed of updating, modifying and customising the innovation. To evaluate
the criteria area we use the following questions: Is the technology adaptable? Is it highly
connective? Can the technology be accessed on any device? Can it be accessed any time?
Innovations receiving a green rating in this category will have technology that is connected
to high–speed Internet, allowing for real time adaptation of the programme to the learner.
Green innovations might also have built–in access to the resources of the global Internet,
when appropriate. Users should only need to enter security information once and be able
to access information on the cloud 24/7 from any device. Students should be able to
access the platform or content wherever and whenever they want.
On the red side, the technology is disconnected from the Internet and is unable to be
adapted in real time. There is little to no communication between devices, terminals and
users. In the worst cases, access to resources is limited to that immediately within the
programme and there is minimal connectivity to resources in the broader community.
Comprehensiveness and integration
The third and last subcategory in technology is comprehensiveness and integration.
For the technology to be effective in the classroom, it needs to be integrated into all
relevant aspects of the learning environment and learning day. To evaluate this area, we
asked the following questions: Is the technology integrated and seamless? Is the content
comprehensive? Is the assessment system integrated into the pedagogy and curriculum? Is
24/7 access and learning enabled?
A key indicator of integration is whether the technology department or unit and the
curriculum and instruction units at the district or state level are ‘siloed’ or interrelated. We
have found that in the green examples of system integration (and this is the trend) techies
are becoming increasingly interested in pedagogy and teachers are increasingly interested
in technology. Red examples occur when the technology departments operate as separate
islands of expertise. Green examples exist when both technologists and pedagogues see
themselves and each other engaged in mutual and collective learning. The same could be
said about students: they learn from technology specialists as well as teachers; and they
(the students) sometimes teach the experts.
For world–class, green innovations, all elements of the technology, pedagogy and system
change knowledge are integrated. Teachers should understand how the technology
functions; know how to use it as a tool to engage and enhance student learning; and feel
confident they can integrate it into the classroom. The system should seamlessly integrate
the innovation as the teacher is seen as essential; the product is irresistible; and on–going
capacity building ensures the changes happen by contagion.
The innovation is technologically ubiquitous and every learner has equal access and
opportunity. The technology is embedded in the school day and enables teacher
interactions with the students. In green cases, as mentioned in the pedagogy section, there
should be an integrated assessment system that monitors progress and provides rewards
and excitement. The technology should contain comprehensive sets of materials and
supporting learning mechanisms.
In red examples the elements of technology and pedagogy do not complement each other
and there is often friction. Users must frequently log into separate systems and terminals
that do not interact. In red rated innovations, the technology is crudely added into the
school day and teachers individually adapt to the technology using their own techniques.
Content and assessment are split and often not aligned.
In the technology category, we analyse and sum up the quality, consistency and
adaptability of the innovation and make a categorical judgement of roughly where on the
spectrum the innovation falls.
Overall, there is considerable interdependence and richness that occurs between and
among the three categories. When making categorical judgements, it is important to keep
in mind the relationship between the three areas and to avoid duplication, either positively
or negatively. We suspect many of the most insightful judgements will come from
observations about the strengths and weaknesses of the linkages between model points.
3. Initial learnings and findings
We have initially applied the Index to a group of 12 recent innovations to get a flavour of
the emerging strengths and weaknesses in the field as well as what gaps exist for further
refinement of the Index. One apparent trend is that many of the innovations are weak on
pedagogy and implementation support and that two groups of innovations, divided by
their primary delivery mechanisms – namely, school–based vs. technology–enabled – had
different strengths and challenges.
Our first emerging observation is that both pedagogy and implementation/system criteria
are a consistent and reliable challenge across most innovations. In other words, they are the
weakest part of the triangle of technology, pedagogy and system support. Entrepreneurs
find it more exciting and absorbing to design and build digital innovations than to grapple
with a new pedagogy, not to mention the daunting task of addressing systemness policies
and support for implementation. The ‘New Pedagogy’, as we have defined it, consists of
a new learning partnership between and among students and teachers with teachers as
change agents and students in charge of their own learning.
We have also observed that much work remains to be done to delve into the meaning of
the new pedagogies. We suggest that this will mean:
I. Clarifying the learning goals, especially related to ‘deep’ learning.
II. Being precise about the pedagogy that will deepen learning in relation to these goals.
III. Seeing how technology may accelerate the learning.
IV. Using assessments of the learning to inform improvements and to provide evidence
of efficacy.
Even the tasks in this list do not address the systemness component. We and our
colleagues have developed a good deal of knowledge in the system domain and suggest
here that the next phase of digital innovation development must address how the
integration of learning and technology can occur across the system – in regular schools, so
to speak. Barber, Donnelly and Rizvi’s work on Oceans of Innovation points to the need for
systems to simultaneously engage in continuous improvement and systemic innovation. In
the meantime there is an enormous ‘system gap’, regardless of the source of innovations.
In this report we found that breaking out the innovation by primary mode of transmission,
school-based or technology–enabled, was helpful to illuminate the complexity of the
system change challenge. School–based innovations are often transformative, whole school
design models that reconceptualise the entire learning environment and learning day, for
example, School of One, Rocketship and Carpe Diem School. In each example, the school
day, the role of the teacher, the use of technology and the method of pedagogy have been
rethought completely. School–based innovations often have a highly technical component,
for example, School of One’s learning ‘playlist’ which creates a learning schedule for
each student on a daily basis, taking into account a student’s mastery of the material, the
available resources to deliver learning and a student’s learning preferences. School–based
innovations are often confined to a few select schools that are run entirely within the same
management structure, like Rocketship.
The other category is technology–enabled innovations, usually distributed via the
Internet or through proprietary software. These innovations include technology tools
such as learning management platforms, online adaptive learning paths, video lecture
sets, school data management software and a connected social network for sharing
ideas. Khan Academy, LearnZillion and Edmodo are notable in this category. Technology
innovations often master one specific aspect of learning or the school (e.g., a grades 3–6
maths curriculum, after–school study help, digital take–home reading materials, or student
behaviour management).
While both the school–based and technology–enabled innovations are potentially powerful,
neither has shown the ability to embed into all, or even most, aspects of a school and to
spread across all schools. But this is what is needed to truly change the whole system.
We believe the diagram below helps clarify what we mean. The horizontal axis represents
the degree of embeddedness, or the extent to which the digital innovation is fully
implemented and effectively absorbed by teachers and students. It shows how embedded
the intervention is in all the elements of learning – assessments, content, curriculum,
communication tools, collaboration spaces, report cards, teacher development and learning
platforms. The vertical axis is the representation of scale – or the impact of the innovation
on the number of schools and students.
Figure 5: Innovation system gaps
[Diagram: a matrix with embeddedness on the horizontal axis and scale on the vertical
axis. Technology–enabled innovations achieve scale but not embeddedness; school–based
innovations achieve embeddedness but not scale; innovations low on both have no impact;
transformation requires both.]
Source: Barber and Donnelly
More often than not, with school-based digital innovations, the problem is scale. Schools
are accountable for student outcomes; they can often control the basic infrastructure
and the professional development of their teachers. However, systemic change is a big
challenge. How does one school scale to five schools and, most importantly, how does it
scale to 500 or 5,000? We have yet to come across a strong example of a technologically
related school-based innovation that has scaled beyond an initial pilot with much success.
Scale across schools is usually a strong suit for technology-based innovations as tools
are often open and accessible on the Internet. Teachers looking for solutions pick up
the tools and use them in their classrooms and recommend them to colleagues so they
gain early traction. However, there is an open question about drop-off rates. Several
of these innovations, namely Khan Academy and LearnZillion, have begun class-based
implementations but these trials are still early stage. The iterative improvement and
implementation process is just beginning and thus difficult to evaluate.
Technology–enabled innovations have a different problem, mainly pedagogy and
outcomes. Many of the innovations, particularly those that provide online content and
learning materials, use basic pedagogy – most often in the form of introducing concepts by
video instruction and following up with a series of progression exercises and tests. Other
digital innovations are simply tools that allow teachers to do the same age-old practices
but in a digital format. Examples include blog entries instead of written journals and
worksheets in online form. While these innovations may be an incremental improvement
such that there is less cost, minor classroom efficiency and general modernisation, they do
not, by themselves, change the pedagogical practice of the teachers or the schools.
These early technology–enabled innovations are usually not directly accountable for
student learning outcomes. They often cannot quantify the impact of their products and it
is difficult to know, for sure, what impact they are having at the classroom level. Edmodo
is a good example of highly scalable technology but with less evidence of efficacy. The
platform has achieved tremendous scale with 17 million users by February 2013 but it lacks
access to student performance data, so measuring impact on learning is nearly impossible.
4. Navigating the swamp
New digital creatures are being born every day so the swamp is teeming with life. We
suggest six ideas for navigating the swamp.
Recommendation one: Use our Index
There are only three main components and nine subcomponents in total. You can thus
readily size up the situation. In short, work to maximise the integration of technology,
pedagogy, and systemness.
Recommendation two: Lead with pedagogy
Work on clarifying the relative roles of teachers and students. Define the pedagogical
partnership. Work on clarity and precision of the roles and the evidence relative to
impact on learning. Put pedagogy in the driver’s seat and use technology to accelerate
learning relative to particular learning outcomes.
Recommendation three: Develop capacity with respect to system support
This includes implementation assistance, leadership (especially at the school level)
and assessment and use of evidence on student learning. Ensure the support is
comprehensive, integrated and relentless.
Recommendation four: Focus on scale and embeddedness
There are plenty of boutique schools and countless ‘learning’ apps and web–based
tutorial videos. Don’t get distracted by shiny, seemingly glamorous gadgets. Spend
time and resources on the innovations that will be truly system transformative.
Recommendation five: Be open to surprises
The innovation field is dynamic. We are at the beginning of a stage of disruptive
innovations where new ideas are being spawned almost daily. Therefore, treat the next
period as a cycle of continuous improvement where one needs to simultaneously focus
on quality implementation and openness to new ideas. Take a portfolio approach to
innovation; allow for failure. As Barber, Donnelly and Rizvi point out, whole system reform
plus systemic innovation can unleash the whole system revolution.
[Diagram: whole system reform plus systemic innovation produces the Whole System
Revolution. Whole system reform comprises three strands:
Standards & accountability – globally benchmarked standards; data and accountability;
every child on the agenda.
Human capital – recruit great people and train them well; continuous development of
pedagogical skills; great school leadership.
Structure & organisation – effective central departments; capacity to manage change and
engage communities; operations and budgets devolved to school.]
Source: Barber, Donnelly and Rizvi (2013)
Recommendation six: Clarify what it means to be a learner in the 21st Century
Be specific. Two examples of succinct, powerful and overlapping formulae come from
Barber, and from Marc Prensky (personal communication). Michael Barber offers the
following mathematical equation:
Well-educated = E(K + T + L)

where:
E = the ethical educated person
K = knowledge
T = thinking or thought
L = leadership
Similarly, Prensky offers:

eTARA: Effective Thinking : Effective Action : Effective Relationships : Effective Accomplishment
Note in both cases the powerful conciseness, the emphasis on action and the necessity
of leadership on the part of all educated citizens: leadership is becoming a feature of
all educated people of the future.
In sum, the swamp is not all that complicated. We don’t need a thesaurus of 21st century
skills. We need a core sense of what it means to be educated, and we need to steer our
way through the learning swamp as we go. We need constant monitoring, a focus on driving
outcomes for learners, and a focus on implementation. Keep it simple, keep it focused and
keep learning. We do believe that 2013 signals an explosion of innovation in digital learning.
Our intention has been to provide a tool to size up the possibilities and help people
navigate the swamp.
Appendix A: What green and red look like

Score card: Innovation Index

For each criteria area below, the score card records a rating and a rationale summary.

Pedagogy
• Clarity and quality of intended outcome
• Quality of pedagogy and relationship between teacher and learner
• Quality of assessment platform and functioning

System change
• Implementation support
• Value for money
• Whole system change potential

Technology
• Quality of user experience/model design
• Ease of adaptation
• Comprehensiveness and integration

GREEN: Good – likely to succeed and produce transformative outcomes
AMBER GREEN: Mixed – some aspects are solid; a few aspects fall short of full potential
AMBER RED: Problematic – requires substantial attention; some portions have gaps and need improvement
RED: Off track – unlikely to succeed
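The Index itself is qualitative, but for readers who track their assessments in a script or spreadsheet, the grouping of the nine subcomponents under the three components can be captured as a small data structure. This is purely an illustrative sketch: the numeric mapping of RAG ratings and the averaging rule are our assumptions for illustration, not part of the authors’ Index.

```python
# Illustrative sketch only: a toy encoding of the Innovation Index score card.
# The numeric values assigned to the RAG ratings are an assumption, not the
# authors' method; the Index is applied qualitatively in the report.

RATINGS = {"GREEN": 3, "AMBER GREEN": 2, "AMBER RED": 1, "RED": 0}

INDEX = {
    "Pedagogy": [
        "Clarity and quality of intended outcome",
        "Quality of pedagogy and relationship between teacher and learner",
        "Quality of assessment platform and functioning",
    ],
    "System change": [
        "Implementation support",
        "Value for money",
        "Whole system change potential",
    ],
    "Technology": [
        "Quality of user experience/model design",
        "Ease of adaptation",
        "Comprehensiveness and integration",
    ],
}

def summarise(scores):
    """Average the RAG ratings of each component's subcomponents.

    scores maps each subcomponent name to one of the RATINGS keys.
    """
    summary = {}
    for component, subcomponents in INDEX.items():
        values = [RATINGS[scores[s]] for s in subcomponents]
        summary[component] = sum(values) / len(values)
    return summary
```

For example, an innovation rated GREEN on every subcomponent would average 3.0 on all three components; a weak system-change story pulls that component's average down even when pedagogy and technology score well.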
Index criteria – Pedagogy (1/3): Clarity and quality of learning outcome goals

Definition
• How clearly are the learning outcomes of the innovation defined?
• Are the learning outcomes explicit and defined for the school, the student, the parents and the school system?
• Is the clarity of outcome shared by student, teacher, parent, school and school system?

What green looks like
• Each activity, overall lesson and broader course has clear, quantified outcomes.
• The innovation is able to demonstrate strong benefits for customers and/or students.
• The learning outcomes are communicated effectively within the school and to parents, teachers and students.
• Trajectories are produced for key outcomes. Where possible, modelling is used to quantify the impact on key national indicators of student attainment.
• The student and the teacher are both clear about the success criteria in relation to their learning outcomes (best practice in Ontario).

What red looks like
• Exercises and modules do not have specific outcomes identified.
• Outcomes identified lack specificity. There is insufficient linkage to key leading indicators and a lack of clear trajectories.
• School staff are unaware of the impact of the innovation and the benefits to students.
• There is no tracking or monitoring system to ensure learning is taking place and adaptations are made.
Index criteria – Pedagogy (2/3): Pedagogy

Definition
• How refined is the pedagogical underpinning? Does the pedagogy reflect the latest global research, including the emphasis on constructivism and real-world examples?
• Do students learn through enquiry?
• Is there a mechanism to ensure the pedagogy is updated?
• How is the teacher’s role defined? Is the role reflective of a ‘teacher as activator’ relationship? Can teachers effectively manage all the students?
• How is the student engaged? Is there a student–teacher partnership? Does the model include an emphasis on the necessary psychological and intellectual processes?

What green looks like
• Pedagogy reflects the most advanced and proven techniques to date. The theory of learning is stated explicitly in the technology, the model design and the training of teachers. The model includes a view of the teacher as a change agent.
• Problems and questions are placed in real-world contexts and the emphasis is on intellectual risk-taking and trial-and-error problem solving.
• There is a healthy partnership between the student and teacher. The teacher’s role is defined as activator of learning and centred on the learner. Teachers are open to alternatives, seek evidence and are adaptable when new evidence is raised.
• Students are engaged through enquiry; learning is personalised and directed at unlocking the passion of the learner. Students feel supported by the teacher.
• Learners are supported psychologically, and teachers are trained to focus on the personal experience of the individual student and to help uncover values and motivations.

What red looks like
• Pedagogy lacks innovation and current thinking and is often the traditional rote method of ‘tell and test’ or ‘experience and evoke’. The theory of learning is difficult to detect and incoherent.
• Pedagogy is not consistent across technology, teachers and school model.
• There is a tension between the student and teacher based on misaligned incentives and teaching style.
• Teachers are set on teaching using a method that works for them but lacks an evidence base.
• Students are ‘taught to’; curriculum and school design are imposed. Students feel unsupported by school or teacher.
Index criteria – Pedagogy (3/3): Quality of assessment platform

Definition
• What is the quality of the built-in assessment systems? Is it adaptive? Does it include an optimal amount of detail?
• Is it clear how the outcomes will be measured?
• How does the teacher use the assessment system to motivate and activate the learner?
• How does the student use the assessment system to monitor and motivate his or her own learning?

What green looks like
• The assessment system is completely adaptive, interactive and integrated seamlessly into the innovation. In the best cases, the student does not realise they are being assessed.
• Both formative and summative assessments are in place. Assessment shows every stakeholder (students, teachers and parents) an optimal level of detail.
• Assessments measure student activity and behaviour, not just outputs.
• The assessment system is integral to learner engagement and their sense of accomplishment.
• The teacher uses the assessment system to motivate the learner, helping them feel small daily accomplishments while also appropriately targeting areas for further refinement.

What red looks like
• Measurement of success is unclear or non-existent. The assessment system is not robust and, in some circumstances, is misleading.
• Teachers use the assessment system as a simplistic way of grading students and do not integrate the results into student learning.
• Students do not have access to the results and are unable to monitor progress.
• The system focuses only on rigid outputs and teachers teach to the test.
Index criteria – System change (1/3): Implementation support

Definition
• What is the nature of the implementation support provided?
• What support is provided to ensure the technology functions (all parts, including software, hardware, maintenance, electricity and connectivity)?
• How long is the implementation support or servicing in place for?
• Is the support based on a culture of learning, risk-taking and learning from mistakes?
• Does the innovation include teacher training and professional development? Are teacher development goals explicit? Is there appropriate follow-up and mentoring?

What green looks like
• The innovation service/product team provides full implementation support that acts in constant partnership and dialogue with the school system.
• Technology support is timely and effective on all aspects.
• The innovation includes professional development to ensure the change is embedded in the teaching force. The innovation sees teachers as vital change agents and integral to the success of the innovation.
• Professional development focuses on the teacher learning to know the impact he or she has on every student; to provide feedback that assists students to progress; and to master the motivation of students.
• Teachers understand that the training is a necessary investment in their career development and success.
• Professional development is constantly monitored and added on an as-needed basis. The focus is on capacity building through rapid learning cycles, fast feedback, continual reflection and good coaching.

What red looks like
• The innovation is dropped into the school and implementation is up to the school system to figure out.
• There is no technology support, or the support lacks timeliness and reliability.
• There is limited to no professional development for teachers. Teachers are unsure of their goals. Students and teachers do not understand the model and have no forum to ask questions.
• There are too many programmes and innovations being implemented at once.
Index criteria – System change (2/3): Value for money

Definition
• Are there overall school cost savings realised by the innovation?
• Is the product of sufficient value, demonstrated by learning outcomes, to justify change?
• How expensive is the product or design change itself?
• Are there hidden costs such as infrastructure upgrades?
• Does the product accelerate learning?

What green looks like
• From a school perspective, the innovation produces twice the learning outcome, or more, for half the cost of previous methods.
• The school and learner are significantly better off with the innovation and would actively choose to allocate scarce resources towards its purchase.

What red looks like
• The innovation adds excess cost to the learner and the school.
• The value in learning is not demonstrable and it is difficult to tell how the learner will be positively impacted by the innovation. The end consumer of the innovation would not allocate resources to purchase it.
• There are hidden implementation and upgrade costs.
Index criteria – System change (3/3): Whole system change potential

Definition
• Does this innovation have the ability to scale system-wide?
• How is the innovation implemented across the whole system? Is there a plan for scale based on world-leading change knowledge?
• Will clusters of schools learn from each other? Are developments developed and scaled laterally?
• Is capacity building a central component of the strategy?
• Are teachers and schools learning together?

What green looks like
• The innovation scales virally to schools throughout the system. Little central management is necessary to ensure the innovation is embedded and maintained.
• Clusters of schools learn from each other and continue to build and improve with the innovative product/service.
• New developments in the innovative product/service are made collectively and the learning is done collaboratively.
• The innovation implementation team has a robust understanding of ‘simplexity’ and sticks to its priorities as it strives for scale.

What red looks like
• The innovation is difficult to scale throughout the system. Burdensome management systems are necessary to maintain, scale and embed the innovation.
• Innovation is directed entirely top down; schools are told how to learn new developments. Feedback from other schools is not considered or used.
• The district and school system is overburdened with too many priorities and distracted away from core goals.
Index criteria – Technology (1/3): Quality of user experience and model design

Definition
• How is the technology for the user? Is it easy to use and intuitive? Is it irresistibly engaging and elegantly efficient?
• Does the technology incorporate the latest design principles for user experience?

What green looks like
• The technology is irresistibly engaging for the learner. The interface is well-designed using the most current formatting and based on data analysis of learner needs and effectiveness.
• The learner has little difficulty learning different functions, as the tools are intuitive and questions are answered in an easily navigable help section.
• Digital tools are participatory, engaging, co-creative and collaborative.
• The innovation contains gamification elements that capture a sense of engagement and maintain students’ interest (linked to the assessment system).

What red looks like
• The technology lacks engagement for the learner. The user experience feels heavily dated and there are frequent stop points, downloads and interruptions. The learner and user cannot find the applications and tools easily.
• The modules feel clunky and not fully integrated.
Index criteria – Technology (2/3): Ease of adaptation

Definition
• Is the technology adaptable? Is it highly connective?
• Can the technology be accessed on any device? Can it be accessed any time?

What green looks like
• The technology is highly connected to the Internet, allowing for real-time adaptation of the programme to the learner and access to the resources of the Internet when appropriate.
• The technology is easily accessible from any device; users only need to enter security information once; and information is stored in the cloud for 24/7 access.

What red looks like
• The technology is disconnected from the Internet and is unable to be adapted in real time. There is little to no communication between devices, terminals and users. Access to resources is limited to those immediately within the programme.
Index criteria – Technology (3/3): Comprehensiveness and integration

Definition
• Is the technology integrated and seamless?
• Is the content comprehensive?
• Is the assessment system integrated into the pedagogy and curriculum?
• Is 24/7 access and learning enabled?

What green looks like
• All elements of the technology, pedagogy and system change knowledge are integrated. Teachers feel very confident that they can integrate the technology into the classroom and understand how it functions.
• The innovation is technologically ubiquitous and every learner has equal access and opportunity.
• The technology is embedded in the school day and enables teacher interactions with the students. The technology contains a comprehensive set of materials and supporting learning mechanisms, e.g. assessment.
• The assessment platform is seamlessly integrated into the technology, and any content and curricula are personalised based on student performance.

What red looks like
• The elements of technology and pedagogy do not complement each other and there is often friction. Users must frequently log into separate systems and terminals that do not interact.
• The technology is crudely added into the school day and teachers all adapt to the technology in their own way. The content and assessment are split separately and often unaligned.
References
Amabile, T. and Kramer, S. (2011) ‘The Progress Principle: Using Small Wins to Ignite Joy, Engagement
and Creativity at Work.’ Boston: Harvard Business Review Press.
Barber, M. (2009) ‘Impossible and necessary. Are you ready for this?’ London: Nesta.
Barber, M., Donnelly, K. and Rizvi, S. (2012) ‘Oceans of Innovation: The Atlantic, the Pacific, Global
Leadership and the Future of Education.’ London: Institute for Public Policy Research (IPPR).
Available from: http://www.ippr.org/publication/55/9543/oceans-of-innovation-the-atlantic-the-pacific-global-leadership-and-the-future-of-education
Barber, M., Moffit, A. and Khin, P. (2010) ‘Deliverology 101: A field guide for educational leaders.’
Thousand Oaks CA: Corwin.
Fullan, M. (2013) ‘Stratosphere: Integrating technology, pedagogy, and change knowledge.’ Toronto: Pearson.
Hargreaves, A. and Fullan, M. (2012) ‘Professional capital: Transforming teaching in every school.’ New
York: Teachers College Press.
Havens, D. (2012) ‘Innovation and Entrepreneurship in Education: A closer look at K12 edtech
venture funding part II.’ NewSchools Ventures blog. 20 December 2012. Available from: http://www.
newschools.org/blog/closer-look
Higgins, S., Xiao, Z. and Katsipataki, M. (2012) ‘The Impact of Digital Technology on Learning: A
Summary for the Education Endowment Foundation.’ Durham: Durham University.
Laurillard, D. (2012) ‘Teaching as design science.’ London: Routledge.
Luckin, R., Bligh, B., Manches, A., Ainsworth, S., Crook, C. and Noss, R. (2012) ‘Decoding learning: The
proof and promise of digital education.’ London: Nesta. Available from: http://www.nesta.org.uk/
areas_of_work/public_services_lab/digital_education/assets/features/decoding_learning_report
Mulgan, G. and Leadbeater, C. (2013) ‘Systems Innovation.’ London: Nesta. Available from: http://www.
nesta.org.uk/library/documents/Systemsinnovationv8.pdf
National Research Council (2012) ‘Education for life and work: Developing transferable knowledge
and skills in the 21st Century.’ Washington DC: The National Academies Press. Available at: http://
www7.national-academies.org/bota/Education_for_Life_and_Work_report_brief.pdf
Rincon-Gallardo, S. and Elmore, R. (2012) ‘Transforming teaching and learning through social
movement in Mexican public middle schools.’ ‘Harvard Educational Review.’
Vander Ark, T. (2012) ‘Getting smart: How digital learning is changing the world.’ San Francisco: John
Wiley.
Vander Ark, T. and Schneider, C. (2012) How digital learning contributes to deeper learning. Available
at: www.GettingSmart.com
Nesta
1 Plough Place
London EC4A 1DE
research@nesta.org.uk
www.twitter.com/nesta_uk
www.facebook.com/nesta.uk
www.nesta.org.uk
July 2013
Nesta Operating Company. Registered company no. 7706036. English charity no. 1144091.
Scottish charity no. SC042833.