Praslova Assessment of Long-Term Outcomes Poster Handout

Transcription

Assessment of long-term educational outcomes: Insights from Kirkpatrick's four-level model of training criteria
Ludmila N. Praslova
Vanguard University of Southern California
lpraslova@vanguard.edu
Accrediting organizations, governments, and employers increasingly emphasize the long-term outcomes of education and expect institutions of Higher Education to prepare students for the labor force through development of relevant competencies, while also preparing them for civic participation (Geary Schneider, 2012; Toutkoushian, 2005; Voorhees & Harvey, 2005). Documenting student achievement at the individual and societal levels is a complex task.
Reactions are easily measured with indirect methods of assessment but are considered insufficient for evaluation of outcomes (Kirkpatrick, 1959, 1976, 1996). Institutions are making progress in assessing learning with direct methods. Behavior and especially results criteria, while extremely important, are very difficult to conceptualize and evaluate.
The classic model used for evaluation of training in business organizations, Kirkpatrick's four-level model of training criteria (Kirkpatrick, 1959, 1976, 1996), might provide a versatile framework for assessment in Higher Education (Praslova, 2010).

The four levels of evaluation in Kirkpatrick's model are reaction criteria, learning criteria, behavior criteria, and results criteria.

While organizations focus on organizational outcomes, current evaluation of Higher Education effectiveness is often focused on both societal and individual outcomes, e.g., individual Return on Investment (ROI) (Gallup/Purdue, 2014; Chronicle of Higher Education, 2015).
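For context, individual ROI in this literature is usually defined as the earnings premium attributable to the degree relative to its cost. The following generic formulation is an illustration and is not taken from the poster:

\[
\mathrm{ROI} = \frac{\text{(lifetime earnings premium of the degree)} - \text{(cost of education)}}{\text{(cost of education)}}
\]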
Criteria in organizations and in Higher Education:

Reaction (internal to program)
  In organizations: trainee affective reactions & utility judgments.
  In Higher Education: student affective reactions & utility judgments; indirect assessment/student opinion surveys.

Learning (internal to program)
  In organizations: direct measures of learning outcomes, typically knowledge tests or performance tasks.
  In Higher Education: direct assessment of learning outcomes (tests, papers, performance tasks, other graded work, national tests).

Behavior (external to program)
  In organizations: measures of actual on-the-job performance: supervisor ratings or objective indicators of performance/job outputs.
  In Higher Education: beyond the course or program level: evidence of student use of knowledge and skills learned early in the program in subsequent work, e.g., research projects or creative productions; application of learning during internships, through community involvement, and in the workplace; other behaviors outside the context in which the initial learning occurred; related documentation.

Results (external to program)
  In organizations: productivity gains, increased customer satisfaction, employee morale (for management training), profit value gained by the organization.
  In Higher Education: individual level/societal level: alumni career success, graduate school admission, service to society, personal stability, fulfillment, etc. Alumni surveys, employer feedback, samples of scholarly or artistic accomplishments, notices of awards, recognition of service, etc.

Table 1; in part based on Praslova, 2010.
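As a hypothetical sketch (not part of the poster), Table 1 can be read as a small data model for cataloging a program's assessment evidence by level. The level names below follow the table; the example records are invented:

# Hypothetical sketch: cataloging assessment evidence by Kirkpatrick level.
# Level names follow Table 1; the example records are invented.
from collections import defaultdict

KIRKPATRICK_LEVELS = {
    "reaction": "internal to program",   # student affective reactions, opinion surveys
    "learning": "internal to program",   # direct assessment of learning outcomes
    "behavior": "external to program",   # use of knowledge/skills beyond the course
    "results": "external to program",    # individual- and societal-level outcomes
}

evidence = defaultdict(list)

def add_evidence(level, description):
    """Record one piece of assessment evidence under a Kirkpatrick level."""
    if level not in KIRKPATRICK_LEVELS:
        raise ValueError(f"Unknown level: {level}")
    evidence[level].append(description)

add_evidence("reaction", "End-of-course student opinion survey, fall cohort")
add_evidence("learning", "Capstone research paper scored with program rubric")
add_evidence("behavior", "Supervisor evaluation from internship placement")
add_evidence("results", "Alumni survey: graduate school admission, career outcomes")

for level, scope in KIRKPATRICK_LEVELS.items():
    print(f"{level} ({scope}): {len(evidence[level])} item(s)")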
Comparison between evaluation of training in organizations and in Higher Education suggests several important questions to consider in evaluation of results-level criteria:

1) Are individual-level vs. group-level results the appropriate level of analysis? If both, then what is the appropriate model to account for interactions between these?
2) How does behavior translate into results, and how can the learning process best influence behavior?
3) Do measures such as jobs, ROI, and earnings tap into the results level as intended by their proponents?
4) Is national-level productivity an appropriate outcome to assess?
5) How much variance in individual outcomes is explained by education, individual characteristics, individual behavior, and contextual influences? (See the sketch after this list.)
6) How much variance in societal outcomes is explained by education vs. contextual influences of the national political and economic model and global trends?
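Question 5 is, at heart, a variance-decomposition problem. Below is a minimal sketch of one way it might be approached, using hierarchical regression with incremental R-squared on synthetic data; all variables and coefficients are invented for illustration:

# Minimal sketch for question 5: incremental variance (R^2) explained by
# successive blocks of predictors, on synthetic data. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
education = rng.normal(size=n)    # e.g., years of schooling (standardized)
ability = rng.normal(size=n)      # individual characteristics
engagement = rng.normal(size=n)   # individual behavior
context = rng.normal(size=n)      # contextual influences
outcome = (0.4 * education + 0.3 * ability
           + 0.2 * engagement + 0.3 * context
           + rng.normal(scale=1.0, size=n))

def r_squared(X, y):
    """R^2 from an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

blocks = [("education", education),
          ("individual characteristics", ability),
          ("individual behavior", engagement),
          ("context", context)]
prev_r2, cols = 0.0, []
for name, col in blocks:
    cols.append(col)
    r2 = r_squared(np.column_stack(cols), outcome)
    print(f"+ {name}: R^2 = {r2:.3f} (increment {r2 - prev_r2:.3f})")
    prev_r2 = r2

Note that the increments depend on the order in which blocks are entered whenever predictors are correlated, which is part of what makes questions 5 and 6 difficult in practice.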
Careful consideration of these questions should benefit attempts to evaluate the effectiveness of Higher Education.
References:
Gallup/Purdue (2014). Great Jobs, Great Lives. The 2014 Gallup-Purdue Index Report.
Geary Schneider, C. (2012). Moving Democratic Learning from the Margins to the Core. Remarks Delivered at the White House.
http://www.aacu.org/press_room/cgs_perspectives/documents/MovingDemocraticLearningfromtheMarginstotheCore.pdf
Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 3–9.
Kirkpatrick, D. L. (1976). Evaluation of training. In R. L. Craig (Ed.), Training and development handbook: A guide to human resource
development (2nd ed., pp. 301–319). New York: McGraw-Hill.
Kirkpatrick, D. L. (1996). Invited reaction: Reaction to Holton article. Human Resource Development Quarterly, 7, 23–25.
Praslova, L. (2010). Adaptation of Kirkpatrick's four level model of training criteria to assessment of learning outcomes and program evaluation in Higher Education. Educational Assessment, Evaluation and Accountability, 22, 215–225. doi:10.1007/s11092-010-9098-7
Toutkoushian, R. K. (2005). What can institutional research do to help colleges meet the workforce needs of states and nations?
Research in Higher Education, 46, 955–984. doi:10.1007/s11162-005-6935-5.
Voorhees, R. A., & Harvey, L. (2005). Higher education and workforce development: a strategic role for institutional research. New
Directions for Institutional Research, 128, 5–12.