full issue - The University of the West Indies, St
Caribbean Teaching Scholar
Volume 5 Number 1 April 2015
ERA EDUCATIONAL RESEARCH ASSOCIATION

CONTENTS
• Editorial 1
  Anna-May Edwards-Henry
• An investigation of the influence of teacher variables on pre-training efficacy beliefs 5
  Madgerie Jameson-Charles and Sharon Jaggernauth
• Pre-testing as an indicator of performance in final examinations among third year medical students 25
  Sehlule Vuma, Bidyadhar Sa and Samuel Ramsewak
• A retrospective co-relational analysis of third year MBBS students’ performance in different modalities of assessment in haematology and final integrated examinations 37
  Sehlule Vuma, Bidyadhar Sa and Samuel Ramsewak
• Understanding practical engagement: Perspectives of undergraduate civil engineering students who actively engage with laboratory practicals 47
  Althea Richardson and Erik Blair

Caribbean Teaching Scholar Vol. 5, No. 1, April 2015, 1–3
ERA EDUCATIONAL RESEARCH ASSOCIATION

Editorial

The first issue of Volume 5 of the Caribbean Teaching Scholar features articles that explore teacher efficacy through pre-training beliefs; the predictive value of pre-test scores and the impact of different assessment modalities on the overall performance of third year medical students; and the perspectives of selected engineering students in relation to their laboratory engagement.

The first article, which explores the influence of teacher variables on their pre-training efficacy beliefs, is a submission by Madgerie Jameson-Charles and Sharon Jaggernauth of the School of Education, The University of the West Indies, St. Augustine, Trinidad and Tobago. These researchers sought to gain insight into the efficacy perceptions of untrained teachers in the in-service Diploma in Education programme. Using two cohorts of students, they investigated the differences in perceptions of efficacy of in-service teachers with respect to variables that included gender, area of specialisation, age and years of teaching experience.
The researchers used a conventional measurement scale to collect data following a lecture on teacher efficacy. The results suggest that classroom management efficacy beliefs had the lowest mean for both groups. Statistically significant differences in efficacy beliefs among curriculum specialisations were reported, with mathematics, science and modern language teachers’ efficacy lower than that of other curriculum areas. According to the data, there were significant differences in perceptions of efficacy based on the age of the teachers and their years of teaching experience. Results are discussed in terms of the factors that may affect teaching efficacy and how to maximise the efficacy of teachers.

Researchers from the Faculty of Medical Sciences, The University of the West Indies, St Augustine, Trinidad and Tobago, contributed two articles to this issue of the journal. One article focused on the relationship between the pre-test performance of third year medical students and their performance in the final examinations. The other examined the relationship between assessment in selected modalities and the final integrated examinations. The researchers were Sehlule Vuma, Department of Para-clinical Sciences; Bidyadhar Sa, Centre for Medical Sciences Education; and Samuel Ramsewak, Faculty Dean. In their first article they contend that diagnostic pre-testing is a valuable tool to identify gaps in knowledge among medical students, to determine teaching requirements, and to direct teaching programmes to take corrective measures. Additionally, they felt that pre-test scores may predict student performance in final examinations, and this was the focus of their study. The researchers performed a retrospective descriptive correlational analysis of the performance of third year medical students who completed a diagnostic pre-test at the commencement of their third year in the programme.
The pre-test proved to be a good predictor of final examination results. Significant correlation was found between the pre-test and various final examination elements. It was concluded that the pre-test grade is a reliable indicator of performance in the final examinations, although it appeared that modified interventions need to be employed to encourage more individual student engagement.

Building on the work highlighted in their first article, in terms of the pre-test predicting performance in final examination components, the researchers also tried to determine any relationship between and among performance in selected modalities and the final integrated examinations. They reported on an analysis of correlations in students’ performance in different modalities of assessment in haematology and multi-specialty (anatomical pathology, chemical pathology, haematology, immunology, microbiology and pharmacology) final integrated examinations. Medical educators generally believe that proper alignment between learning objectives, modes of delivery and assessment modalities is a key factor in shaping the desired outcomes. They also suggest that it is equally important that modalities of assessment are in concurrence among themselves within the assessment framework. Their analysis of the medical students’ performance in different assessment modalities in five courses found positive correlations amongst all haematology components as well as the final integrated examination. The continuous assessment elements had the strongest correlations with the total haematology component. Their analysis showed that combinations of multiple modes of assessment are important for adequate and fair assessment of knowledge and skill, and that continuous assessment encourages students to work consistently throughout the course.
In the final article of this issue, Althea Richardson, Department of Civil & Environmental Engineering, The University of the West Indies, St. Augustine, Trinidad and Tobago, and Erik Blair, Centre for Excellence in Teaching and Learning, The University of the West Indies, St. Augustine, Trinidad and Tobago, sought to gain insight into the practical engagement of selected students. Specifically, they attempted to determine the perspectives of undergraduate civil engineering students who actively engage with laboratory practicals.

Underpinning their study were two key philosophies. The first is that a major goal of engineering education is to produce graduates who have the appropriate level of cognitive development to allow them to manipulate processes, solve problems and produce new knowledge. The second is that teaching and learning using laboratory practicals is an approach that is designed to engage students in the investigation of real-world problems. The researchers hold that the critical role of laboratory work follows from the fact that engineering is an applied science, which requires that students attain some level of hands-on skill in their chosen discipline. However, not all students see the benefit of such hands-on experiences. In the present study, Kolb’s experiential learning theory was used to assess the experiences of undergraduate civil engineering students at The University of the West Indies, St Augustine, Trinidad and Tobago who had elected to undertake laboratory-based projects. The participants in this study represent a minority group within their particular cohort, as they actively engaged with practical laboratory activity. This study determined that the ‘engaged’ students showed a lack of deliberation around laboratory work as a learning experience in itself, but valued its worth as a means of developing their technical skills.
For these students, it was not the laboratory practical per se that was their focus, nor was there any focus on the improvement of laboratory skills so that they could be more technically equipped. Rather, practicals were used as a means of increasing their future employability through the enhancement of skills relating to project management, decision making and time management.

The articles in this issue highlight the need for understanding teachers’ professional site as a means of understanding their practice; the insights that can be gained by analysing assessment data; and the underlying issue of how students engage to create meaningful learning.

Anna-May Edwards-Henry
Executive Editor

Caribbean Teaching Scholar Vol. 5, No. 1, April 2015, 5–24
ERA EDUCATIONAL RESEARCH ASSOCIATION

An investigation of the influence of teacher variables on pre-training efficacy beliefs

Madgerie Jameson-Charles and Sharon Jaggernauth
School of Education, The University of the West Indies, St. Augustine, Trinidad and Tobago

This study investigated efficacy perceptions of untrained in-service Diploma in Education teachers. Two cohorts of students (2011-2012 and 2012-2013) were studied to determine whether perceptions of efficacy for in-service teachers (n=326) differed by (a) gender, (b) area of specialisation, (c) age and/or (d) years of teaching experience. The Teacher Efficacy Scale (long form) (Tschannen-Moran & Woolfolk Hoy, 2001) was the data collection instrument, and was administered during the first week of training, immediately following a lecture on teacher efficacy. The results suggest that classroom management efficacy beliefs had the lowest mean for both groups. Statistically significant differences in efficacy beliefs among curriculum specialisations were reported, with mathematics, science and modern language teachers’ efficacy lower than that of other curriculum areas.
There were statistically significant differences in perceptions of efficacy based on the age of the teachers: younger teachers’ (20-30 years) perceptions of efficacy were significantly lower than those of older teachers (41-59 years). There were also statistically significant differences in perceptions of efficacy based on years of teaching experience. Results are discussed in terms of the factors that may affect teaching efficacy and how to maximise the efficacy of teachers.

Key words: teacher efficacy, in-service secondary teachers, teacher-training

Introduction

‘I think I can, I think I can, I think I can… the little engine climbed until at last they reached the top of the mountain’ (Piper, 1930, pp. 18-21). This excerpt from ‘The Little Engine That Could’ embodies the little engine’s belief about its ability to climb the mountain even though it had never done so before. This kind of belief is referred to as self-efficacy, a construct that is used in education to explain students’ achievement (Bandura, 1977; 1997). Self-efficacy research has focused on a set of learned beliefs that individuals hold about their ‘capabilities to organize and execute the courses of action required to manage prospective situations’ (Bandura, 1997, p. 3). Research about self-efficacy suggests that individuals display high self-efficacy when they are convinced that they can accomplish a task in a given circumstance, even if it initially appears insurmountable (Snyder & Lopez, 2007). Over the last 40 years there has been extensive research into how teachers’ beliefs influence how they feel about their work (Hoy, 2004); their effort on the job; their persistence in overcoming obstacles, and their resilience in the face of failure. Teachers often mirror the beliefs of their own teachers and bring these beliefs with them into the classroom, where they influence their behaviours and decision making (Hart, Smith, Smith & Swars, 2007).
A specific set of beliefs that teachers possess is referred to as teacher efficacy, and these beliefs influence teachers’ perceptions about their ability to perform various teaching tasks ‘at a specified level of quality in a given specified situation’ (Dellinger, Bobbett, Olivier, & Ellett, 2007, p. 2). Although individual teachers develop these beliefs differently, they begin to crystallise early in their careers. Teachers with strong efficacy beliefs have positive attitudes towards their work (Gresham, 2008); dedicate more time to planning their lessons (Allinder, 1995); experiment with student-centred instructional strategies (Turner, Cruz & Papakonstantinou, 2004); better manage their classrooms, and are more committed to teaching their students (Swars, 2007). While teachers’ efficacy beliefs change over their careers, with changing contexts and exposure to professional development and teacher training (Bayraktar, 2009), teachers with low teacher efficacy tend to become less task-oriented and motivated over time, and view themselves as less competent than their peers (Bandura, 1997).

Theoretical framework

Teacher efficacy for classroom management

Teacher efficacy for classroom management refers to teachers’ perceived ability to respond to disruptive student behaviour, and to establish expectations and rules that guide classroom behaviour. Highly efficacious teachers manage their classrooms effectively (Sridhar & Javan, 2011), negotiating control with their students (Hami, Czerniak, & Lumpe, 1996), and often giving them autonomy (Ross & Gray, 2006). They rely on positive strategies such as interacting with their students, demonstrating patience, and sharing responsibility with them, rather than insisting on appropriate behaviour and resorting to punitive strategies to maintain classroom control (Henson, 2003).
Additionally, they promote positive and proactive approaches to conflict management that are mutually beneficial for them and their students (Morris-Rothschild & Brassard, 2006). Their classroom management style is a reflection of their instructional strategies (Woolfolk & Weinstein, 2006).

Teacher efficacy for instructional strategies

Teachers’ beliefs influence their choice of specific instructional activities and strategies (Bandura, 1977). Teacher efficacy for instructional strategies refers to teachers’ perceived ability to create classrooms that are conducive to learning, by making decisions about instruction that engage students in meaningful learning. Highly efficacious teachers are able to gauge students’ comprehension, and to meet students’ needs by adjusting their questions, strategies, explanations, and assessment methods. They plan lessons to provide learning experiences that promote students’ cognitive development and to develop their self-efficacy (Gibson & Dembo, 1984; Shaukat & Iqbal, 2012). They experiment widely with student-centred instructional strategies and resources (Colbeck, Cabrera, & Marine, 2002; Tschannen-Moran & Woolfolk Hoy, 2007), rather than teacher-centred strategies (Plourde, 2002; Rule & Harrell, 2006; Shaukat & Iqbal, 2012). Highly efficacious teachers also make decisions about improving their practice based on feedback from parents and administration (Tschannen-Moran & Woolfolk Hoy, 2007).

Teacher efficacy for student engagement

Teacher efficacy for student engagement refers to teachers’ perceived ability to develop and nurture relationships with their students; to motivate them to think creatively; to value learning, and to improve their understanding. Highly efficacious teachers develop relationships with their students and believe all students can learn.
They persist in their efforts to support and encourage, rather than avoid or abandon, struggling students (Woolfson & Brady, 2009). They recognise achievements rather than condemn shortcomings (Gibson & Dembo, 1984), and develop intrinsic rather than extrinsic motivation to learn (Woolfolk & Hoy, 1990). Conversely, inefficacious teachers tend to group students by ability, and spend more time with the high-ability students (Ashton, Webb, & Doda, 1983; Tschannen-Moran & Woolfolk Hoy, 2007). They avoid topics in which their content knowledge is weak (Garvis & Pendergast, 2011) and tend to avoid students’ questions (Rice & Roychoudhury, 2003).

Teacher efficacy and teacher characteristics

Research into the relationship between dimensions of teacher efficacy and teacher characteristics is ongoing and, to some extent, remains inconclusive. Some studies report that a teacher’s gender does not significantly influence their teacher efficacy (Yeo, Ang, Chong, Huan, & Quek, 2008), while others report that female teachers had stronger efficacy beliefs than males (Cheung, 2006). In contrast, Klassen and Chiu (2010) reported that male teachers held stronger efficacy beliefs than females in classroom management, but not in instructional strategies and student engagement. Edwards and Robinson (2012) associated stronger teacher efficacy beliefs with younger teachers rather than older teachers, while Tschannen-Moran and Woolfolk Hoy (2007) found no such relationship existed. Teaching experience may also strengthen teacher efficacy (Blackburn & Robinson, 2008; Tschannen-Moran & Woolfolk Hoy, 2007; Wolters & Daugherty, 2007), as teachers accrue mastery experiences and successes with students over time (Wolters & Daugherty, 2007). Wolters and Daugherty (2007) reported a small effect of teaching experience on efficacy for instructional strategies and classroom management, but not for student engagement.
Background to the current study

The current study examined the teacher efficacy beliefs of secondary school teachers in Trinidad and Tobago who were newly enrolled in an in-service Postgraduate Diploma in Education (DipEd) programme (2011-2013) of the School of Education, The University of the West Indies. The DipEd was developed in 1973 at the request of the Ministry of Education of Trinidad and Tobago to address the initial training needs of secondary school teachers. The teachers who enrol in the programme have subject content knowledge but no formal teacher training in secondary education. The programme exposes participants to the theory and practice of education, to provide ‘a solid theoretical base in the foundation disciplines, curriculum theory, and methodology … [and] the opportunity to improve their control of specific content relevant to teaching in their subject area’ (Faculty of Humanities & Education, Postgraduate Regulation & Syllabuses, 2012-13, p. 72). DipEd participants’ school and classroom practice are supervised by a faculty member, and they engage in classroom-based action research to investigate subject-specific pedagogical strategies in mathematics, science, English, modern languages, social studies, educational administration, visual and performing arts, and information technology. Teachers differ in their entry qualifications to the teaching profession, but only teachers who possess an undergraduate degree (or its equivalent) in their subject of specialisation matriculate into the government-funded DipEd. These teachers are practising secondary teachers who have been teaching for at least two years, and whose participation is completely voluntary.

Purpose of the study

Literature on teacher efficacy tends to focus on the experiences of pre-service primary school teachers.
There seems to be a dearth of research investigating the efficacy beliefs of experienced untrained secondary school teachers, and there is little evidence of teacher efficacy research from a Caribbean context. The purpose of this study was to examine in-service DipEd teachers’ efficacy beliefs across disciplines, and across the teacher demographic characteristics of age, gender and years of teaching experience. The DipEd teachers were used for this study to determine experienced untrained teachers’ efficacy beliefs prior to formal teacher training.

Research questions

The research was guided by the following three questions:
1. How strong are the relationships between teacher efficacy for classroom management, teacher efficacy for student engagement, and teacher efficacy for instructional strategies for in-service teachers?
2. What are the relationships for the in-service teachers’ rating of teacher efficacy with age, gender, years of teaching experience and curriculum major?
3. How well do gender, years of teaching experience and curriculum major predict in-service teachers’ overall teacher efficacy?

Procedure

This research was part of a larger study designed to investigate teacher efficacy before and after training. The focus of this paper is to report only on the teachers’ rating of their efficacy before training. Teachers’ perceptions of their efficacy as they relate to gender, age, years of teaching experience and curriculum major were examined. Data were collected during the first week of the DipEd immediately following a lecture on teacher self-concept and teacher efficacy. Teachers were given consent forms to review; received full briefing information, and, if they agreed, were administered the questionnaire.
They were assured of confidentiality of their responses, since only the last four digits of their student identification numbers were required to keep the pre-exposure and post-exposure to pedagogy data coordinated. They were required to complete the questionnaire during their break. One student was assigned to collect all the questionnaires and return them to the lecturer.

Participants

The entire population of students (n=400) enrolled in the DipEd programme over two years who attended the lecture on teacher self-concept and teacher efficacy was invited to participate. In total 339 questionnaires were completed and returned, reflecting a response rate of 85%. The participants were two cohorts of in-service postgraduate student teachers (Cohort 1: 2011-2012, n=157; Cohort 2: 2012-2013, n=178). They ranged in age from 20 to 59 years (35% were aged 20 to 30 years; 41% were aged 31 to 40 years; 24% were aged 41 to 59 years). There were 74 males and 263 females. Their areas of specialisation were mathematics (n=45); science (n=64); English (n=50); educational administration (n=25); visual and performing arts (n=29); social studies (n=76); modern languages (n=30); and information technology (n=16). The years of teaching experience ranged from one to 30 years. Of the participants surveyed, 9% reported that they had been teaching for more than 21 years, and 66% reported fewer than 11 years of teaching experience.

Instrumentation

The instruments used for this study consisted of a demographic questionnaire and the Teachers’ Sense of Efficacy Scale (TSES) (long form) (Tschannen-Moran & Woolfolk Hoy, 2001). The demographic questionnaire captured teachers’ age, gender, area of curriculum specialisation, and number of years of teaching experience at the secondary level.
The 24-item TSES was developed to measure teacher efficacy for classroom management (8 items); student engagement (8 items); instructional strategies (8 items), and overall teacher efficacy as a composite score of the entire scale. Each item was scored on a 9-point scale from ‘nothing’ (1) to ‘a great deal’ (9). Reliabilities were high: 0.90 for classroom management, 0.87 for student engagement, 0.91 for instructional strategies, and 0.94 for overall teacher efficacy (Tschannen-Moran & Woolfolk Hoy, 2001).

Results

Development of sub-scales

A principal component analysis (PCA), principal axis factoring method, with Varimax orthogonal rotation was conducted on the 24 items of the TSES. The Kaiser-Meyer-Olkin (KMO) measure verified the sampling adequacy for analysis, KMO = .95. Bartlett’s test for sphericity indicated that correlations between the items were sufficiently large for PCA. An initial analysis was conducted to obtain eigenvalues for each component in the data. Four components had eigenvalues over Kaiser’s criterion of 1 (eigenvalues 10.50, 1.67, 1.39 and 1.11, accounting for 43.77%, 6.95%, 5.79% and 4.62% of variance respectively) and in combination explained 61% of the variance. Items with loadings larger than .40 are presented in Table 1.

Table 1. Rotated component matrix for factor analysis (loadings above .40 across four components). The 24 TSES items were:
1. How much can you do to get through to the most difficult students?
2. How much can you do to help your students think critically?
3. How much can you do to control disruptive behaviour in the classroom?
4. How much can you do to motivate students who show low interest in school work?
5. To what extent can you make your expectations clear about student behaviour?
6. How much can you do to get students to believe they can do well in school work?
7. How well can you respond to difficult questions from your students?
8. How well can you establish routines to keep activities running smoothly?
9. How much can you do to help your students value learning?
10. How much can you gauge student comprehension of what you have taught?
11. To what extent can you craft good questions for your students?
12. How much can you do to foster student creativity?
13. How much can you do to get children to follow classroom rules?
14. How much can you do to improve the understanding of a student who is failing?
15. How much can you do to calm a student who is disruptive or noisy?
16. How well can you establish a classroom management system?
17. How much can you do to adjust your lessons to the level of individual students?
18. How much can you use a variety of assessment strategies?
19. How well can you keep a few problem students from ruining an entire lesson?
20. To what extent can you provide alternative explanations to confused students?
21. How well can you respond to defiant students?
22. How much can you assist families in helping their children do well in school?
23. How well can you implement alternative strategies in your classroom?
24. How well can you provide appropriate challenges for very capable students?

There were minor variations in the results obtained in the current study as compared with the factors demonstrated by Tschannen-Moran and Woolfolk Hoy (2001). Table 2 compares the established item distribution according to Tschannen-Moran and Woolfolk Hoy with the initial factor analysis in this study.

Table 2. Comparison of data reduction of items from current study and established scale
Teacher efficacy in classroom management — established scale: items 3, 5, 8, 13, 15, 16, 19, 21; current study: items 3, 5, 13, 15, 16, 19, 21
Teacher efficacy in student engagement — established scale: items 1, 2, 4, 6, 9, 12, 14, 22; current study: items 1, 2, 4, 6, 9, 12, 14, 22
Teacher efficacy in instructional practices — established scale: items 7, 10, 11, 17, 18, 20, 23, 24; current study: items 7, 8, 11, 17, 20
Additional factor — current study: items 10, 18, 23, 24

The items shown as a fourth ‘additional factor’ were all related to assessment, which suggests that respondents in this study did not associate assessment with instructional strategies. However, after a Varimax rotation with a restriction to three factors, all items reverted to the dimensions identified by Tschannen-Moran and Woolfolk Hoy (2001). This suggests that the three-factor structure was appropriate to represent in-service teachers’ efficacy beliefs in this research. These three factors accounted for 54.2% of variance, and were consistent with those identified by the scale’s developers. None of the 24 items yielded loading values less than .40; therefore, all were retained for analysis. Table 3 illustrates the three components in the final scale and the accounted-for variance.

Table 3. Eigenvalues and variance percentages for current study
Instructional strategies: eigenvalue 4.318; 41.922% of variance; cumulative 41.922%
Student engagement: eigenvalue 3.950; 6.569% of variance; cumulative 48.491%
Classroom management: eigenvalue 3.926; 5.752% of variance; cumulative 54.243%

Subscale reliabilities for teacher efficacy for instructional strategies (α=0.91), classroom management (α=0.86) and student engagement (α=0.90) were high. Similarly, the overall scale reliability was high (α=0.94). Table 4 compares the factor and overall reliabilities of the current study with the established scales. The overall consistency of the established scale and the current study was the same.
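The subscale and overall reliabilities reported above are Cronbach's alpha coefficients. As an illustration of how such a coefficient is obtained from a respondents-by-items score matrix, here is a minimal pure-Python sketch (the item scores below are invented toy data, not values from this study):

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items.
    """
    k = len(scores[0])
    # Sample variance of each item across respondents
    item_vars = [statistics.variance([row[j] for row in scores]) for j in range(k)]
    # Sample variance of each respondent's total score
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

# Perfectly consistent items (each item is a shifted copy of the others)
perfect = [[2, 4, 6], [4, 6, 8], [6, 8, 10], [8, 10, 12]]
print(round(cronbach_alpha(perfect), 3))   # 1.0

# Less consistent items yield a lower alpha
noisy = [[2, 1], [1, 2], [3, 3]]
print(round(cronbach_alpha(noisy), 3))     # 0.667
```

Values around .80 or higher, like those in Table 4, are conventionally taken to indicate good internal consistency.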
It is also evident that both the established scale and the current study had high factor reliabilities (greater than 0.80).

Table 4. Comparison of factor reliabilities of items from current study and established scale (Cronbach’s α)
Engagement: established scale .81; current study .87
Instruction: established scale .86; current study .91
Management: established scale .86; current study .90
Overall internal consistency: established scale .94; current study .94

Variables

In this study the dependent metric variables were overall teacher efficacy (TE), teacher efficacy for instructional strategies (IS), classroom management (CM), and student engagement (SE). The independent categorical variables were cohort group (cohort), gender, age, years of teaching experience, and curriculum area of specialisation (major).

Data analysis

Prior to analysis, data were screened for accuracy, completeness, consistency, and reasonableness, to ensure that inferences premised on them were reliable and valid. Statistical analyses using IBM SPSS 20 included descriptive statistics, tests of normality, tests of association, and tests of the underlying assumptions of statistical tests. There were few missing completely at random (MCAR) data points, for which series mean values were imputed by SPSS. Outliers were found to lie within 3 SD of the mean, and were not deleted. Variables were found to be approximately normally distributed, and acceptable skewness and kurtosis eliminated the need for transformation.

Question 1: How strong are the relationships between efficacy in classroom management, efficacy in student engagement and efficacy in instructional practices?

Preliminary analyses revealed no violation of the underlying assumptions of normality, linearity and homoscedasticity. Pearson product-moment correlations were computed to explore the relationships among in-service teachers’ overall teacher efficacy, and teacher efficacy for classroom management, instructional practice, and student engagement.
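A Pearson product-moment correlation of the kind computed here measures the strength of linear association between two sets of scores. A small self-contained sketch (the numbers are toy data, not study scores):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # The (n - 1) factors in covariance and standard deviations cancel,
    # so raw centred sums of products suffice.
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(pearson_r([1, 2, 3], [2, 4, 6]))   # 1.0 (perfect positive relationship)
print(pearson_r([1, 2, 3], [1, 3, 2]))   # 0.5 (moderate positive relationship)
```

Coefficients near .6 are conventionally read as moderate and those above .8 as strong, which is the interpretation applied to Table 5.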
Table 5 presents the findings. There were moderate levels of significant inter-correlation among the three dimensions of teacher efficacy, and strong significant relationships between the three dimensions and overall teacher efficacy.

Table 5. Pearson product-moment correlation between measures of teacher efficacy

Measure of teacher efficacy   Student engagement   Instructional strategies   Classroom management
Student engagement                    -
Instructional strategies            .691**                  -
Classroom management                .589**                .587**                      -
Overall teacher efficacy            .868**                .880**                    .856**

** p < .001 (2-tailed)

Question 2: What are the relationships for the in-service teachers' rating of efficacy with gender, age, years of teaching experience and curriculum major?

An independent samples t-test compared the means of male and female in-service secondary teachers' responses about their overall teacher efficacy, and teacher efficacy for classroom management, instruction, and student engagement. The results indicate that while male teachers (M=6.680, SD=1.101) reported higher scores than female teachers (M=6.582, SD=.971) on overall teacher efficacy, the difference was not statistically significant, t(305)=.992, p=.357. Similar non-significant results were observed for teacher efficacy for classroom management, instructional practices, and student engagement. The mean for teacher efficacy for classroom management was the lowest. A summary of the results is presented in Table 6.

Table 6. Summary of findings of teacher efficacy beliefs on all dimensions according to gender

                                 Male              Female
Measure of teacher efficacy    M      SD         M      SD       df      t     sig.
Instructional strategies     6.927  1.651      6.740  1.161     329   1.097   .274
Student engagement           6.286  1.104      6.247  1.045     323    .278   .781
Classroom management         5.997  1.188      5.887  1.084     325    .824   .410
Overall teacher efficacy     6.680  1.101      6.582   .971     305    .992   .357

A one-way between-subjects ANOVA compared the effect of age on teacher efficacy across measures, and indicates that age was a significant factor for teacher efficacy on all measures. Table 7 summarises teachers' perceptions of efficacy by age.

Table 7. Summary of results of teachers' perception of efficacy on all dimensions according to age

                            20-30            31-40            41-60
Measure of efficacy      Mean    SD       Mean    SD       Mean    SD      df      F     sig.
Instructional strategies 6.497  .886      6.762  1.060     7.255  1.894     2    8.495   .000
Student engagement       6.073  .908      6.308  1.102     6.471  1.167     2    3.826   .033
Classroom management     5.693  .868      5.987  1.034     6.150  1.167     2    5.064   .007
Overall teacher efficacy 6.343  .792      6.648   .989     6.866  1.204     2    6.369   .002

Tukey HSD post hoc comparisons indicated that the mean score for overall teacher efficacy of in-service teachers aged 20-30 years was significantly lower than that of teachers aged 31-40 years and 41-60 years; however, there was no significant difference between teachers aged 31-40 years and 41-60 years. Post hoc comparisons also indicated that in-service teachers aged 41-60 years reported significantly higher teacher efficacy across dimensions than those aged 20-30. Teachers aged 31-40 reported higher teacher efficacy for classroom management than those aged 20-30, and teachers aged 41-60 reported significantly higher teacher efficacy for instructional strategies and classroom management than those aged 31-40 years. Taken together, the results suggest that age was a factor for teacher efficacy. It is noteworthy that the mean for teacher efficacy for classroom management was the lowest across measures.

A one-way between-subjects ANOVA compared the effects of years of teaching experience across measures of teacher efficacy. Findings suggest that years of teaching experience was a significant factor for all measures of teacher efficacy.
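Each one-way between-subjects ANOVA reported here reduces to a ratio of between-group to within-group mean squares. A minimal sketch in plain Python; the group scores are hypothetical stand-ins for the three age bands, not the study's raw data:

```python
def one_way_anova_f(groups):
    """One-way between-subjects ANOVA.
    Returns (F, df_between, df_within) for a list of groups of scores."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical efficacy scores for three age bands (20-30, 31-40, 41-60):
f, df_b, df_w = one_way_anova_f([[6.1, 6.4, 6.5], [6.5, 6.7, 6.8], [6.8, 7.0, 7.1]])
```

The resulting F is then compared against the F distribution with (df_between, df_within) degrees of freedom, which is what the sig. columns of Tables 7 and 8 report.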
Of interest, the mean for teacher efficacy for classroom management was the lowest for all groups. Table 8 summarises these findings.

Table 8. Summary of teacher efficacy beliefs on all dimensions according to years of teaching experience

                            0-10             11-20            21-30
Measure of efficacy      Mean    SD       Mean    SD       Mean    SD      df      F      sig.
Instructional strategies 6.586  .979      6.924  1.434     7.738  2.153     2   12.139   .000
Student engagement       6.172 1.038      6.295  1.043     6.727  1.357     2    3.478   .032
Classroom management     5.817  .974      5.918  1.126     6.500   .997     2    5.993   .003
Overall teacher efficacy 6.460  .925      6.624  1.000     7.318  1.206     2    8.920   .000

Tukey HSD post hoc comparisons indicated that overall teacher efficacy for in-service teachers with 21-30 years of teaching experience was significantly greater than for those with 0-10 years and 11-20 years of teaching experience, and that overall teacher efficacy of teachers with 11-20 years of teaching experience was significantly higher than that of teachers with 0-10 years of teaching experience. Teachers with 21-30 years of teaching experience reported significantly higher teacher efficacy than those with 0-10 years of teaching experience across all dimensions, but only significantly higher teacher efficacy for instructional strategies and classroom management than those with 11-20 years of teaching experience. These results suggest that years of teaching experience was a factor for all measures of teacher efficacy, although for teacher efficacy for student engagement the effect emerged only at the highest level of teaching experience.

One-way between-subjects ANOVA determined that there were significant differences in all measures of teacher efficacy across curriculum specialisations.
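Tukey HSD comparisons of the kind reported above flag a pair of groups as different when the gap between their means exceeds the honestly significant difference threshold. A sketch under simplifying assumptions: equal group sizes, and the studentized-range critical value q supplied by the caller (normally read from a table for the chosen alpha, number of groups and within-groups df); the demo values are illustrative, not the study's:

```python
from math import sqrt

def tukey_hsd_flags(means, ms_within, n_per_group, q_crit):
    """Return (HSD threshold, list of index pairs whose mean difference exceeds it).
    Assumes equal group sizes; q_crit is the studentized-range critical value."""
    hsd = q_crit * sqrt(ms_within / n_per_group)
    flagged = [(i, j)
               for i in range(len(means))
               for j in range(i + 1, len(means))
               if abs(means[i] - means[j]) > hsd]
    return hsd, flagged

# Overall-efficacy means by age band from Table 7; ms_within, n and q are
# hypothetical, chosen only to illustrate the mechanics:
hsd, flagged = tukey_hsd_flags([6.343, 6.648, 6.866],
                               ms_within=0.5, n_per_group=110, q_crit=3.36)
```

For unequal group sizes SPSS uses the Tukey-Kramer adjustment (a harmonic-mean group size), which this sketch omits.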
Table 9 presents the means and standard deviations of teacher efficacy by curriculum major (mathematics [M], science [S], English [E], social studies [SS], modern languages [ML], educational administration [EA], information technology [IT] and visual and performing arts [V]).

Table 9. Summary of mean and standard deviation for measures of teacher efficacy by curriculum major

Measure of          M        S        E        SS       ML       EA       IT       V
teacher efficacy
Student           5.819    5.821    6.493    6.221    6.133    7.244    6.543    6.252
engagement        (.835)  (1.821)   (.948)   (.948)  (1.031)   (.945)   (.907)  (1.055)
Instructional     6.326    6.556    6.768    6.517    6.821    7.985    7.295    6.772
strategies        (.931)   (.923)   (.990)  (1.047)   (.921)  (2.331)  (1.798)  (1.197)
Classroom         5.567    5.590    5.990    5.856    5.913    6.870    6.091    5.904
management        (.769)  (1.003)  (1.023)  (1.069)  (1.291)   (.781)   (.818)  (1.033)
Overall teacher   6.182    6.265    6.706    6.461    6.577    7.067    7.066    6.939
efficacy          (.709)   (.869)   (.926)  (1.010)   (.989)  (1.094)   (.922)   (.918)

SD shown in parentheses

An inspection of the means for measures of teacher efficacy indicates that educational administration and information technology majors reported the highest teacher efficacy, and mathematics and science majors the lowest, across measures. Further, the mean for teacher efficacy for classroom management was the lowest across all curriculum specialisations. The ANOVA results in Table 10 indicate significant differences in teacher efficacy by curriculum specialisation across all dimensions.

Table 10. ANOVA measures of teacher efficacy by curriculum major

Measure of teacher efficacy    df      F       p
Instructional strategies        7    6.013   .000
Student engagement              7    7.782   .000
Classroom management            7    5.321   .000
Overall teacher efficacy        7    7.503   .000

Tukey HSD post hoc comparisons indicated that educational administration majors reported significantly higher overall teacher efficacy than those majoring in all other curriculum specialisations, except information technology.
Otherwise, there were no significant differences in teacher efficacy among the other curriculum majors. Table 11 summarises the means for groups in homogeneous subsets for overall teacher efficacy.

Table 11. Summary of means for groups in homogeneous subsets for overall teacher efficacy after Tukey HSD post hoc analysis

                                       Subset for alpha = 0.05
Curriculum area              N        1         2         3
Mathematics                 43     6.1589
Science                     57     6.2544
Social studies              68     6.4865    6.4865
Modern languages            29     6.5603    6.5603
English                     49     6.6973    6.6973
Visual and performing arts  26               6.8093    6.8093
Information technology      12               7.0104    7.0104
Educational administration  22                         7.6837

Tukey HSD post hoc comparisons also revealed that mathematics majors reported significantly lower teacher efficacy for instructional strategies than educational administration majors. Educational administration majors reported significantly higher teacher efficacy for instructional strategies than all other majors, except information technology. There were no significant differences in teacher efficacy for instructional strategies among the other curriculum specialisations. Table 12 gives a summary of means for groups in homogeneous subsets for teacher efficacy for instructional strategies after Tukey HSD post hoc analysis.

Table 12.
Summary of means for groups in homogeneous subsets for instructional strategies efficacy after Tukey HSD post hoc analysis

                                       Subset for alpha = 0.05
Curriculum area              N        1         2         3
Mathematics                 43     6.3256
Social studies              75     6.5167    6.5167
Science                     63     6.5556    6.5556
English                     50     6.7675    6.7675
Modern languages            30     6.8208    6.8208
Information technology      15     7.1500    7.1500    7.1500
Visual and performing arts  28               7.2946    7.2946
Educational administration  25                         7.9850

Post hoc analysis also revealed that educational administration majors reported significantly higher teacher efficacy for student engagement than mathematics, science, modern languages and social studies majors. Table 13 summarises the means for groups in homogeneous subsets for teacher efficacy for student engagement after Tukey HSD post hoc analysis.

Table 13. Summary of means for groups in homogeneous subsets for student engagement efficacy after Tukey HSD post hoc analysis

                                       Subset for alpha = 0.05
Curriculum area              N        1         2         3
Mathematics                 45     5.8194
Science                     61     5.8217
Modern languages            30     6.1333    6.1333
Social studies              72     6.2205    6.2205
English                     50     6.4925    6.4925    6.4925
Visual and performing arts  29     6.5431    6.5431    6.5431
Information technology      14               6.8304    6.8304
Educational administration  23                         7.2446

Post hoc analysis further revealed that educational administration majors reported significantly higher teacher efficacy for classroom management than all other majors, except information technology. There were no significant differences among the other groups. It must be noted that mathematics teachers reported the lowest teacher efficacy for classroom management. Table 14 summarises the means for groups in homogeneous subsets for teacher efficacy for classroom management after Tukey HSD post hoc analysis.

Table 14.
Summary of the means for groups in homogeneous subsets for classroom management efficacy after Tukey HSD post hoc analysis

                                       Subset for alpha = 0.05
Curriculum area              N        1         2
Mathematics                 45     5.5667
Science                     62     5.5988
Social studies              73     5.8562
Modern languages            29     5.9138
English                     49     5.9898
Visual and performing arts  29     6.0905
Information technology      14     6.2321    6.2321
Educational administration  24               6.8698

From the above results, it is reasonable to conclude that the area of curriculum specialisation was a significant factor for all measures of teacher efficacy. Moreover, educational administration and information technology majors reported the highest teacher efficacy across all curriculum concentrations, while mathematics and science majors reported the lowest. Further, educational administration and mathematics majors differed significantly across all measures of teacher efficacy, with mathematics teachers in the sample expressing significantly lower efficacy beliefs across all dimensions. Overall, it was apparent that teachers' efficacy beliefs were significantly influenced by their age, their years of teaching experience at the secondary level, and the area of curriculum specialisation in which they majored.

Question 3: How well do gender, years of teaching experience and curriculum major predict teachers' overall teacher efficacy?

A multiple regression analysis was conducted to determine the extent to which the dependent variable, overall teacher efficacy, could be predicted by the linear combination of the independent teacher variables of gender, years of teaching experience, and curriculum major, as well as which of these teacher variables were significant predictors of overall teacher efficacy. The underlying assumptions of multicollinearity, normality, outliers, linearity, homoscedasticity, and independence of residuals (Tabachnick & Fidell, 2013) were examined in advance.
The Kolmogorov-Smirnov test of normality was non-significant, D(339)=.047, p=.067, indicating that overall teacher efficacy was normally distributed. The variance inflation factor (VIF) was low and tolerance was high for all independent variables (see Table 15).

Table 15. Tolerance and VIF values of gender, years of teaching experience and curriculum major

Overall teacher efficacy        Tolerance     VIF
Gender                            .970       1.031
Years of teaching experience      .947       1.056
Curriculum major                  .956       1.046

Pearson's correlation coefficients for pairs of the independent variables were less than 0.9, and were statistically significant for teacher efficacy and years of teaching experience; teacher efficacy and curriculum major; and years of teaching experience and curriculum major (Table 16). These results suggest that multicollinearity was absent among the variables.

Table 16. Pearson product-moment correlation between variables

                               Teacher efficacy   Years of teaching experience
Gender                             -.041
Years of teaching experience        .292**
Curriculum major                    .200**                 .177*

*p < .05 **p < .001

Three outliers were investigated using Mahalanobis distances, and were found to lie below the critical value of 16.27 for three independent variables (Tabachnick & Fidell, 2013), so they were not deleted prior to analysis. The overall rectangular shape of the scatterplot (Figure 1) indicates that the underlying assumptions of linearity and homoscedasticity were satisfied. The independence of residuals assumption was satisfied, with Cook's distance less than 1 for the three outliers. Since all of the assumptions were satisfied, the contributions of the independent variables to the dependent variable were examined.

[Scatterplot of regression standardised residuals (y-axis) against regression standardised predicted values (x-axis)]

Figure 1.
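The tolerance and VIF figures in Table 15 follow from how well each predictor is explained by the remaining predictors: VIF = 1/(1 - R²), and tolerance is its reciprocal. A sketch for the simplest case of exactly two predictors, where that R² is just the squared Pearson correlation between them; the data are illustrative, not the study's:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def vif_two_predictors(x1, x2):
    """VIF and tolerance for a predictor when the model has exactly one other
    predictor: regressing x1 on x2 gives R^2 = r^2, so VIF = 1 / (1 - r^2)."""
    r2 = pearson_r(x1, x2) ** 2
    vif = 1.0 / (1.0 - r2)
    return vif, 1.0 / vif  # (VIF, tolerance)

# Illustrative predictor columns (hypothetical values):
vif, tol = vif_two_predictors([1, 2, 3, 4], [1, 2, 1, 2])
```

VIF values near 1 (tolerance near 1), as in Table 15, indicate that multicollinearity is not a concern; with more than two predictors the R² comes from regressing each predictor on all of the others.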
Scatterplot of the residual of overall teacher efficacy

Multiple linear regression analysis was used to develop a model for predicting overall teacher efficacy from teachers' gender, years of teaching experience and curriculum major. Table 17 shows the regression coefficients for the predictor variables.

Table 17. Results of standard multiple regression analysis for gender, years of teaching experience and curriculum major predicting overall teacher efficacy (n=339)

                                B     Std. error B    Beta    Part correlation      t
Gender                        -.048      .127        -.020         .020            .038
Years of teaching experience   .039      .008         .262         .255**         4.910
Curriculum specialisation      .072      .025         .155         .152*          2.929

Note. R² = .108. B = unstandardised regression coefficient; Beta = standardised regression coefficient. Dependent variable = overall teacher efficacy total score from the teacher efficacy beliefs questionnaire. *p < .05 **p < .001

The regression model was a poor fit (adjusted R² = 10%), but years of teaching experience and curriculum major were significant predictors of overall teacher efficacy. The three-factor model explained 10.8% of the variance in overall teacher efficacy (R² = .108). Years of teaching experience was the strongest predictor, uniquely explaining 6.5% of the variance; curriculum major was a weaker predictor, uniquely explaining 2.3% of the variance; and gender was the weakest predictor of teacher efficacy, explaining less than 1% of the variance.

Discussion

This study explored the teacher efficacy of in-service secondary teachers, examining potential relationships among three dimensions of teacher efficacy and the teacher variables of gender, years of teaching experience, age, and area of curriculum specialisation. It also sought to identify the predictors of overall teacher efficacy among the in-service teachers who participated in the research.
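The unique-variance figures quoted for the model can be checked directly from Table 17: squaring a predictor's part (semipartial) correlation gives the share of variance in the outcome that it uniquely explains.

```python
# Part correlations as reported in Table 17:
part_correlations = {
    "years of teaching experience": 0.255,
    "curriculum major": 0.152,
    "gender": 0.020,
}
# Squared part correlation = variance uniquely explained by that predictor
unique_variance = {name: round(r ** 2, 3) for name, r in part_correlations.items()}
```

This reproduces the 6.5% and 2.3% quoted in the text, and the under-1% figure for gender.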
To explain the relationships among the variables appropriately, a variety of statistical procedures were employed to answer the three research questions. The results indicated significant relationships among all dimensions of teacher efficacy, corroborating earlier research that correlated teacher efficacy with various aspects of teachers' work: their perceived ability to manage their classrooms, to make sound instructional decisions, and to engage their students in meaningful learning.

The study explored gender-based differences in teacher efficacy and revealed no significant differences between male and female teachers in any of the dimensions of teacher efficacy; hence, it was concluded that for teachers in this sample, gender did not influence efficacy beliefs. Since the literature on teacher efficacy does not speak specifically to the role of gender in teachers' perception of their ability to perform specific teaching tasks, it is proposed that further research be conducted to examine more closely the role of gender in teacher efficacy beliefs.

The influence of teachers' age on their teacher efficacy was evident in the higher means of efficacy beliefs of older teachers compared with younger teachers across all dimensions. It was concluded that efficacy beliefs strengthened with age and change over time (Bandura, 1977). Similarly, teacher efficacy beliefs strengthened as years of teaching experience increased, corroborating Bandura's (1997; 2006) proposition that teachers' efficacy beliefs improve with experience. It was concluded that as in-service teachers' exposure to the teaching-learning environment increased over time, their efficacy beliefs would be positively influenced. This finding, however, is worthy of further exploration.
Of equal importance was the finding that participants' curriculum major significantly influenced their efficacy beliefs: participants in educational administration and information technology reported significantly higher teacher efficacy than those in the other curriculum majors. Teachers pursuing educational administration have usually been teaching longer than other teachers, have had more opportunities to lead than their colleagues, and have reported more successes with respect to student outcomes. Though this could account for their strong efficacy beliefs, the conclusion would be worth exploring further through interviews with them. Further research is also warranted to determine the reasons that mathematics and science teachers reported the lowest levels of teacher efficacy on all measures. This is critical so that teacher-training providers may be guided in the content, pedagogical experiences and support they provide to teachers who pursue these programmes, since highly efficacious teachers tend to influence student outcomes more successfully and positively than their counterparts.

While curriculum major appears to be the most significant influence on teacher efficacy in this study, years of teaching experience was the strongest predictor of teacher efficacy, and gender was the weakest predictor. The three variables together (curriculum major, years of teaching experience and gender) did not explain a large proportion of the variance in teacher efficacy; this seems to support the absence of any interaction effect of years of teaching experience and curriculum major, with gender displaying no significant effect on teacher efficacy at all. Perhaps a different combination of demographic characteristics may produce a more appropriate model to be explored in the future.

Summary and conclusion

Further research in the area of teacher efficacy is critical at this time.
The results of this study indicate the need to probe the construct of teacher efficacy using an interpretive lens. It is necessary to understand how untrained secondary teachers describe their experience in the learning environment. This is especially important for understanding the reason for low classroom management efficacy across certain demographics. In addition, research on teacher agentic behaviour may help establish the influence of personal, proxy and collective agency on untrained teachers' efficacy beliefs. A comparative analysis of untrained and trained secondary teachers' efficacy could also be conducted to determine differences between the two groups.

The purpose of this paper was to examine teacher efficacy beliefs across disciplines and the teacher demographic characteristics of age, gender and years of teaching experience. Other scholars who have reviewed the construct of teacher efficacy suggest that it appears to be informed by mastery and vicarious experiences (Bandura, 1997; Shaukat & Iqbal, 2012). The results of this study suggest that teaching experience is critical in understanding the teacher efficacy beliefs of untrained teachers. One noteworthy finding was that teacher efficacy for classroom management was lowest across all groups, which raises questions about the factors in the classroom environment that influence teachers' classroom management efficacy beliefs. It is proposed that a qualitative approach would be appropriate to elucidate this relationship. While this study interfaced with a small subset of the secondary school teacher population, it is likely that the sample is representative of the wider community of teachers, providing an opportunity for larger scale research among both secondary and primary school teachers in Trinidad, Tobago and the wider Caribbean.

References

Allinder, R. (1995).
An examination of the relationship between teacher efficacy and curriculum-based measurement and student achievement. Remedial and Special Education, 16(4), 247-254.

Ashton, P. T., Webb, R. B. & Doda, N. (1983). A study of teachers' sense of efficacy (Final report to the National Institute of Education, Executive Summary). Gainesville, FL: The University of Florida.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman and Company.

Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307-337). Greenwich, CT: Information Age Publishing.

Bayraktar, S. (2009). Pre-service primary teachers' science teaching efficacy beliefs and attitudes toward science: The effect of a science methods course. The International Journal of Learning, 16(7), 383-396.

Blackburn, J. & Robinson, J. (2008). Assessing teacher self-efficacy and job satisfaction of early career agriculture teachers in Kentucky. Journal of Agricultural Education, 49(3), 1-11.

Cheung, H. Y. (2006). The measurement of teacher efficacy: Hong Kong primary in-service teachers. Journal of Education for Teaching, 32(4), 435-451.

Colbeck, C. L., Cabrera, A. F. & Marine, R. J. (2002). Faculty motivation to use alternative teaching methods. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, Louisiana.

Dellinger, A. B., Bobbett, J. J., Olivier, D. F. & Ellett, C. D. (2008). Measuring teachers' self-efficacy beliefs: Development and use of the TEBS-Self. Teaching and Teacher Education, 24(3), 751-766.

Edwards, M. C. & Robinson, J. S. (2012). Assessing the teacher self-efficacy of agriculture instructors and their early career employment status: A comparison of certification types. Journal of Agricultural Education, 53(1), 150-161.

Garvis, S.
& Pendergast, D. (2011). An investigation of early childhood teacher self-efficacy beliefs in the teaching of arts education. International Journal of Education & the Arts, 12(9). Retrieved from http://www.ijea.org/v12n9/index.html

Gibson, S. & Dembo, M. H. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76(4), 569-582.

Gresham, G. (2008). Mathematics anxiety and mathematics teacher efficacy in elementary pre-service teachers. Teaching Education, 19(3), 171-184.

Hami, J., Czerniak, C. & Lumpe, A. (1996). Teacher beliefs and intentions regarding the implementation of science education reform strands. Journal of Research in Science Teaching, 33, 971-993.

Hart, L. C., Smith, M. E., Smith, S. Z. & Swars, S. L. (2007). A longitudinal study of elementary pre-service teachers' mathematics pedagogical and teaching efficacy beliefs. School Science and Mathematics, 107(8), 325-335.

Henson, R. K. (2003). Relationships between pre-service teachers' self-efficacy, task analysis, and classroom management beliefs. Research in the Schools, 10, 53-62.

Hoy, A. W. (2004). What pre-service teachers should know about recent theory and research in motivation. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.

Klassen, R. M. & Chiu, M. M. (2010). Effects on teachers' self-efficacy and job satisfaction: Teacher gender, years of experience, and job stress. Journal of Educational Psychology, 102(3), 741-756.

Morris-Rothschild, B. & Brassard, M. R. (2006). Teachers' conflict management styles: The role of attachment styles and classroom management efficacy. Journal of School Psychology, 44(2), 105-121.

Plourde, L. A. (2002). The influence of student teaching on pre-service elementary teachers' science self-efficacy and outcome expectancy beliefs. Journal of Instructional Psychology, 29, 245-253.

Rice, D. C. & Roychoudhury, A. (2003).
Preparing more confident pre-service elementary science teachers: One elementary science methods teacher's self-study. Journal of Science Teacher Education, 14(2), 97-126.

Ross, J. A. & Gray, P. (2006). School leadership and student achievement: The mediating effects of teacher beliefs. Canadian Journal of Education, 29(3), 798-822.

Rule, A. C. & Harrell, M. H. (2006). Symbolic drawings reveal changes in pre-service teacher mathematics attitudes after a mathematics methods course. School Science and Mathematics, 106, 241-258.

Shaukat, S. & Iqbal, M. H. (2012). Teacher efficacy as a function of student engagement, instructional strategies and classroom management. Pakistan Journal of Social and Clinical Psychology, 10(2), 82-85.

Snyder, C. R. & Lopez, S. J. (2007). Positive psychology: The scientific and practical explorations of human strengths. Thousand Oaks, CA: Sage.

Sridhar, Y. N. & Javan, S. (2011). Teacher efficacy and its relationship to classroom management style among secondary school teachers of Kigali city, Rwanda. Journal of Education and Practice, 2(2), 55-60.

Swars, S. L. (2007). Examining perceptions of mathematics teaching effectiveness among elementary pre-service teachers with differing levels of mathematics teacher efficacy. Journal of Instructional Psychology, 32(2), 139-147.

The University of the West Indies, Faculty of Humanities & Education. (2012). Postgraduate Regulations & Syllabuses.

Tschannen-Moran, M. & Woolfolk Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805.

Tschannen-Moran, M. & Woolfolk Hoy, A. (2007). The differential antecedents of self-efficacy beliefs of novice and experienced teachers. Teaching & Teacher Education, 23(6), 944-956.

Turner, S., Cruz, P. & Papakonstantinou, A. (2004). The impact of a professional development program on teachers' self-efficacy. Research presession of the National Council of Teachers of Mathematics Annual Meeting, Philadelphia, PA.
Wolters, C. A. & Daugherty, S. G. (2007). Goal structures and teachers' sense of efficacy: Their relation and association to teaching experience and academic level. Journal of Educational Psychology, 99, 181-193.

Woolfolk, A. E. & Hoy, W. K. (1990). Prospective teachers' sense of efficacy and their beliefs about control. Journal of Educational Psychology, 82, 81-91.

Woolfolk, A. E. & Weinstein, C. S. (2006). Student and teacher perspectives in classroom management. In C. M. Evertson & C. S. Weinstein (Eds.), Handbook of classroom management: Research, practice, and contemporary issues (pp. 685-709). Mahwah, NJ: Lawrence Erlbaum Associates.

Woolfson, L. & Brady, K. (2009). An investigation of factors impacting on mainstream teachers' beliefs about teaching students with learning difficulties. Educational Psychology, 29(2), 221-238.

Yeo, L. S., Ang, R. P., Chong, W. H., Huan, V. S. & Quek, C. L. (2008). Teacher efficacy in the context of teaching low achieving students. Current Psychology, 27(3), 192-204.

Caribbean Teaching Scholar Vol. 5, No. 1, April 2015, 25–35
ERA EDUCATIONAL RESEARCH ASSOCIATION

Pre-testing as an indicator of performance in final examinations among third year medical students

Sehlule Vuma (a), Bidyadhar Sa (b) and Samuel Ramsewak (b)
(a) Department of Para-clinical Sciences, The University of the West Indies, St Augustine, Trinidad and Tobago; (b) Centre for Medical Sciences Education, The University of the West Indies, St Augustine, Trinidad and Tobago

This paper reports on the relationship between pre-testing and student performance in final examinations. Diagnostic pre-testing is a valuable tool that identifies gaps in knowledge among students, identifies teaching requirements, and helps direct teaching programmes to take corrective measures. Pre-test scores may also help to predict student performance in final examinations.
A retrospective descriptive correlational analysis was conducted of third year medical students' performance in the total haematology components of selected multi-specialty final integrated examinations in four third year courses and one related first year course. These students had previously been given a diagnostic pre-test in haematology at the start of their third year programme. Of the 159 students eligible for the study, 130 passed and 29 failed the pre-test. Some students responded to the interventions instituted after the diagnostic pre-test and others did not. The pre-test proved to be a good predictor of final results. Correlations between the pre-test and the total haematology components of the different final integrated examinations ranged from r = .264 to r = .475, and between the pre-test and the final integrated examinations from r = .375 to r = .467. It was concluded that the pre-test grade is a reliable indicator of performance in the final examinations. However, interventions need to be revised to encourage more individual student engagement.

Key words: Academic performance, correlations, prediction, success

Introduction

Pre-testing stimulates and enhances student learning (Richland, Kornell & Kao, 2009). Simkins and Allen (2000) found that pre-test scores were a useful indicator of students' success in a course and helped to predict student performance in final examinations. This may be because pre-testing can direct or focus students' attention to the information that is most explicitly relevant to them. Further, Beckman (2008) showed that pre-testing was an effective way of giving students the learning objectives of a course. Helle, Nivala, Kronqvist, Ericsson and Lehtinen (2010), in a pre-test/post-test study among undergraduate pathology students, found that the only factor that predicted performance on the post-test was performance in the pre-test. On the other hand, many other factors may predict end of course scores.
In a third year internal medicine course, for example, Gupta et al. (2007) showed that in-course assessments correlated positively with end of semester assessments.

This paper uses a diagnostic pre-test to assess the recall of material from a prerequisite course; evaluate student learning; identify teaching requirements and teaching methods that need to be changed, and highlight relevant changes that are needed in the curriculum. It also shows how an instructor-based pre-test can be used not only as a diagnostic tool but also as a correlation tool.

In this undergraduate setting, haematology is taught as part of the Para-clinical Sciences courses. Para-clinical Sciences bridge the gap between the pre-clinical and the clinical years of medical training. They integrate different sub-specialties such as anatomical pathology, chemical pathology, haematology, immunology, microbiology and pharmacology. In the current system, first-year students of medicine, dentistry, and veterinary medicine take a six week introductory integrated course called Basic Para-clinical Sciences (BPS), in which they are introduced to normal haematology and basic concepts in haematology (see Table 1). In the second year there is no haematology teaching (there is some teaching in the second year for the other sub-specialties). In the third year the medical students then examine the pathophysiological basis of disease, applying the basic knowledge from the first year. No time is set aside to revisit the material covered in the first year. The first year course material is a pre-requisite for the third year courses, and students must have passed the first year integrated BPS course. Table 1.
Courses in the Para-clinical Sciences and details of the haematology component of these courses

Year  Course                                         Semester                     Haematology component: course content                             Teaching/delivery methods
1     Basic Para-clinical Sciences (BPS)             Semester 1                   Peripheral blood and blood components, haemopoiesis,              Problem-based learning; didactic lectures
                                                                                  introduction to haemostasis, introduction to basic
                                                                                  laboratory procedures in haematology
3     Applied Para-clinical Sciences I (APS-I)       First half of Semester 1     Anaemias                                                          Problem-based learning; didactic lectures
3     Applied Para-clinical Sciences II (APS-II)     Second half of Semester 1    Haemostasis, thrombosis and transfusion medicine                  Problem-based learning; didactic lectures
3     Applied Para-clinical Sciences III (APS-III)   Semester 2                   Haematological malignancies and bone marrow                       Problem-based learning; didactic lectures
                                                                                  failure syndromes
3     Integrated Para-clinical Sciences (IPS)        Semesters 1 & 2 (rotations   Integrates APS-I, II and III                                      Clerkships; clinical and practical
                                                     throughout the year)                                                                           application and exposure

The various assessments undertaken across the first and third year courses are shown in Table 2. Each sub-specialty contributes equally to the integrated examinations. Table 2.
Assessments in first and third year courses in the multi-specialty integrated course of Para-clinical Sciences

Year | Course | Continuous assessment (CA) | End of course | Final score
1 | Basic Para-clinical Sciences (BPS) | *Observed practical skills examination (OPSE); problem-based learning | Multiple choice questions (MCQs); short answer questions (SAQs) | 100%
3 | Applied Para-clinical Sciences I (APS-I) | Progressive disclosure questions (PDQs) (20%); problem-based learning (5%) | MCQs (50%); extended matching questions (EMQs) (25%) | 100%
3 | Applied Para-clinical Sciences II (APS-II) | PDQs (20%); problem-based learning (5%) | MCQs (50%); EMQs (25%) | 100%
3 | Applied Para-clinical Sciences III (APS-III) | PDQs (20%); problem-based learning (5%) | MCQs (50%); EMQs (25%) | 100%
3 | Integrated Para-clinical Sciences (IPS) | Varies by sub-specialty (25%) [includes different combinations of case presentation, case write-up, MCQs and **OSPE] | OSPE (45%) | 70%; clinical skills assessed (30%)

* Not included in this analysis; ** haematology component of CA

The pre-test was given at the start of the third year to facilitate the recall of the material covered in the first year. According to constructivist learning theories, learning builds upon knowledge that the learner already has from previous experiences; however, knowledge cannot be built effectively without a firm foundation and it is important to identify and address fragile prior knowledge (Simkins & Allen, 2000). This study shows the correlations between the results of the pre-test and the results of the final examinations. The research was conducted among third year medical students of the academic year 2010-2011. The authors could find few similar studies in the literature, and no similar study had been conducted in the Faculty.
The research question

What is the association between the pre-testing results in haematology and student academic performance in the haematology components of the multi-specialty final integrated examinations in the first year course (BPS) and the various third year courses (APS-I, APS-II, APS-III and IPS)?

Methodology

Ethical approval was obtained from the Ethics Committee and the Office of the Dean, Faculty of Medical Sciences, The University of the West Indies, St. Augustine. This was a descriptive, retrospective, correlational, cross-sectional analysis of medical students' examination results. Data were accessed from the Assessment Unit of the Faculty, maintaining the anonymity of the students. The data included results of the haematology components and the overall final integrated courses BPS, APS-I, APS-II, APS-III and IPS. Students who failed the first year course, BPS, and did not move on to third year with this class were excluded from the final analysis. Students who did not take all the third year final examinations were also excluded. At the start of the third year, students were given a diagnostic pre-test and received feedback within a week of the test. Using the pre-test results, the following interventions were introduced:

1. The instructor took time in each class to highlight the relevant prerequisites and their relevance, and paid particular attention to the failing students in class, whether in the large didactic classes (approximately 250 students) or in the smaller clerkship groups (20-25 students).

2. All students were encouraged to seek help from the lecturers, particularly the failing students.

3. The instructor chose the students with the lowest pre-test scores in their groups to be the group leaders during clerkships (clerkship groups are usually assigned alphabetically).
The reason why these students were chosen to be group leaders was not revealed to the class, as students' test scores are confidential. The responsibility meant that these students had to be in regular contact with the instructor concerning the group activities, and the instructor took the opportunity to hold discussions with them whenever they made contact.

4. The instructor increased students' interaction and engagement in both the didactic lectures in the large classes and the smaller clerkship group activities.

The results of the haematology pre-test, the continuous assessment (CA), the total haematology component (THC) and the multi-specialty final integrated examination (FIE) were analysed, together with the associations between the pre-test, the THC and the FIE. A chi-square test (χ2) of independence was performed between the pre-test and the THC of the third year courses APS-I, II, III and IPS. Correlational analysis of the results of the pre-test, the CA and the FIE was performed, using the Pearson product moment correlation method. Correlations were also performed between the pre-test and the first year BPS course (the CA results for BPS were not available for this analysis). Linear regression analysis was performed for the pre-test as an indicator or predictor of final performance, and also between the THC and the FIE.

Results

One hundred and ninety-four medical students took the first year integrated BPS examinations. Of these, 33 (17%) failed the integrated course (a fail is defined as a score of less than 50%). One student did not take the pre-test and one did not take APS-III. Thus the final number of students eligible for analysis was 159.
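The χ2 test of independence described in the methodology can be sketched in a few lines. The 2×2 pass/fail counts below are hypothetical, not the study's data, and `scipy` is assumed, since the paper does not name its statistical software:

```python
# Sketch of a chi-square test of independence between pre-test outcome
# and final-course outcome. The counts here are hypothetical.
from scipy.stats import chi2_contingency

# Rows: failed / passed the pre-test; columns: failed / passed the course.
observed = [[10, 19],
            [25, 105]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
if p > 0.05:
    print("No significant association at the 0.05 level.")
```

A p-value above 0.05, as reported for all four course comparisons in Table 4, would indicate no detectable association between pre-test and course outcome for the group.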
Table 3 shows the total possible scores and the group mean scores for the total haematology components of the first and third year courses. The best mean scores were in the third year courses APS-II and APS-III, and the worst was in the first year course, BPS.

Table 3. Results of the pre-test and the total haematology components in one first-year course and the third-year courses

 | First year BPS | Pre-test | APS-I | APS-II | APS-III | IPS
Total score possible | 14 | 49 | 18 | 27 | 32 | 15
Mean | 7.38 (52.7%) | 32.16 (65.6%) | 10.29 (57.2%) | 18.36 (68.0%) | 21.61 (67.5%) | 8.69 (57.9%)
Std deviation | 2.099 | 7.28 | 2.50 | 4.09 | 3.97 | 2.11

As Table 4 shows, the χ2 of independence for the pre-test versus APS-I (χ2 = 0.184, p>0.05), APS-II (χ2 = 3.020, p>0.05), APS-III (χ2 = 1.518, p>0.05) and IPS (χ2 = 3.064, p>0.05) was not statistically significant. This shows that, for the group, there was no significant difference between the results of the pre-test and the results of the third year courses. However, the researchers did not perform further statistical analysis of the two groups that failed or passed the pre-test and then failed or passed the third year courses separately. Table 5 shows the number of students who passed and failed the pre-test and their results in the final examinations in the third year courses. Here we can see that 75.4% of students in IPS and 90% of students in APS-II who passed the pre-test passed the final examinations. Of the students who failed the pre-test, 93.1% passed APS-III compared to 58.6% in APS-I.

Table 4. Results of the pre-test and the total haematology components in the four third-year courses and χ2 of independence

Course | Failed (score <50%) | Passed (score ≥50%) | Total | χ2 of independence
Pre-test | 29 (18.2%) | 130 (81.8%) | 159 | NA
Applied Para-clinical Sciences-I | 32 (20.1%) | 127 (79.9%) | 159 | 0.184*
Applied Para-clinical Sciences-II | 18 (11.3%) | 141 (88.7%) | 159 | 3.020*
Applied Para-clinical Sciences-III | 21 (13.2%) | 138 (86.8%) | 159 | 1.518*
Integrated Para-clinical Sciences | 42 (26.4%) | 117 (73.6%) | 159 | 3.064*

* p>0.05

Table 5. Number of students who failed or passed the haematology pre-test and their subsequent results in the total haematology components of the multi-specialty final integrated examinations in four third-year courses

Pre-test: FAIL 29 (18.2%) | PASS 130 (81.8%) | Total 159

Course | Pre-test FAIL: course FAIL | Pre-test FAIL: course PASS | Pre-test PASS: course FAIL | Pre-test PASS: course PASS | Total
Applied Para-clinical Sciences-I | 12 (41.4% of fails; 7.5% of cohort) | 17 (58.6% of fails; 10.7% of cohort) | 20 (15.4% of passes; 12.6% of cohort) | 110 (84.6% of passes; 69.2% of cohort) | 159
Applied Para-clinical Sciences-II | 5 (17.2% of fails; 3.1% of cohort) | 24 (82.8% of fails; 15.1% of cohort) | 13 (10% of passes; 8.2% of cohort) | 117 (90% of passes; 73.6% of cohort) | 159
Applied Para-clinical Sciences-III | 2 (6.9% of fails; 1.3% of cohort) | 27 (93.1% of fails; 17% of cohort) | 19 (14.6% of passes; 11.9% of cohort) | 111 (85.4% of passes; 69.8% of cohort) | 159
Integrated Para-clinical Sciences | 10 (34.5% of fails; 6.3% of cohort) | 19 (65.5% of fails; 11.9% of cohort) | 32 (24.6% of passes; 20.1% of cohort) | 98 (75.4% of passes; 61.7% of cohort) | 159

Linear regression analysis was performed and the results are shown in Table 6. The pre-test best predicted the APS-I results, followed by APS-II and then APS-III, for both the THC and the FIE. For the practical examination (the observed structured practical examination), the pre-test was better at predicting the FIE than the THC results.

Table 6.
Pre-test as an indicator of performance in the total haematology component and the multi-specialty final integrated examinations

Course | Linear regression β: total haematology component | Linear regression β: final integrated examination
Applied Para-clinical Sciences-I | .48 | .47
Applied Para-clinical Sciences-II | .37 | .41
Applied Para-clinical Sciences-III | .30 | .38
Integrated Para-clinical Sciences | .32 | .46

(p=0.0001*)

Table 7 shows the correlations between the pre-test and the THC, the FIE and the CA-haematology component. In BPS, there was a weak positive correlation between the THC and the pre-test (r=.264) and a strong correlation between the final integrated examination and the pre-test (r=.434). In APS-I, there was a strong correlation between the THC and the pre-test (r=.475) and a strong correlation between the final integrated examination and the pre-test (r=.467). In APS-II, there was a moderate correlation between the THC and the pre-test (r=.374) and a strong correlation between the final integrated examination and the pre-test (r=.409). In APS-III, there was a weak correlation between the THC and the pre-test (r=.295) and a moderate correlation between the final integrated examination and the pre-test (r=.375). In IPS, there was a moderate correlation between the THC and the pre-test (r=.322) and a strong correlation between the final integrated examination and the pre-test (r=.457). Correlations between the pre-test and the CA for the individual courses ranged from r=.211 to r=.442.

Table 7.
Correlations between the pre-test and the total haematology component, the final integrated examination and the CA-haematology component in five courses

Course | Total haematology component | Final integrated examination | CA-haematology component
Basic Para-clinical Sciences (BPS) | .264** | .434** | NA
Applied Para-clinical Sciences-I (APS-I) | .475** | .467** | .442**
Applied Para-clinical Sciences-II (APS-II) | .374** | .409** | .331**
Applied Para-clinical Sciences-III (APS-III) | .295** | .375** | .283**
Integrated Para-clinical Sciences (IPS) | .322** | .457** | .211**

** Correlation is significant at the 0.01 level (2-tailed). (Pearson product moment correlation method used.)

Discussion

All students had been informed of the pre-test in advance, giving them the opportunity to review the material privately during their break at the end of the second year and before the start of the third year. Even so, 69 (43.4%) members of the study group (n=159) had failed the haematology component of the integrated first-year course (BPS), while 29 (18.2%) failed the pre-test. It may be that the time during the break, when they had no other examinations to focus on, helped them revisit the prerequisite material with better focus. This is further emphasised by the fact that some of the students who passed the pre-test failed the total haematology component and the final integrated multi-specialty examinations in the third year, when they had other sub-specialties to study for. It might also be that students who study for an examination 'short-term' do not retain knowledge 'long-term' and, therefore, do not have a firm foundation for the study of the subject. A number of factors are involved in final examination grades, and the correlation between the pre-test and this first year prerequisite course was only weakly positive (r=.264).
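The correlational and regression analysis reported in Tables 6 and 7 can be sketched as follows. The paired scores are hypothetical (not the study's data), `scipy` is assumed, and note that `linregress` returns a raw slope rather than the standardised β coefficients reported in Table 6:

```python
# Sketch of a Pearson correlation and simple linear regression between
# hypothetical pre-test scores and final examination percentages.
from scipy.stats import pearsonr, linregress

pretest = [6, 8, 5, 9, 7, 10, 4, 8, 6, 9]          # hypothetical, out of 14
final = [48, 62, 41, 70, 55, 74, 38, 60, 50, 68]   # hypothetical, percent

r, p_value = pearsonr(pretest, final)
reg = linregress(pretest, final)
print(f"r = {r:.3f} (p = {p_value:.4f})")
print(f"final ~ {reg.slope:.2f} * pretest + {reg.intercept:.2f}")
```

With real cohort data, an r of around .26, as found for BPS, would count as a weak positive correlation by the thresholds this paper uses.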
The pre-test best predicted the APS-I results, followed by APS-II and then APS-III, for both the total haematology component and the multi-specialty final integrated examination. APS-I is the first course in the first semester, APS-II is in the latter half of the first semester and APS-III is in the second semester. The results may also be improving through student maturation; this is echoed by the fact that the best mean scores are in the later courses. The students may also be getting more comfortable with the assessments. Furthermore, in APS-II more students would have rotated through haematology clerkships than in APS-I, and all students would have gone through all clerkships by the time the APS-III examinations were held. They would therefore have a better understanding of their course material after experiencing its practical and clinical applications, and after learning in much smaller groups with closer contact with the instructor and other lecturers. This is similar to the findings of Beckman (2008), where students' scores were better in the later semesters and it was concluded that the students were becoming more accustomed to the tests. A recommendation for departments may then be to allocate more time to clerkships, as this is where most learning seems to occur. Class sizes in the third year are between 200 and 250 students, while clerkship groups are 20-25 students, with clinical and practical exposure. During clerkships, therefore, students see the relevance of their lecture content, something that is likely to improve the retention of knowledge. In this study, due to time constraints, it was not possible to create smaller groups for the failing students; however, the instructor took advantage of the existing smaller clerkship groups (20-25 students) to pay closer attention to the failing students. All clerkship sessions required the interaction and active participation of all students and the instructor.
The instructor chose the students with the lowest pre-test scores in their groups to be the group leaders. This responsibility meant that these students had to be in regular contact with the instructor concerning the group activities, and the instructor took the opportunity to hold discussions with them whenever they made contact. One such student, who had one of the lowest pre-test scores, voluntarily started to seek help from the instructor after the first few discussions. In fact, this student went on to achieve a final Honours grade in APS-II and a distinction in APS-III, similar to some of the high attaining students. This student had never had such results before. The extra attention seemed to help him understand the material better and focus his studies. The responsibility and accountability of being group leader increased his self-confidence, self-esteem and motivation; he had never before had the opportunity to be a leader at medical school. He did at some point enquire why he had been chosen as group leader, and he actually guessed the reason. It may also be that he was further motivated when he saw the dedication and willingness of the instructor to take time with him.

Other benefits of the pre-testing exercise

Like Shepard (2001, in Beckman, 2008), the lead author felt that pre-testing improved their own teaching effectiveness. Firstly, it identified the objectives that needed to be revisited in the prerequisite course. The results of the analysis made the instructor more reflective in their teaching practice and identified teaching methods that needed to be changed and developed. It motivated the instructor to adopt and implement further innovative strategies, try a variety of techniques, and research other possible teaching techniques.
While the pre-test did take time from the third year course slots, and initially the instructor was concerned about being unable to 'cover all the material', in the end the instructor was better able to identify the most important learning objectives to be covered in class and the ones that could be done by students at home.

Limitations

1. No similar analysis had been performed in previous years, when no diagnostic pre-test was given by the researchers, for comparison.

2. The number of students in the study is small.

3. A number of other factors besides the pre-test may explain performance in final examinations.

4. Statistical analysis of the two different groups that failed or passed the pre-test and then failed or passed the third year courses still needs to be done (as shown in Table 5).

5. The effect of combining multiple sub-specialties in one large integrated examination was not analysed.

6. As this was a retrospective analysis, no formal analysis of students' views and feelings about the pre-test was performed.

Conclusion

No statistical difference between the pre-test and the third year course results was found. Some students seemed to respond to the intervention and others did not (Table 5). The pre-test identified students with low attainment, their baseline and their particular deficiencies. The interventions introduced were designed to increase student interaction and engagement. While all students, particularly those with low attainment, were encouraged to seek help from the lecturers, in practice the students who sought help voluntarily were usually those who were already high achievers. This situation further compounds the challenge for the teacher, who has to be extra creative in cultivating an attitude that leads to better student engagement.
In such a scenario, where failing students are less likely to seek help voluntarily, the instructor could actively create extra sessions tailored to these students (as a smaller group) and their particular weaknesses, at a pace that is comfortable for them, thus working to ensure their engagement. While this is time consuming, and requires dedication by both staff and students, it may well be valuable for these students. Warrier, Schiller, Frei, Haftel and Christner (2013) showed that short term and long term examination scores improved significantly after team-based learning, and Johnson and Johnson (2009) showed that cooperative learning leads to higher achievement and greater productivity, as teams become more caring, supportive and committed to the group activities. As groups get smaller, students become more interactive, communicate more and are more accountable. In smaller groups, students encourage each other to achieve the goals as they challenge each other and give each other feedback, and the 'social loafing' that happens in large classes decreases. Diagnostic pre-testing is a valuable tool for directing a teaching programme as well as stimulating student learning. Appropriate interventions can be implemented to help those in need. Some students responded to the intervention; others may require other strategies to motivate improved engagement. More class interaction in didactic lectures, no matter how big the class sizes, should be encouraged, and there is a need to explore new teaching technologies to encourage more student engagement, as some low attaining students are less comfortable approaching lecturers directly face-to-face. Small group activities may also help these students' engagement. Rahman, Khalil, Jumani, Ajmal, Malik and Sharif (2011), after a pre-test/post-test exercise, concluded that the discussion method of teaching was more effective than the didactic lecture method.
Gray, Fana, Campbell, Hakim, Borok and Aargaard (2014) suggested that team-based learning was a good tool for stimulating learning and was useful in a setting with high ratios of students to teachers, as is the case in our setting. In the case of the students who failed the pre-test but ended up with a passing grade in the final examinations, this may be because pre-testing stimulates future learning (Richland, Kornell & Kao, 2009). In this study, pre-testing may also have helped students to appreciate the relevance to the third year of the material covered in the first year. Interestingly, Hudson and McIntire (1977) showed that pre-testing was a better indicator of failure than of success and concluded that a highly motivated student can indeed overcome a deficiency in the prerequisite material. The motivation could be any number of things, including the pre-test itself stimulating learning. When students realise, right at the beginning, the gap between what they need to know and what they already know, they may be motivated by a form of panic. This is in keeping with Shepard (2001, in Beckman, 2008), who reports that pre-testing encourages students to reflect upon what they currently know. This study was an attempt to highlight the significance of learning foundations, and it is recommended that others use pre-testing as a means of establishing the foundations for future learning.

References

Beckman, W.S. (2008). Pre-testing as a method of conveying learning objectives. The Journal of Aviation/Aerospace Education and Research, 17(2), Article 5. Retrieved from http://commons.erau.edu/jaaer/vol17/iss2/5

Gray, J., Fana, G.T., Campbell, T.B., Hakim, J.G., Borok, M.Z. & Aargaard, E.M. (2014).
Feasibility and sustainability of an interactive team-based learning method for medical education during a severe faculty shortage in Zimbabwe. BMC Medical Education, 14(63). Retrieved from http://www.biomedcentral.com/1472-6920/14/63

Gupta, E., Sze-Piaw, C., Supamiam, A., Khaw, L.B., Loh, R., Tong, K.S., Keng, C.S., Zaher, Z.M.M., Yek, T.T., Myint, K.Y., Menon, V., Chai, K.K., Hoe, W.M., Krishnan, C., Idris, N., Yin, L.K., Jutti, R.C. & Palayan, K. (2007). How closely do in-course assessments correlate with end of semester exams? International e-Journal of Science, Medicine & Education, Abstracts from the International Medical Education Conference 2007, PA-14, pA46. Retrieved from http://web.imu.edu.my/ejournal/approved/Abs_PA01-15.pdf

Helle, L., Nivala, M., Kronqvist, P., Ericsson, K.A. & Lehtinen, E. (2010). Do prior knowledge, personality and visual perceptual ability predict student performance in microscopic pathology? Medical Education, 44, 621-629.

Hudson, H.T. & McIntire, W.R. (1977). Correlation between mathematical skills and success in physics. American Association of Physics Teachers, 45(5), 470-471.

Johnson, D.W. & Johnson, R.T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38(5), 365-379.

Rahman, F.H., Khalil, J.H., Jumani, N.B., Ajmal, M., Malik, S. & Sharif, M. (2011). Impact of discussion method on students' performance. International Journal of Business and Social Science, 2(7), 84-94.

Richland, R.E., Kornell, N. & Kao, L.S. (2009). The pre-testing effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology, 15(3), 243-257.

Simkins, S. & Allen, S. (2000). Pre-testing students to improve learning. International Advances in Economics Research, 6(1), 100-112.

Warrier, K.S., Schiller, J.H., Frei, N.R., Haftel, H.M. & Christner, J.G. (2013). Long-term gain after team-based learning experiences in a pediatric clerkship.
Teaching and Learning in Medicine: An International Journal, 25(4), 300-305.

Caribbean Teaching Scholar Vol. 5, No. 1, April 2015, 37–46
ERA EDUCATIONAL RESEARCH ASSOCIATION

A retrospective co-relational analysis of third year MBBS students' performance in different modalities of assessment in haematology and final integrated examinations

Sehlule Vuma(a), Bidyadhar Sa(b) and Samuel Ramsewak(b)
(a) Department of Para-clinical Sciences, The University of the West Indies, St Augustine, Trinidad and Tobago; (b) Centre for Medical Sciences Education, The University of the West Indies, St Augustine, Trinidad and Tobago

This paper reports an analysis of correlations in students' performance in different modalities of assessment in haematology and the multi-specialty (anatomical pathology, chemical pathology, haematology, immunology, microbiology and pharmacology) final integrated examinations. It is broadly agreed among medical educators that proper alignment between learning objectives, modes of delivery and assessment modalities is a key factor in shaping the desired outcomes. It is equally important that the modalities of assessment are in concurrence among themselves within the assessment framework. A descriptive, retrospective, correlational analysis of 159 third-year Bachelor of Medicine and Bachelor of Surgery (MBBS) students' results in different assessment modalities in five courses covering Applied Para-clinical Sciences, Integrated Para-clinical Sciences and Basic Para-clinical Sciences was performed. Results show positive correlations amongst all haematology components as well as the final integrated examination, and the continuous assessment element had the strongest correlations with the total haematology component. It was concluded that combinations of multiple modes of assessment are important for adequate and fair assessment of knowledge and skill, and that continuous assessment encourages students to work consistently throughout the course.
Key words: Assessment, correlations, haematology, performance

Introduction

It is broadly believed that increasing the number of different forms of assessment helps student attainment, as students have different strengths in different assessment methods. Different medical schools use different combinations of multiple choice questions (MCQs), extended matching questions (EMQs), essays or free response short answer questions (SAQs), progressive disclosure questions (PDQs), observed structured clinical examinations (OSCE), observed structured practical examinations (OSPE) and oral examinations. Different forms of assessment may be better suited to assessing different objectives in the curriculum, and some forms have advantages over others. MCQs have several advantages over SAQs: they are easy to mark, as marking is done by machine, and they avoid the marking discrepancies that might come with SAQs. Constructing MCQs, however, is more time consuming for the teachers, though they are less time consuming for students to answer. MCQs are able to examine more of the content in the curriculum, whereas SAQs present problems of sampling. Whatever the chosen modality, assessments need to be reliable and valid. Epstein (2007) said that using multiple methods of assessment can also assess students' learning needs and help to identify those who need help. However, Epstein also reports that the question of summative versus formative assessment remains challenging. Different schools thus use different combinations of assessment to cover both the depth and breadth of their curriculum, in terms of both knowledge and skills, depending on the modes of delivery of the curriculum. von Bergmann, Dalrymple, Wong & Shuler (2007), for example, used MCQs and a computer-based objective test for the assessment of their problem-based learning content, and showed the combination to be significantly reliable.
Whatever the chosen combination of assessment modalities, it is important that they are well aligned; hence correlations should be performed. Wass, McGibbon & Van der Vleuten (2001) showed that correlations differed between different forms of assessment: the correlation between MCQs and EMQs was 0.43, between MCQs and SAQs was 0.46, and between EMQs and SAQs was 0.60. They reported that MCQs might be more suited to testing factual knowledge and SAQs more suitable for problem solving skills. Adeniyi, Olgi, Ojabo and Musa (2013) showed that in a physiology course continuous assessment (CA) (also known as in-course or formative assessment) had the best correlation with overall performance (r = 0.801), while the oral examination had the least (r = 0.277). They also showed that the essay component was the best predictor of overall performance (r = 0.421), followed by the MCQ, while the practical component was the least reliable predictor of performance. Similar results were shown in a third year internal medicine course by Gupta et al. (2007), who showed that in-course assessments correlated positively with end of semester assessments. As seen with essays and orals, correlational analysis is more varied in assessments that evaluate practical skills, like the OSCE, due to inter-observer subjectivity. Inter-rater reliability limits the ability to achieve high correlations (Campos-Outcalt, Watkins, Fulginiti, Kutob & Gordon, 1999), highlighting the challenge for examiners to increase objectivity in the OSCE. Pepple, Young and Carroll (2010), in a study among physiology students in the Caribbean, showed that for most students the strong correlation between results in MCQs and essay questions was independent of the assessment modality. Our study focuses on third year Bachelor of Medicine and Bachelor of Surgery (MBBS) students at a major university in the Caribbean. No similar studies have been done in our setting before.
The study provides useful information for the faculty as a whole. In this setting, third year students take Applied and Integrated Para-clinical Sciences (pathology) courses, which integrate the sub-specialties of anatomical pathology, chemical pathology, haematology, immunology, microbiology and pharmacology. Para-clinical Sciences bridge the gap between the pre-clinical and the clinical years. The study evaluates the students' performance in the haematology components against the final integrated examinations.

In the current system, first-year students take a six-week introductory integrated course, Basic Para-clinical Sciences (BPS) (see Table 1). In haematology, students are introduced to the basic concepts in their first year. In the second year there is no haematology teaching (there is some teaching in the other sub-specialties in the second year). In the third year, students discuss the pathophysiological basis of disease, applying the basic knowledge from the first year.

Table 1.
Courses in the Para-clinical Sciences and details of the haematology component of these courses

Year | Course | Semester | Haematology component: course content | Teaching/delivery methods
1 | Basic Para-clinical Sciences (BPS) | Semester 1 | Peripheral blood and blood components, haemopoiesis, introduction to haemostasis, introduction to basic laboratory procedures in haematology | Problem-based learning; didactic lectures
3 | Applied Para-clinical Sciences I (APS-I) | First half of Semester 1 | Anaemias | Problem-based learning; didactic lectures
3 | Applied Para-clinical Sciences II (APS-II) | Second half of Semester 1 | Haemostasis, thrombosis and transfusion medicine | Problem-based learning; didactic lectures
3 | Applied Para-clinical Sciences III (APS-III) | Semester 2 | Haematological malignancies and bone marrow failure syndromes | Problem-based learning; didactic lectures
3 | Integrated Para-clinical Sciences (IPS) | Semesters 1 & 2 (rotations throughout the year) | Integrates APS-I, II and III | Clerkships; clinical and practical application and exposure

In haematology, students are exposed, for the first time in their medical training, to real, live patients. In the other sub-specialties there is no live patient contact at this stage (except for isolated laboratory procedures). The live patient exposure in haematology is aimed at giving the students practical application of the material covered in lectures and the material introduced in problem-based learning scenarios. Such exposure is designed to increase student interaction and understanding. Medical schools are placing more and more emphasis on skills training (Wass, McGibbon & Van der Vleuten, 2001) as a means of developing competency. Tools for the assessment of skills include in-course assessments (known here as 'continuous assessment') as well as final end of course examinations. Each sub-specialty contributes equally to the combined integrated examinations (see Table 2).
Assessments need to test factual knowledge as well as the clinical and practical application of knowledge, and the different assessments need to be well correlated. Prior to 2010, SAQs were used. However, because of issues raised by teachers regarding discrepancies in marking and the time needed to facilitate SAQs, they were replaced by EMQs and PDQs for the third year courses in 2010. Similarly, oral examinations were also removed from the syllabus. Correlations between the old and new combinations of assessment modalities in the current system need to be analysed.
40 Sehlule Vuma, Bidyadhar Sa and Samuel Ramsewak

Table 2. First and third year course assessments in the multi-specialty integrated course of Para-clinical Sciences

Year 1, Basic Para-clinical Sciences (BPS)
• Continuous assessment (CA): *,+observed practical skills examination (OPSE); *problem-based learning
• End of course: *multiple choice questions (MCQs); *short answer questions (SAQs)
• Final score: 100%

Year 3, Applied Para-clinical Sciences I (APS-I)
• CA: **progressive disclosure questions (PDQ) (20%); *problem-based learning (5%)
• End of course: *MCQ (50%); **extended matching questions (EMQ) (25%)
• Final score: 100%

Year 3, Applied Para-clinical Sciences II (APS-II)
• CA: **PDQ (20%); *problem-based learning (5%)
• End of course: *MCQ (50%); **EMQ (25%)
• Final score: 100%

Year 3, Applied Para-clinical Sciences III (APS-III)
• CA: **PDQ (20%); *problem-based learning (5%)
• End of course: *MCQ (50%); **EMQ (25%)
• Final score: 100%

Year 3, Integrated Para-clinical Sciences (IPS)
• CA: varies by sub-specialty (25%) [includes different combinations of case presentation, case write-up, MCQs and ***OSPE]
• End of course: *OSPE (45%)
• Final score: 70% (clinical skills assessed separately, 30%)

*Old assessment modality; **newly introduced assessment modality; ***haematology component of CA; +not included in this analysis.

Research question

What is the correlation between the different assessment modalities in haematology and the multi-specialty integrated examinations in the first year and third year Para-clinical Sciences courses?

Methodology

This study was conducted with 159 third year students during the academic year 2010-2011.
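As a minimal illustration of the two analyses named in the methodology (the Pearson product-moment correlation and simple linear regression), the following sketch shows how they can be computed from paired scores. This code is not from the study: the data values and function names are illustrative only.

```python
# Illustrative sketch only: Pearson product-moment correlation and
# least-squares linear regression for paired assessment scores.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def linear_regression(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical paired scores (percent): total haematology component vs
# final integrated examination for five students. Not data from the study.
thc = [52.0, 57.0, 68.0, 67.0, 58.0]
fie = [60.0, 62.0, 67.0, 70.0, 61.0]
r = pearson_r(thc, fie)
slope, intercept = linear_regression(thc, fie)
```

A coefficient computed this way always lies between -1 and 1; values of around .8 and above are read as very strong, in line with the ranges reported in the results below.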
Ethical approval was obtained from the Ethics Committee and the Office of the Dean of the Faculty of Medical Sciences. This was a descriptive, retrospective, correlational, cross-sectional analysis. Data were accessed from the Assessment Unit of the Faculty, maintaining the anonymity of the students. The data included the results of the haematology components; the overall results of the integrated third year courses APS-I, APS-II, APS-III and IPS; and the results from the first year course, BPS. The focus on haematology in this integrated setting was due to convenience, as the principal investigator is a lecturer in haematology. This study could be used as an initial baseline for the rest of the department. Students who did not take all four third year courses, and students who did not start first year with this current group, were excluded from the study. For the IPS, only the pathology clerkships were analysed; the results of the clinical skills component were not analysed. The CA results for the BPS course were not available for analysis. Correlations between the haematology components of the different assessments and the total haematology component, as well as the multi-specialty final integrated examination results of the courses, were analysed. Correlational analysis was performed using the Pearson product-moment correlation method. Linear regression analysis was also performed between the total haematology component and the multi-specialty final integrated examination.

Results

Table 3 shows the breakdown of the different assessments in the department, the breakdown of the haematology components of these assessments, and the students’ haematology mean scores in each assessment.
Table 3.
Course assessments and scores in the haematology components of BPS, APS-I, II and III, and scores in multi-specialty final integrated examinations

First year, Basic Para-clinical Sciences (BPS) (CA: OSPE)
• CA (OPSE): mean not available
• MCQ: total possible 9; mean 4.5 (50%); SD 1.618
• SAQ: total possible 5; mean 2.89 (57.8%); SD .99
• THC: total possible 14; mean 7.38 (52.7%); SD 2.099
• FIE: total possible 100%; mean 63.98%; SD 7.63

Third year, Applied Para-clinical Sciences I (APS-I) (CA: PDQ)
• CA (PDQ): total possible 6; mean 3.6 (60%); SD 1.22
• MCQ: total possible 9; mean 4.15 (46.1%); SD 1.47
• EMQ: total possible 3; mean 2.58 (86%); SD .741
• THC: total possible 18; mean 10.29 (57.2%); SD 2.50
• FIE: total possible 100%; mean 62.53%; SD 9.35

Third year, Applied Para-clinical Sciences II (APS-II) (CA: PDQ)
• CA (PDQ): total possible 15; mean 11.42 (76.1%); SD 2.71
• MCQ: total possible 8; mean 4.44 (55.5%); SD 1.42
• EMQ: total possible 4; mean 2.58 (64.5%); SD 1.18
• THC: total possible 27; mean 18.36 (68%); SD 4.09
• FIE: total possible 100%; mean 66.65%; SD 9.02

Third year, Applied Para-clinical Sciences III (APS-III) (CA: PDQ)
• CA (PDQ): total possible 15; mean 9.58 (63.9%); SD 2.28
• MCQ: total possible 14; mean 9.73 (69.5%); SD 2.10
• EMQ: total possible 3; mean 2.36 (78.7%); SD .71
• THC: total possible 32; mean 21.61 (67.5%); SD 3.97
• FIE: total possible 100%; mean 69.99%; SD 8.55

Third year, Integrated Para-clinical Sciences (IPS) (CA: OSPE)
• CA (OSPE): total possible 8; mean 4.80 (60%); SD 1.50
• Final OSPE: total possible 7; mean 3.90 (55.7%); SD 1.30
• THC: total possible 15; mean 8.69 (57.9%); SD 2.11
• Final integrated OSPE: weighting 45%; mean 28.23%; SD 4.53
• FIE: weighting 70%; mean 45.97%; SD 5.65

CA, continuous assessment; SAQ, short answer questions; PDQ, progressive disclosure questions; MCQ, multiple choice questions; EMQ, extended matching questions; OSPE, observed structured practical examination; THC, total haematology component; FIE, final integrated examination.

For APS-I and APS-III, the EMQs had the best mean scores; for APS-II, the CA had the best mean scores. For the overall multi-specialty final integrated examination, the mean scores for BPS, APS-I and APS-III are higher than those for the total haematology component. Table 4 shows the correlations between the haematology components of the different assessments, the total haematology component, and the multi-specialty final integrated examination.
Table 4.
Correlations between the haematology components of the different assessments, total haematology components, and multi-specialty final integrated examinations

All sub-specialties combined
Haematology components: SAQ, MCQ, EMQ, OSPE (final), THC
Final integrated examination: .889**, .600**, .663**, .456**

First year, Basic Para-clinical Sciences (BPS)
MCQ .248** SAQ; THC .679**

Third year, Applied Para-clinical Sciences I (APS-I)
PDQ (CA) .225**; MCQ .208**, .695**, .480**, .292**, .802**, .592**, .559**, .503**; EMQ; THC .750**

Third year, Applied Para-clinical Sciences II (APS-II)
PDQ (CA) .317**; MCQ .233**, .866**, .634**, .316**, .640**, .574**, .587**, .533**; EMQ; THC .800**

Third year, Applied Para-clinical Sciences III (APS-III)
PDQ (CA); MCQ .318**, .255**, .794**, .565**, .354**, .794**, .679**, .519**, .527**; EMQ; THC .784**

Third year, Integrated Para-clinical Sciences (IPS)
OSPE (CA); final OSPE; THC; final integrated OSPE .134
Final integrated OSPE; final integrated exam .794**, .233**, .314**, .709**, .583**, .550**, .523**, .561**, .960**

** Correlation is significant at the 0.01 level (2-tailed). CA, continuous assessment; SAQ, short answer questions; PDQ, progressive disclosure questions; MCQ, multiple choice questions; EMQ, extended matching questions; OSPE, observed structured practical examination; THC, total haematology component.

In BPS, APS-I, APS-II and APS-III, correlations between the individual assessments (SAQ, PDQ, MCQ and EMQ) were weak to moderately strong (range r=.208 to .354). In IPS, the correlation between the haematology OSPE continuous assessment and the haematology component of the final OSPE was negligible (r=.134).
However, between the individual components and the total haematology component and the final integrated examination, correlations were strong to very strong (range r=.456 to .889). The final integrated OSPE and the final integrated IPS examination had the highest correlation, r=.960. The MCQs had higher correlations than the EMQs with the total haematology component and the multi-specialty final integrated examination for all three courses, APS-I, II and III. The CA for haematology in APS-II, APS-III and IPS had the strongest correlations with the total haematology component.

Discussion

There are positive correlations among all the different haematology components of the different assessments, as well as with the final integrated multi-specialty examinations. This suggests that the different combinations of the current assessment forms are useful, both in haematology alone and in the multi-specialty courses. Combinations of multiple assessment modalities are known to be better for students than individual assessments, since students may be stronger in some assessment modes than in others. The introduction of EMQs and PDQs fits in well with the rest of the assessments, and we are able to examine more of the depth and breadth of the curriculum in the third year. This combination eliminated some of the problems we had in the past with SAQs. For example, sampling problems with SAQs meant that less of the curriculum was examined; hence, SAQs may encourage students to attempt to ‘spot’ what might be in the examination rather than study the whole course content. While no similar study has been performed involving correlations with the SAQs used prior to 2010 for the third year courses, the advantages of not using this assessment modality are well noted. The correlations performed in the first year course provide a control, though a limited one, for comparison. The correlation between the MCQ and SAQ in BPS was r=.248.
For the third year courses, the correlations between the MCQ and PDQ range between r=.225 and r=.318, and the MCQ and EMQ correlations range between r=.292 and r=.354. These are higher than for the SAQs, and the new modalities reduce the time taken by students to answer and by staff to mark, with fewer discrepancies in marking. EMQs are a form of MCQ and share many of the advantages of MCQs (objectivity, computer marking), but transform the questions into items that engage students in higher-level mental tasks and ask students to solve problems rather than recall isolated pieces of information (which is in keeping with our problem-based learning method of teaching). EMQs can also help to prevent students answering by elimination rather than actually knowing the answer. Fenderson, Damjanov, Robeson, Veloski and Rubin (1997) report that EMQs help in reducing the effect of ‘cueing’, where students answer a question by recognising the correct option when they would otherwise not have answered without being given the options. Furthermore, item analysis can be performed to demonstrate reliability and validity. Wass, McGibbon and Van der Vleuten (2001) studied the construct validity of EMQs and found they could measure clinical problem solving, because they correlated highly with clinical tests and problem solving questions and moderately with factual knowledge testing. In our study, EMQs had strong correlations (r=.503 to r=.533) with the practical-related OSPE in the multi-specialty final integrated examination. In this study, the MCQs have higher correlations than the EMQs with the total haematology component and the multi-specialty final integrated examination for all three courses: APS-I, II and III. This may be because the MCQs make up a higher percentage of the multi-specialty final integrated examination than the EMQs, or because the EMQs require higher-order cognitive skills.
From this, the recommendation would be to increase the EMQ content in the examination papers, as also suggested by Wass, McGibbon and Van der Vleuten (2001). Furthermore, this may suggest that students need more practice with EMQs (which are new to them) throughout the year so that they are comfortable with them by the end of the course. The PDQ features an evolving case scenario and tests candidates’ problem solving and reasoning ability rather than mere factual recall, which is in keeping with the PBL philosophy. In our study the PDQs had strong correlations (r=.480 to r=.634) with the practical-related OSPE, which assesses not just factual recall but also the clinical and practical application of knowledge. PDQs are easier to set than MCQs (Palmer, Duggan, Devitt & Russell, 2010); however, compared with MCQs they have issues with ‘sampling’, since with MCQs more content can be tested (depending on the way the case scenario evolves). Thus the combination of PDQ, MCQ and EMQ in the department allows us to test the depth and breadth of the curriculum. Epstein (2007) noted that using multiple methods of assessment can also reveal students’ learning needs and can help to identify those who need help. Wass, McGibbon and Van der Vleuten (2001) made a similar point: the use of a variety of assessments improves content validity and reliability, which can be further improved by balancing the number of test items and the length of the test. PDQs used as continuous assessment are easier to mark than SAQs, and feedback can be given to students quickly, giving them the opportunity to correct themselves ahead of final examinations. Assessments should motivate and guide students (O’Farrell, n.d.) and should be useful when it comes to evaluating teaching and learning effectiveness. Like Adeniyi, Ogli, Ojabo and Musa (2013), our department believes in maintaining the CA.
This encourages students to work consistently throughout the course and reduces the tendency to memorise information at the last minute. Indeed, the CA component of the examination for APS-I, II and III has since increased from 25% to 30%. The CA for haematology in APS-II, APS-III and IPS had the strongest correlations with the total haematology component. The strong correlations with the final multi-specialty integrated examination results also suggest that the haematology assessments are well aligned with the other specialties within the Para-clinical Sciences.

Limitations

1. The number of students in the study is small.
2. Correlations among the other sub-specialties in the Para-clinical Sciences have not been conducted.
3. The effect of combining multiple sub-specialties in one large integrated examination was not analysed.
4. The difficulty level of the examination questions themselves was not analysed in this paper.
5. Generalisability of correlations of assessment modalities: Wass, McGibbon and Van der Vleuten (2001) raised the question of overall reliability in correlating examinations that are so varied in terms of length, content, structure and duration, and in their actual weighting in the final combined result.

Conclusion

The new assessment methods are well aligned. The different combinations of assessment modalities are useful and help the students. It is recommended that examiners increase the EMQ component of the final examinations and that teachers give students practice throughout the year on assessment modalities such as EMQs, particularly in those areas with which students are not comfortable. Assessments that have high inter-rater variability have low correlations.
These correlations may be increased if the objectivity of such assessments is increased, as highlighted by Campos-Outcalt et al. (1999). Furthermore, the CA is important and should be maintained, as it encourages continuous learning among students. Our correlational analysis provides the opportunity to evaluate the content composition of the examination. As stated before, the weighting of the CA has since been raised from 25% to 30%; the EMQ component could likewise be increased in terms of the number of items and the duration of the examination. For the authors as teachers, the challenge of improving objectivity in all assessments that show high subjectivity and inter-examiner variability has been emphasised. MCQs and EMQs are highly objective. PDQs may show some subjectivity (although much less than seen with SAQs); however, this can be reduced with the use of very clear answer keys. Assessments do take up a significant amount of time and resources. Currently the PDQ is paper based; however, going forward, the authors are investigating the possibility of a computer-based PDQ. This correlational analysis motivated the authors to analyse the EMQ, MCQ and PDQ examination papers and students’ performance results. While the authors note the clear benefits of introducing the EMQ and PDQ as assessment methods, as this was a retrospective analysis no formal study was conducted among the students to document their views on these new assessments. However, informal comments from some of them suggest that they appreciated the clinical application of knowledge required, instead of just the basic recall of facts.

References

Adeniyi, O.S., Ogli, S.A., Ojabo, C.O. & Musa, D.I. (2013). The impact of various assessment parameters on medical students’ performance in first examination in physiology. Nigerian Medical Journal, 54(5), 302-305.

Campos-Outcalt, D., Watkins, A., Fulginiti, J., Kutob, R. & Gordon, P. (1999).
Correlations of family medicine clerkship evaluations and objective structured clinical examination scores and residency directors’ ratings. Educational Research and Methods (Family Medicine), 31(2), 90-94.

Epstein, R.M. (2007). Assessment in medical education. New England Journal of Medicine, 356, 387-396.

Fenderson, B.A., Damjanov, I., Robeson, M.R., Veloski, J.J. & Rubin, E. (1997). The virtues of extended matching and uncued tests as alternatives to multiple choice questions. Human Pathology, 28, 526-532.

Gupta, E., Sze-Piaw, C., Supamiam, A., Khaw, L.B., Loh, R., Tong, K.S., Keng, C.S., Zaher, Z.M.M., Yek, T.T., Myint, K.Y., Menon, V., Chai, K.K., Hoe, W.M., Krishnan, C., Idris, N., Yin, L.K., Jutti, R.C. & Palayan, K. (2007). How closely do in-course assessments correlate with end of semester exams? International e-Journal of Science, Medicine & Education, Abstracts from the International Medical Education Conference 2007. Retrieved from http://web.imu.edu.my/ejournal/approved/Abs_PA01-15.pdf

O’Farrell, C. (n.d.). Enhancing student learning through assessment: A toolkit approach. Retrieved from http://www.tcd.ie/teaching-learning/academic-development/assets/pdf/250309_assessment_toolkit.pdf

Palmer, E.J., Duggan, P., Devitt, P.G. & Russell, R. (2010). The modified essay question: Its exit from the exit examination? Medical Teacher, 32, 300-307.

Pepple, D.J., Young, L.E. & Carroll, R.G. (2010). A comparison of student performance in multiple-choice and long essay questions in the MBBS stage I physiology examination at The University of the West Indies (Mona campus). Advances in Physiology Education, 34, 86-89.

von Bergmann, H., Dalrymple, K.R., Wong, S. & Shuler, C.F. (2007). Investigating the relationship between PBL process grades and content acquisition performance in a PBL dental program. Journal of Dental Education, 71(9), 1160-1170.

Wass, V., McGibbon, D. & Van der Vleuten, C. (2001).
Composite undergraduate clinical examinations: How should the components be combined to maximize reliability? Medical Education, 35, 326-330.

Caribbean Teaching Scholar Vol. 5, No. 1, April 2015, 47–59
ERA EDUCATIONAL RESEARCH ASSOCIATION

Understanding practical engagement: Perspectives of undergraduate civil engineering students who actively engage with laboratory practicals

Althea Richardson(a) and Erik Blair(b)
(a) Department of Civil & Environmental Engineering, The University of the West Indies, St. Augustine, Trinidad and Tobago; (b) The Centre for Excellence in Teaching and Learning, The University of the West Indies, St. Augustine, Trinidad and Tobago

A key goal of engineering education is to produce graduates who have the appropriate level of cognitive development to allow them to manipulate processes, solve problems and produce new knowledge. Teaching and learning using laboratory practicals is an approach designed to engage students in the investigation of genuine problems. The importance of the laboratory experience in engineering education cannot be over-emphasised, since the critical role of laboratories can be linked to the fact that engineering is an applied science which requires that students attain some level of hands-on skills in their chosen discipline. However, not all students see the benefit of such hands-on experiences. In the present study, Kolb’s experiential learning theory was used to assess the experiences of undergraduate civil engineering students at The University of the West Indies, St Augustine, Trinidad and Tobago, who had elected to undertake laboratory-based projects. The participants in this study represent a minority group within their particular cohort in that they actively engaged with practical laboratory activity. This study determined that the “engaged” students showed a lack of deliberation around laboratory work as a learning experience in itself but valued its worth as a means of developing their technical skills.
For these students, it was not the laboratory practical per se that was their focus, nor was there any focus on improving laboratory skills so as to become technically more able. Rather, practicals were used as a means of increasing their future employability through the enhancement of skills relating to project management, decision making and time management.

Key words: engineering education, active engagement, Kolb cycle, experiential learning

Introduction

Civil engineers play a critical role in the development of a nation because of their involvement in, and responsibility for, the design, construction and maintenance of the infrastructure which supports the entire built environment. By virtue of its location, the Caribbean is particularly susceptible to natural hazards which have an adverse impact on the built environment; the strategic importance of the civil engineering industry to the region is therefore clear. The vulnerability of the region to hurricanes and earthquakes presents civil engineers with many challenges which usually require the application of learned practical skills. In this regard, effective engineering education is of importance, and the task of engineering educators is therefore to ensure that the expected educational outcomes are achieved (Malan, 2000).
48 Althea Richardson and Erik Blair
One of the key goals of engineering education is to produce graduates who have the appropriate level of cognitive development to allow them to manipulate processes, solve problems and produce new knowledge (Gondim & Mutti, 2011, as cited in Lashari, Alias, Akasah & Kesot, 2012).
The importance of using the laboratory experience and other practical exercises in engineering education cannot be over-emphasised, and Davies (2008, p.2) suggests that the practical component in the engineering curriculum should, by design: motivate students and stimulate their interest in the subject; deepen their understanding by linking theory to practice; provide opportunities for team work which students can use to analyze and solve engineering problems; and foster in graduate engineers the skills and attitudes which will enable them to operate effectively and professionally in the engineering workplace. The successful integration of practical work into the engineering curriculum can be challenging for many reasons, including: the cost of procuring adequate and appropriate materials and equipment; the time required for the organisation, management and assessment of the laboratory work; limited laboratory work space; large student cohorts that require subdivision into smaller working groups; a marked disconnect between practical sessions and the associated theoretical lectures; and a distorted link between theory and practice in the mind of the student (Davies, 2008). Notwithstanding the many challenges, the application of theory using hands-on practicals remains fundamental to the delivery of the engineering curriculum. The critical role of laboratories can be linked to the fact that engineering is an applied science which requires that students attain some level of hands-on skills. The practical component of engineering courses should therefore facilitate students’ engagement in deep learning, and well-designed laboratory practicals may be used to improve the hands-on skills of graduate engineers. The University of the West Indies (UWI) St Augustine (STA) is no different from other institutions in that practical exercises are a major component of most courses offered in the undergraduate civil engineering programme.
Students at all levels of the civil engineering programme are required to participate in at least three compulsory practical sessions each semester. Laboratory teaching can account for as much as 50% of contact time, and a great deal of a student’s study time is spent writing and researching for practical and project reports. Marks are awarded on the basis of assessment tasks associated with practical work. Generally, student learning is assessed through the submission of three or four individual laboratory reports for 15-20% of the total course marks. Additionally, in the summative final exam, one or two questions incorporate aspects of the practical work in either the calculations or the process description. Through their studies, UWI STA undergraduate civil engineering students are expected to achieve certain goals.
Understanding practical engagement: Perspectives of undergraduate civil engineering students who actively engage with laboratory practicals 49
A cursory review of these goals shows the significance of practical work to student learning. These civil engineering students are expected to:
1. Design and conduct experiments, as well as analyse and interpret data, in several areas which include air quality, water and land quality, and environmental and human health impacts.
2. Identify, formulate and solve engineering problems, and design a system, component or process to meet desired needs.
3. Effectively convey technical material through oral presentations and written communication.
4. Obtain the broad education necessary to understand the impact of engineering solutions in a global and societal context.
5. Gain knowledge of contemporary and emerging environmental issues and recognise the need for, and an ability to engage in, lifelong learning.
6. Apply scientific and engineering principles, as well as contemporary technology, to the discipline.
7.
Use the techniques, skills and modern engineering tools necessary for engineering practice, with an integrated understanding of professional, societal and ethical responsibilities.
8. Understand their professional and ethical responsibility and the importance and function of multidisciplinary teams in professional practice.
9. Acquire the necessary background for admission to engineering or other professional graduate programmes.
Despite the fact that practical sessions are interwoven into the civil engineering programmes and are integral to students meeting learning outcomes, one of the challenges encountered by teaching staff is the apparent indifference students show towards laboratory practicals. This nonchalance might be a display of either demotivation or a general attitude towards the use of laboratory practicals as a teaching and learning strategy, based on students’ perception of the required tasks (Maehr & Meyer, 1997). Whatever the reason, a lack of motivation generally has a direct negative effect on learning and behaviour (Pintrich & Schunk, 1996). Accounts of disengaged students are well documented, with an extensive body of literature on disengagement among undergraduate students (Pekrun, Goetz, Frenzel, Barchfeld & Perry, 2011; Salanova, Schaufeli, Martinez & Bresó, 2010; Schmitt, Oswald, Friede, Imus & Merritt, 2008; Schreiner, Noel, Anderson & Cantwell, 2011). Such studies have examined both psychological and psychosocial factors and have used tools such as formalised Likert-type questionnaires and structural equation modelling of GPA and exam scores; however, to fully understand the situation, it is worth problematising how we might record the lived experiences of those students who do display levels of engagement with practical laboratory exercises.
In addressing this particular area, this research focuses on the perceptions of civil engineering students who actively engage with laboratory practicals, and seeks to find out what motivates them to pursue a course to which many of their peers seem to attach such little significance.

Being an engaged student

The central purpose of practical work is to help students link two spheres: the sphere of knowledge, which they experience from things they can see and do, and the sphere of thoughts, concepts and ideas, which they acquire from their experiences (Millar, 2009). Practicals are effective when students are able to: observe and do what was intended using the equipment and material provided; make appropriate observations which allow them to recall and describe what they did and what they observed; think about what they are doing and observing while using the concepts intended or inherent in the activity; and, at the end, discuss the activity using the specific scientific ideas which the activity aimed to develop (Tiberghien, 2000, as cited in Millar, 2009). Such students might be described as “engaged” with the learning activity and might form a proto-typical extreme, with disengaged peers as their counter. Smith, Sheppard, Johnson and Johnson (2005) note that the more engaged the student, the greater the level of knowledge acquisition and general cognitive development. This is in line with Astin’s (1984) involvement theory, which emphasises that students’ active participation in the learning process will have a positive impact on their development and learning and increase their retention rates. Astin’s theory was later reinforced by Bonwell and Eison (1991), who stated that when students participate they are able to engage in higher-order thinking tasks which foster the cognitive development of skills such as analysis, synthesis and evaluation.
Krause (2005) held a similar view: student engagement is a fundamental pillar of higher education and a vital element in the teaching and learning framework, since it encourages deeper learning outcomes. Although Biggs (1996) reports that motivated and scholarly students are generally deep learners regardless of the teaching methods used, Lucke (2010) used the principles of active learning to make the point that students who are actively engaged with their learning are more likely to understand the concepts that they are being taught. In drawing together the concepts of engagement and activity, this study is theoretically grounded in the experiential learning cycle (Kolb, 1984). The Kolb cycle presents learning as a cyclical model (see Figure 1) consisting of four sequential stages: having a concrete experience; observation and reflection, during which learners review the “who, what, when, where and why” of the experience; the formation of abstract concepts and generalisations from the experience; and, finally, planning and applying the concepts learnt through active experimentation. For Kolb (1984, p.38), “learning is a process whereby knowledge is created through the transformation of experience”, and experience is central to a learning process where knowledge results from a learner’s ability to examine and transform experience. The Kolb cycle presents learning as a continuous process in which all four aspects are necessary for effective learning (Fry, Ketteridge & Marshall, 2008).
[Figure: a cycle of four stages: concrete experiences; observation and reflection; formation of abstract concepts and generalisation; testing implications of concepts in new situations]
Figure 1.
Although the cycle can be entered at any stage, for learning to occur all stages must be followed in sequence and there should be an adequate balance across the stages. Since engineering is fundamentally an experiential discipline, the Kolb cycle is a suitable construct for understanding engineering education and is used here as a theoretical framework.

The research problem

Environmental engineering courses offered in the UWI STA civil engineering programme have clear practical components, and all students are required to participate in the compulsory laboratory exercises and fieldwork. However, informal reports from the laboratory technician, facilitator and demonstrator indicate that, over the years, many students have seemed reluctant to participate fully in hands-on laboratory exercises. Since students tend to learn more when they are actively engaged (Cross, 1987; Ericksen, 1984), and the level of learning that occurs in a classroom is directly proportional to the quality and quantity of students’ involvement throughout the educational programme (Astin, 1984), this study was born of a need to determine what led students actually to engage with laboratory exercises. The value of laboratory exercises to the engineering discipline was not in question here, since this study was not intended to evaluate the effectiveness of engineering educators who use laboratory practicals for teaching. The concern was whether engaged students saw the link between practical work and their ultimate goals, interests and concerns, which will become the focus of their practice as engineers. In seeking to understand what engaged students actually get from the experience of undertaking hands-on practical work, this research considered whether students who actively participated in laboratory practicals felt that their learning experience was enhanced because of this engagement.
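As an illustrative aside, the strict sequencing of the Kolb cycle used here as a theoretical framework can be sketched in a few lines of Python. The stage names follow Figure 1; the function itself is purely illustrative and not part of the study:

```python
# The four sequential stages of the Kolb (1984) experiential learning cycle.
KOLB_STAGES = [
    "Concrete Experience",         # having a real experience
    "Reflective Observation",      # reviewing the who/what/when/where/why
    "Abstract Conceptualisation",  # forming new concepts from the experience
    "Active Experimentation",      # testing the concepts in new situations
]

def next_stage(stage: str) -> str:
    """Return the stage that follows `stage`.

    The cycle can be entered at any stage, but the stages must be
    followed in sequence, wrapping from the last back to the first.
    """
    i = KOLB_STAGES.index(stage)
    return KOLB_STAGES[(i + 1) % len(KOLB_STAGES)]
```

For example, `next_stage("Active Experimentation")` returns `"Concrete Experience"`, reflecting the continuous, wrapping nature of the cycle.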
Research question

What are “engaged” students’ perceptions of the benefits of hands-on laboratory work?

Methodology

Context

This study was carried out in the Department of Civil and Environmental Engineering at UWI STA over two consecutive semesters of the 2013/2014 academic year. Civil engineering is a main branch of engineering offered by UWI STA, and laboratory practicals are integral to most of the courses in the civil engineering curriculum. Additionally, in their final semester, each student is required to participate in a major practical research project, which may include a hands-on laboratory component. It was within this context that the research took place.

Participants

The participants in this study were final year students. Final year students in the three-year civil engineering programme have the greatest experience of laboratory work, and this is likely to increase the validity of their reflections. From the target population of 90 students registered for an environmental engineering course in the programme, ten were identified by the laboratory technician as being actively engaged with projects that involved major hands-on laboratory work. The definition of engagement here is based on the criteria established by Tiberghien (2000, as cited in Millar, 2009). The identification of the participants was based on pragmatic sampling procedures, in that individuals were drawn from the set of all final year students studying the civil engineering course. The selection of the ten participants was therefore based on insider information balanced against the level of autonomy shown by students (Davies, 2008) and the types of laboratory events in which the students were involved (Millar, Tiberghien & Le Maréchal, 2002).

Data collection

The research data were collected using semi-structured, face-to-face interviews. Each participant was interviewed once in an informal, private setting.
Since the data collection was semi-structured, questions provided the impetus for an informal conversation between the researcher and participant. During the interviews, participants were asked to reflect on their participation in laboratory-based activities, their learning, and the value of the laboratory practicals to their entire learning experience. In a few instances some prompting and probing were needed during the interview and, at the end, each participant was allowed to expand on the issues and to discuss any additional concerns. Participants were apprised of the objectives of the exercise as well as how the data would be used.

Analytical tools

The data produced during the interviews were analysed using two tools: an holistic overview and a template analysis. Since the data were qualitative in nature, these approaches focused on discovering the meaning within the text itself. The holistic overview pulls together all the data and attempts to find a global message that is sympathetic to all ten participant perspectives. The template analysis was theoretically grounded in Kolb’s experiential learning theory (Kolb, 1984) and was used to identify trends within the data, with the four stages of the Kolb cycle used to code the data. Miles and Huberman (1994) report that coding qualitative data can reduce data overload and can help clarify key responses. In this regard, the four stages of the Kolb cycle were used to thematically code the interview data so as to highlight frequencies of occurrence. Adopting this approach allowed the student perspective to emerge. Table 1 illustrates the four stages of the Kolb cycle in relation to the template codes applied and the indicators that guided the application of these codes.
Table 1. Relating the Kolb cycle to the template coding process

Term                        Code  Indicator
Concrete experience         CE    What students actually do in the practicals (accounts of having experiences)
Reflective observation      RO    What students learn when they perform particular tasks (reflections on experience)
Abstract conceptualisation  AC    What students actually learn from their reflections (accounts that try to theorise experience)
Active experimentation      AE    How ‘new’ knowledge is applied in new situations (accounts that show the ‘testing’ of new theories)

Ethical considerations

Informal approval was obtained from the Head of Department to interview the participants. Prior to the commencement of the interview process, each participant was advised of the purpose of the exercise, the degree of confidentiality placed on all information shared, the voluntary status of their participation, their freedom to choose not to respond to any question and their guaranteed anonymity. Identification information beyond a coded name (initials only) was not required and therefore not solicited. All participants gave their informed consent.

Limitations of the study

As with all research, this study was limited by time and timing; sample dynamics; group dynamics; the kinds of questions asked; availability of participants; the cooperation of others; facilities and equipment; and ethical considerations. Since this research examines a unique situation, these factors are not limitations in the true sense of the word but boundaries within which the study takes place.

Data and data analysis

The interview data are reported here following the two data analysis tools: (1) an holistic overview of responses, and (2) a priori coding using codes drawn from the Kolb cycle.

Holistic overview of responses

Generally, participants said that, through laboratory work, they learnt technical skills in addition to project management, decision making, time management and team work.
KB said, “I learnt a lot, time management, the correct way to use equipment and machinery; the ability to use what I was not taught in the classroom; teamwork and the use of proper protocols and procedures”, and RM said, “I have learnt that decision making needs careful consideration - that a project supervisor needs to be technically knowledgeable and that it is very important to understand the culture of an organisation”. Conversely, one participant said that he had not learnt anything new. Even though there were issues regarding whether participants “liked” doing laboratory work, and two students reported that they only did laboratory work because they “had to”, most of the participants recognised that the laboratory exercises provided an important hands-on link to theory. VP said, “As I progressed I saw the purpose as providing a practical link to the theory”, and RM said, “As much as I did not like having to do labs I now have a better understanding of the theory and I could see all the links”. Two participants reported that the laboratories had no real value, as much of the material learnt was lost in the transition from text to practice, and two thought that fieldwork was more valuable than laboratory work. However, four students believed that all the laboratory work done over the three years was important, and six felt that final year laboratories were particularly relevant. It is interesting that these responses come from students who were deemed to be “engaged”, yet they show somewhat negative conceptualisations of laboratory work. All of the participants said that they would apply the technical skills they had learnt. Skills such as communication, project management and team work were also part of the learning experience for some participants, with FP reporting, “I am now able to link all the theory to the practical aspects and that has enhanced my knowledge significantly.
I now have a working knowledge of correct techniques and it would be nice to be able to use all that I have learnt in my practice”, and VJ responding, “I will always use what I learned as a check to make sure that I am on the right track. I will always focus, most of all, on getting accurate test results”. Here we can see some of the practical benefits drawn from the laboratory work. It might be that these students, whilst not particularly drawn to laboratory work, see its value in the long run, whereas students who choose not to do laboratory work either cannot see such benefits or feel that the effort outweighs the outcome.

Participants were asked to suggest improvements to laboratory exercises that might support future student development. Class size was an issue for four of the participants, and three suggested the use of technology in the laboratories as a solution. Five participants suggested that new equipment and an infrastructure upgrade were needed; for example, SG thought that “most of the lab equipment used was out dated and needed upgrading”. Four participants had issues with the programme structure in terms of workload and the time allocated to develop and submit reports. Six participants thought that the lectures needed to be changed and that teaching methods needed improvement in order to provide more effective coordination between the theory and the practicals. This position was emphasised by DT, who reported that, “Theory should be coordinated with lab exercises and each topic should include a field trip or site visit. Lecturers need to inspire students to get excited about topics… Lack of interest in theory leads to lack of interest in labs”. Overall, the picture painted here is rather bleak.
In trying to understand student perspectives on the link between practical work and their future goals, we can see that those who are deemed to be “engaged” students have many issues with aspects of laboratory work, but that they are able to balance long-term outcomes against short-term effort and decide that, despite the challenges, they should apply themselves.

Analysis using a priori coding

In trying to understand engaged students’ perspectives on the value of laboratory work, the four aspects of the Kolb cycle were used to code the qualitative responses. In applying the template derived from the Kolb cycle (see Table 2), we can see that the part of the cycle that was most coded was reflective observation (RO).

Table 2. Aspects of data coded using the Kolb cycle template

Template code                    No. of codings
Concrete experience (CE)         30
Reflective observation (RO)      73
Abstract conceptualisation (AC)  27
Active experimentation (AE)       7

There were 73 aspects of data coded as RO, with some participants making strong statements about the value of doing labs, such as “I found that when I put myself into my lab reports...I got the highest marks” and “practical exercises made the course more interesting and much easier for me to recall information”. One student expressed the view that “labs had no real value because success as a student is all about recall rather than the application of any concepts”. All of the participants treated their lab experience as one event by making reference to “doing labs”. Generally, responses coded as concrete experience (CE) varied from “dislike” to “not seeing the point of labs”; only one participant said, “I like all aspects of labs”. In terms of abstract conceptualisation (AC), one participant said, “I can relate to all I did before”. Concrete experiences ranged from the acquisition of various skills, such as technical or theory application, to decision making, teamwork skills and time management.
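As an illustrative aside, the kind of frequency count reported in Table 2 can be sketched as a simple tally over coded interview extracts. The extracts below are hypothetical stand-ins, not the study’s actual data; the code labels follow Table 1:

```python
from collections import Counter

# Template codes derived from the Kolb cycle (see Table 1).
TEMPLATE_CODES = ("CE", "RO", "AC", "AE")

def tally_codings(coded_extracts):
    """Count how often each template code was applied, reporting
    every code even when it was never applied."""
    counts = Counter({code: 0 for code in TEMPLATE_CODES})
    counts.update(c for c in coded_extracts if c in TEMPLATE_CODES)
    return dict(counts)

# Hypothetical coded extracts (not the study's data).
example = ["RO", "RO", "CE", "AC", "RO", "AE", "CE"]
print(tally_codings(example))  # {'CE': 2, 'RO': 3, 'AC': 1, 'AE': 1}
```

Tallying against a fixed code list in this way makes absences visible - a code with zero occurrences (as active experimentation nearly was here) still appears in the summary.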
One student said that he undertook his project because he wanted the experience of working with a particular professor. Most of the responses were coded as showing reflective observation (RO), with participants describing their laboratory experience in terms such as “useful” and “applicable” - language that betrays an idea of practical work as a conduit to future activity. One participant reported that their laboratory project “will be useful to the civil engineering industry”, and another said, “I now have the confidence to be able to communicate with others in the field”. Most of the participants had ideas of areas in which they could “try out” the many skills they had learnt, through active experimentation. Applications varied from the technical (beach morphology or water treatment) to the use of Gantt charts or a focus on obtaining accurate test results. A notable example of abstract conceptualisation came from a participant who said, “In my project, I actually saw the application of the practical aspects” - a statement that shows theory/practice integration. In all, the coded data suggest that these engaged students did not focus much on activity itself but preferred to use the laboratory work as a means of reflecting on the value it offered them in the long term. The high incidence of data coded as showing RO suggests that, for these students, there was a great deal of focus on how laboratory work made them better and more employable engineers.

Discussion

The research found that, generally, those students who were identified as being “engaged” in civil engineering laboratory projects recognised the value and importance of practicals to their learning, and that they felt their active participation in these exercises would have a positive impact on their overall performance. The key benefit of taking part in laboratory work, however, was that they could use the skills learned to make them stronger practitioners in their future engineering careers.
The emphasis was not on laboratory work per se but on the application of the skills learned through reflecting on laboratory experiences. Although the learning experience appeared to be different for each participant, there were many similarities in terms of gaining technical and professional skills and abilities. Participants also seemed to agree with Tobin (1990) that it is necessary to use laboratory practicals to teach some engineering concepts, since the laboratory activities allow students to learn with understanding and at the same time engage in a process of knowledge construction; for example, one participant stated that, “Lab exercises should be coordinated with theory and each topic should include a field trip or site visit”.

The Kolb cycle suggests that an effective learning process requires learners to go through all four stages of the cycle in sequence, and it proved applicable to this research. Ideally, for practical exercises, the learning process may be represented as shown in Figure 2:

[Figure 2. The flow of the learning process: CE → RO → AC → AE → CE]

Whilst a cursory look at this model suggests a consistent and regular movement through the learning process, it was found that the engaged UWI STA civil engineering students appear to spend most time engaged in reflection (RO) rather than practical activity (CE) and theory building (AC). It was also found that students gave very little focus to testing knowledge in new situations (AE). Here we can see that these “engaged” students may perceive the benefits of hands-on laboratory work in terms of future application rather than laboratory work per se. Ideally, the reflective stage of the process (RO) should start after having the experience, in order for there to be something to reflect upon.
This might imply that if the physical experience is stymied then reflection, and by extension the student’s learning, may be limited; however, the students in this research seemed able to project future outcomes with minimal present effort.

Conclusion

This study determined that it was unclear whether the learning experiences of “engaged” undergraduate civil engineering students at UWI STA who were doing laboratory projects that required hands-on work were enhanced in themselves. The lack of deliberation around concrete experiences and the small number of extracts coded as showing active experimentation suggest that these students were not engaged with laboratory work as a learning experience in itself, but as a means of developing their potential as future employees. It did not matter to the participants whether they enjoyed laboratory work; what was important was that they could use practical work as a means to understand the links between theory and practice, and that they could draw from laboratories technical skills such as project management, decision making, time management and team work that would strengthen their future careers.

References

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25(4), 297-308.
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347-364.
Bonwell, C. C. & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. Washington, DC: Graduate School of Education and Human Development, George Washington University.
Cross, K. P. (1987). Teaching for learning. AAHE Bulletin, 39, 3-7.
Davies, C. (2008). Learning and teaching in laboratories. Higher Education Academy Engineering Subject Centre, Loughborough University.
Ericksen, S. C. (1984). The essence of good teaching. San Francisco: Jossey-Bass.
Fry, H., Ketteridge, S. & Marshall, S. (Eds.). (2008).
A handbook for teaching and learning in higher education: Enhancing academic practice. New York: Routledge.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development (Vol. 1). Englewood Cliffs, NJ: Prentice-Hall.
Krause, K. (2005). Engaged, inert or otherwise occupied: Deconstructing the 21st century undergraduate student. Proceedings of Sharing Scholarship in Learning and Teaching: Engaging Students. Cairns, Australia: James Cook University.
Lashari, T. A., Alias, M., Akasah, Z. A. & Kesot, M. J. (2012). An affective-cognitive teaching and learning framework in engineering education. Journal of Engineering Education, 1(1), 11-24.
Lucke, T. (2010). Using engineering practicals to engage students (Doctoral dissertation).
Maehr, M. L. & Meyer, H. A. (1997). Understanding motivation and schooling: Where we’ve been, where we are, and where we need to go. Educational Psychology Review, 9, 371-408.
Malan, S. P. T. (2000). The “new paradigm” of outcomes-based education in perspective. Journal of Family Ecology and Consumer Sciences/Tydskrif vir Gesinsekologie en Verbruikerswetenskappe, 28(1), 22-28.
Miles, M. B. & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.
Millar, R. (2009). Analysing practical activities to assess and improve effectiveness: The Practical Activity Analysis Inventory (PAAI). York: Centre for Innovation and Research in Science Education, University of York.
Millar, R., Tiberghien, A. & Le Maréchal, J.-F. (2002). Varieties of labwork: A way of profiling labwork tasks. In D. Psillos & H. Niedderer (Eds.), Teaching and learning in the science laboratory. Dordrecht: Kluwer.
Pekrun, R., Goetz, T., Frenzel, A. C., Barchfeld, P. & Perry, R. P. (2011). Measuring emotions in students’ learning and performance: The Achievement Emotions Questionnaire (AEQ). Contemporary Educational Psychology, 36(1), 36-48.
Pintrich, P. R. & Schunk, D. H. (2013).
Motivation in education: Theory, research, and applications (4th ed.). Englewood Cliffs, NJ: Merrill.
Salanova, M., Schaufeli, W. B., Martínez, I. & Bresó, E. (2010). How obstacles and facilitators predict academic performance: The mediating role of study burnout and engagement. Anxiety, Stress, & Coping, 23(1), 53-70.
Schmitt, N., Oswald, F. L., Friede, A., Imus, A. & Merritt, S. (2008). Perceived fit with an academic environment: Attitudinal and behavioral outcomes. Journal of Vocational Behavior, 72(3), 317-335.
Schreiner, L. A., Noel, P., Anderson, E. & Cantwell, L. (2011). The impact of faculty and staff on high-risk college student persistence. Journal of College Student Development, 52(3), 321-338.
Smith, K. A., Sheppard, S. D., Johnson, D. W. & Johnson, R. T. (2005). Pedagogies of engagement: Classroom-based practices. Journal of Engineering Education, 94(1), 87-101.
Tobin, K. (1990). Research on science laboratory activities: In pursuit of better questions and answers to improve learning. School Science and Mathematics, 90(5), 403-418.