iED 2013 PROCEEDINGS
ISSN 2325-4041
ImmersiveEducation.org

EDITORS:
Aaron E. Walsh, Immersive Education Initiative
Saadia Khan, Teachers College, Columbia University
Richard Gilbert, Loyola Marymount University (LMU)
Mike Gale, Boston College

COPYRIGHT, LICENSE AND USAGE NOTICE

Copyright © 2013, All Rights Reserved. Copyright held by the individual Owners/Authors of the constituent materials that comprise this work. Exclusive publication and distribution rights held by the Immersive Education Initiative. Copyrights for components of this work owned by others than the Author(s) must be honored. Abstracting with credit is permitted.

OPEN ACCESS USES: Permission to make digital or hard copies of this work for personal or classroom use, at no cost (no fee), is granted by the Immersive Education Initiative under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. Conditions and terms of the license, in human-readable and legal form, are defined at http://creativecommons.org/licenses/by-nc-nd/4.0/ and summarized below:

• Attribution: You must give appropriate credit and provide a link to the license. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
• Non-Commercial: You may not use the material for commercial purposes.
• No Derivatives: You may not remix, transform, or build upon the material, or otherwise alter or change this work in any way.

ALL OTHER USES: Specific permission and/or a fee is required to copy otherwise, to republish, and/or to post or host on servers or to redistribute to lists, groups or in any other manner.
Request permissions from the Director of the Immersive Education Initiative at http://ImmersiveEducation.org/director

About the Immersive Education Initiative

The Immersive Education Initiative is a non-profit international collaboration of educational institutions, research institutes, museums, consortia and companies. The Initiative was established in 2005 with the mission to define and develop standards, best practices, technology platforms, training and education programs, and communities of support for virtual worlds, virtual reality, augmented and mixed reality, simulations, game-based learning and training systems, and fully immersive environments such as caves and domes. Thousands of faculty, researchers, staff and administrators are members of the Immersive Education Initiative, who together serve millions of academic and corporate learners worldwide.

Chapters support the rapid and continued growth of Immersive Education throughout the world, and constitute the geographically distributed structure of the organization through which regional and local members are supported and enriched. Chapters organize officially sanctioned Summits, Days, workshops, collaborations, seminars, lectures, forums, meetings, public service events and activities, technical groups, technical work items, research, and related activities.

About Immersive Education Summits

Immersive Education (iED) Summits are official Immersive Education Initiative conferences organized for educators, researchers, administrators, business leaders and the general public. iED Summits consist of presentations, panel discussions, break-out sessions, demos and workshops that provide attendees with an in-depth overview of immersion and the technologies that enable immersion.
iED Summits feature new and emerging virtual worlds, game-based learning and training systems, simulations, mixed/augmented reality, fully immersive environments, immersive learning and training platforms, cutting-edge research from around the world, and related tools, techniques, technologies, standards and best practices.

Speakers at iED Summits have included faculty, researchers, staff, administrators and professionals from Boston College, Harvard University (Harvard Graduate School of Education; Berkman Center for Internet and Society at Harvard Law School; Harvard Kennedy School of Government; Harvard University Interactive Media Group), Massachusetts Institute of Technology (MIT), MIT Media Lab, The Smithsonian Institution, UNESCO (United Nations Educational, Scientific and Cultural Organization), Federation of American Scientists (FAS), United States Department of Education, National Aeronautics and Space Administration (NASA), NASA Goddard Space Flight Center, IBM, Temple University, Stanford University, Internet 2, Synthespian Hollywood Studios, Cornell University, Loyola Marymount University, Kauffman Foundation, Amherst College, Teachers College, Columbia University, Boston Library Consortium, Stratasys Ltd., Duke University, Oracle, Sun Microsystems, Turner Broadcasting, Open Wonderland Foundation, Gates Planetarium, Carnegie Mellon University, Vertex Pharmaceuticals, Intel Corporation, University of Maryland College Park, Southeast Kansas Education Service Center, South Park Elementary School, University of Colorado Boulder, Boston Public Schools (BPS), Berklee College of Music, Computerworld, Stanford University School of Medicine, University of Notre Dame, Bill & Melinda Gates Foundation, Emerson College, Carnegie Museum of Natural History, Imperial College London (UK), University of Zurich (Switzerland), realXtend Foundation (Finland), The MOFET Institute (Israel), Keio University (Japan), National University of Singapore (NUS), Coventry University (UK), Curtin University (Australia), Giunti Labs (Italy), European Learning Industry Group, Open University (UK), Universidad Carlos III de Madrid (Spain), University of Oulu (Finland), Royal Institute of Technology (Sweden), École Nationale Supérieure des Arts Décoratifs (EnsAD; France), Interdisciplinary Center Herzliya (Israel), Graz University of Technology (Austria), University of Ulster (Ireland), National Board of Education (Finland), Eindhoven University of Technology (Netherlands), University of the West of Scotland (UK), University of St. Andrews (Scotland), University of Essex (UK), Universidad Complutense de Madrid (Spain), Ministry of Gender, Health and Social Policies (Andalucía, Spain), Manchester Business School (UK), City of Oulu (Finland), University of Vienna (Austria), University of Barcelona (Spain), Government of New South Wales (Australia), Eötvös Loránd Tudományegyetem (Hungary), Universidade Federal do Rio Grande do Sul (UFRGS; Brazil), Universidad Politécnica de Madrid (Spain), and many more world-class organizations.

iED 2013

The world's leading experts in immersion and immersive technology convened June 03-06 at Boston College (USA) for the 8th Immersive Education Summit. Building on the success of the previous seven years of Immersive Education conferences, the four-day iED 2013 event featured three tracks (Practitioner, Research and Business) and an entire day dedicated to hands-on workshops. Topics of interest addressed through the iED 2013 Call for Proposals (CfP) included:

• Learning Games, Serious Games and Game-based Teaching and Training
• Game-based Learning and Training Technologies/Systems (e.g., PC, Mac, XBox, Playstation, Nintendo, iPad, etc.)
• Full-body Immersion (e.g., caves, domes, natural interfaces, XBox 360 Kinect, etc.)
• Augmented Reality (AR)
• Mixed Reality (MR)
• Head-mounted Displays (HMDs)
• Haptics, Natural Interfaces & Touch Interfaces (e.g., iPad, iPhone, Tablets, Android, etc.)
• Mobile teaching and training apps (e.g., iPad, iPhone, Tablets, Android devices, etc.)
• Robotics
• 3D Printing, Design, Rapid Prototyping (e.g., MakerBot, Shapeways, Materialise, Sculpteo, etc.)
• Computational Thinking and Learning Systems (e.g., Scratch, Alice, Greenfoot, etc.)
• Stealth Learning
• Virtual Worlds
• Virtual Reality (VR)
• Simulation
• Psychologically Beneficial Immersive Environments as defined by iED PIE.TWG
• Business in the Age of Immersive Education
• Pedagogy in the Age of Immersive Education
• Research in the Age of Immersive Education
• Assessment in the Age of Immersive Education

Program Committee and Reviewers

Colin Allison, University of St Andrews; Chair, Immersive Education Virtual Worlds Group (VIR.TWG)
Barbara Avila, Universidade Federal do Rio Grande do Sul
Maria Badilla, Universidad Católica de la Santísima Concepción
Melissa A. Carrillo, Smithsonian Latino Center; Chair, Mid-Atlantic USA Chapter of Immersive Education (iED Mid-Atlantic)
Henry B.L. Duh, University of Tasmania; Chair, Australian Chapter of Immersive Education (iED Australia)
Kai Erenli, University of Applied Sciences Vienna; Chair, Immersive Education Virtual Worlds Group (VIR.TWG)
Richard Gilbert, Loyola Marymount University (LMU); Chair, Western USA Chapter of Immersive Education (iED West)
Daniel Green, Oracle Corporation; Chair, MidAmerica Chapter of Immersive Education (iED MidAmerica)
Christian Guetl, Graz University of Technology; European Chapter of Immersive Education (iED Europe) Board Member
Sherry Hall, Cognizant Technology Solutions; Western USA Chapter of Immersive Education (iED West) Board Member
Jeffrey Jacobson, PublicVR; Chair, Immersive Education Full/Augmented/Mixed Reality Group (FAM.TWG)
Saadia Khan, Teachers College, Columbia University; Chair, New York Chapter of Immersive Education (iED NY)
Daniel Livingstone, University of the West of Scotland; European Chapter of Immersive Education (iED Europe) Board Member
Pasi Mattila, Center for Internet Excellence (CIE); European Chapter of Immersive Education (iED Europe) Board Member
Barbara Mikolajczak, Boston College; Chair, Immersive Education Initiative K-12 and Scratch Groups
Aaron E. Walsh, Immersive Education Initiative; Boston College
Rich White, Greenbush Education Service Center; Chair, MidAmerica Chapter of Immersive Education (iED MidAmerica)
Fatimah Wirth, Georgia Institute of Technology
Colin Wood, New South Wales (NSW) Department of Education and Communities; Chair, Australian Chapter of Immersive Education (iED Australia)
Nicole Yankelovich, WonderBuilders, Inc.; Open Wonderland Foundation
Jerome Yavarkovsky, Journal of Immersive Education (JiED); New York Chapter of Immersive Education (iED NY) Board Member
Steven Zhou, National University of Singapore; Chair, Singapore Chapter of Immersive Education (iED Singapore)

Editors

Aaron E. Walsh: Director, Immersive Education Initiative; Faculty, Boston College
Saadia Khan: Faculty, Teachers College, Columbia University; Chair, New York Chapter of the Immersive Education Initiative (iED NY)
Richard Gilbert: Faculty, Loyola Marymount University (LMU); Chair, Western USA Chapter of the Immersive Education Initiative (iED West)
Mike Gale: Staff, Immersive Education Initiative

TABLE OF CONTENTS

KEYNOTES

The Literacy Moonshot: Learning to Read Beyond the Reach of Schools ....... 001
  Cynthia Breazeal
The Smithsonian Latino Virtual Museum: Crossing Borders/Mixed Realities ....... 004
  Melissa A. Carrillo
Sowing The Seeds for a More Creative Society ....... 006
  Mitchel Resnick
The Rise of 3D Printing ....... 008
  Bruce Bradshaw

PAPERS: Research, Technical and Theoretical Papers

PolySocial Reality for Education: Addressing the Vacancy Problem with Mobile Cross Reality ....... 010
  Colin Allison, CJ Davies, Alan Miller
Enhancing Learning and Motivation through Surrogate Embodiment in MUVE-based Online Courses ....... 030
  Saadia A. Khan
Promoting Engagement and Complex Learning in OpenSim ....... 042
  L. Tarouco, B. Avila, E. Amaral
From Metaverse to MOOC: Can the Cloud meet Scalability Challenges for Open Virtual Worlds? ....... 051
  Colin Allison, Iain Oliver, Alan Miller, C.J. Davies, John McCaffery
Virtual World Access and Control: A Role Based Approach to Securing Content In-World ....... 060
  C. Lesko, Y. Hollingsworth, S. Kibbe
Formative Assessment in Immersive Environments: A Semantic Approach to Automated Evaluation of User Behavior in Open Wonderland ....... 070
  Joachim Maderer, Christian Gütl, Mohammad AL-Smadi
A Technological Model for Teaching in Immersive Worlds: TYMMI’s Project ....... 084
  M.G. Badilla, C. Lara
Towards a Theory of Immersive Fluency ....... 099
  Nicola Marae Allain
Immersive Learning in Education ....... 109
  Olinkha Gustafson-Pearce, Julia Stephenson
Making it Real: The Cosmic Background Radiation Explorer App ....... 123
  R. Blundell
Classroom Group-directed Exploratory Learning through a Full-body Immersive Digital Learning Playground ....... 134
  Chin-Feng Chen, Chia-Jung Wu, De-Yuan Huang, Yi-Chuan Fan, Chi-Wen Huang, Chin-Yeh Wang, Gwo-Dong Chen
The Vari House: Digital Puppeteering for History Education ....... 146
  J. Jacobson, D. Sanders

OUTLIERS: Novel Late-Breaking Research, Technology and Techniques

Human Identity in the New Digital Age: The Distributed Self and the Identity Mapping Project ....... 170
  Richard L. Gilbert
iAchieve, Empowering Students through Interactive Educational Data Exploration ....... 179
  Josephone Tsay, Zifeng Tian, Ningyu Mao, Jennifer Sheu, Weichuan Tian, Mohit Sadhu, Yang Shi

POSTERS

Mobile Assignment at the Museum: How a Location-based Assignment can Enhance the Educational Experience ....... 192
  Amir Bar
Serious Games: A Comprehensive Review of Learning Outcomes ....... 199
  Katherine Petersen
Testing Manual Dexterity in Dentistry using a Virtual Reality Haptic Simulator ....... 216
  G. Ben Gal, A. Ziv, N. Gafni, E. Weiss

PRESENTATIONS, DEMOS AND WORKSHOPS

Presentations ....... 220
Demos ....... 220
Workshops ....... 220

KEYNOTE

The Literacy Moonshot: Learning to Read Beyond the Reach of Schools

Cynthia Breazeal 1,2,3

Affiliations:
1 Associate Professor of Media Arts and Sciences, MIT
2 Founding Director, Personal Robots Group at MIT Media Lab
3 Senior Advisor, Journal of Immersive Education (JiED)

In her keynote address, The Literacy Moonshot: Learning to Read Beyond the Reach of Schools, Cynthia will share a personal and provocative example of the power of child-driven learning with iED 2013 attendees. She will present findings and learning outcomes from the first year of deployment of tablet computers in two remote Ethiopian villages, where children live beyond the reach of school and the villages are entirely illiterate. Her ground-breaking work, conducted in collaboration with Maryanne Wolf (Tufts University) and Robin Morris (Georgia State University), harnesses the power of mobile technology to foster literacy learning for children who live where school is not an option and no adults can teach them how to read.

Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology, where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and Human-Robot Interaction. She has authored the book “Designing Sociable Robots” and has published over 100 peer-reviewed articles in journals and conferences on the topics of autonomous robotics, artificial intelligence, human-robot interaction, and robot learning. She serves on several editorial boards in the areas of autonomous robots, affective computing, entertainment technology and multi-agent systems. She is also a member of the advisory board for the Science Channel and an Overseer at the Museum of Science, Boston.
In 2013 she was appointed Senior Advisor of the Journal of Immersive Education (JiED), the publication of record for the international Immersive Education Initiative.

Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially intelligent, interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice. She has developed some of the world’s most famous robotic creatures, ranging from small hexapod robots, to embedding robotic technologies into familiar everyday artifacts, to creating highly expressive humanoid robots and robot characters. Her recent work investigates the impact of social robots on helping people of all ages to achieve personal goals that contribute to quality of life, in domains such as physical performance, learning and education, health, and family communication and play over distance.

Dr. Breazeal is recognized as a prominent young innovator. She is a recipient of the National Academy of Engineering’s Gilbreth Lecture Award, Technology Review’s TR35 Award, and TIME magazine’s Best Inventions of 2008. She has won numerous best paper and best technology invention awards at top academic conferences. She has also been awarded an ONR Young Investigator Award, and was honored as a finalist in the National Design Awards in Communication.

The Literacy Moonshot: Learning to Read Beyond the Reach of Schools

Children are the most precious natural resource of any nation. Their ability to read represents one of the single most important skills for them to develop, if they are to create a foundation for the rest of learning and to thrive as individuals and global citizens. Literacy opens the mind of a child to a potential lifetime of knowledge in all its varieties, personal growth, and critical and creative thinking.
This is the positive side of the literacy equation; the insidious, converse side is that literally millions of children will never learn to read, with consequences that are evident around the world. It is estimated that around 67 million children live in poor, remote areas where there is no access to schools and where everyone around them is illiterate. There are at least another 100 million children who live where schooling is so inadequate that they also fail to learn to read in any meaningful manner. There are, and always will be, places in every country in the world where good schools will not exist and good teachers will not want to go. Teacher training, therefore, can be no more than an important but insufficient solution to this problem, particularly in the most remote areas. Even in developed countries such as the United States, literacy rates, especially in areas of poverty, are unacceptably low. We need a fundamentally different approach to this set of issues.

Advances in new, affordable mobile computer technologies, the growing ubiquity of Internet connectivity with cloud computing and big data analytics, and modern advances in cognitive neuroscience that reveal how the brain learns to read now allow us to pose this transformative question: can children learn to read together using digital tablets without access to schools, and thereafter read to learn?

During my iED 2013 keynote address I will present both the vision and results to date of our team's work in the pursuit of this provocative question. We focus on our active deployment of tablet computers in two villages in remote areas of Ethiopia where children live beyond the reach of school and the entire village is illiterate. This is a story of technological innovation, community, and the power of child-driven learning.
What we can learn from this endeavor has the potential to help us think differently about education in both formal and informal learning environments, even in the most extreme contexts.

Cynthia Breazeal
Associate Professor of Media Arts and Sciences, MIT
Founding Director, Personal Robots Group at the Media Lab

KEYNOTE

The Smithsonian Latino Virtual Museum: Crossing Borders / Mixed Realities

Melissa A. Carrillo 1,2,3

Affiliations:
1 Director of New Media & Technology for the Smithsonian Latino Center
2 Co-chair, Immersive Education Initiative Mid-Atlantic USA Chapter (iED Mid-Atlantic)
3 Co-Chair, Immersive Education Initiative Libraries and Museums Technology Working Group (LAM.TWG)

In her keynote address, The Smithsonian Latino Virtual Museum: Crossing Borders/Mixed Realities, Melissa will share with iED 2013 attendees her experiences developing and disseminating immersive learning strategies from within a transmedia virtual museum model. She will share key milestones and current trends in digital curation from the vast body of work established under the Smithsonian Latino Virtual Museum (LVM). As the virtual museum’s creative director, Melissa developed the core conceptual framework for the project and orchestrated the development of a strategic plan in 2007 with a team of media experts from the Smithsonian Institution and external partners. The initiative is in its sixth year of development and has emerged as a bilingual virtual museum model for cross-platform accessible content, created to meet a wide range of user experiences and related contexts. Her vision and implementation of a virtual museum model as a transmedia experience has contributed to the Smithsonian's efforts in meeting 21st Century challenges and needs for reaching diverse audiences in innovative and meaningful ways.
During her iED 2013 keynote address Melissa will be joined by LVM partners in STEM: The University of Texas at El Paso and the Smithsonian Environmental Research Center (SERC). Other guests include Dr. Juana Roman and transmedia artist Stacey Fox, members of the LVM Advisory Council.

Melissa A. Carrillo is the Director of New Media & Technology for the Smithsonian Latino Center (SLC) and co-chair of the Immersive Education Initiative’s Libraries and Museums Technology Working Group (LAM.TWG). Ms. Carrillo has directed and designed interactive and immersive bilingual online experiences for the SLC for thirteen years, encompassing the representation and exploration of Latino cultural identity through the lens of new media and emerging technologies. Ms. Carrillo spearheads the Smithsonian Latino Virtual Museum (LVM), an online and immersive transmedia virtual museum model for exploring and representing cultural identity in the age of the social web. These shared experiences augment and showcase Smithsonian Latino collections and resources, opening new possibilities for distance learning applications and reaching remote target audiences.

KEYNOTE

Sowing The Seeds for a More Creative Society

Mitchel Resnick 1,2

Affiliations:
1 Professor of Learning Research, MIT Media Lab
2 Director, Lifelong Kindergarten Group, MIT Media Lab

In his iED 2013 keynote address, Sowing The Seeds For A More Creative Society, Mitchel will discuss new technologies and activities designed specifically to help children learn to think creatively, reason systematically, and work collaboratively so that they are prepared for life in the Creative Society. Mitchel's keynote will have a special focus on Scratch, an authoring tool and online community developed by the MIT Lifelong Kindergarten research group that he directs.
In today's rapidly changing society, people must continually come up with creative solutions to unexpected problems. More than ever before, success and happiness are based not on what you know, but on your ability to think and act creatively. In short, we are living in the Creative Society. But there is a problem: most activities in children's lives, whether lessons in the classroom or games in the living room, are not designed to help children develop as creative thinkers.

In my iED 2013 keynote address I will discuss new technologies and activities designed specifically to help children learn to think creatively, reason systematically, and work collaboratively, so that they are prepared for life in the Creative Society. I will focus particularly on Scratch, an authoring tool and online community that enables children (ages 8 and up) to create their own interactive stories, games, animations, and simulations, and share their creations with one another online (http://scratch.mit.edu). In the process, children develop skills and ways of thinking that are essential for becoming active participants in the Creative Society.

Mitchel Resnick
Professor of Learning Research, MIT Media Lab
Director, Lifelong Kindergarten Group, MIT Media Lab

Mitchel Resnick, Professor of Learning Research at the MIT Media Lab, develops new technologies and activities to engage people (especially children) in creative learning experiences. His Lifelong Kindergarten research group developed ideas and technologies underlying the LEGO Mindstorms robotics kits and the Scratch programming software, used by millions of young people around the world. He also co-founded the Computer Clubhouse project, an international network of 100 after-school learning centers where youth from low-income communities learn to express themselves creatively with new technologies. Resnick earned a BS in physics from Princeton, and an MS and PhD in computer science from MIT.
He was awarded the McGraw Prize in Education in 2011.

iED CERTIFICATION: Scratch is an official Immersive Education (iED) learning environment for which a number of professional development (PD) iED 2013 hands-on workshops are offered for iED teacher/trainer certification. The corresponding iED Scratch Technology Working Group (SCR.TWG) is open to all attendees and iED members.

KEYNOTE

The Rise of 3D Printing

Bruce Bradshaw 1

Affiliations:
1 Director of Marketing, Stratasys

The popularity of 3D printers is on the rise across the globe. Organizations have embraced the technology in many industries, including aerospace, consumer electronics, and automotive, to name just a few. As a result, 3D printing has had a tremendous impact on education, from the world's most renowned research institutions all the way through K-12 classrooms.

There are countless examples of students at every level utilizing 3D printing to bring their ideas to life: from a FIRST Robotics team collaborating to design the next "cutting edge" robot, to a "Project Lead the Way" classroom energizing students about engineering, all the way to MIT's Media Lab printing art designs for the Museum of Modern Art. In his keynote address, Bruce Bradshaw, Director of Marketing for the market-leading 3D printing company Stratasys, will share with iED 2013 attendees some of the most innovative real-life case studies that are being taught today in classrooms using 3D printing. In addition, attendees will also learn about the differing technologies of 3D printing and hear about the exciting industry trends that have brought 3D printing to the forefront of everyday media.
PAPER

PolySocial Reality for Education: Addressing the Vacancy Problem with Mobile Cross Reality

Authors: Colin Allison 1*, CJ Davies 1†, Alan Miller 1‡

Affiliations:
1 School of Computer Science, University of St Andrews
* ca@st-andrews.ac.uk
† cjd44@st-andrews.ac.uk, corresponding author
‡ alan.miller@st-andrews.ac.uk

Abstract: Widespread adoption of smartphones and tablets has enabled people to multiplex their physical reality, where they engage in face-to-face social interaction, with Web-based social networks and apps, whilst emerging 3D Web technologies hold promise for networks of parallel 3D virtual environments. Although current technologies allow this multiplexing of physical reality and the 2D Web, in a situation called PolySocial Reality, the same cannot yet be achieved with 3D content. Cross Reality was proposed to address this issue; however, so far it has focused on the use of fixed links between physical and virtual environments in closed lab settings, limiting investigation of the explorative and social aspects. This paper presents an architecture and implementation that addresses these shortcomings, using a tablet and the Pangolin virtual world viewer to provide a mobile interface to a corresponding 3D virtual environment. Motivation for this project stemmed from a desire to enable students to interact with existing virtual reconstructions of cultural heritage sites in tandem with exploration of the corresponding real locations, avoiding the adverse temporal separation otherwise caused by interacting with the virtual content only within the classroom. The accuracy of GPS tracking emerged as a constraint on this style of interaction.

One Sentence Summary: This paper presents a system for student exploration of 3D environments in tandem with corresponding real locations.
Introduction

The rapid adoption of smartphones and tablets and their popularity for social interaction via the mobile Web [1] has led to people increasingly mixing their online and ‘real life’ behaviours, multiplexing traditional face-to-face social interaction with Web-based social networks and apps. The pervasive availability of these devices provides a new mechanism for people to take physical space for granted: to cerebrally occupy a Web-based location whilst their bodies are simultaneously established in a physical location [2]. The term PolySocial Reality (PoSR) has been proposed to describe these multiplexed mixed realities [3], wherein individuals interact within multiple environments [4], and to identify the extent and impact of shared and unshared experience in such situations [5].

Whilst current technologies allow PoSR involving 2D Web content to manifest, attempting the same with 3D content is marred by the ‘vacancy problem’: the inability to immerse oneself in 3D content whilst maintaining awareness of one’s physical surroundings [6]. The majority of players of popular Massively Multiplayer Online games (MMOs) wish they could spend more time playing, with over a fifth wanting to spend all of their time in game [7], and social roles and the community aspect constitute key aspects of these games’ popularity [7, 8]. Exploring approaches for achieving 3D PoSR is therefore prudent, as demand for access to 3D social environments will only increase as 3D Web technologies develop further and increasingly appeal to general social Web users and to educators in addition to gamers.

The capacity of 3D social environments to provide extensible collaborative platforms for the reconstruction of cultural heritage sites, and the potential of such reconstructions to promote understanding of and engagement with cultural heritage content in both public and classroom settings, have been demonstrated [9, 10].
This research tested various deployment scenarios, leveraging different control methodologies (traditional keyboard and mouse, Xbox controllers, and gesture recognition via Kinect) and display options (regular 24” desktop monitors, larger 40” televisions, and still larger 150” projection), along with voice interaction with actors playing the parts of historical figures. These scenarios support three deployment modes: a network of reconstructions accessible via the Internet as part of the OpenSim hypergrid; portable LAN exhibitions, where multiple computers are connected to a server via a local network, suitable for classroom use; and immersive installations combining projection and Kinect for use in museums and cultural heritage centers.

In all these scenarios a recurrent theme has been the relationship between the virtual reconstruction and the physicality of the corresponding physical site. Frequently, projects have involved interactions with the reconstruction followed by visits and tours of the physical site; however, the temporal separation between these activities makes it harder to appreciate the sometimes complex relationships between the two. To overcome this temporal separation of experiencing the virtual and the real, it is necessary for the virtual representation to be accessible in tandem at the physical site by overcoming the vacancy problem.

The cross reality concept [6, 11] was proposed as an approach to address the vacancy problem, and describes the mixed reality situation that arises from the combination of physical reality with a complete [12] 3D virtual environment. Previous cross reality experiments did not address the explorative or social elements of the paradigm, as they focused on static locations at which the two environments were linked within closed lab surroundings [2].
The project described in this paper addressed these omissions with the Pangolin virtual world viewer [13], which uses a tablet computer with location and orientation sensors to provide users with a mobile cross reality interface, allowing them to interact with 3D reconstructions of cultural heritage sites whilst simultaneously exploring the corresponding physical site.

Scope

The degree to which the real and virtual environments that constitute a cross reality system spatially relate to each other is an important design decision which largely prescribes the style of interaction of the system as a whole. If the virtual environment represents a to-scale replica of the real environment, an allusion to the ‘mirror world’ concept [14–16], then monitoring a user’s real position provides an implicit method of control for their virtual presence and allows navigation of the virtual content with no conscious manual control. This approach substantially lightens the cognitive load of maintaining a presence in a virtual environment, which is one of the main contributors to the vacancy problem. This paper presents a cross reality project in which there is a high degree of spatial relationship between the real and virtual environments, as it deals with bringing together virtual reconstructions of cultural heritage sites with their corresponding real locations. The backdrop for many of the experiments is the impressive ruins of the St Andrews cathedral, while the virtual environment is a ‘distorted’ [12] OpenSim simulation of the same location that presents a historically accurate reconstruction of the cathedral as it would have stood at the peak of its former glory [9, 17] (see Figure 1). This is a very large reconstruction, over 400m by 600m, of a complex multi-storey building featuring the cloisters as well as the Canons’ living quarters. It is a challenging reconstruction for a mobile device to render and consequently a good testing environment.
The same pioneering collaborations between computer scientists, educationalists and historians that led to the creation of the St Andrews cathedral reconstruction have also led to the creation of reconstructions of a 6th Century Spartan Basilica, Virtual Harlem (1921), Linlithgow Palace (1561), Brora Salt Pans (1599), Featherstone Fishing Station (19th century), Eyemouth Fort (1610), an Iron Age Wheel House and Caen Township (1815). These reconstructions provide a platform for interactive historical narratives, a stage for visitors to play upon and engage in both serious (and not so serious) games, both alone and with other users, and serve as a focal point for educational investigations into local history and culture [9, 18]. The reconstructions have been widely used in a range of real world educational contexts. In the formal sector they have been a vehicle for investigative research, part of degree-accredited university modules, and used in both primary and secondary education (see Figure 2 for a depiction of a typical scenario). They have also been used as the content for interactive museum installations, art installations and community groups. This has involved further collaborations with Education Scotland, Historic Scotland, SCAPE Trust, Timespan cultural center, the Museum of the University of St Andrews (MUSA), Madras College, Linlithgow Palace and Strathkiness Primary School. The project described in this paper furthers this previous work by developing an interface that allows students to explore both a physical site and its virtual reconstruction in tandem, rather than having to explore the reconstruction from a computer in the classroom and then try to relate what they have seen to a visit to the physical site at a later date.
This project, introduced in [19], developed a modified version of the Second Life viewer called Pangolin which, through the use of sensors, allows movement of the avatar and camera to be implicitly controlled by sensing the physical position and orientation of the tablet computer which the user carries and upon which the viewer executes. Figure 3 depicts the system in use at the St Andrews cathedral. This system promises to be beneficial in a number of scenarios:

1. Exploration of a cultural heritage site is augmented by the ability to navigate the 3D reconstruction, and reflection is stimulated through the close juxtaposition of the remains and an accessible interpretation.

2. Access to other Web-based media (including video, audio and social networks, with this media adding to the experience of the real exploration and the real exploration adding to the experience of the Web-based media) is organized by the reconstruction, thereby supporting further intellectual enquiry.

3. Individuals and groups benefit from interaction with remote participants who are connected to the reconstruction from a distant location. These remote visitors could be friends and family, or domain experts who could provide remote tours and disseminate domain-specific knowledge without having to travel to the corresponding physical site.

Figure 1: OpenSim reconstruction of the St Andrews cathedral.

Figure 2: Madras College students interacting with the St Andrews cathedral OpenSim reconstruction via a traditional desktop computer with 20” monitor, keyboard and mouse.

Figure 3: The Pangolin viewer running on a tablet computer at the St Andrews cathedral, with the camera orientation of the viewer synchronised to the physical orientation of the tablet, the view of the virtual reconstruction corresponding to that of the physical ruins.
Results

Two plausible modalities of interaction were identified for this system, each presenting different requirements with regards to accuracy of position tracking. The first modality is one in which a number of locations that represent points of particular interest are identified. This is already a common practice at cultural heritage sites, with such locations often bearing signs or placards presenting text and/or images explaining what can be observed from the position. With Pangolin, when a user walks within a certain range of such a point, their avatar can be moved to the corresponding location within the reconstruction (and a sound played to alert the user to the fact that there is something of interest to observe), from which they can then move the tablet around them to examine their surroundings in the reconstruction. This modality is similar to the audio tours employed by many museums and cultural heritage sites, but replaces the requirement to follow a static route or type in location numbers with the ability to freely navigate the real environment, access to additional information being triggered automatically once within the required range of a point of interest. The second modality is one of free roaming exploration, in which the movements of the user’s avatar within the reconstruction mimic the user’s movements within the real world as closely as possible. The first modality can be scaled to function with different accuracies of position tracking; as long as the distance between any two points of interest is at least as great as the worst case performance of the position tracking, distinguishing correctly between different points will always succeed. The second modality requires extremely accurate position tracking, arguably surpassing the capabilities of mainstream GPS technology even in ideal situations.
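The trigger logic of the first modality can be sketched in a few lines. The following is a minimal illustration, not code from the Pangolin source; the function names and coordinates are hypothetical, and a simple planar approximation of distance is used (adequate over the scale of a single heritage site):

```python
import math

# Approximate metres per degree near St Andrews (figures used in this paper).
M_PER_DEG_LAT = 111347.95
M_PER_DEG_LON = 61843.88

def distance_m(a, b):
    """Planar approximation of the distance in metres between two
    (lat, lon) pairs; sufficient over a few hundred metres."""
    dy = (a[0] - b[0]) * M_PER_DEG_LAT
    dx = (a[1] - b[1]) * M_PER_DEG_LON
    return math.hypot(dx, dy)

def nearest_poi(user_pos, pois, trigger_radius_m):
    """Return the name of the point of interest within trigger range of
    the user, or None. `pois` maps a name to a (lat, lon) pair."""
    best, best_d = None, trigger_radius_m
    for name, pos in pois.items():
        d = distance_m(user_pos, pos)
        if d <= best_d:
            best, best_d = name, d
    return best
```

When `nearest_poi` returns a name, the avatar would be teleported to that point’s pre-authored location in the reconstruction and the alert sound played; as the text notes, correctness only requires the trigger radius to be smaller than the spacing between points of interest.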
The GPS receiver that was used for the Pangolin platform quotes performance of 2m Circular Error Probable (CEP) in ideal circumstances where additional correction data are available, falling to 2.5m CEP where these additional data are not available; this means that in ideal circumstances there is 50% certainty that the position reported by the GPS receiver is within 2m of its actual position. During the experiments the GPS receiver was unable to maintain reception of these additional correction data; when left stationary for several minutes reception was possible, however subsequent movement of only a few meters at walking pace broke the connection. This reduced the theoretical maximum performance to 2.5m CEP, with observed performance being lower. Calculating the Hausdorff distance between a planned walking route around the cathedral and the route recorded by the GPS receiver when following this route provided a measure of the real world positional accuracy attainable in the particular conditions of the case study, and thus of which of the modalities is plausible. In this scenario, the Hausdorff distance represents the furthest distance needed to travel from any point on the route recorded by the GPS receiver to reach the nearest point on the planned route. Figure 4 depicts an aerial view of the St Andrews cathedral ruins; the blue line represents the planned route, red the route recorded by the Pangolin GPS receiver and green the route recorded by a smartphone’s GPS receiver for comparative purposes, both while walking the planned route.

Figure 4: An aerial view oriented North upward of the St Andrews cathedral ruins; the blue line represents a planned route, red the route recorded by the Pangolin GPS receiver and green the route recorded by the smartphone’s GPS receiver whilst walking the planned route.

The Hausdorff distance between the planned route and that recorded by the Pangolin GPS receiver was 1.02e-04°.
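For discrete point sequences, the Hausdorff distance described above can be computed with a straightforward double loop. The sketch below treats each route as a list of vertices rather than a continuous line, which is an approximation; PostGIS’s ST_HausdorffDistance [35], used for the actual analysis, is the authoritative tool:

```python
import math

def directed_hausdorff(route_a, route_b):
    """Directed Hausdorff distance: the largest of the distances from
    each point of route_a to its nearest point of route_b (in the same
    units as the coordinates)."""
    return max(min(math.dist(p, q) for q in route_b) for p in route_a)

def hausdorff(route_a, route_b):
    """Symmetric Hausdorff distance: the larger of the two directed
    distances."""
    return max(directed_hausdorff(route_a, route_b),
               directed_hausdorff(route_b, route_a))
```

The text’s description (“the furthest distance needed to travel from any point on the recorded route to reach the nearest point on the planned route”) corresponds to the directed variant with the recorded route as `route_a`.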
The ‘length’ of a degree of latitude and a degree of longitude depends upon location upon the Earth; around the location of the St Andrews cathedral 1° of latitude is equivalent to 111347.95m and 1° of longitude to 61843.88m. Thus the Hausdorff distance of 1.02e-04° can be visualized as ±11.3m of North/South inaccuracy or ±6.3m of East/West inaccuracy (or a combination of both N/S and E/W inaccuracy not exceeding a total displacement of 1.02e-04° from the planned route). The Pangolin GPS receiver did achieve better performance than the smartphone, which recorded a Hausdorff distance of 1.33e-04° (±14.8m N/S, ±8.2m E/W). The Hausdorff distance between the routes logged by the Pangolin receiver and the smartphone was 1.14e-04° (±12.7m N/S, ±7.0m E/W), which represents a low correlation between the inaccuracies recorded by the two receivers even though they are of similar magnitudes relative to the planned route. The maximum inaccuracies were recorded when walking along the South wall of the cathedral’s nave. This wall is one of the most complete sections of the building, with stonework reaching some 30ft above ground level and providing an effective obstruction to line-of-sight to half of the sky (substantially impairing reception of signals from GPS satellites) when in close proximity to it. When considering just the sub-route shown in Figure 5, which terminates before this wall begins to significantly obstruct view of the sky, the Hausdorff distances are notably smaller; the Pangolin GPS receiver achieved a Hausdorff distance of 7.23e-05° (±8.05m N/S, ±4.47m E/W) throughout this sub-route, with the smartphone still behind at 8.99e-05° (±10.01m N/S, ±5.56m E/W). Again the Hausdorff distance between the receivers showed low correlation between the inaccuracies, at 6.43e-05° (±7.12m N/S, ±3.98m E/W).
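The conversion between an angular Hausdorff distance and the metre figures above is simple multiplication by the local degree lengths, which can be checked directly:

```python
# Metres per degree near the St Andrews cathedral, as stated above.
M_PER_DEG_LAT = 111347.95   # north/south
M_PER_DEG_LON = 61843.88    # east/west

def degrees_to_metres(deg):
    """Express an angular distance as its worst-case N/S and E/W
    displacements in metres at this latitude."""
    return deg * M_PER_DEG_LAT, deg * M_PER_DEG_LON

# For 1.02e-04 degrees this gives roughly 11.36 m N/S and 6.31 m E/W,
# matching the +/-11.3 m and +/-6.3 m quoted above to within the
# rounding of the 1.02e-04 degree figure itself.
ns, ew = degrees_to_metres(1.02e-04)
```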
Figure 5: An aerial view oriented North upward of the St Andrews cathedral ruins; the blue line represents the first sub-route of the planned route, red the sub-route recorded by the Pangolin GPS receiver and green the sub-route recorded by the smartphone’s GPS receiver whilst walking the first planned sub-route.

When analyzing the tracks in the vicinity of the nave (see Figure 6), although the receiver used by Pangolin outperformed the smartphone in terms of Hausdorff distance, this comparison can be considered misleading, as the smartphone track corresponded more closely in shape to the planned route even if it did stray further at its extreme. The discrepancy in the behavior of the two receivers in this situation is attributed to different implementations of dead-reckoning functionality between the receivers. Dead-reckoning is the process used when a GPS receiver loses reception of location data from satellites and extrapolates its position based upon a combination of the last received position data and the velocity of travel at the time of receiving these data.

Figure 6: An aerial view oriented North upward of the St Andrews cathedral ruins; the blue line represents the second sub-route of the planned route, red the sub-route recorded by the Pangolin GPS receiver and green the sub-route recorded by the smartphone’s GPS receiver whilst walking the second planned sub-route.

Pangolin’s camera control from orientation data does not have as stringent performance criteria as the movement control from position data.
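At its simplest, the dead-reckoning extrapolation described above amounts to projecting the last fix forward along the last known velocity. The following is an illustrative planar sketch only; real receiver firmware is proprietary and considerably more sophisticated, which is precisely why the two receivers diverged:

```python
def dead_reckon(last_fix, velocity, dt):
    """Extrapolated position after dt seconds without satellite
    reception, from the last fix (x, y in metres, in a local planar
    frame) and the velocity (vx, vy in m/s) held when it was received."""
    x, y = last_fix
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

# Walking north at 1.4 m/s, 5 s after losing reception the receiver
# would report a position 7 m north of the last fix, regardless of
# whether the user actually kept walking in a straight line.
```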
Unlike augmented reality, where sparse virtual content is superimposed upon a view of a real environment and the virtual objects must be placed accurately in order for the effect to work well, cross reality presents a complete virtual environment that is viewed ‘separately’ or side-by-side with the real environment, and thus discrepancies between the orientation of the real and virtual environments have a less detrimental effect on the experience. Although the accuracy of the camera control during the experiments was reported as being sufficient, the speed at which the camera orientation moved to match physical orientation was reported as being too slow, resulting in having to wait for the display to ‘catch up’ to changes in orientation. This is attributed to the 10Hz sampling rate of the orientation sensors which, particularly after readings are combined for smoothing purposes to reduce jerky movement, resulted in too infrequent orientation updates. Frame rates within Pangolin whilst navigating the route averaged between 15 and 20 frames per second with the viewer’s ‘quality and speed’ slider set to the ‘low’ position. The style of explorative interaction with virtual content that this system employs is more resilient to input lag and low frame rates than other scenarios of interaction with virtual content, such as fast paced competitive video games including First Person Shooters (FPS) [20], but overall user experience would nonetheless be improved by faster sampling of orientation data and a higher frame rate. Additionally it should be noted that the cathedral reconstruction was created with relatively powerful desktop computers in mind as the primary deployment platform and has not been optimized for use on less powerful mobile platforms such as Pangolin.
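The smoothing/lag trade-off noted above can be illustrated with an exponential moving average; this is an illustrative stand-in, not the filter Pangolin actually applies to its sensor readings:

```python
def smooth(samples, alpha):
    """Exponential moving average over successive bearing samples.
    Smaller alpha gives smoother output but more lag behind the true
    orientation. (A real compass filter must also handle the 0/360
    degree wrap-around, omitted here for clarity.)"""
    value = samples[0]
    out = []
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        out.append(value)
    return out

# A sudden 90 degree turn sampled at 10 Hz: the smoothed bearing climbs
# toward 90 over many samples, so at 10 Hz the display visibly trails
# the user's movement -- the 'catch up' effect reported above.
```

A higher sampling rate would let the same degree of smoothing converge in less wall-clock time, which is why faster sampling is identified as the remedy.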
Performance of Pangolin on a less graphically complex OpenSim region (Salt Pan 2 [17]), which also depicts a reconstruction of a cultural heritage site, was better, at 20 to 25 frames per second at the ‘low’ position and between 15 and 20 frames per second at ‘high’ (see Figure 7).

Figure 7: A plot of Pangolin’s performance (measured in frames per second) against different graphical settings (selected via the ‘Quality and speed’ slider of the viewer) in two positions within the Salt Pan 2 region.

Interpretations

The positional accuracy of 1.02e-04° attained by the Pangolin GPS receiver is sufficient for the first modality of interaction (that of distinguishing and navigating between multiple points of interest). This value of 1.02e-04° (analogous to a combination of ±11.3m of North/South inaccuracy or ±6.3m of East/West inaccuracy) represents a constraint on the granularity of the content; it is the minimum distance required between any two points of interest for them to be correctly differentiated. This same value is not sufficient for the second modality of interaction (that of free roaming exploration with avatars mimicking their users’ movements as closely as possible). This modality would require the use of additional position tracking techniques to improve accuracy to around 1m CEP (analogous to 8.98e-06° latitude or 1.62e-05° longitude around the location of the St Andrews cathedral). A GPS receiver of lower performance than that used by Pangolin, but more common due to being of the caliber integrated into smartphones and tablets such as that used in the experiments, is still sufficient for the first modality, but with a larger minimum distance required between any two points of interest. The Hausdorff distance of 1.33e-04° recorded by the smartphone used in the experiments is analogous to ±14.8m N/S or ±8.2m E/W around the location of the cathedral.
Observed accuracy of the orientation tracking is sufficient for both modalities of interaction; the accuracy of orientation tracking required does not change with different positional accuracy, and the accuracy of orientation attained in the experiments is sufficient for an acceptable user experience. However, the experience would benefit from better graphical quality and higher responsiveness to changes in user orientation.

Conclusions

Manifestations of PoSR involving 2D content are commonplace, but whilst the social allures and educational benefits of 3D environments have been recognized, the ability to forge PoSR situations involving 3D content remains elusive. As 3D Web technologies develop further, the demand for 3D PoSR will grow. The cross reality concept, when freed from static linking between physical and virtual environments, provides a technique to address this shortcoming. This technique has been investigated through the Pangolin virtual world viewer as a mobile, location and orientation aware cross reality interface to spatially related 3D virtual environments. Pangolin aimed to provide a platform for furthering previous use of such 3D environments for allowing students to learn from reconstructions of cultural heritage content, by allowing them to interact with such reconstructions whilst simultaneously exploring the corresponding physical environments. Performance of position tracking by GPS emerged as a constraint upon the modality of interaction possible in such systems, with commercially available non-assisted GPS receivers, of the quality built into smartphones and tablets, capable of sufficient accuracy for the ‘points of interest’ modality to function correctly but not for the free roaming exploration modality. These conclusions hold for today’s commodity technology.
We can expect the resolution, processing power and rendering capability of mobile phones and tablets to continue to increase for any fixed price point. Similarly, augmented positioning systems providing greater positional accuracy are likely to emerge. Thus we conclude that the benefits of having accurate virtual interpretations of historic locations available at the sites in a mobile fashion will be available for school visits, cultural heritage investigation and tourists of the future. As mobile 3D cross reality technology becomes commonplace and matures, applications in education, entertainment, business and the arts will emerge that will surprise us all.

References and Notes:

1. Accenture, Mobile Web Watch Internet Usage Survey 2012 (available at http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-Mobile-WebWatch-Internet-Usage-Survey-2012.pdf).
2. S. A. Applin, A Cultural Perspective on Mixed, Dual and Blended Reality, 11–14 (2011).
3. S. A. Applin, M. D. Fischer, Pervasive Computing in Time and Space: The Culture and Context of “Place” Integration, 2011 Seventh International Conference on Intelligent Environments, 285–293 (2011).
4. S. A. Applin, PolySocial Reality: Prospects for Extending User Capabilities Beyond Mixed, Dual and Blended Reality (2012) (available at http://www.dfki.de/LAMDa/2012/accepted/PolySocial_Reality.pdf).
5. S. Applin, M. Fischer, K. Walker, Visualising PolySocial Reality (2012) (available at http://jitso.org/2012/12/03/visualising-polysocial-reality-revised/).
6. J. Lifton, thesis, Massachusetts Institute of Technology, Department of Media Arts and Sciences (2007).
7. E. Castronova, Synthetic Worlds: The Business and Culture of Online Games (University of Chicago Press, 2006), p. 344.
8. R. Bartle, Designing Virtual Worlds (New Riders Games, 2004), p. 741.
9. S. Kennedy et al., in Proceedings of the 2nd European Immersive Education Summit EiED 2012, M. Gardner, F. Garnier, C. D. Kloos, Eds. (Universidad Carlos III de Madrid, Departamento de Ingeniería Telemática, Madrid, Spain, 2012), pp. 146–160. Available: http://JiED.org
10. C. Allison et al., in Proceedings of the 2nd European Immersive Education Summit EiED 2012 (2012). Available: http://JiED.org
11. J. A. Paradiso, J. A. Landay, Cross-Reality Environments, IEEE Pervasive Computing 8, 14–15 (2009).
12. J. Lifton, J. Paradiso, in Proceedings of the First International ICST Conference on Facets of Virtual Environments (FaVE) (2009).
13. C. J. Davies, Pangolin source code (available at https://bitbucket.org/cj_davies/viewer-release-serial-io).
14. D. Gelernter, Mirror Worlds: or the Day Software Puts the Universe in a Shoebox...How It Will Happen and What It Will Mean (1993), p. 256.
15. A. Bar-Zeev, RealityPrime » Second Earth (2007) (available at http://www.realityprime.com/articles/second-earth).
16. W. Roush, Second Earth, Technology Review (2007) (available at http://www.technologyreview.com/Infotech/18911/).
17. University of St Andrews Open Virtual Worlds group, School of Computer Science, Registration page for Open Virtual Worlds group OpenSim grid (available at http://virtualworlds.cs.st-andrews.ac.uk/cathedral/login.php).
18. K. Getchell, A. Miller, C. Allison, R. Sweetman, in Serious Games on the Move (Springer, 2009), pp. 165–180.
19. C. J. Davies, A. Miller, C. Allison, in PGNet (2012).
20. M. Claypool, K. Claypool, Latency and player actions in online games, Communications of the ACM 49, 40 (2006).
21. Y. Sivan, 3D3C Real Virtual Worlds Defined: The Immense Potential of Merging 3D, Community, Creation, and Commerce, Journal of Virtual Worlds Research 1, 1–32 (2008).
22.
Micro-Star Int’l Co., MSI Global – Notebook – WindPad 110W (available at http://www.msi.com/product/nb/WindPad-110W.html).
23. R. Mautz, thesis, ETH Zurich (2012).
24. IndoorAtlas Ltd., Technology (available at http://www.indooratlas.com/Technology).
25. AzureWave, GPS-M16 (available at http://www.azurewave.com/product_GPSM19_1.asp).
26. u-blox AG, MAX-6 series (available at http://www.u-blox.com/en/gps-modules/pvt-modules/max-6.html).
27. Sarantel, SL1200 (available at http://www.sarantel.com/products/sl1200).
28. u-blox AG, MAX-6 Product Summary (2012) (available at http://www.u-blox.com/images/downloads/Product_Docs/MAX-6_ProductSummary_(GPS.G6-HW10089).pdf).
29. u-blox AG, u-center GPS evaluation software (available at http://www.u-blox.com/en/evaluation-tools-a-software/u-center/u-center.html).
30. HTC Corporation, HTC One S Overview – HTC Smartphones (2013) (available at http://www.htc.com/www/smartphones/htc-one-s/).
31. Qualcomm Incorporated, Snapdragon Processors | Qualcomm (2013) (available at http://www.qualcomm.eu/uk/products/snapdragon).
32. Google, My Tracks – Android Apps on Google Play (2012) (available at https://play.google.com/store/apps/details?id=com.google.android.maps.mytracks&hl=en).
33. C. J. Davies, Database as plain-text SQL script file (available at http://straylight.cs.st-andrews.ac.uk/cj_davies_psql_db_sql_plaintext).
34. C. J. Davies, Database as tar archive suitable for input into pg_restore (available at http://straylight.cs.st-andrews.ac.uk/cj_davies_psql_db_tar_for_pg_restore).
35. PostGIS, ST_HausdorffDistance – PostGIS 2.0.2 Manual (available at http://www.postgis.org/docs/ST_HausdorffDistance.html).
36. W. Gellert, M. Gottwald, H. Hellwich, H. Kästner, H. Küstner, The VNR Concise Encyclopedia of Mathematics (Van Nostrand Reinhold, New York, ed. 2, 1989).
37. C. J. Davies, Arduino sketch for Pangolin (available at https://bitbucket.org/cj_davies/arduino_hmc6434_ublox_max_6_tinygps).
38.
M. Hart, TinyGPS (available at http://arduiniana.org/libraries/tinygps/).

Supplementary Materials: Materials and Methods

Virtual Environment: The 3D virtual environment component of the Pangolin system was implemented using the Second Life/OpenSimulator (SL/OpenSim) platform, which provides a 3D social-oriented multi-user non-competitive virtual environment focusing on the community, creation and commerce [21] aspects of many users interacting within a shared space through the abstraction of avatars, rather than the competitive nature of games or the solitary environments afforded by simulation and visualization platforms. The distributed client/server model of SL/OpenSim, wherein 3D content is stored on a grid of servers operated by a multitude of organizations and is distributed to, and navigated between by, dispersed clients on demand when they enter a particular region, rather than being pre-distributed as is the norm for games, simulations and visualizations, is analogous to the manner in which 2D social Web content is served from Web servers to client browsers and apps. This style of content delivery is necessary when considering the dynamic and ephemeral nature of consumer-generated media, which constitutes the majority of the current 2D social Web and will make up the majority of expanding 3D social Web content. Whilst SL/OpenSim encapsulates many of the desirable architectural features for 3D PoSR experiments, it does not support execution upon familiar mobile platforms (Android/iOS), nor does it provide for avatar control from sensor data. However, the open source nature of the SL viewer allowed modifications to be effected, enabling control of the avatar and camera from real time data collected from position and orientation sensors connected to an x86 architecture tablet computer.
This ability to control navigation within the 3D virtual environment without explicit conscious input of keyboard/mouse/touch commands is integral to reducing the cognitive load required to maintain a presence within a virtual environment, which is a key requirement for overcoming the vacancy problem and achieving successful mobile cross reality. As the SL viewer is only available for x86 platforms the choice of user hardware platform for the experiments was limited, with the MSI WindPad 110W presenting the most promising solution: a 10” tablet computer sporting an AMD Brazos Z01 APU (combining a dual-core x86 CPU and Radeon HD6250 GPU) [22]. The user’s position was monitored using GPS, a solution well suited to applications of the system within the use case of cultural heritage; such sites often constitute outdoor ruins at which a clear view of the sky allows for good GPS connectivity. For use cases where a similar modality of interaction is desired whilst indoors, an indoor positioning system would be used; a roundup of such technologies is available in [23], and of particular pertinence is the upcoming IndoorAtlas [24] technology, which purports to provide accurate indoor positioning using only the magnetometers already ubiquitous in today's smartphones and tablets.

GPS configuration: The 110W features an AzureWave GPS-M16 [25] GPS receiver; however, poor API provision and meager documentation led to the use of a separate u-blox MAX-6 GPS receiver [26] outfitted with a Sarantel SL-1202 passive antenna [27]. The MAX-6 is of higher operational specification than the GPS-M16 and supports Satellite Based Augmentation Systems (SBAS), which improve the accuracy of location data by applying additional correction data received from networks of satellites and ground-based transmitters separate to those of the GPS system. These networks include the European Geostationary Navigation Overlay Service (EGNOS), which covers the UK where the experiments took place.
The product summary for the MAX-6 claims accuracy of 2.5m Circular Error Probable (CEP) without SBAS corrections and 2m CEP with SBAS corrections “demonstrated with a good active antenna” [28]. This means that, in an ideal situation with SBAS correction data available, there would be 50% certainty that each position reported by the GPS receiver would be within 2m of its actual position. The SL-1202 antenna used is passive; however, as the distance between the antenna and the MAX-6 IC itself in the hardware application is only a few millimeters, there would have been negligible benefit from using an active antenna. Whether the SL-1202 constitutes ‘good’ for achieving the headlining performance characteristics of the MAX-6 is debatable, as the definition of ‘good’ was not provided in the product summary. The MAX-6 was operated in the ‘pedestrian’ dynamic platform model, use of SBAS correction data was enabled and the frequency of readings was set to the maximum of 5Hz. To determine the real world accuracy attainable with the MAX-6 outfitted with the SL-1202 in situations akin to those of the cultural heritage case study, a walking route around the St Andrews cathedral ruins, akin to the route that an individual visitor or school group might take, was planned and then walked with the MAX-6 connected to a laptop computer via an Arduino operating as a Universal Asynchronous Receiver/Transmitter (UART), feeding the raw National Marine Electronics Association (NMEA) messages into the u-center GPS evaluation software version 7.0 [29], which logged the messages for later evaluation. Simultaneously, for comparative purposes, a mid-range consumer Android smartphone was used to record the same track; an HTC One S [30] containing a gpsOne Gen 8A solution within its Qualcomm Snapdragon S4 processor [31] and using Google’s My Tracks [32] app version 2.0.3 to record the data.
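The NMEA messages logged by u-center are plain comma-delimited ASCII sentences. Extracting a decimal-degree position from a GGA sentence, which is the kind of work the TinyGPS library [38] performs on the Arduino, looks roughly like the sketch below; checksum verification and fix-validity handling are omitted, and the function name is illustrative:

```python
def parse_gga(sentence):
    """Decimal-degree (lat, lon) from an NMEA GGA sentence. NMEA
    encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm, each
    followed by a hemisphere letter."""
    fields = sentence.split(",")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

# A widely circulated example GGA sentence: 48 deg 07.038' N,
# 11 deg 31.000' E, i.e. roughly (48.1173, 11.5167).
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```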
The three sets of positional data (planned route, MAX-6 recorded route and smartphone recorded route) were entered into a PostgreSQL database [33, 34] and the PostGIS database extender’s ST_HausdorffDistance algorithm [35] was used to calculate the Hausdorff distances between the recorded routes and the planned route, and between the recorded routes themselves. Because of the substantially greater inaccuracies identified in the latter part of the recorded tracks, separate Hausdorff distances were calculated both for the complete tracks and for truncated first and second sub-tracks.

GPS to OpenSim conversion: Translating real world positions, obtained via the GPS receiver as latitude and longitude pairs, into corresponding OpenSim (X,Y) region coordinates is achieved using the haversine formula [36] from spherical trigonometry. The prerequisites for this approach are that the OpenSim model is aligned to the OpenSim compass as the real location is aligned to real bearings (although provision to specify an ‘offset’ within the Pangolin viewer for non-aligned models would be a trivial addition), that the model was created to a known and consistent scale, and that a single ‘anchor point’ exists for which both the real world latitude/longitude and the corresponding OpenSim (X,Y) region coordinates are known. Using the haversine formula, the great-circle (or orthodromic) distance between the latitude of the anchor point and the latitude of the new GPS reading is calculated; applying the scale of the model then yields the equivalent distance in OpenSim metrics between the Y coordinate of the anchor point and the Y coordinate of the position corresponding to the new GPS reading. Repeating the same calculations with the longitude of the new GPS reading provides the distance between the X coordinate of the anchor point and the X coordinate of the position corresponding to the new GPS reading.
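The conversion described above, including the final step of offsetting from the anchor point, can be sketched as follows. The function names, the mean Earth radius constant and the anchor representation are illustrative assumptions; the Pangolin source [13] is the authoritative implementation:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, an assumed constant

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in
    decimal degrees, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def gps_to_region(lat, lon, anchor, scale=1.0):
    """Map a GPS fix to OpenSim (X, Y) region coordinates, given an
    anchor (lat, lon, x, y) known in both framings. `scale` is OpenSim
    metres per real metre."""
    a_lat, a_lon, a_x, a_y = anchor
    # Vary latitude only for the Y offset, longitude only for X,
    # mirroring the two separate haversine calculations in the text.
    dy = haversine_m(a_lat, a_lon, lat, a_lon) * scale
    dx = haversine_m(a_lat, a_lon, a_lat, lon) * scale
    y = a_y + (dy if lat >= a_lat else -dy)
    x = a_x + (dx if lon >= a_lon else -dx)
    return x, y
```

Because the offsets are plain metre distances added to the anchor’s coordinates, nothing ties the result to a single region, which is why the approach works across region boundaries as noted below.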
Adding or subtracting these distances as appropriate to the OpenSim coordinates of the anchor point provides the OpenSim coordinates that correspond to the new GPS reading, to which the avatar is then instructed to move. This approach works across OpenSim region boundaries (it is not limited to a single 256x256 meter OpenSim region) and there are no restrictions on the placement of the OpenSim component of the anchor point: it can be anywhere in any region; movement of the avatar can be in any direction, positive or negative, from it; and it does not have to be at the center of the model or even in a region that the model occupies. The implementation ignores elevation, due to a combination of the relatively low accuracy of these data attainable via GPS (when compared to the longitudinal/latitudinal accuracy) and the fact that the case study explored involved users navigating outdoor ruins while remaining at ground level.

Orientation configuration: To control the SL camera in the required fashion, sensor data are collected for the direction that the user is facing (in terms of magnetic compass bearing) and the vertical angle (pitch) at which they are holding the tablet. Magnetic compass bearing is sensed using a magnetometer and pitch by an accelerometer. Roll data are also captured by the accelerometer; however, it was expected that users would keep the tablet roughly horizontal when interacting with it, so using these data to control the SL camera's roll was not deemed beneficial and was not implemented. The 110W does not feature a magnetometer and its tilt sensor is rudimentary (only useful for differentiating between the discrete cases of landscape and portrait orientation for screen rotation). Several alternative sensors were auditioned, including the MMA8452, ADXL335 and HMC5883L, before the HMC6343 was adopted for the experiments.
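The GPS-to-OpenSim conversion described above can be sketched as follows. The anchor values are illustrative, not from the study; the sketch assumes a north-aligned model, OpenSim Y increasing northwards and X eastwards, and a per-axis application of the haversine formula as in the paper.

```python
import math

R_EARTH = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def gps_to_opensim(lat, lon, anchor, scale=1.0):
    """Map a GPS fix to OpenSim (X, Y) region coordinates.

    `anchor` is (lat, lon, x, y): one point whose real-world and
    in-world coordinates are both known.  North and east offsets are
    computed separately, varying one of latitude/longitude at a time."""
    a_lat, a_lon, a_x, a_y = anchor
    # Northward offset (latitude varies) maps to the Y axis
    dy = haversine_m(a_lat, a_lon, lat, a_lon) * scale
    # Eastward offset (longitude varies) maps to the X axis
    dx = haversine_m(a_lat, a_lon, a_lat, lon) * scale
    # haversine_m is always positive, so restore the sign of the offset
    x = a_x + math.copysign(dx, lon - a_lon)
    y = a_y + math.copysign(dy, lat - a_lat)
    return x, y

# Hypothetical anchor: region coordinate (128, 128) at the given fix
anchor = (56.3398, -2.7892, 128.0, 128.0)
print(gps_to_opensim(56.3402, -2.7892, anchor))  # x stays 128; y moves ~44.5 m north
```

Because each axis is handled independently, the same code works whether the new fix is north, south, east or west of the anchor, matching the "any direction from the anchor point" property noted above.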
The HMC6343 combines a 3-axis magnetometer, a 3-axis accelerometer and algorithms to internally apply the accelerometer's readings to tilt-compensate the magnetometer's readings; tilt compensation is necessary for an accurate compass bearing when the device is not held in a perfectly level orientation, such as when the user tilts it up or down to view content above or below their eye level. Magnetic declination information was entered into the HMC6343 for the position of the cathedral and the date of our experiments. The HMC6343's hard-iron offset calculation feature was used each time the hardware configuration was altered. The sampling frequency of the HMC6343 was set to its highest value of 10Hz. Orientation was set to 'upright front' to match the physical orientation of the IC in the experiments.

Interfacing GPS/Orientation hardware with SL: The MAX-6 and HMC6343 were connected to an Arduino (the setup used throughout the experiments is shown in Figure 8) and a 'sketch' (the name given to programs that execute upon the Arduino platform) was written to receive the data from the ICs, perform simple processing upon them and relay them to the tablet via USB connection [37]. The TinyGPS library [38] was used to abstract processing of NMEA messages from the MAX-6 to obtain the required latitude and longitude values.

Figure 8: The HMC6343, MAX-6 and SL-1202 connected via a breadboard prototyping shield to the Arduino, in the setup and configuration that was then attached to the rear of the 110W for the experiments.

Existing SL avatar/camera control interfaces were explored, by, for example, programming the Arduino to mimic a standard USB HID joystick; however, the granularity of control attainable via these methods was not sufficient.
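Tilt compensation of the kind the HMC6343 performs internally can be sketched as follows. This is a textbook formulation, not the IC's proprietary algorithm; the sensor values are illustrative, and axis sign conventions differ between parts, so a real board may need adjusted signs.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Compass heading (degrees, 0..360, clockwise from magnetic north)
    from accelerometer readings (in g) and magnetometer readings,
    compensating for pitch and roll.  Assumes a common aerospace-style
    axis convention: x forward, y right, z down-facing when level."""
    # Attitude from gravity: which way is the device tilted?
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    # Project the magnetic field vector back onto the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Level device pointing magnetic north (illustrative field values)
print(tilt_compensated_heading(0.0, 0.0, 1.0, 0.3, 0.0, -0.4))  # 0.0
```

Without this projection, tilting the tablet up or down to look at content above or below eye level would swing the raw magnetometer bearing by tens of degrees, which is exactly why the HMC6343 was preferred over the plain HMC5883L magnetometer mentioned above.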
Thus the SL viewer was modified to use the Boost.Asio C++ library to support receiving data via a serial port, giving rise to the Pangolin viewer, and further modifications were made so that these received data control the movement of the avatar and camera by directly interfacing with the control functions at a lower level of abstraction. The viewer's GUI was modified with the addition of a dialogue that allows the user to specify the path of the serial device and separately enable or disable sensor-driven camera and movement control, as well as providing numerous controls for fine-tuning its behavior, including the ability to specify high-pass filters for avatar movement and the smoothing applied to camera control. This GUI also presents the fields necessary for input of the anchor point details, and fields for diagnostic output of the received information. Figure 9 shows this GUI within the Pangolin viewer.

Figure 9: The GUI within the Pangolin viewer that allows administration of the position and orientation control of the avatar. In this screenshot Pangolin is connected to the Arduino and is receiving position and orientation data.

PAPER

Enhancing Learning and Motivation through Surrogate Embodiment in MUVE-based Online Courses

Author: Saadia A. Khan1,2 *†

Affiliations: 1 Teachers College, Columbia University. 2 Chair, New York Chapter of Immersive Education (iED NY)

*Correspondence to: khan2@tc.columbia.edu

†Department of Human Development, Box 118, Teachers College, Columbia University, New York, NY 10027.

Abstract: How can we effectively use multi-user virtual environments (MUVEs) to teach online psychology courses to graduate students?
Our previous research findings suggest that surrogate embodiment in MUVEs enhances learning and motivation more than no embodiment or traditional classroom learning. Based on these findings, we are currently investigating whether online courses that use MUVEs lead to higher learning gains and greater motivation than traditional online courses that do not use MUVEs. Two online graduate psychology courses (Cognition and Learning, and Psychology of Thinking) have been enhanced by having students learn theory and understand core concepts through activities in a MUVE. Student learning and motivation in these MUVE-based online courses are being compared with student learning and motivation in traditional versions of these online courses. Data gathered so far suggest that being assigned activities in a MUVE improves students' understanding of core concepts relative to students who do not attempt these activities in a MUVE. This paper presents current findings from Phase 1, with examples of surrogate embodiment activities used to help students learn about problem solving theory as they use their avatars to problem solve in a virtual environment.

One Sentence Summary: Our findings suggest that MUVE-based online courses enhance learning and motivation more than traditional online courses.

Introduction

Can educators enhance students' learning and motivation in online courses by having them learn as avatars via surrogate embodied learning experiences in multi-user virtual environments (MUVEs)? How would students enrolled in online courses that allow them to experience learning through their avatars compare with students enrolled in traditional online courses that do not use MUVEs and rely only on online discussion forums?
Our previous empirical findings suggest that adult learners learn better when they experience surrogate embodied learning in a multi-user virtual environment, and surrogate embodied learning is further enhanced when learners experience positive embodied affect via virtual affective gestures [1, 2, 3, 4]. In a series of studies, we presented adult learners with novel historical text, which they learned through different types of embodiment including surrogate embodiment. When learners’ memory, comprehension, transfer, and motivation were measured after embodied learning experiences, it was found that (a) learning and memory were more enhanced when learners experienced embodied learning as compared to when learners experienced no embodiment and (b) learning and motivation were further enhanced when positive embodied affect was combined with surrogate embodiment. These findings support theories of embodied cognition, which suggest that embodiment involves the reactivation of multimodal representations and mental simulations involving vision, audition, affect, and touch, etc [5, 6, 7, 8]. The findings also support previous research findings that suggest that embodiment facilitates learning and motivation [9-18]. Black et al.’s [12] Instructional Embodiment Framework identifies different types of embodiment including Surrogate Embodiment (i.e., manipulating a surrogate or agent such as a virtual avatar), Imagined Embodiment (i.e., imagining actions), and Direct Embodiment (i.e., physically performing actions). Whereas Direct Embodiment can be used to help learners in classrooms, Surrogate Embodiment through avatars in virtual environments can be an effective way of providing distance learners with embodied learning experiences. However, traditional online courses based mostly on online discussion forums rarely provide learners with surrogate embodied experiences. 
In an ongoing project, we are investigating whether learners learn better and are more motivated in enhanced online courses that provide surrogate embodied learning experiences via avatars in MUVEs than in traditional online courses that do not provide such experiences. Adult graduate students enrolled in online psychology courses at Teachers College, Columbia University find online courses particularly hard. The general complaint is that there is a lot of reading, and the text-based online discussion forums that these courses rely upon do not help all students understand core concepts. Therefore, we have chosen two very popular psychology courses at Teachers College, Columbia University, which have very high enrollment throughout the year, and we are offering two versions of each. One version is the traditional online discussion forum-based course, while the other is an enhanced, MUVE-based online course. The project, which is currently in its initial phase, involves building virtual environments to teach psychology theory to adult graduate students in MUVE-based online courses. During Phase 1, we have used various MUVE-based surrogate learning activities along with discussion forums to help enhance students' learning and motivation. Our initial findings suggest that students enrolled in the MUVE-based online course report higher levels of understanding, confidence, and enjoyment than students enrolled in the traditional online course.

Method

Participants

Forty-eight (N = 48) female (28) and male (20) adult graduate students, 21 years and older, enrolled in two versions of an online psychology course, Human Cognition and Learning, were examined.
Twenty-five (17 females and 8 males) were enrolled in the MUVE-based online version of the course and twenty-three (11 females and 12 males) were enrolled in the traditional online version of the course.

Procedure

Two Human Cognition and Learning courses were offered online. One version used MUVEs along with a discussion forum. The other version did not use MUVEs; it used only a discussion forum. The topic examined was Problem Solving, which was taught asynchronously in both versions of the online course in a session that lasted for one week. Constructivist principles were used for instruction. Students in both courses were given the same reading material on problem solving [19], which consisted of theory and important research findings about problem solving. Students were also given a PowerPoint presentation, which they were asked to view. After this, they were given two open-ended questions about problem solving, which they were asked to respond to. These questions tested students' understanding of theory and application of theory. Students in both courses were asked to respond to the questions by the fifth day of the week, and to respond to and analyze other students' discussion posts by the seventh day of the week. All students were told to respond to at least two posts by other students. The two groups differed in that the MUVE-based group was given an activity that needed to be completed as an avatar in a MUVE, while the traditional online group was not assigned this activity. The MUVE activity required students to create their avatar, explore the Teachers College virtual region, TC Educator, and change their avatar's appearance. See Figures 1 to 10. During the activity, they were told to observe their problem solving strategies and write down how they problem solved at different points during the activity.

Figure 1. A Student's Avatar Reads Information about Teachers College.

Figure 2.
A Student Chooses a Mouse as an Avatar in the MUVE-based Online Course.

Figure 3. The Student's Mouse Avatar Dances in the Student Lounge.

Figure 4. A Student's Robot Avatar in the Student Lounge.

Figure 5. Student Avatars in the MUVE-based Online Course.

Figure 6. A Student's Avatar on the Teachers College Island, TC Educator.

Figure 7. A Student's Avatar Interacts with a Snowman.

Figure 8. A Student's Avatar Flies above the Teachers College Building.

Figure 9. A Student's Avatar in the Cafe inside Teachers College.

Figure 10. A Student's Cat Avatar in the Library.

Measures

Data were analyzed qualitatively and quantitatively. Students' initial replies to the open-ended problem solving questions were coded for comprehension in terms of whether or not they demonstrated an understanding of problem solving theory by answering the questions correctly. The maximum total score was 20. Apart from students' understanding of core concepts related to problem solving, the other dependent variables of interest were students' confidence in their learning and their enjoyment. All students were asked to report (a) whether they believed that they knew problem solving theory well by the end of the week (i.e., confidence), and (b) whether or not they enjoyed the session (i.e., enjoyment). Both measures were assessed through a response scale that ranged from high to low: students indicated their degree of confidence (high, medium, low) and their level of enjoyment (high, medium, low).
Results

Multivariate test results, with Type of Online Course (MUVE-based Online Course vs. Traditional Online Course) as the fixed factor and comprehension, confidence, and enjoyment as dependent variables, were statistically significant, Wilks' Λ = .317, F(3, 44) = 31.650, p < .001, η² = .683, indicating that the two types of online courses differed significantly. Univariate test results were statistically significant for all three dependent variables at the .001 alpha level. The results indicated that students enrolled in the MUVE-based online course demonstrated significantly better understanding of problem solving theory than students enrolled in the traditional online course, F(1, 46) = 45.405, p < .001, η² = .497. In addition, students in the MUVE-based online course scored higher on confidence than students in the traditional online course, F(1, 46) = 47.894, p < .001, η² = .510, and they scored higher on enjoyment, F(1, 46) = 55.955, p < .001, η² = .549. Overall, it seemed that surrogate embodiment through an avatar in a MUVE was related to students' understanding, confidence, and enjoyment. See Figure 11.

Figure 11. Comprehension, Confidence, and Enjoyment Mean Scores by Type of Course.

Discussion

Our results suggest that using surrogate embodiment in MUVEs can facilitate adult learners' comprehension and understanding of psychology theory. This supports our previous findings, as well as empirical findings by other researchers, that suggest that MUVEs provide learners with engaging experiences in dynamic environments that help enhance their learning and motivation. Students enrolled in the MUVE-based online course reported a significantly higher level of confidence in their learning than students enrolled in the traditional online course.
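As a consistency check on the statistics reported in the Results above: for a single two-group factor, multivariate partial η² equals 1 − Λ, and a univariate partial η² can be recovered from its F statistic as F·df₁/(F·df₁ + df₂). The reported effect sizes follow directly from the reported test statistics:

```python
# Wilks' lambda -> multivariate partial eta squared (single two-group factor)
wilks = 0.317
print(round(1 - wilks, 3))  # 0.683, as reported

def eta_sq(F, df1, df2):
    """Partial eta squared recovered from an F statistic."""
    return F * df1 / (F * df1 + df2)

# Univariate tests, each F(1, 46), as reported in the Results
for name, F in [("comprehension", 45.405),
                ("confidence", 47.894),
                ("enjoyment", 55.955)]:
    print(name, round(eta_sq(F, 1, 46), 3))  # 0.497, 0.510, 0.549
```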
It seemed from their responses that these students were more aware of the problem solving strategies they were employing as they answered the questions than students who were not enrolled in the MUVE-based online course. These students appeared to be more aware of their own learning process, which was not apparent in the responses of students enrolled in the traditional online course. They also seemed to analyze other students' responses more deeply than the students who did not experience surrogate embodiment. The enjoyment experienced and reported seemed to be related to the MUVE experience. Students' responses indicated that the surrogate embodiment activity facilitated their understanding of problem solving theory and that they applied the theory during the activity. Students analyzed the problem solving strategies they used during their surrogate experience in the MUVE and were able to apply theory to respond to the questions asked. In that sense, students in the MUVE-based online course also seemed to display transfer of learning, although those data have not yet been formally analyzed quantitatively.

Conclusion and Implications

The results presented here are only preliminary results for one topic (problem solving) during the first phase of this research project. A large amount of data has been gathered that could not yet be analyzed due to time constraints. This paper contains only data related to one session on problem solving taught in two versions of the course, Human Cognition and Learning. The results indicate that adult learners learned better, had higher levels of confidence in their learning, and enjoyed learning more when online courses were enhanced with surrogate embodiment experiences in a MUVE. This has implications for teaching online courses through surrogate embodiment in MUVEs.
More research is needed on how immersive technologies such as MUVEs can be used more effectively in online courses. We are currently continuing the project to further investigate learning as avatars in MUVEs in online courses.

References:

1. S. A. Khan, J. B. Black, Surrogate embodied learning in MUVEs: Enhancing memory and motivation through embodiment. Journal of Immersive Education (in press).
2. S. A. Khan, J. B. Black, Enhancing learning and motivation through positive embodied affect and surrogate embodiment in virtual environments, paper to be presented at the annual meeting of the American Educational Research Association, San Francisco, CA, April 2013.
3. S. A. Khan, J. B. Black, Learning world history through role-play: A comparison of surrogate embodiment and physical embodiment, paper presented at the Sixth Annual Subway Summit on Cognition and Education Research, Teachers College, Columbia University, New York, NY, 2013.
4. S. A. Khan, J. B. Black, Surrogate embodied learning, paper presented at the Fifth Annual Subway Summit on Cognition and Education Research, Fordham University, New York, NY, 27 January 2012.
5. L. W. Barsalou, Grounded cognition. Annual Review of Psychology 59, 617-645 (2008).
6. L. W. Barsalou, in Embodied Grounding: Social, Cognitive, Affective, and Neuroscientific Approaches, G. R. Semin, E. R. Smith, Eds. (Cambridge Univ. Press, New York, 2008), pp. 9-42.
7. A. M. Glenberg, in Embodied Grounding: Social, Cognitive, Affective, and Neuroscientific Approaches, G. R. Semin, E. R. Smith, Eds. (Cambridge Univ. Press, New York, 2008), pp. 43-70.
8. D. A. Havas, A. M. Glenberg, M. Rinck, Emotion simulation during language comprehension. Psychonomic Bulletin & Review 14, 436-441 (2007).
9. A. M. Glenberg, T. Gutierrez, J. R. Levin, S. Japuntich, M. P. Kaschak, Activity and imagined activity can enhance young children's reading comprehension.
Journal of Educational Psychology 96, 424-436 (2004).
10. R. Beun, E. de Vos, C. Witteman, in Intelligent Virtual Agents, Rist et al., Eds. (Springer-Verlag, Berlin, 2003).
11. N. Bianchi-Berthouze, W. W. Kim, D. Patel, Does body movement engage you more in digital game play? And why? Lecture Notes in Computer Science 4738, 102-113 (2007).
12. J. B. Black, A. Segal, J. Vitale, C. Fadjo, in Theoretical Foundations of Learning Environments, D. Jonassen, S. Land, Eds. (Routledge, New York, 2011), pp. 1-34.
13. P. C. Blumenfeld, T. M. Kempler, J. S. Krajcik, in The Cambridge Handbook of the Learning Sciences, R. K. Sawyer, Ed. (Cambridge Univ. Press, New York, 2006).
14. J. Clarke, C. Dede, Making learning meaningful: An exploratory study of using multi-user environments (MUVEs) in middle school science, paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada, 2005.
15. C. Dede, D. Ketelhut, Design-based research on a museum-based multi-user virtual environment, paper presented at the annual meeting of the American Educational Research Association, Chicago, IL, 2003.
16. J. Hammer, G. Andrews, Z. Zhou, J. Black, C. Kinzer, Learning and video games: A process preparation for future learning approach, paper presented at the Game Developers Conference, San Francisco, CA, 8 March 2007.
17. D. Li, S. Kang, C. Lu, I. Han, J. Black, in Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009, C. Fulford, G. Siemens, Eds. (AACE, Chesapeake, VA, 2009), pp. 2209-2216.
18. S. J. Metcalf, A. Kamarainen, T. A. Grotzer, C. J. Dede, Ecosystem science learning via multi-user virtual environments, paper presented at the American Educational Research Association National Conference, New Orleans, LA, 2011.
19. J. R. Anderson, Cognitive Psychology and Its Implications (Worth Publishers, New York, 2010).

Acknowledgments: The study is a part of a research project that is based on Saadia A.
Khan's doctoral dissertation. It is partially supported by the Teachers College, Columbia University Provost's fund awarded to John B. Black, Chair, Department of Human Development. Saadia A. Khan is the principal investigator of the research project. The two online psychology courses being investigated are Human Cognition and Learning and Psychology of Thinking, which are taught by Saadia A. Khan at Teachers College, Columbia University. The current paper presents one case from Human Cognition and Learning.

PAPER

Promoting Engagement and Complex Learning on OpenSim

Authors: L. Tarouco1,*, B. Avila1, E. Amaral1,2

Affiliations: 1 Universidade Federal do Rio Grande do Sul. 2 Universidade Federal dos Pampas.

*Correspondence to: liane@penta.ufrgs.br

Abstract: The intellectual demands of the 21st century require more advanced skills. Students, workers, and citizens must be able to solve multifaceted problems by thinking creatively and generating original ideas from multiple sources of information. This integration of knowledge, skills, and attitudes and the coordination of qualitatively different constituent skills result in what is being called complex learning. Learning to solve problems is a special kind of learning that requires different forms of teaching and learning support. The advance of social networking has entailed considerable progress in the development of virtual worlds, where humans cohabit with other users through their avatars and also become able to operate virtual devices. Virtual worlds offer a wide range of possibilities in terms of functionalities for creating problem situations to be solved by learners.
This article presents results of the evaluation of a particular virtual world platform, Open Simulator, regarding the functionalities provided to handle virtual devices, and analyzes the possible ways of handling the visualizations built in the virtual environment in terms of an engagement taxonomy.

One Sentence Summary: Analysis of functionalities for building a learning environment to support problem-solving situations on OpenSim in order to promote complex thinking.

Introduction

The intellectual demands of the 21st century require more advanced, 21st-century skills. Students, workers, and citizens must be able to solve multifaceted problems by thinking creatively and generating original ideas from multiple sources of information [1]. Studies point out that the best learning occurs when basic skills are taught in combination with complex thinking skills. New technology and global competition have changed the game for 21st-century workers and, consequently, the learning process must change to reflect the complexity of the real world and the social relations that take place in it: multiple interactions among people, technologies, purposes and environments, where people sometimes move and act more in a virtual space than in a real presence context. Social networks have become a key part of the global online experience and are becoming the most used service on the net. The advance of social networking has entailed considerable progress in the development of virtual worlds, where humans cohabit with other users through their avatars and also become able to operate virtual devices. Teaching and learning in this new scenario has new possibilities to promote complex learning. Complex learning involves integrating knowledge, skills and attitudes and coordinating different skills, which results in the ability to transfer what is learned to real-world problem solving.
This is favored when the learning environment allows tasks to be learned in a context that is more similar to real life. That is a possibility that an immersive environment potentially brings about, and it also provides support for cooperative work aimed at collaborative problem solving and learning by doing. Virtual worlds provide a combination of simulation tools, a sense of immersion and opportunities for communication and collaboration that has great potential for application in education. Also, virtual worlds allow manageable representations of abstract entities to be created and thus help students construct mental models by direct observation and experimentation. These unique characteristics of virtual worlds, such as immersion, handling and rich feedback, allow for an enhanced and rich interactivity that mimics the real world and is able to promote complex learning. The technology available nowadays for implementing virtual worlds, especially in the case of non-commercial systems such as Open Simulator, is analyzed in this paper regarding the functionalities provided to build a rich interactive learning environment in order to engage students in learning tasks for the promotion of complex learning [2]. Facilities to present multimedia resources, in order to add rich and supportive information about the basic knowledge needed as well as procedures, are important and were tested in this work in terms of performance and limitations. Another focus of the research was the potential use of virtual worlds as constructivist learning environments where users are supposed to build tools to be operated by others and possibly integrated to form more complex tools. Avatar interactivity strategies [3] with a tool, and their relation to complex learning, were analyzed, as well as implementation approaches favored by OpenSim.
The last aspect addressed concerns interoperability with real-world devices, with the purpose of connecting to the Internet of Things [4] and allowing Human-to-Thing interconnection; this was another strategy investigated in order to bring real-world information into the virtual world.

1. Complex Thinking and Engagement Taxonomy

Complex learning aims at the integration of knowledge, skills and attitudes and the coordination of qualitatively different constituent skills, as well as the transfer of what is learned to daily life or work settings. In this sense, authentic learning tasks that are based on real-life tasks constitute a driving force for such complex learning [4]. In this process, Merriënboer [5] predicts that, in addition to a sound cognitive basis, students must develop technical and social skills to become able to solve problems whose complexity level must increase as new abilities are developed. Davies [6] suggests that this ability must be combined with digital/technological fluency in order to prepare students to act as 21st-century citizens. The pursuit of an environment able to give rise to learning interactions that meet these requirements leads to immersive environments, in which it is possible to enable interactions with multimedia resources that can make students engage in a complex learning process similar to that of the real world. The search for interaction strategies to be offered to students in this environment includes outlining an environment with multimedia elements by which different ways of interaction are enabled. The engagement taxonomy proposed by Myller [7] was used as the basis for the analysis of possible interactions in the immersive environment under study, OpenSim. The Extended Engagement Taxonomy proposed by Myller establishes forms of interaction with visual resources, giving rise to different levels of student engagement. Table 1 shows this classification of engagement levels.
Starting with the initial category, where there would be no visualization (inexistent in the immersive virtual world, which is visual by nature), each subsequent category leads to greater engagement. The taxonomy proposes student authorship as the highest level of engagement.

Level | Name | Description
T1 | No viewing | No visualization to be viewed; material in textual format only.
T2 | Viewing | Visualization with no interaction.
T3 | Controlled viewing | Visualization is viewed and the students control the visualization, for example by selecting objects to inspect or by changing the speed of the animation.
T4 | Entering input | The student enters input into a program/script, or parameters into a script, before or during its execution.
T5 | Responding | Visualization is accompanied by questions related to its content.
T6 | Changing | Visualization is allowed to be changed, for instance by direct manipulation, while it is viewed.
T7 | Modifying | Visualization is modified before it is viewed, for example by changing the source code or an input set.
T8 | Constructing | Visualization is created interactively by the student with the use of components such as text and geometric shapes.
T9 | Presenting | Visualizations are presented and explained to others for feedback and discussion.
T10 | Reviewing | Visualizations are viewed for the purpose of providing comments, suggestions and feedback on the visualization itself or on the program or script.

Table 1. Modified Extended Engagement Taxonomy [7]

By combining these two proposals it is possible to see that, in virtual worlds, there are conditions for simulating a context closer to the real world, but students' strategies to interact with the real world must be outlined to boost student engagement. In order to analyze possibilities and limitations of providing the interactive elements stipulated in the engagement taxonomy, ways of supplying them in the context of the Open Simulator environment were tested.
This environment was selected because it is open and free, which allows occasional contributions to add new functionality. It should be noted that OpenSim does not yet have the stability of commercially available environments; however, it offers the basic conditions for the experimentation and testing of functionalities intended in this study.

2. Engagement Levels in an Immersive Environment

In virtual worlds, by and large, the lowest level of the Extended Engagement Taxonomy to be considered is T2, which corresponds to visualization of the study object without any interaction with it. Nevertheless, the movement of avatars immediately offers a T3 category interaction, as it makes it possible to approach the visualized element and allows for different observation angles. The T4 category interaction can be easily implemented, since the script language used to define the behavior of elements in the virtual world accepts data typed by the user into the chat window. This information can then be used to change both an element's behavior, shifting its trajectory and movement speed, and its aspect: color can be changed with the command llSetColor, and position and rotation with the commands llSetPos and llSetRot. It is also possible to change the size and even the shape of the element. T5 level interactions can be enabled in many ways. The script associated with an element can interact with the user via messages with queries, receive replies and provide feedback; the command llSay sends text messages that are displayed in the virtual world setting. It is also possible to apply to an element a texture such as a webpage on which one or more questions are displayed to be answered by the student, with feedback provided on the page itself (through some embedded code).
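The T4 and T5 interactions described above can be sketched in LSL, the scripting language used in OpenSim. The following is a minimal illustrative script, not taken from the paper; the chat prompts, keywords and color choices are invented for the example:

```lsl
// Hypothetical sketch: a prim that accepts typed chat input (T4)
// and poses a question with feedback (T5).
integer gListen;

default
{
    state_entry()
    {
        // Listen on the public chat channel (0) for input from any avatar.
        gListen = llListen(0, "", NULL_KEY, "");
        llSay(0, "Type 'red' or 'blue' to recolor me. Question: what is 2+2?");
    }

    listen(integer channel, string name, key id, string message)
    {
        if (message == "red")
            llSetColor(<1.0, 0.0, 0.0>, ALL_SIDES);  // T4: typed input changes the element's aspect
        else if (message == "blue")
            llSetColor(<0.0, 0.0, 1.0>, ALL_SIDES);
        else if (message == "4")
            llSay(0, "Correct!");                     // T5: feedback on the student's answer
        else
            llSay(0, "Try again.");
    }
}
```

The same listen handler could equally call llSetPos or llSetRot to alter the element's trajectory, as the text notes.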
A T6 or T7 level interaction can be obtained by using scripts that modify an element when it is touched by an avatar (an action triggered by the touch event). Alternatively, one can present animations or manageable graphs created with authoring software such as GeoGebra, using Java, Flash, HTML5, etc., as illustrated in Figure 1 [8]. These types of content are displayed on the surface of an element in the virtual world, with which they are associated as a texture, as in the T5 category interaction.

Figure 1. Laboratory for teaching Derivatives on OpenSim [8]

For T8 category interactions, the virtual world offers the student the possibility of creating elements from a set of basic geometric shapes and of importing more complex elements built externally with other authoring tools, such as SketchUp. Creating and modifying behavior associated with the created elements can be made easier by using script-generating environments like Scratch for OpenSim or Scriptastic. These environments, which employ block-based visual programming, are easily learned by users, albeit limited. The next step involves the use of script editors such as LSL Editor to create and/or change scripts. To enable T9 and T10 category interactions, the environment allows a user, through his/her avatar, to demonstrate the functioning or behavior of the elements he/she created, configured and programmed, also allowing other users, represented in the environment by their respective avatars, to handle them. Interactions in the chat window can serve as a vehicle for this communication. Additionally, users can make the elements they created available in the environment itself, allowing other users to use them or reuse their parts, editing the elements to adapt them for purposes other than those initially intended.
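A minimal LSL sketch of the T6-style interaction mentioned above, assuming an element that reacts when an avatar touches it; the particular position and color changes are illustrative only:

```lsl
// Hypothetical sketch: a prim modified by direct manipulation (T6),
// using the touch event triggered when an avatar clicks the element.
default
{
    touch_start(integer num_detected)
    {
        // Each touch moves the element up half a meter and recolors it.
        llSetPos(llGetPos() + <0.0, 0.0, 0.5>);
        llSetColor(<0.0, 1.0, 0.0>, ALL_SIDES);
    }
}
```

A T7 interaction differs only in that the student edits a script like this one before viewing the result, for example in LSL Editor.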
However, there are also repositories outside the virtual world in which the created elements can be stored and downloaded for inclusion in other worlds.

4. Examples and Results

The development of elements on OpenSim offering the previously described interaction options produced the results illustrated in the following figures. Figure 1 showed the settings of an environment created for teaching Derivatives. Screens with textures presenting information, exercises and manageable simulations were used to create a learning space that students consider engaging. Figure 2 shows results from another virtual laboratory [9] in which virtual elements are combined with information from the real world. In this case, a camera activated by movement detection produces images that are captured, stored on a server and directly displayed in the virtual world. Other devices, such as a scale and a pressure gauge, also produce real-world data, but these are processed outside the virtual world, and the results are used to configure elements that show users representations of what is being measured. This experiment showed that it is possible to integrate signals and/or data from other devices, allowing a number of devices to be associated with the virtual world and networked in accordance with the Internet of Things trend.

Figure 2. Real-world interaction in the Internet of Things context

This ability to interact dynamically with reality opens up a range of possibilities for the development of learning solutions with better contextualization of real tasks. This may stimulate the student's ability to solve problems of a higher degree of complexity, as the incoming data derive from devices that capture information and signals straight from the real world.
Processing and handling this rich range of information will require the student to integrate all knowledge amassed on a particular topic with the experiences enabled in the virtual world. Such attributes turn virtual worlds into fruitful spaces for carrying out activities intended to explore the demands of a real context, which enhances the formation of Complex Thinking. As for the engagement levels attained through the indicated resources, both examples reach the highest levels of the Extended Engagement Taxonomy. In the Derivatives Lab, shown in Figure 1, activities encompassing the T3, T7, T8, T9 and T10 levels are highlighted. Similarly, interaction with the real world, represented in Figure 2, enables activities with a high engagement level, which may contribute to the formation of Complex Thinking through elements involving the knowledge and skills it requires. Students themselves may create scripts like the one presented in Figure 3, but this requires some training in script development in the LSL language.

Figure 3. Communication activity between objects

The example in Figure 3 shows the result of a training activity for teachers who, after a 28-hour course, were assigned to develop a script for two elements included in the virtual world that would interact with each other in a sort of game. At each step, data were produced by the script of one of the elements (e.g., the cat) and transmitted to the other element (the dog). Depending on the data exchanged via the command llListen, either one scored in the game. This exercise stimulates students to perceive the connections among the elements constituting the virtual world, in addition to demonstrating that actions generated in the world can be programmed to take place independently of any intervention by the avatars.
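An exchange of this kind could be sketched in LSL roughly as follows; the channel number, messages and scoring rule are hypothetical, not taken from the course materials. Placing the same script in both objects makes them exchange messages and score autonomously, with no avatar intervention:

```lsl
// Hypothetical sketch of a two-object game: each object listens on a
// shared private channel and scores when the partner's data arrives.
integer CHANNEL = -77;  // negative channels are conventionally used for object-to-object chat
integer gScore = 0;

default
{
    state_entry()
    {
        llListen(CHANNEL, "", NULL_KEY, "");
        llSay(CHANNEL, "ping");  // open the exchange with the other object
    }

    listen(integer channel, string name, key id, string message)
    {
        if (message == "ping")
        {
            gScore++;  // score a point when the partner's message is received
            llSay(0, llGetObjectName() + " scores a point (" + (string)gScore + ")");
            llSay(CHANNEL, "pong");  // reply so that play continues on its own
        }
        else if (message == "pong")
        {
            llSay(CHANNEL, "ping");
        }
    }
}
```

Because objects do not hear their own chat in LSL, the two scripts alternate messages indefinitely, illustrating behavior that runs independently of the avatars.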
The development of the scripts for this situation was a challenge for participants who had never worked with the LSL scripting language before, but they all managed to do it by combining examples presented earlier in the course.

5. Conclusions

The metaverse technology offers a wide range of possibilities for the design of educational objects. The tool Open Simulator was chosen for the countless advantages this solution provides, but as the environment is still under development, some instability is noticeable in its operation. The version used was OpenSim 0.7.5, which has a few drawbacks:

1. Because OpenSim derives from Second Life (SL), its implementation needs all SL resources to be made available; but depending on the viewer used (Imprudence or Hippo Viewer), some options presented on the command menus are not usable, as these are Second Life functionalities not yet available on OpenSim.

2. Importing 3-D objects into OpenSim poses problems. Some elements imported with certain viewers are displayed in an incomplete or simplified way, different from the presentation of that element in the creation tool or repository that originated it. The same element can therefore be displayed differently on the Firestorm and Imprudence viewers.

3. Audio communication between avatars on OpenSim, despite being described as a supported feature, is not yet available.

This investigation demonstrated the possibilities and limitations of a virtual world such as OpenSim for implementing interactions with elements built within it. The analysis sought to ascertain whether and how it would be possible to offer interaction possibilities able to attain the levels of engagement proposed in Myller's Extended Engagement Taxonomy [7].
Combining these elements with external real-world devices enabled the integration of the virtual world with the real world through the Internet of Things, which allows a richer and more realistic contextualization to be brought into the virtual world. The combination of a more realistic, contextualized and engaging immersive environment is the basis for the formation of complex thinking as proposed by Merriënboer [2].

References:

1. Computer Science and Telecommunications Board (CSTB), Committee on Information Technology Literacy, National Research Council. Being Fluent with Information Technology. Washington, DC: National Academy of Sciences, 1999.
2. J. G. van Merriënboer and P. A. Kirschner. Ten Steps to Complex Learning. New York: Routledge, 2013.
3. J. Blascovich and J. Bailenson. Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution. New York: HarperCollins, 2011.
4. T. Zhang. "The Internet of Things Promoting Higher Education Revolution." In: Fourth International Conference on Multimedia Information Networking and Security, Guangzhou, 2012, pp. 790-793.
5. J. G. van Merriënboer, P. A. Kirschner and L. Kester. "Taking the load off a learner's mind: Instructional design for complex learning." Educational Psychologist, 38(1): 5-13, 2003.
6. A. Davies, D. Fidler and M. Gorbis. Future Work Skills 2020. Palo Alto: Institute for the Future for the University of Phoenix Research Institute. Available at: http://www.iftf.org/our-work/global-landscape/work/future-work-skills-2020/. Accessed March 24, 2013.
7. N. Myller, R. Bednarik and E. Sutinen. "Extending the Engagement Taxonomy: Software Visualization and Collaborative Learning." ACM Transactions on Computing Education, 9(1): 1-27, March 2009.
8. Authors. "VEGA - Virtual Environment for Geometry Acquaintance." In: iED 2012 Proceedings, Immersive Education Initiative, Boston, MA, 2012. Available: http://JiED.org
9.
Authors. "Internet of Things in healthcare: Interoperability and security issues." In: 2012 IEEE International Conference on Communications (ICC), Ottawa, June 10-15, 2012, pp. 6121-6125.

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

PAPER

From Metaverse to MOOC: Can the Cloud meet Scalability Challenges for Open Virtual Worlds?

Authors: Colin Allison*1, Iain Oliver1, Alan Miller1, C.J. Davies1, John McCaffery1

Affiliations: 1 School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK

* Correspondence to: ca@st-andrews.ac.uk

Abstract: The use of immersive 3D virtual worlds for education continues to grow due to their potential for creating innovative learning environments and their enormous popularity with students. However, scalability remains a major challenge. Whereas MOOCs cope with tens of thousands of users downloading or streaming learning materials, the highly interactive, multi-user nature of virtual worlds is far more demanding: supporting even a hundred users in the same region at the same time is considered an achievement. However, if the normal number of concurrent users is relatively low and there is only an occasional need to have a large group in-world at the same time for a special event, can the Cloud be used to support these high but short-lived peaks in demand? This paper develops a context for the deployment of immersive education environments in the Cloud and presents performance results of Cloud-based virtual world hosting.

One Sentence Summary: Can the Cloud be used on a demand basis to meet fixed periods of high load in virtual worlds?
Introduction

Several factors are contributing to the growth of the use of immersive 3D virtual worlds for education: the advances in capabilities and reductions in cost of widely available personal computers (their graphics performance, for example); the emergence of open source platforms such as OpenSim [1] and RealXtend [2] for creating and deploying virtual worlds; the availability of free, high-quality client viewers such as Phoenix [3]; and the increase in suitable bandwidths for arbitrary connections to virtual world servers across the Internet. The educational potential of these technologies for creating innovative learning environments is evidenced by a growing number of case studies, publications, workshops and conferences on the topic and, most importantly, by the popularity of virtual worlds with students and pupils. In addition, virtual world software is now being regarded as the vanguard of the 3D Web. The existing 2D Web has enabled major advances in many aspects of online learning: anytime, anywhere access, downloadable learning objects, online shared learning environments, collaboration tools for peer learning, global reach and, as MOOCs have recently demonstrated, very significant scalability of courses which use a small number of well-understood types of online learning resources. To realize the potential of immersive education and progress towards virtual worlds for a 3D Web as easy to access and use as the 2D Web, there are several challenges to overcome. These include suitable tools for content creation, programmability, support for the management of immersive environments in educational scenarios, initial download times on arrival, erratic accessibility due to firewalls, integration with the 2D Web, and the complexity of certain user interfaces [4]. Scalability also presents a major challenge and is the main motivation for the work reported in this paper.
Whereas in the case of MOOCs it is feasible to set up web sites that cope with tens of thousands of users downloading learning materials, the highly interactive, multi-user nature of virtual worlds is far more demanding; supporting even a hundred users in the same region at the same time is rarely possible. As is often the case with online learning environments, the normal or average number of users is relatively low (between one and ten, for example), but the need to have a whole class in-world at the same time for an induction session, a demonstration or some other special event requires supporting an order of magnitude more users, albeit for a relatively short period. Provisioning sufficient hardware to meet peak demands is clearly not cost-effective, so how best to meet this recurring requirement? One of the main attractions of Cloud computing is the flexibility of using varying amounts of computing resources in accordance with need. In particular, the Infrastructure as a Service (IAAS) model suggests that, rather than provisioning local hardware that can cope with occasional high peaks in demand, the Cloud can be used to obtain large amounts of computing power for a few hours while it is needed, paying for it pro rata, then relinquishing it again. The question naturally arises: can the Cloud IAAS model be exploited for relatively heavily attended events in a virtual world? This paper proceeds by: i) giving some examples of how open virtual worlds are being successfully used for immersive education; ii) outlining the design of a suitable benchmark for predicting performance under different avatar loads; iii) applying these benchmarks to local server hardware, virtual machines and the Cloud.

Open Virtual Worlds in Education

Figure 1. Routing Island: students can model and build networks and then interactively modify them to observe how routing algorithms recalculate forwarding tables.
We distinguish between virtual worlds in general and Open Virtual Worlds (OVW). The latter implies that, like a MOOC, these are freely accessible educational resources available across the Internet. In addition, the underlying software is typically open source and free to use. OVWs for learning have been created to support a wide variety of topics including Internet routing [5] (Fig. 1), cultural heritage [6][7] (Fig. 2), archaeology [8], WiFi experimentation [9], electro-magnetic theory [10], programming algorithms [11], HCI [12], space science [13] and humanitarian aid [14].

Figure 2. A 3D immersive reconstruction of St Andrews Cathedral circa 1318 (top) is used in the Schools History curriculum and for public exhibitions. The lower image is the Cathedral as it is today. School pupils combine trips to the ruins with visits to the virtual reconstruction.

Part of the motivation for using OVWs is that students and pupils readily engage with them, often more so than with conventional learning materials and contexts [15]. Second Life was pioneering in its global reach, but it was not designed for education, and several commentators have highlighted problem areas that arise when using it for that purpose [16]. These include: commercial cost, code size restrictions, age restrictions, unwanted adult content and other distractions, difficulty of coursework marking due to the permissions system, quality of experience due to remote servers, firewall blocking by campus computing services, and a lack of facilities for copying and sharing content and backing up work outside of the virtual world. Other issues also arise, such as ethics, trust and privacy, and the technological barriers to running a class with software which does not scale. In recent years OpenSim [1] has increasingly displaced Second Life (SL) as the platform of choice for developing immersive learning environments.
OpenSim is an open source project which uses the same protocol as Second Life and so is compatible with any SL-compatible viewer/client, including the SL viewer itself and others such as Phoenix [3]. This software compatibility has resulted in a de facto standard for virtual worlds and has meant that OpenSim offers a natural progression from Second Life for educationalists. While OpenSim offers solutions to many of the significant drawbacks encountered with Second Life (commercial cost, age restrictions, land constraints, content sharing and backup), there are features, or the lack thereof, inherited from Second Life which act as barriers to a wider adoption of Open Virtual Worlds in education. In this paper we focus on the problem of scalability. The following section details the development of a testbed for OVWs that can estimate the number of simultaneous users a particular OVW can accommodate before the quality of experience for the users deteriorates below what is acceptable.

Development of a Framework for Testing Scalability

The key question is: how many avatars can any particular virtual world support before their quality of experience (QoE) degrades to the point of non-usefulness? In order to develop a framework for evaluating scenarios that will answer that question we need to know: i) how to characterize typical avatar behavior; ii) how to simulate avatars; and iii) what is a meaningful metric to indicate QoE. To capture typical avatar behavior, two sets of observations were made, with 8 users and 33 users respectively. Each test had 3 runs and each run lasted 10 minutes. In this experiment the users' avatars spent 80 to 90 percent of their time standing and doing nothing, as described in [17]; the remainder of the time was spent walking around. Four measurements were derived from the traces: Frames per Second (FPS), Physics Frames per Second (PFPS), Frame Time (FT), and Physics Frame Time (PFT).
The data collected are summarized in [18] and strongly suggest that Frame Time reflects the load escalation caused by increasing numbers of avatars better than the FPS metric. Accordingly, we chose a frame time metric as a better representation of the server load generated by numbers of users. The next stage involved creating automated loads. Various types of bots were created: modified Second Life clients that used libOpenMetaverse to connect to and interact with the OpenSim server. The two bots most closely aligned with human-controlled avatars behaved as follows:

Walk-2: a 20 second walk followed by an 80 second walk in a random direction.
Walk-Rest 1: a 20 second walk in a random direction followed by 80 seconds of standing still.

In addition, the bots had inventories and were clothed to make the experimental environment closer to a human-controlled experience. After detailed tests it was found that Walk-2 was the closest match to human-controlled avatars. The framework used the following methods, software and hardware. The software consisted of OpenSim version 0.7.1 running in Grid mode, using MySQL on Ubuntu Linux 11.04. The server is started by a script before the bots are connected and is stopped after all the bots have disconnected. The server had 10 GB RAM and a 4-core Intel Xeon X3430 processor running at 2.4 GHz. The in-world measures of load were gathered using a customized client built with the libOpenMetaverse library. The monitor and the simulators were run on separate workstations to eliminate overhead on the server. The workstation OS was openSUSE 11.3, using Mono to run the .NET components. The monitor workstation collected data from the OpenSim server every 3 seconds. (The statistics gathered by the monitor client are generated by the server and sent out to all clients.)
The bots were distributed over a number of client workstations, so that the load on any one workstation would not affect the behavior of the bots or their load on the server. Tests established that a maximum of 25 bots could be run on a single workstation. The maximum number of bots used in the experiments was 100, which required 4 workstations. The workstations had 4 GB RAM and Core 2 Quad Q6600 processors running at 2.4 GHz. The bots used were of the type that most closely approximated the load on the server generated by a human-controlled avatar (Walk-2). In the experiments the number of bots was increased by 5 for each iteration, and each run was repeated 3 times. The bots were connected to the server with a 20 second gap between starting each bot. All of the bots for a run were connected to the server before the system was allowed to run for 700 seconds, which collected more than 200 values from the monitor. The monitor was then disconnected. The next section shows how results originally reported in [17] compare with new measurements from the Amazon EC2 Cloud.

OVW Performance in the Cloud

The following types of machine were used in these tests:

- Metal: the 4-core Xeon server described above.
- XEN Dom0 and XEN DomU: two variations of the Xen virtual machine [19] running on the Metal hardware. Xen is reportedly used in the Amazon Web Services (AWS) Cloud.
- Kernel Virtual Machine (KVM): a feature of the Linux kernel [20] that supports virtualization, again run on the Metal hardware.
- VirtualBox: an Oracle virtualization product for "home and enterprise use" [21].

The AWS machine instance was picked to closely match the specification of the local Metal configuration. It is a "first generation" extra-large instance, "m1.xlarge", running Ubuntu. OpenSim and MySQL were installed, along with the same version of Cathedral Island that was used in the non-Cloud test runs.
m1.xlarge is described as: "15 GB memory, 8 EC2 Compute Units" (4 virtual cores with 2 EC2 Compute Units each) [22]. Figures 3 and 4 compare the measured performance in Frames per Second and Frame Time across these platforms using the framework described above. The OVW used was a version of Cathedral Island (see Fig. 2).

Figure 3. FPS against number of avatars; less than 30 is generally regarded as a poor Quality of Experience for the user

Figure 4. Frame Time plotted against number of avatars

The graphs in Figures 3 and 4 show that:

- The local Metal server performs the best by any measure.
- The AWS m1.xlarge instance, which approximates Metal, only performs as well as Xen Dom0.
- KVM has slightly lower performance for this application than Xen DomU.
- VirtualBox is not an attractive option, barely coping with 10 avatars before degrading beneath QoE thresholds. (It may not have been designed for scalable performance.)

Conclusion

We have pointed to the great potential of open virtual worlds for immersive education but also highlighted some significant challenges to realizing that potential. The particular topic addressed in this paper is scalability. It is currently considered a great achievement to maintain good Quality of Experience for even a hundred avatars in the same region at the same time; affordable commodity local servers can typically support around sixty avatars comfortably. We asked the question: "Can the Cloud be used for relatively short, fixed periods of much higher avatar density?" We have described the development of a framework for automating scalability testing of virtual worlds. Initial results have shown that our framework produces consistent and useful measurements and that the AWS machine instance most closely corresponding to the local hardware does not perform as well.
Work is in progress using larger AWS machine instances to see if the scalability barrier can at least be raised using the Cloud. We have not yet systematically analyzed the financial cost of using AWS, but believe that if a machine instance can significantly outperform the local hardware, then running it for, say, two or three hours for special events would be cost-effective. The authors welcome communications about any aspects of this paper.

References:

[1] The_Open_Simulator_Project. (2010, June). OpenSim: http://opensimulator.org/wiki/Main_Page
[2] T. Alatalo. (2011). "An Entity-Component Model for Extensible Virtual Worlds." IEEE Internet Computing, pp. 30-37. Available: http://doi.ieeecomputersociety.org/10.1109/MIC.2011.82
[3] P. V. P. Inc., "Phoenix: http://www.phoenixviewer.com/," 2012.
[4] C. Allison, A. Campbell, C. J. Davies, L. Dow, S. Kennedy, J. P. McCaffery, A. H. D. Miller, I. A. Oliver, and G. I. U. S. Perera, "Growing the use of Virtual Worlds in education: an OpenSim perspective," presented at the 2nd European Immersive Education Summit (EiED 2012), Paris, 2012. Available: http://JiED.org
[5] J. P. McCaffery, A. H. D. Miller, and C. Allison, "Extending the Use of Virtual Worlds as an Educational Platform - Network Island: An Advanced Learning Environment for Teaching Internet Routing Algorithms," presented at the 3rd International Conference on Computer Supported Education (CSEDU 2011), Netherlands, 2011.
[6] F. Bellotti, R. Berta, A. Gloria, and L. Primavera, "Designing Online Virtual Worlds for Cultural Heritage," in Information and Communication Technologies in Tourism 2009, W. Höpken, U. Gretzel, and R. Law, Eds. Springer Vienna, 2009, pp. 199-209.
[7] S. Kennedy, L. Dow, I. A. Oliver, R. J. Sweetman, A. H. D. Miller, A. Campbell, C. J. Davies, J. P. McCaffery, C. Allison, D. Green, J. M. Luxford, and R.
Fawcett, "Living history with Open Virtual Worlds: Reconstructing St Andrews Cathedral as a stage for historic narrative," presented at the 2nd European Immersive Education Summit (EiED 2012), Paris, 2012. Available: http://JiED.org
[8] K. Getchell, A. Miller, R. Nicoll, R. Sweetman, and C. Allison, "Games Methodologies and Immersive Environments for Virtual Fieldwork," IEEE Transactions on Learning Technologies, vol. 3, pp. 281-293, 2010.
[9] T. Sturgeon, C. Allison, and A. Miller, "802.11 Wireless Experiments in a Virtual World," ACM SIGCSE Bulletin, vol. 41, pp. 85-89, 2009.
[10] L. D. Thomas and P. Mead, "Implementation of Second Life in electromagnetic theory course; http://www.youtube.com/watch?v=ExWX4Kz-6Qc," presented at the 38th Frontiers in Education Conference, Saratoga Springs, NY, 2008.
[11] M. Esteves, B. Fonseca, L. Morgado, and P. Martins, "Contextualization of programming learning: A virtual environment study," presented at the 38th Annual Frontiers in Education Conference (FIE 2008), Saratoga Springs, NY, 2008.
[12] R. S. Clavering and A. R. Nicols, "Lessons learned implementing an educational system in Second Life," presented at the 21st British HCI Group Annual Conference on People and Computers: HCI...but not as we know it, University of Lancaster, United Kingdom, 2007.
[13] NASA. (2010). Learning Technologies Project FY10 Performance Report. http://www.nasa.gov/pdf/505587main_2010_ESE_LT.pdf
[14] L. Dow, A. Miller, and E. Burt, "Managing Humanitarian Emergencies: Teaching and Learning with a Virtual Humanitarian Disaster Tool," in 4th International Conference on Computer Supported Education (CSEDU 2012), Portugal, 2012.
[15] P. Sancho, J. Torrente, and B. Fernandez-Manjon, "Fighting Drop-Out Rates in Engineering Education: Empirical Research regarding the Impact of MUVEs on Students' Motivation," presented at the 1st European Immersive Education Summit, Madrid, 2011. Available: http://JiED.org
[16] S.
Warburton, "Second Life in Higher Education: assessing the potential for and the barriers to deploying virtual worlds in learning and teaching," British Journal of Educational Technology, vol. 40, pp. 414-426, 2009.
[17] M. Varvello, F. Picconi, C. Diot, and E. Biersack, "Is there life in Second Life?," in ACM CoNEXT Conference (CoNEXT '08), New York, 2008, pp. 1:1-1:12.
[18] A. Sanatinia, I. A. Oliver, A. Miller, and C. Allison, "Virtual Machines for Virtual Worlds," presented at CLOSER 2012: 2nd International Conference on Cloud Computing and Services Science, Oporto, Portugal, 2012.
[19] xen.org. (2012). Xen Hypervisor.
[20] www.linux-kvm.org. (2012). Kernel-based Virtual Machine.
[21] Oracle. (2012). VirtualBox: https://www.virtualbox.org/
[22] A. W. Services. (2013). Available Instance Types: http://aws.amazon.com/ec2/instance-types/

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

PAPER

Virtual World Access and Control: A Role-Based Approach to Securing Content In-World

Authors: C. Lesko1, Y. Hollingsworth2, S. Kibbe3

Affiliations: 1 Dr. Charles Lesko, Ph.D., East Carolina University, Greenville, North Carolina, 27858, (252) 737-1907, leskoc@ecu.edu. 2 Yolanda Hollingsworth, MLS, MSW, Middlesex Community College, Lowell, Massachusetts, 01852, (978) 656-3001, hollingsworthy@middlesex.mass.edu. 3 Dr. Sharon Kibbe, Ph.D., East Carolina University, Greenville, North Carolina, 27858, (252) 328-9126, collinss@ecu.edu.

Abstract: The growing experimentation with and use of virtual world technologies such as OpenSimulator (commonly referred to as OpenSim) demands new and innovative approaches to controlling access to the virtual content created and retained within the virtual space. This effort reviews the concepts surrounding access and control in-world and provides a systematic approach to addressing this challenge.
These reviews are drawn from experience gained using the OpenSim virtual world solution and compile various academic scenarios and constraints.

One Sentence Summary: This paper presents a layered approach to establishing role-based access controls for a deployment of the OpenSim virtual world environment.

Introduction

Recent years have witnessed rapid growth in the experimentation with and utilization of various virtual world (VW) technologies, which can be credited, in part, to several open-source VW platforms such as the OpenSimulator (OpenSim) project [1, 2]. OpenSim is an open-source, multi-platform, multi-user 3D application server used to create VW environments. Like many of today's VW solutions, OpenSim utilizes a role-based access control (RBAC) approach to managing end-user access to virtual content. Unlike the more traditional approaches of discretionary access control (DAC) and mandatory access control (MAC), the RBAC approach focuses on specifying and enforcing enterprise-specific security policies that closely align with an established organizational structure. Under traditional approaches such as DAC and MAC, managing access and control has required solution owners to map organizational security policy to a set of low-level controls commonly referred to as access control lists [3].

The OpenSim solution incorporates an RBAC approach to managing security policy. Each end-user is allocated, through an avatar, one or more established roles closely aligned with the user's job responsibilities within the overall VW solution. Avatars are given permissions or privileges to perform multiple tasks. Each role is assigned one or more privileges that enable the end-user to interact within the solution to conduct such actions as creating, editing and deleting content; accessing information; and controlling environmental settings such as light, terrain, time of day, etc.
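The role-and-privilege model described above can be illustrated with a minimal sketch; the class names and privilege strings are hypothetical and not part of the OpenSim API:

```python
# Minimal RBAC sketch: roles bundle privileges; avatars collect roles.
# All names here are illustrative, not drawn from the OpenSim code base.

class Role:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = set(privileges)

class Avatar:
    def __init__(self, name):
        self.name = name
        self.roles = []

    def grant(self, role):
        self.roles.append(role)

    def can(self, privilege):
        # An avatar may perform an action if any of its roles carries it.
        return any(privilege in r.privileges for r in self.roles)

builder = Role("builder", ["create_content", "edit_content", "delete_content"])
visitor = Role("visitor", ["access_information"])

avatar = Avatar("student01")
avatar.grant(visitor)
assert avatar.can("access_information")
assert not avatar.can("edit_content")

avatar.grant(builder)        # role membership, not the avatar, carries rights
assert avatar.can("edit_content")
```

The point of the sketch is that access decisions depend only on role membership, so changing a user's duties means reassigning roles rather than editing per-object access control lists.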
It is an end-user's assignment, or membership, to various roles that determines the actions each end-user is allowed to perform. Thus, managing access and controls within an RBAC-type environment involves identifying the activities that must be conducted based on an end-user's position within the organization and then assigning the end-user to the appropriate role(s) [4]. Roles and privileges are assigned to avatars, which in turn can be assigned to a group. For instance, a co-owner group can be established with its own rights and privileges; an avatar associated with that group ultimately inherits all of those rights, much like "child" and "parent" permissions on a web file structure.

The OpenSim solution provides academic institutions with the flexibility to tailor both the hardware and software to fit within their own existing infrastructures. This affords the opportunity to create scalable VW environments capable of integrating with many other third-party software solutions. To understand the architectural considerations needed to deploy an OpenSim VW environment for academia, a baseline of key terms is critical.

Architecture of an OpenSim Grid

The VW environments established using OpenSim are simulated, interactive, immersive 3D digital spaces that provide multiple end-users with a shared, persistent, collaborative virtual working space in real time, allowing them to build, edit and customize content. A common unit of virtual space is a simulator (sim); a sim is one instance of OpenSim running on a machine. The term sim is often used interchangeably with the term region; a region is a square of virtual space or land that by default equates to 256×256 meters of VW surface area.
Upon establishing an OpenSim VW, an end-user utilizes a client viewer (the software needed to log into an OpenSim environment) to log into the VW. Once logged into the VW (known as being in-world), the end-user is typically represented by a basic avatar - the user's virtual representation or proxy through which the client viewer delivers the experience. An OpenSim grid is one instance of OpenSim run in grid mode. The grid concept involves a central 'grid' server that manages accounts and assets, allowing other users to run their own OpenSim regions from individual computing platforms. The current OpenSim solution also provides for the creation of large groupings of regions referred to as megaregions. Megaregions are an expansion of the standard region: they are created systematically by logically conjoining two or more standard regions, so their virtual surface size is a multiple of 256×256 meters [5]. For example, Figure 1 presents an overview of four regions that form a single megaregion with a virtual surface area of 512×512 meters.

Regions can also be logically subdivided within the VW environment; these subdivisions are referred to as parcels. A parcel is a fraction of a region that has unique individual settings within a given region, such as media, access, voice, and security. The objects or content found within any given region are collectively referred to as assets or inventory. Assets are virtual content that exist only within the VW environment and can include animations, sounds, gestures, textures, prims, and more. All of this virtual content can be retained by each end-user within their avatar's inventory, and it can also be exported to other grids in an OpenSim environment.

Figure 1.
Overview of four regions forming a single megaregion

Whether using regions or megaregions, a single grid instance can manage multiple regions/megaregions that are logically placed on a two-dimensional map. Once grid services are established for a given grid, regions are mapped to the grid by providing a unique (X,Y) coordinate and a 128-bit Universally Unique Identifier (UUID). With valid X and Y integer values of 0-65535, the theoretical number of regions can easily meet the scalability requirements of any given academic VW scenario. With the advent of the Hypergrid concept, grid users are able to move from one OpenSim grid to another, adding a whole new set of access issues to the equation [6].

Implementing Grid Access Controls

Regardless of whether the OpenSim grid is public or private, access and control issues are central to establishing viable academic VW platforms. Applying appropriate restrictions on grids, for either assets or virtual spaces, is extremely relevant - particularly when protected age groups are involved. Having different end-user groupings, with different views of the VW and varying levels of access, co-existing in the same virtual space can create a more secure and clearly defined virtual working environment [7].

When developing architectural-level security schemes for designing, building, and deploying open-source solutions such as OpenSim, it is important to break down the logical layers of the solution's architecture. There are currently two distinct layers of access control within any single OpenSim grid solution: the Region Layer and the Grid Layer (see Figure 2). Once a single private grid is connected to one or more other public grids, an additional Hypergrid Layer is added to the access control solution.
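The grid-map registration described earlier in this section - a unique (X,Y) coordinate plus a 128-bit UUID per region - can be sketched as follows; the class and method names are illustrative, not OpenSim code:

```python
# Sketch of a grid-level region registry: each region occupies a unique
# (x, y) map coordinate and receives a 128-bit UUID, mirroring the grid
# map described in the text. Names are illustrative only.
import uuid

GRID_MAX = 65535  # valid X and Y integer values run from 0 to 65535

class GridMap:
    def __init__(self):
        self.regions = {}  # (x, y) -> region UUID

    def register(self, x, y):
        if not (0 <= x <= GRID_MAX and 0 <= y <= GRID_MAX):
            raise ValueError("coordinates out of range")
        if (x, y) in self.regions:
            raise ValueError("coordinate already occupied")
        region_id = uuid.uuid4()  # 128-bit Universally Unique Identifier
        self.regions[(x, y)] = region_id
        return region_id

grid = GridMap()
rid = grid.register(1000, 1000)
assert grid.regions[(1000, 1000)] == rid
```

With 65536×65536 possible coordinates, the address space alone is never the scalability bottleneck; the practical limits lie in the services backing each region.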
Even though public grid ownership opens up access control issues to a whole new level, private grid owners must still implement controls to ensure privacy and accountability are maintained over access to both virtual spaces and grid assets.

Grid Layer

Initial access and authentication occur at the grid layer. OpenSim utilizes a flexible server shell referred to as Redesigned OpenSimulator Basic Universal Server Technology (ROBUST). The ROBUST server loads various service connectors that provide overarching access and control over the regions linked to the grid and govern hypergrid activity into and out of the grid. Common services managed at the grid level include: login and authentication; asset and inventory management; avatar, friends and presence services; grid and hypergrid management; and voice services.

Figure 2: Access Control Layers

The Grid Layer is responsible for user access creation (UAC) using the familiar username-and-password authentication approach. Typical grid setups allow for a general user registration process whereby grid accounts are created online via a web-based interface; however, this approach may require further consideration. Unless separate web-based services are provided for end-users to create their own accounts, each grid account is issued by the institution to assist in validating the user's identity. OpenSimulator allows for the creation of grid accounts without any restriction, providing academic institutions with the opportunity to use existing unique user identities. Within any given avatar session, a unique, dynamically generated UUID-based URL is assigned by the system for each session of user interaction; these URLs act as session tokens for that specific interaction. At the core of any OpenSim instance is the UserLevel integer used to attribute the 'God' powers for each individual user, which governs access to the VW environment.
The 'God' role in OpenSim represents the administrator for the grid. When an avatar account is created at the grid level, the default value for the new account is set to '1' for 'GOD_LIKE'; this allows the account to log into the grid and conduct basic interactions. By changing the values as outlined in Figure 3, grid administrators can assign full grid permissions by setting the value to '200' for 'GOD_FULL' or, if needed, prohibit the account from logging into the grid by setting the value to '-1' for 'GOD_NOT'. The concept of creating groups of avatar users is not currently a core component of the OpenSim distribution; two separate third-party group modules (Flotsam and Simian) are currently available that allow grid administrators to create multiple groups of avatar accounts [8].

Accounts that receive GOD permissions are initially established at the Grid Layer. This level of access is very extensive, allowing the account end-user to make numerous administrative-level adjustments on the grid, including: controlling terraforming (the ability to change the grid landscape); managing estates and parcels; controlling the environment (such as setting time, wind, sky and physics); managing digital content, scripts and textures; and removing unwanted avatars (end-users) when necessary [9].

Figure 3: God Mode Values

Region Layer

The Region Layer is where the specific access controls for each region are set. By the time a region is opened for the first time on the grid, a region owner must have been identified and assigned the role of 'Region Owner'. The Region Owner role has the access permissions needed to manage the specific region to which it is assigned. Within the scope of the region, the region owner controls numerous functions, including: land management, user management, media management, content creation, teleporting, flying, and media streaming [4].
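The grid-level UserLevel values described above (see Figure 3) can be sketched as a simple policy check. The three constants follow the text; the helper functions and the treatment of intermediate levels are assumptions for illustration:

```python
# Sketch of the UserLevel semantics from the text: -1 blocks login,
# 1 is the default allowing basic interaction, 200 grants full 'God'
# permissions. Helper names and the >= 0 login rule are assumptions.
GOD_NOT = -1    # account may not log into the grid
GOD_LIKE = 1    # default for new accounts: login and basic interaction
GOD_FULL = 200  # full administrative ('God') permissions

def may_log_in(user_level):
    # Assumption for this sketch: any non-negative level may log in.
    return user_level >= 0

def has_full_god(user_level):
    return user_level >= GOD_FULL

assert not may_log_in(GOD_NOT)
assert may_log_in(GOD_LIKE) and not has_full_god(GOD_LIKE)
assert has_full_god(GOD_FULL)
```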
Land management also allows the Region Owner to subdivide the VW into land areas referred to as parcels. These parcels can be assigned to different end-users known as parcel owners; once assigned, parcel owners gain some administrative capability within the control span of their respective parcels.

As a region owner, it is important to establish key groups, roles, and permissions to govern the region. Groups assist in controlling who can build and perform functions within the grid. Groups contain members, and the members have roles on the virtual land. Many groups can be set up, but each should have its own function. For an institution, it could be important to set roles such as faculty, student, etc. These groups and roles assist in the security maintenance of the VW [10]. The current OpenSim distribution does not provide group services, so third-party solutions are required to meet this requirement. Two of the third-party options, Flotsam and Simian, provide for the establishment of multiple groupings within a given grid; another option, provided through the Diva distribution, is currently limited to at most two groupings [8]. Regardless of which group module is utilized, this level of permissions granularity is critical to providing a secure, yet accessible, academic grid.

Hypergrid Layer

The Hypergrid Layer exists for all grids and individual regions that are linked to one another [11]. This layer of access control encompasses the aspects of interconnectivity that allow avatars, and their associated characteristics and content, to travel between virtual world grids. The hypergrid concept allows different institutions to run and operate their own virtual world grids while allowing end-users (via their respective avatar accounts) to visit and participate in other grids.
This hypergrid capability thus provides global collaborative support without the need for global coordination or centralized control [12]. In the Hypergrid, region/grid administrators can place hyperlinks on their in-world maps to other institutions' grids/regions. Once those hyperlinks are established, end-users are able to interact within those grids/regions in exactly the same way they would interact within their own. Essentially, end-users (via their avatars) choose to teleport to other hyperlinked grids/regions; once an avatar reaches the destination, it is able to interact within that region without having to log out from its home grid [13].

Grid administrators define the access policies for each of the regions/grids they create. Currently in OpenSim, access to grid assets by outside avatars is controlled by the 'HG Asset Service' on the grid server. Grid administrators are able to specify policies for importing assets into and exporting assets out of their grid. Once defined, these statically assigned access policies are fixed with regard to what any in-world end-user account may attempt [11].

At the Hypergrid Layer there are two independent authorities: the home region/grid and the destination region/grid. Each of these can be configured independently of the other. Control of the avatars that enter a region/grid is currently handled in OpenSim by the Gatekeeper Service; this is where grid administrators specify whether the grid accepts visiting avatars. Home grid administrators can also statically control where their users can and cannot go via the User Agents Service. When an avatar attempts to teleport to a grid that the home administrator does not approve, the teleport request is denied; policy specifications are based on the end-user's level, giving grid administrators the latitude to apply different restrictions based on the needs of their end-users [11, 13].
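The two-authority teleport control described above - the home grid's outbound policy and the destination's gatekeeper - can be sketched as follows. The class names and policy fields are illustrative stand-ins, not the actual OpenSim Gatekeeper or User Agents services:

```python
# Sketch of the two independent hypergrid authorities: a teleport
# succeeds only if the home grid allows the destination AND the
# destination grid accepts visitors. All names are illustrative.

class HomeGridPolicy:
    def __init__(self, blocked_destinations=()):
        self.blocked = set(blocked_destinations)

    def allows_outbound(self, destination):
        # Home administrators statically control where users may go.
        return destination not in self.blocked

class Gatekeeper:
    def __init__(self, accepts_visitors=True):
        self.accepts_visitors = accepts_visitors

    def allows_inbound(self):
        # Destination administrators decide whether to accept visitors.
        return self.accepts_visitors

def teleport(home, destination_name, gatekeeper):
    # Both authorities must approve; either one can deny the request.
    return home.allows_outbound(destination_name) and gatekeeper.allows_inbound()

home = HomeGridPolicy(blocked_destinations={"untrusted-grid"})
assert teleport(home, "partner-university", Gatekeeper(True))
assert not teleport(home, "untrusted-grid", Gatekeeper(True))
assert not teleport(home, "partner-university", Gatekeeper(False))
```

Because the two policies are evaluated independently, neither institution needs to coordinate with the other, which matches the decentralized design of the hypergrid.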
Conclusions

The need to maintain secure virtual spaces with host server control is evident for many institutions [14]. Even with the growing interest in the field, and the importance of securing these virtual spaces, there is currently little to glean from the literature due to limited scholarship in this area [15]. The literature does show research conducted in both the military and business sectors on regulated virtual spaces; like educationally facilitated spaces and grids, these share the necessity of secure, seamless, multi-operational access levels. Research on market-driven business enterprises further addresses focus group definition in-world [16].

The increasing availability of new learning VW solutions is a proper future direction for educational institutions, which should focus on content security by host and owner institutions. Further development of role definition is warranted, particularly in creating policy taxonomies for academia. Newly designed models for group management, establishing task and role functions, should be given high consideration in any academic-centric virtual world solution moving forward.

References:

1. Allison, Colin and Miller, Alan. Open Virtual Worlds for Open Learning. St Andrews, UK: Higher Education Academy, 2012.
2. Za, Stefano and Braccini, Maria. Designing 3D Virtual World Platforms for E-Learning Services: New Frontiers of Organizational Training. Business Information Processing, Volume 103, 2012, pp. 284-296.
3. Winkler, Shenlei E. Opening the Content Pipeline for OpenSim-Based Virtual Worlds. In: Digital Rights Management: Concepts, Methodologies, Tools, and Applications. Hershey, PA: Information Science Reference, 2013, pp. 1030-1042. DOI: 10.4018/978-1-4666-2136-7.ch051.
4. Perera, Indika, Allison, Colin and Miller, Alan. Secure Learning in 3 Dimensional Multi User Virtual Environments - Challenges to Overcome. 11th PGNet Symposium, Liverpool, UK, 2010.
5.
Farooq, U. Scalable and Consistent Virtual Worlds: An Extension to the Architecture of OpenSimulator. 2011 International Conference on Computer Networks and Information Technology (ICCNIT), Bara Gali, Pakistan, 2011, pp. 29-34. DOI: 10.1109/ICCNIT.2011.6020902.
6. Lopes, C. Hypergrid: Architecture and Protocol for Virtual World Interoperability. IEEE Internet Computing, Volume 15, Issue 5, 2011, pp. 22-29. DOI: 10.1109/MIC.2011.77.
7. Carvalho, Marco M. and Ford, Richard. NextVC2 - A Next Generation Virtual World Command and Control. Military Communications Conference, MILCOM 2012, Orlando, FL, 2012, pp. 1-6. DOI: 10.1109/MILCOM.2012.6415834.
8. Korolov, Maria. Hypergrid Inventor Adds Group Support to OpenSim. Hypergrid Business, February 22, 2013.
9. Hodge, Elizabeth, Collins, Sharon and Giordano, Tracy. The Virtual Worlds Handbook: How to Use Second Life and Other 3D Virtual Environments. Sudbury, MA: Jones & Bartlett Learning, 2009.
10. Bo Sun, Ji-yi Xu, Xiao-ming, Hui-qin Zhao and Liu, Jing. Research and Application of Access Control Technique in Distributed 3D Virtual Environment. The 7th International Conference on Computer Science & Education (ICCSE 2012), Melbourne, Australia, 2012, pp. 1638-1643. DOI: 10.1109/ICCSE.2012.6295378.
11. Lopes, Crista. How HG 2.0 Is Coming Along. Metaverseink.com [Online], September 23, 2012. http://metaverseink.com/blog/?p=459.
12. Baldi, Pierre and Lopes, Crista. The Universal Campus: An Open Virtual 3-D World Infrastructure for Research and Education. eLearn Magazine, Issue 4, Article 6, April 2012. DOI: 10.1145/2181207.2206888.
13. OpenSimulator.org. Hypergrid. [Online] March 4, 2012. [Cited: August 6, 2012.] http://opensimulator.org/wiki/Hypergrid.
14. Hill, Valerie and Meister, Marcia. Virtual Worlds and Libraries: Gridhopping to New Worlds.
College & Research Libraries News, Volume 74, Issue 1, 2013, pp. 43-47.
15. Scacchi, Walt, Brown, Craig and Nies, Kari. Investigating the Use of Computer Games and Virtual Worlds for Decentralized Command and Control. Monterey, CA: Naval Postgraduate School, 2011.
16. Houliez, Chris and Gamble, Edward. Augmented Focus Groups: On Leveraging the Peculiarities of Online Virtual Worlds When Conducting In-World Focus Groups. Journal of Theoretical and Applied Electronic Commerce Research, Volume 7, Issue 2, 2012, pp. 31-51. DOI: 10.4067/S0718-18762012000200005.

PAPER

Formative Assessment in Immersive Environments: A Semantic Approach to Automated Evaluation of User Behavior in Open Wonderland

Authors: Joachim Maderer*1, Christian Gütl1,2, Mohammad AL-Smadi3

Affiliations: 1 Graz University of Technology, Graz, Austria. 2 Curtin University, Perth, Western Australia. 3 Tallinn University, Tallinn, Estonia.

*Correspondence to: joachim.maderer@student.tugraz.at

Abstract: This paper proposes a pedagogic-prominent approach to designing automated assessment and feedback apart from 3D virtual worlds and using them to support guided learning. Through an externalization of the assessment process, the approach supports interaction between educators and virtual environment designers, allowing pedagogic content to be identified at the development stage and then manipulated dynamically in response to learner actions and interactions, within an overarching set of pedagogic goals defined by the educator or trainer. The method supports integration with automated assessment technologies, allowing such tools to recognize and respond immediately to learner actions by modifying the virtual environment or triggering feedback.
This decoupling between the assessment engine and the 3D virtual worlds enables support for a variety of environments, as well as different contexts and application domains.

One Sentence Summary: The semantic-enabled automated assessment and feedback approach has shown promising applicability to different immersive 3D virtual environments.

1. Introduction

Immersive virtual 3D environments have become popular in various learning settings, and there is common agreement on their educational impact. Recent research has shown their potential in fields such as science education [1], skill training [2], and language learning and training [3]. Particularly in science education, there is great benefit in virtually providing situations that are too dangerous, too expensive or impossible to create in real-world settings. Beside these self-evident arguments, there is another aspect to consider: science education employs experiments to improve the understanding of abstract concepts, but experiments are normally conducted in laboratories and therefore often out of real-world context. Virtual worlds can thus improve learning by embedding the experiment in a more realistic setting and helping students transfer knowledge of a model to real-world experiences and contexts [4].

As assessment and feedback need to be integrated parts of a holistic learning experience, these aspects require special attention in virtual world environments. While summative aspects can easily be administered and evaluated afterwards (e.g. with multiple-choice tests in-world or even out-of-world; refer to [5]), formative evaluation is far more complex to implement. Especially in practical training such as laboratory exercises, supervisors and tutors are usually available to guide the learner and help avoid deadlocks through supportive feedback.
Given that one of the main advantages of e-learning is its support for self-paced learning [6], it must be assumed that a teacher is not always available in-world to observe students' knowledge and skill levels and give immediate supportive feedback and guidance. Nevertheless, immediate feedback in particular is considered important to foster motivation and support the development of self-regulated learning [7].

Few approaches for providing assessment in immersive 3D environments can be found in the literature. Among them is the Sloodle project (Simulation Linked Object Oriented Dynamic Learning Environment), which provides a Quiz Chair tool integrated with Moodle to support teachers in delivering quizzes within Second Life [8]. QuizHUD (Heads-Up Display) [9] is another example, in which avatars in Second Life are enabled to touch items and complex objects in order to respond to regular multiple-choice questions. However, both the Sloodle Quiz Chair and QuizHUD are limited to simple question types, such as multiple-choice questions, and lack support for other 3D environments such as Open Wonderland and OpenSim. Moreover, they cannot be applied to provide alternative forms of assessment such as behavioral assessment and non-invasive assessment. Developers therefore have to embed those forms of assessment using the environment's scripting language or code-behind; however, this requires teachers to have a high level of programming skill, and the approach is neither scalable nor reusable. Focusing on behavioral assessment and interpersonal skills assessment, there are proposals to use assessment approaches from serious games, in which player interactions are tracked and used to evaluate behavior and skills based on artificial intelligence methods such as decision trees, finite state machines, and Bayesian networks [10].
However, these approaches also lack, to some extent, flexible teacher control over the assessment method and are not applicable across different contexts and learning scenarios. Literature reviews of feedback in 3D virtual worlds have stressed the importance of formative models for feedback provision [11, 12]. Rogers [13] classifies feedback forms into evaluative (players get a score), interpretive (players get a score and the wrong action), supportive (players get a score and guidance information), probing (players get a score and an analysis of why they performed the wrong action), and understanding (players get a score and an analysis of why they performed the wrong action, as well as guidance toward supportive steps or learning material). Dunwell et al. [14] discuss feedback aspects in digital educational games based on Rogers' classification and propose a four-dimensional approach to feedback provision in serious games. According to this approach, the following aspects should be considered when designing feedback forms in 3D virtual worlds: (a) type: the feedback type differs, based on Rogers' classification discussed above, with respect to students, teachers, or technology; thus the aspects required to classify feedback - such as measured variables, their relationship model, the learner model, the knowledge model, and the domain model - should be considered; (b) content: content can be classified with respect to the learning outcomes as essential or desirable; (c) format: the media used to present feedback (e.g. text, image, voice, etc.); (d) frequency: the rate of feedback provision differs with respect to instructor, technology, pedagogy, and learner preferences; hence, feedback can be immediate, delayed, or dynamic based on the domain and the type of learner action.
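As an illustration, the four feedback dimensions above could be captured in a small record structure. The field values follow the classifications cited in the text; the function and its validation are illustrative only:

```python
# Sketch of the four feedback dimensions (type, content, format, frequency)
# from the four-dimensional approach of Dunwell et al. [14], with feedback
# types drawn from Rogers' classification [13]. Validation is illustrative.
FEEDBACK_TYPES = {"evaluative", "interpretive", "supportive", "probing", "understanding"}
CONTENT_CLASSES = {"essential", "desirable"}
FREQUENCIES = {"immediate", "delayed", "dynamic"}

def make_feedback(ftype, content_class, fmt, frequency, message):
    if ftype not in FEEDBACK_TYPES:
        raise ValueError("unknown feedback type")
    if content_class not in CONTENT_CLASSES:
        raise ValueError("unknown content class")
    if frequency not in FREQUENCIES:
        raise ValueError("unknown frequency")
    return {"type": ftype, "content": content_class,
            "format": fmt, "frequency": frequency, "message": message}

fb = make_feedback("supportive", "essential", "text", "immediate",
                   "Check the polarity of the voltmeter before reconnecting.")
assert fb["type"] == "supportive"
```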
The situation stated so far has motivated our research on automated online assessment that creates dynamic feedback, focusing on supportive aspects based on the behavior of the learner within an immersive environment. This includes support for different virtual world platforms and game-based environments, as well as various application scenarios and flexibility in terms of the level of assessment and type of feedback. Previous research by this group has already focused on flexible assessment in game-based environments, where a context-dependent, rule-based assessment engine, decoupled from the game design, was implemented [15]. In this paper, we report on a generalization at the conceptual level and first experiences in 3D virtual worlds. Open Wonderland [16], a popular open-source environment for hosting virtual worlds, was selected because of its current use in immersive education research and its rich extensibility. The main contribution of this work is the design and development of an assessment and feedback module for Open Wonderland, based on a semantic approach, and its application to a sample physics experiment. Different types of feedback, based on [13, 14], have been considered in our approach and are discussed in more detail in this paper.

The remainder of the paper is structured as follows: Section 2 introduces a conceptual architecture with a focus on semantics. Section 3 reports on the analysis of issues and problems regarding the implementation of a flexible semantic assessment module in Open Wonderland and details the architecture of our approach. Section 4 reports on the applied demonstration of the assessment module on a physics experiment, and Section 5 discusses the results and gives an outlook on future work.

2.
Design and Conceptual Architecture

Based on our previous experience and an extensive literature survey, briefly outlined in Section 1, we are interested in a flexible, versatile assessment system for immersive environments. While the primary focus lies on formative aspects, in order to provide dynamic supportive feedback, summative aspects should be covered as well. The assessment process is supposed to work fully automatically, based on the evaluation of user behavior related to a given situational context within the immersive environment. Flexibility also means supporting a variety of immersive environments. Consequently, the following requirements for a flexible assessment system have been identified. The assessment system should be (a) separated from the immersive environment in order to support different platforms, including game-based environments and virtual worlds. It should also be (b) flexible in terms of application domains and selected scenarios. (c) Connections to external systems, particularly learning management systems (LMS), should support the exchange of data - e.g. knowledge and skill levels - from and to learners' profiles. (d) The actual assessment mechanism (assessment logic) should be independent and exchangeable, allowing the definition and execution of assessment aspects with respect to the knowledge and skill levels of the learner. (e) A flexible feedback mechanism should support different forms of assessment and feedback, including simple text messages or multiple-choice dialogs; this includes the choice of the guiding system - e.g. simple screen messages or communication through non-player characters (NPC) - based on available platform resources and learner preferences. In addition, non-functional requirements such as adequate response times and usability aspects must also be considered.
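A minimal sketch of how requirements (a)-(e) might shape a middleware interface follows; all names are illustrative rather than taken from an existing system:

```python
# Sketch of an assessment-middleware interface reflecting requirements
# (a)-(e) above. All names are illustrative; this is not an existing API.

class AssessmentSystem:
    """Middleware decoupled from any particular immersive environment (a)."""

    def __init__(self, assessment_logic, feedback_channel, lms=None):
        self.logic = assessment_logic             # exchangeable logic (d)
        self.feedback_channel = feedback_channel  # flexible feedback (e)
        self.lms = lms                            # optional LMS link (c)

    def on_event(self, event):
        # Evaluate a platform-neutral event against the current rules (b),
        # then deliver any resulting feedback through the chosen channel.
        feedback = self.logic(event)
        if feedback is not None:
            self.feedback_channel(feedback)
        if self.lms is not None:
            self.lms.append(("event", event))     # update learner profile data

# A trivial rule: warn whenever a 'disconnect' action is observed.
log = []
lms_log = []
system = AssessmentSystem(
    assessment_logic=lambda ev: "Check the wiring!" if ev["action"] == "disconnect" else None,
    feedback_channel=log.append,
    lms=lms_log,
)
system.on_event({"action": "disconnect"})
assert log == ["Check the wiring!"]
```

Because the assessment logic and the feedback channel are injected, either can be swapped without touching the environment-specific event tracking, which is the core of requirement (a).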
While the decoupling of assessment system and immersive environment has already been shown in the context of game-based environments [15], we now propose an enhanced and generalized flexible architecture (see Figure 1) in order to fulfill the requirements stated above. The assessment system is intended to act as middleware between an immersive environment, which is delivered to the user as a learning resource, and external systems such as LMSs or other information systems. The communication between the immersive environment and the assessment system is based on a pre-shared knowledge base of common and domain-specific semantics (the semantic knowledge repository). These semantics are used to describe the abstract spatial location, abilities and observable state of entities (items and avatars), as well as possible interactions between objects and the avatars controlled by the players. Therefore, both design processes - assessment rules and in-world scenario - can be completed independently by different groups, i.e. the instructional designers and the game designers. In terms of overall flexibility and support for a variety of immersive environments, an assessment module (AM) for event tracking and feedback delivery, implemented specifically for a given platform, is required to send events describing changes and actions to the assessment system and to process possible feedback content. Events are intended to be player-centered: only those changes and actions are considered which occur within the perception range of the avatar. In multi-user environments, the assessment system must therefore manage an isolated perception context per player. The approach of using semantics is also compatible with the general call for rich semantics in future 3D environments in order to simplify the design process and increase efficiency [17].
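A player-centered semantic event of the kind exchanged between the in-world assessment module (AM) and the middleware might look as follows; the field names and the JSON encoding are assumptions for illustration, not a defined wire format:

```python
# Sketch of a player-centered semantic event: it names the acting player,
# the semantic action, the entity involved, an abstract location, and the
# entity's observable state. All field names are illustrative.
import json

def make_event(player_id, action, entity, location, state=None):
    return {
        "player": player_id,     # events are player-centered
        "action": action,        # e.g. "pick_up", "connect", "move"
        "entity": entity,        # semantic identifier of the 3D item
        "location": location,    # abstract spatial location, not raw coordinates
        "state": state or {},    # observable state of the entity
    }

event = make_event("avatar-42", "connect", "voltmeter-1",
                   "physics-lab/workbench", {"terminals": "reversed"})
wire_format = json.dumps(event)  # serialized for transport to the middleware
assert json.loads(wire_format)["action"] == "connect"
```

Describing the location abstractly ("physics-lab/workbench" rather than coordinates) is what lets the same assessment rules apply across platforms with different world geometries.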
This can further improve the reuse of implemented items for different learning scenarios within a certain platform. The learning management system (LMS) provides information (a configurable set of values) describing the knowledge and skill levels of the learner, or a set of tasks that have or have not already been mastered, in order to adjust the assessment process. In the reverse direction, achievements and setbacks in certain exercises are used as input to update these values, based on the current knowledge and skill levels of the learner.

Figure 1. Overview of the conceptual architecture of the automated, flexible assessment for immersive environments.

The current work aims at two main objectives: First, an exemplary implementation for Open Wonderland should prove that the assessment approach is also applicable to 3D virtual worlds. Second, an initial step towards a rich semantic representation of virtual worlds should be made, in order to study whether such semantics can be leveraged to support dynamic supportive feedback across different application domains and platforms, with the potential of decreasing the amount of time required for assessment design. Therefore, the next section describes the development of a flexible semantic-enabled assessment module for Open Wonderland.

4. Flexible Semantic-enabled Assessment Module for Open Wonderland

4.1 Situation Analysis and Identified Issues

As a client-server based system, entirely written in Java, Open Wonderland (OWL) is a toolkit for hosting 3D virtual worlds. Its design features a highly modular system, which can easily be extended and customized for specific purposes. Typical 3D items, which users can interact with, are called cells in OWL. Communication between client and server is facilitated through serializable Java objects, referred to as messages.
The client software, which can be considered an interactive browser, is delivered through Java Web Start and loads all packages needed for joining a server session when logging on to a server. The balance between server and client is designed in such a way that the server is primarily responsible for managing the synchronous state of cells and their representation on multiple clients, while the client does most of the physics calculations and graphics rendering [18]. Prior to the actual development, an analysis of the current state of Open Wonderland regarding semantic event tracking for assessment purposes (as discussed in Section 2) was conducted. This relates to the interception of user actions, the information available on 3D objects (cells) and the detection of location changes of the avatar. The review is based on the architecture paper, the available documentation and source code studies of the OWL project [16, 18-20]. On the client side, user actions are implemented through different mechanisms, including context menus, control panels (also available in-world) and direct processing of mouse and keyboard events within the 3D space. While basic mouse and keyboard events are less interesting due to the raw information they provide, actions revealed by context menus and control panels were examined more closely. It has proven impossible to derive information that can be used to automatically create semantic events, because there is no guarantee that actions will expose any information beyond the graphical or textual representation required for the user interface. Textual representations in particular depend on the current language setting of the client and therefore do not constitute a valid source of language-independent information. As a considerable number of actions have to be shared with other clients, cell messages are eventually sent to the server to request those changes.
Although these messages can be intercepted, the problem remains that messages are simply serializable Java objects and classes derived from them, which do not feature a common interface for exposing semantically usable information. The information available on cells is rather limited. Besides a name, there are no attributes that describe the purpose of the 3D object. As cells are implemented as derived classes and stored in different modules, all information and behavior is encapsulated. Consequently, there is no common way to determine which kinds of user interaction an object supports. This also concerns the status of an object, where status refers to something that could be understood and observed by a human being, e.g. the color of a traffic light. Regarding the current location of an avatar, it is not complicated to obtain the corresponding 3D coordinates. However, the usage of exact coordinates is not desirable, as the assessment system should be able to work with different scenarios, thus relying not on coordinates but on abstract descriptions of location on a semantic level. Furthermore, there is no pre-defined mechanism for tagging spatial areas. OWL does, however, feature the concept of proximity listeners, which can define 3D spatial areas (called bounding volumes) that trigger events whenever an avatar enters or leaves such an area. Thus, our findings reveal that it is not possible to develop a fully automated tracking system that generates semantically useful events without adapting the cells contained in existing modules. To overcome these shortcomings, we propose a conceptual approach in the following section.
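To illustrate the proximity-listener idea just mentioned, the following sketch re-implements enter/leave detection for a spherical bounding volume independently of the actual Open Wonderland API; the class name, label format and method signature are hypothetical.

```java
// Illustrative re-implementation of the proximity-listener concept: a
// spherical bounding volume reports enter/leave transitions of a tracked
// avatar, from which semantic events (e.g. "entered lab area") can be
// derived at runtime. Not taken from the Open Wonderland code base.
final class ProximityZone {
    private final String label;        // semantic annotation, e.g. "area.pendulum-lab"
    private final double cx, cy, cz, radius;
    private boolean inside = false;    // last known state of the tracked avatar

    ProximityZone(String label, double cx, double cy, double cz, double radius) {
        this.label = label;
        this.cx = cx; this.cy = cy; this.cz = cz;
        this.radius = radius;
    }

    /** Returns "enter:<label>" or "leave:<label>" on a transition, or null. */
    String update(double x, double y, double z) {
        double dx = x - cx, dy = y - cy, dz = z - cz;
        boolean nowInside = dx * dx + dy * dy + dz * dz <= radius * radius;
        String transition = null;
        if (nowInside && !inside) transition = "enter:" + label;
        if (!nowInside && inside) transition = "leave:" + label;
        inside = nowInside;
        return transition;
    }
}
```

Calling `update` once per avatar position change yields a transition only when the boundary is crossed, which is exactly the granularity needed for location-based semantic events.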
4.2 Conceptual Architecture

Based on the issues presented in the previous section and the general approach presented before, we propose a conceptual architecture for an assessment module in Open Wonderland (OWL) that supports the following two phases: (a) design time, where cells (3D objects) and spatial sections are annotated with semantic information. In addition, cells in existing modules must be enhanced by developers in order to trigger semantic events related to encapsulated behavior; and (b) runtime, where user actions and environmental changes are tracked to automatically create semantic events. All semantic events are collected and instantly sent to an external assessment system, which evaluates the event flow based on an internal assessment mechanism and provides feedback and guidance if required. The feedback is then delivered to the set of OWL clients according to the scenario and the instructional objectives. Due to the architecture of OWL itself, the assessment module (see Figure 2) is divided into aspects that belong to either the client context or the server context. The conceptual architecture has the following components: Client 3D object and shared 3D object refer to a typical pair of client-side cell and server-side managed cell objects in OWL. While the cell object on the client side is responsible for user interaction and presentation, relevant actions that affect all participating clients are sent to the server through cell messages. The managed cell object on the server side constructs semantic events and sends them to the assessment module for further processing. The semantic event manager is responsible for collecting semantic events created within the virtual world and forwarding the information to all registered listeners.
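A minimal sketch of such a semantic event manager, using a plain observer pattern with string events for brevity; the names are illustrative and not taken from the module's source.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the server-side semantic event manager: managed cell objects
// publish semantic events (plain strings here for brevity); the manager
// forwards each event to all registered listeners, one of which would be
// the external assessment interface.
final class SemanticEventManager {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void register(Consumer<String> listener) {
        listeners.add(listener);
    }

    void publish(String event) {
        for (Consumer<String> l : listeners) {
            l.accept(event);
        }
    }
}
```

Keeping the manager unaware of who listens is what allows the external assessment interface to be registered or exchanged without touching the cells that publish events.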
The following mechanisms are involved in creating semantic events: (a) Tagging and metadata components are used to annotate spatial sections with place marks and to add additional information to certain cells (3D objects), including proximity zones – discrete definitions of the distance from an avatar to the cell. This can be used to detect behavior such as users entering or leaving rooms, or approaching certain cells. Both annotations are created by world designers at design time. In the runtime phase, semantic events are created automatically based on the location of the avatar. (b) Programmatically invoked events are created by embedded functionality that is added to existing and newly created modules (usually server-side managed cells) by software developers. This kind of event is used to describe interaction with a cell object – as well as to report on its state – e.g. the amplitude of a pendulum swing. Here again, a state is always considered to be information that is directly observable and understandable by a human player. (c) Common events include very basic OWL actions like the start and stop of avatar movements or gestures, which are again created automatically. However, no particular design-time feature exists for such events. Therefore, a global configuration is supposed to provide mappings that yield proper semantic information for such built-in events. The external assessment interface registers with the semantic event manager and communicates with the external assessment system. It uses the server-side feedback API to create feedback content items based on the results received from the external assessment system. The feedback API is a server-side interface for creating and sending feedback content to specified clients based on the participating users. Which users are involved in certain feedback messages is determined by the external assessment system.
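The global configuration for common events, described under (c), could be sketched as a simple mapping from raw built-in action names to semantic labels; both the keys and the labels below are assumptions for illustration, not identifiers from Open Wonderland.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the global configuration that maps built-in OWL actions, which
// carry no design-time annotation, onto semantic labels. All keys and
// labels are illustrative assumptions.
final class CommonEventMapping {
    private final Map<String, String> mapping = new HashMap<>();

    CommonEventMapping() {
        mapping.put("AVATAR_MOVE_START", "avatar.movement.start");
        mapping.put("AVATAR_MOVE_STOP",  "avatar.movement.stop");
        mapping.put("GESTURE_WAVE",      "avatar.gesture.wave");
    }

    /** Yields the semantic label for a raw event name, or null if unmapped. */
    String semanticsFor(String rawEvent) {
        return mapping.get(rawEvent);
    }
}
```

In practice such a table would be loaded from a configuration file, so world designers can extend it without changing code.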
The structural representation of the feedback content is based on the four-dimensional approach for feedback in serious games [14]. However, while pedagogical aspects are decided in the context of the assessment system, the module is responsible for presentation. The current version is capable of displaying text messages only. The client feedback API is a collection of utility functions to play feedback content items on the OWL client and to activate different guiding mechanisms. The authoring tools on the client side are used by content designers to create place marks for annotating spatial areas and to attach metadata to existing cells. To support debugging, an event monitor and a system reset function are included as well.

Figure 2. Conceptual architecture for a flexible semantic assessment module in OWL

5. Sample Application of a Physics Experiment

Based on the conceptual architecture introduced in the previous section, we have developed a first proof of concept of the assessment module for Open Wonderland. The developed assessment module supports the exemplary scenario presented in this section to a certain extent. The following aspects have been chosen to be tested with the prototype: (a) the tagging and metadata component, more precisely the feature for defining annotated place marks, (b) programmatically invoked events implemented on a single cell (3D object) included in a sample module on a physics experiment, and (c) the delivery of textual feedback content to clients through the feedback API (server) and the client feedback API, enabling simple screen messages. Feedback content items are forwarded from the OWL server to the client. As the assessment module does not include built-in assessment logic by design, the scenario has been linked with the SOFIA Middleware [11] through the simple object access protocol (SOAP) in order to receive feedback based on a context-dependent, rule-based assessment mechanism.
The assessment process includes the transfer of semantic events (containing user actions and state values) from the assessment module to the evaluation web service. Based on these semantic events, the assessment engine provides suitable feedback by evaluating each event against related assessment rules – matching actions and state changes – which have been defined in advance.

Figure 3. During the first approach to the experiment, the learner receives a supportive message referring to the PDF document displayed on the presentation wall on the left-hand side.

For demonstration purposes, a laboratory setup modeling a real pendulum has been selected. The primary element of this setup is a cell object that represents an animated pendulum, based on an underlying simulation model that has been developed as an OWL module. The simulation model is coupled to the refresh rate (time interval) of the 3D scene and solves the equation of motion through numerical integration. As a result, the pendulum behavior is close to real-world settings. The pendulum cell has been programmatically enhanced to raise semantic events (refer to (b) above) describing the interaction of pushing the pendulum (user action), as well as giving information on the maximum deflection angle (state) – i.e. the peak amplitude on the basis of its current mechanical energy. Further items in the scene include a presentation wall – in order to present instructions for the experiment contained in a PDF file – and a whiteboard for making notes. Finally, the setting has been annotated with a hidden place mark (refer to (a) above) that defines a spatial section around all described items. This place mark is labeled to associate it with semantic information.
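A minimal sketch of such a simulation model: the equation of motion of a pendulum, theta'' = -(g/L) * sin(theta), is integrated with one semi-implicit Euler step per frame, a "push" injects angular velocity, and the maximum deflection angle is tracked as the assessable state. Constants, class and method names are illustrative, not taken from the actual OWL module.

```java
// Sketch of a pendulum simulation model solved by numerical integration
// (semi-implicit Euler). The assessment-relevant state is the maximum
// deflection angle reached so far.
final class PendulumModel {
    private static final double G = 9.81;  // gravitational acceleration (m/s^2)
    private final double length;           // pendulum length (m)
    private double theta = 0.0;            // deflection angle (rad)
    private double omega = 0.0;            // angular velocity (rad/s)
    private double maxDeflection = 0.0;    // peak amplitude so far (rad)

    PendulumModel(double length) { this.length = length; }

    /** A user push injects a discrete amount of angular velocity. */
    void push(double deltaOmega) { omega += deltaOmega; }

    /** Advances the simulation by dt seconds (called once per frame). */
    void step(double dt) {
        omega += -(G / length) * Math.sin(theta) * dt; // semi-implicit Euler
        theta += omega * dt;
        maxDeflection = Math.max(maxDeflection, Math.abs(theta));
    }

    double maxDeflection() { return maxDeflection; }
}
```

Energy conservation predicts cos(thetaMax) = 1 - omega0^2 * L / (2g) for a push of omega0 from rest, which gives a simple check that the integration is sound.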
The following two show cases have been selected to demonstrate (a) and (b), each in combination with (c), as feedback is an essential element in both examples: (1) The instructions and experiment description on the presentation wall are a permanent feature of this setting. However, less experienced learners may need a hint to read these instructions before they begin with the experiment. This has been the motivation for the first show case (see Figure 3). The world builder has used a hidden spatial place mark to annotate the local area around the experiment. When the player approaches this area, an appropriate event is automatically triggered by the assessment module. The instructor has defined a rule in the context of the remote assessment logic that matches against this event. As a result, a text-based feedback message is released by the assessment logic and sent back to the assessment module. Finally, the feedback is displayed on the screen, guiding the learner to the presentation wall. If the assessment system were additionally connected with the learner profile, it could also decide whether such feedback is required. (2) After reading the instructions, the player should be aware that the formula for a simplified pendulum is only valid for small deflection angles. The second show case (see Figure 4) assumes that (depending on the actual task) pushing the pendulum too strongly would corrupt certain measurements and calculations. Whenever the player clicks on the pendulum, a discrete amount of kinetic energy is induced. As a side effect, a built-in event – containing the interaction itself, as well as an update of the amplitude (maximum deflection angle) – is communicated via the semantic event manager. In this case, the instructor defines a rule that not only matches against the interaction of pushing the pendulum, but also evaluates a condition determining the threshold for the maximum allowed deflection angle.
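Such an instructor-defined rule for show case (2) might be sketched as follows; the action label, threshold and feedback text are illustrative assumptions, not the rule syntax of the actual assessment engine.

```java
// Sketch of a rule as evaluated by the external, rule-based assessment
// logic: it matches the push interaction and additionally checks the
// reported maximum deflection angle against a threshold.
final class DeflectionRule {
    private final String matchAction;  // e.g. "interaction.push"
    private final double maxAngleRad;  // threshold for valid measurements

    DeflectionRule(String matchAction, double maxAngleRad) {
        this.matchAction = matchAction;
        this.maxAngleRad = maxAngleRad;
    }

    /** Returns a feedback text if the rule fires, or null otherwise. */
    String evaluate(String action, double amplitudeRad) {
        if (!matchAction.equals(action)) return null;   // wrong interaction
        if (amplitudeRad <= maxAngleRad) return null;   // within valid range
        return "The simplified pendulum formula is only valid for small "
             + "deflection angles - try a gentler push.";
    }
}
```

The returned text would travel back to the assessment module and be displayed through the feedback APIs described above.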
If this threshold is exceeded, appropriate feedback is given to the learner. Apart from minor technical issues, mostly related to the current development state of Open Wonderland, the conceptual approach and the findings from the first implementation turn out to be largely successful. The response times are acceptable for providing immediate feedback and satisfy real-time performance requirements. This holds true even if the Open Wonderland server and the assessment web service are separated by public internet connections. The semantic approach turns out to be useful for supporting the decoupling of assessment design and world building, thus enabling the reuse of certain environmental elements in different situations. However, as already indicated in the previous sections, the limitations concerning the direct integration of existing modules make it difficult to provide a complete perspective of the situational context for the assessment system. Therefore, a considerable amount of work has to be invested in order to prepare all commonly used pre-existing modules (e.g. presentation wall, whiteboard, etc.) to support this approach. New modules can adopt this approach directly by following the design guidelines for programmatically invoked events. Another issue is that most items in Open Wonderland can be selected and operated without the avatar even being close to them. As a result, events can be generated out of context, thus giving the assessment system a wrong perspective of what is actually happening within the virtual world. Finally, common events – although not implemented and tested in this sample application – are intended to track basic player behavior. For instance, by tracking start and stop events of player movements, the assessment system could evaluate the avatar behavior. This could also include changes of viewing direction, which could be extended to track the cells currently observed by the avatar.
The assessment system would then be able to give feedback related only to the cells visible on the screen at that moment.

Figure 4. When the pendulum is pushed too strongly, a supportive feedback message suggests that the simplified theoretical model is not valid for large deflection angles.

6. Discussion and Outlook

This research aims at developing a semantic-enabled approach to support automated assessment forms in Open Wonderland – more precisely, an external and automated assessment of user behavior within the virtual world. The proposed approach enables the provision of dynamic supportive feedback and guidance, considering the knowledge and skill levels of the learner. The approach is underpinned by a conceptual architecture for a generalized flexible assessment system in immersive environments, as a foundation to support not only Open Wonderland but also other platforms for game-based environments and virtual worlds. In addition, the approach is flexibly designed to support a variety of application domains and scenarios. To this end, an assessment module for Open Wonderland has been designed, and a prototype has been developed and used as a proof of concept for automated assessment and feedback. Moreover, the assessment approach has been applied to a sample physics experiment to evaluate its applicability and performance. First findings have shown that the general approach is applicable to different immersive 3D virtual environments. The response times were acceptable for providing immediate feedback and satisfy real-time performance requirements. However, first experiences with Open Wonderland reveal that semantic information is not considered an important aspect at this time, and it seems natural to assume that other platforms share similar issues – especially considering, again, the general call for rich semantics in complex 3D virtual environments [17].
Consequently, we suggest making semantic information a key feature of next-generation virtual world platforms, thereby supporting not only the semantic approach to flexible assessment proposed in this paper, but other aspects as well. Moreover, the semantic-enabled assessment system used here is rule-based, context-dependent, and provides feedback through a clearly defined web service that was developed earlier by our research group [15]. This decoupling between the assessment engine and the 3D virtual environment makes it possible to support a variety of environments as well as different contexts and application domains. Moreover, it enables the reuse of content and scenarios and maintains interoperable and flexible automated assessment forms. The next steps must include the improvement of this approach, especially the creation of large-scale scenarios annotated with rich semantics. Future research may also include the use of semantics-related standards and specifications to represent services and the communication with the semantic knowledge repository. In future work, we will also apply the findings and tools of this work to the nDIVE project (http://ndive-project.com) for assessment and for providing feedback and guidance in skill training.

Acknowledgements

Parts of this research are linked to the nDIVE project (http://ndive-project.com). Support for the production of this publication has been provided by the Australian Government Office for Learning and Teaching (Development of an authentic training environment to support skill acquisition in logistics & supply chain management, ID: ID12-2498). The views expressed in this publication do not necessarily reflect the views of the Australian Government Office for Learning and Teaching.

References and Notes:

1. J. Pirker, S. Berger, C. Gütl, J. Belcher, P.
Bailey, in Proceedings of the 2nd European Immersive Education Summit (Immersive Education, Paris, France, Nov. 26–27, 2012). Available: http://JiED.org
2. T. Reiners et al., in IADIS International Conference on Internet Technologies & Society, P. Kommers, T. Issa, P. Isaías, Eds. (2012), pp. 93–100.
3. K. Hiroki and Akama Ishizuka, Language Learning in 3D Virtual World, eleed 8 (2011) (available at http://nbn-resolving.de/urn:nbn:de:0009-5-31706).
4. T. Machet, D. Lowe, C. Gütl, On the potential for using immersive virtual environments to support laboratory experiment contextualisation, European Journal of Engineering Education 37, 527–540 (2012).
5. G. Crisp, The e-assessment handbook (Continuum, London; New York, 2007).
6. D. Zhang, J. L. Zhao, L. Zhou, J. F. Nunamaker Jr., Can e-learning replace classroom learning?, Communications of the ACM 47, 75–79 (2004).
7. D. J. Nicol, D. Macfarlane-Dick, Formative assessment and self-regulated learning: A model and seven principles of good feedback practice, Studies in Higher Education 31, 199–218 (2006).
8. J. W. Kemp, D. Livingstone, P. R. Bloomfield, SLOODLE: Connecting VLE tools with emergent teaching practice in Second Life, British Journal of Educational Technology 40, 551–555 (2009).
9. P. R. Bloomfield, D. Livingstone, Immersive learning and assessment with quizHUD, Computing and Information Systems Journal 13(1), 20–26 (2009).
10. M. B. Ibáñez, R. M. Crespo, C. Delgado Kloos, in Key Competencies in the Knowledge Society (KCKS 2010), World Computer Congress, Brisbane, Australia, 20–23 Sept. 2010, N. Reynolds, M. Turcsányi-Szabó, Eds. (Springer, Boston, 2010), pp. 165–176.
11. E. H. Mory, Feedback research revisited, Handbook of Research on Educational Communications and Technology 2, 745–783 (2004).
12. V. J. Shute, Focus on formative feedback, Review of Educational Research 78, 153–189 (2008).
13. C. Rogers, Client-centered Therapy: Its Current Practice, Implications and Theory.
London: Constable, 1951 (ISBN 1-84119-840-4).
14. I. Dunwell, S. de Freitas, S. Jarvis, Four-dimensional consideration of feedback in serious games, in Digital Games and Learning, P. Maharg, S. de Freitas, Eds. (Continuum Publishing).
15. M. Al-Smadi, G. Wesiak, C. Gütl, Assessment in serious games: An enhanced approach for integrated assessment forms and feedback to support guided learning, in Interactive Collaborative Learning (ICL), 2012 15th International Conference on (2012), pp. 1–6.
16. Open Wonderland (available at http://openwonderland.org/).
17. T. Tutenel, R. Bidarra, R. M. Smelik, K. J. D. Kraker, The role of semantics in games and simulations, Comput. Entertain. 6, 57:1–57:35 (2008).
18. J. Kaplan, N. Yankelovich, Open Wonderland: An extensible virtual world architecture, IEEE Internet Computing 15, 38–45 (2011).
19. Open Wonderland JavaDoc (available at http://javadoc.openwonderland.org/).
20. Open Wonderland Documentation Wiki (available at http://code.google.com/p/openwonderland/wiki/OpenWonderland).

PAPER

A Technological Model for Teaching in Immersive Worlds: TYMMI's Project

Authors: M.G. Badilla1*†, C. Lara2
Affiliations: 1 Universidad Católica de la Santísima Concepción. 2 Universidad Católica de la Santísima Concepción.
*Correspondence to: mgbadilla@ucsc.cl
† Facultad de Educación, Alonso de Ribera Avenue # 2850, Campus San Andrés, Concepción, Chile

Abstract: The TYMMI project, Technology and pedagogical models in immersive worlds, is funded by the National Fund for Scientific and Technological Development (FONDECYT) in Chile.
Through an exploratory, descriptive and longitudinal study, we seek to strengthen the professional performance of bachelor of education students by simulating teaching practice in three-dimensional (3D) learning environments, through the design, validation and implementation of a technological model in Second Life (SL) and OpenSimulator (OpenSim). These environments provide virtual learning opportunities where community members can meet, share and interact in the process of assimilating teaching practices. Preliminary results are related to the development of a conceptual and technological model for teaching in immersive virtual worlds and the design of an immersive simulation platform for students. This virtual space contains an architecture to support a pedagogical proposal that includes classrooms with educational situations set in the sub-critical zone of Language and Communication in primary education.

One Sentence Summary: The conceptual technology model supports the design and deployment of pedagogical actions in immersive virtual worlds through process and technology components.

1. Introduction

The design of activities that consciously adhere to the principles of good teaching, curricular context and relevance in relation to teaching strategies, learning techniques and the profile of the students is a complex job for teachers. The difficulties arise from the volume, complexity and high frequency of changes in the general definitions of university courses and curriculum design. This is not only a reality in Chile. In Latin America, Tenti Fanfani [1] states that reforms have created a crisis in the professional identity of teachers, as they have relied on an instrumental view of teaching.
Another relevant factor is the scant systematization of standards for documenting educational activities and the lack of integration of these documents with the particular structures of the learning strategies included in the activity. Other factors include time constraints for class preparation, attention to students, breaks during the workday and lack of time caused by the need to hold more than one job [2-8]. Another dimension of the problem is the integration of Information and Communication Technology (ICT) as a resource, which is hampered by the gap between digital-immigrant teachers and digital-native students, both concepts defined by Prensky [9]. Teachers have few systematized knowledge bases that allow them to consult available resources relevant to their design needs. The large amount of scattered information available on the Internet further adds to the problem, hindering teachers' decision-making regarding the use of ICT in the teaching-learning process. Evidence indicates that most teachers do not use technology intensively as a teaching resource in their classrooms, due, among other causes, to poor preparation in the use of these tools during their training as teachers [10]. Consequently, a major challenge is to train teachers to meet current demands, and to incorporate an inclusive vision into initial training so that, through ICT, education becomes a catalyst for equitable development. Meanwhile, the 2021 Educational Goals not only raise the need to integrate ICT into the curriculum and evaluate its impact, but also to train teachers and disseminate innovative pedagogical practices using ICT [11]. Teachers in Chile, Latin America and developed countries alike complain about a lack of technical support to choose adequately between the various and often conflicting pedagogical alternatives [1, 6].
For example, they resent the lack of opportunities to update their knowledge of the educational use of technologies, to the extent that it generates among them a sense of obsolescence and a fear of being replaced by others with more training [12, 13]. In Chile, teachers complain about the lack of support and professional development opportunities [14], an absence that is of particular concern in the case of teachers with less than three years of service [15].

2. Virtual Worlds

One of the technological possibilities that can be incorporated into the world of education is Second Life (SL), an immersive virtual world accessible from the Internet. The potential of virtual worlds lies in the fact that today's learning processes do not attend to the demands that the work environment places on education professionals; thus, the creation of three-dimensional educational settings could generate an additional advantage over traditional methodologies, allowing users to interact in simulated work environments. One of the features of Second Life is that it is cross-platform software with three-dimensional (3D) features and unique, easy-to-remember settings, which can run on Linux, Macintosh and Windows. Second Life was developed in 2003 by Linden Lab, a private U.S. corporation. Its counterpart is OpenSimulator (OpenSim), open-source software that allows for the easy simulation of 3D virtual environments. Its function is to allow various developers to create virtual worlds and to put these in contact with worlds created by other developers. This creates a domino effect, because it extends the use of virtual worlds and enhances the interest of netizens (citizens using the Internet). SL offers the opportunity to use simulation in a safe environment to enhance experiential learning.
It offers participants (avatars) in various communities (islands) the ability to create their own spaces, to exchange information, and to create situations in order to interact within the environment, practice skills, try new ideas and learn from their errors [16]. Immersion in a virtual world in the sphere of education produces positive results in various areas. For example, it expands the possibilities for experimentation through the user's active participation, which in turn relates to formative elements as well as to other elements of the environment, thus becoming collaborative and motivating [17, 18]. Gonzalez and Chavez [19] indicate that there is a positive relationship between the use of 3D tools and learning, specifically in the degree of retention, utilization of transmission and the diagnosis of skills.

3. Methodology

The current project runs from 2012 to 2015. It takes a multidisciplinary approach, bringing together the areas of pedagogy, educational computing, engineering sciences and social communication. This research is developed under a theoretical framework that integrates the principles and rules of both the positivist and the interpretive paradigm, assuming that in education the use of more than one paradigm, each integrating and complementing the other, is accepted [20]. The scope of this research is exploratory and descriptive in nature, because it attempts to examine an understudied topic in order to become familiar with relatively unknown phenomena, and because it seeks to describe their various aspects and dimensions [21]. The methodology corresponds to a multi-method research design, which is used when two or more procedures are employed to investigate the same phenomenon through the different stages of the project [22-24].
It is considered a multiple model that uses qualitative and quantitative approaches in the phases of the research process: research design, selection of approach, construction, data analysis, and interpretation of results [25].

4. Preliminary Results

The preliminary results relate to the first year of the project. They consist of the development of a conceptual and technological model for teaching in immersive virtual worlds and the design of an immersive simulation platform for students. This virtual space contains an architecture to support a pedagogical proposal, which includes classrooms with educational situations set in the sub-critical zone of Language and Communication in primary education.

4.1 Conceptual and Technological Model for Teaching in Immersive Worlds

The architecture supporting the pedagogical proposal for work in immersive environments developed by Project TYMMI contemplates, in one of its phases, the design of a conceptual technological model for teaching in immersive worlds, TEDOMI. It proposes the interaction of a set of processes with technology components, focused on enhancing the simulation of teaching practices in three-dimensional environments like Second Life and OpenSim. The process component proposes groups of actions for teachers to consult, plan and/or implement educational activities, ensuring an appropriate approach to the use of resources and a meaningful curricular context. The technology component in turn comprises: 1) computer systems that support teachers in consultation, design and planning, and 2) the spaces created on immersive world platforms where resources are available for use and teaching practice with this type of technology, among which SLOODLE is considered for use (see Figure 1).

Figure 1.
Technological model for teaching in immersive worlds: TEDOMI

4.1.1 Technology Components

The technology components are represented by pedagogical deployment platforms, which constitute the spaces where teaching activities are held. In the context of this project, these platforms are the immersive virtual worlds Second Life and OpenSim. The other component is the platform for cataloging resources and educational planning, which supports the processes that precede and follow teaching activities.

A) Pedagogical deployment platforms

In this project, the pedagogical deployment platforms are Second Life and OpenSim. In Second Life a private region was acquired and equipped with free and paid tools of high educational potential; the same will happen in a second phase with the OpenSim platform. This arrangement ensures the integration of both environments with the Moodle Learning Management System (LMS) through an integrator known as SLOODLE. This technical measure is useful for tasks that require using and accessing the LMS from within the virtual world, in order to register products made by students on a permanent storage medium.

B) Platform for cataloging resources and educational planning

The platform for cataloging resources and educational planning consists of software called Acti-Plan, custom-built for this project, which allows teachers to act in two areas: cataloging resources and educational planning. Cataloging resources consists in providing the teacher with a computer interface in which an index card can be registered for a tool explored in virtual worlds and considered a potential educational resource. The first resources listed are those used in the private virtual spaces in Second Life and OpenSim already acquired by the project.
The spirit of the cataloging module in Acti-Plan rests on the fact that resources useful for education are in constant development and may be hosted in multiple locations within virtual worlds. It is therefore an impossible task for a centralized administrator, or for any teacher in particular, to maintain a permanent exploration of resources and an up-to-date catalog. The added value of this interface is that it allows the task to be approached by a collaborative teaching community. Each listing added contains the following metadata about the resource: Resource Name, Creator or Author, Description, Type, Permissions, Cost, Location, Version and Revision Date.

Educational planning consists in providing the teacher with a user interface for the assisted recording of educational activities. The documentation assistant proposes three phases:

1. Context of good teaching practice
2. Curricular context: national plans and programs, institutional graduate profile
3. Design of curricular activities
   • Base phase sequence
   • Determination of steps for each phase of the proposed base sequence
   • Learning strategies related to a step or phase of the base sequence
   • Resources
   • Teaching skills required

The design of Acti-Plan contemplates that all of these elements are preloaded, easing the teacher's planning exercise by reducing the task to the selection of the relevant items. In this way the teacher can concentrate on the sequence of the designed activity, given that the entire instrumental curricular definition can be completed quickly.

4.1.2 Process Components

Process components define frameworks for cataloging tools, consulting the knowledge base, planning teacher action and recording the experience. These processes do not form a strict sequence; depending on particular needs and knowledge gaps, the teacher can use one or all of them.
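As an illustration of the index card described above, a listing's metadata could be modeled as a simple record type. The following Python sketch is hypothetical: Acti-Plan's actual implementation is not published, so the class name, field names and sample values here are our own, derived only from the metadata fields listed in the paper.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch only: the index-card fields listed in the paper,
# modeled as a record type. Class and field names are our own assumptions.
@dataclass
class ResourceListing:
    resource_name: str
    creator: str            # creator or author of the tool
    description: str
    resource_type: str      # e.g. "scripted object"
    permissions: str        # e.g. "copy/modify/transfer"
    cost: float             # 0.0 for free tools
    location: str           # e.g. an island or region in Second Life/OpenSim
    version: str
    revision_date: date     # date the listing was last reviewed

# Example listing for a tool mentioned later in the paper (values invented).
listing = ResourceListing(
    resource_name="Opinionator",
    creator="unknown",
    description="Instant voting tool that displays results as a pie chart",
    resource_type="scripted object",
    permissions="copy",
    cost=0.0,
    location="TYMMI island, Second Life",
    version="1.0",
    revision_date=date(2013, 1, 15),
)
```

A record type like this makes each teacher's contribution to the shared catalog uniform, which is what lets the community maintain the index collaboratively.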
Below is a description of the five processes involved:

A) Cataloging SL/OS tools for teaching

In this process teachers explore the pedagogical deployment platforms to find tools of interest as educational resources. In the context of this project, the pedagogical deployment platforms considered are Second Life and OpenSim; however, this is not a restriction, and exploration can be expanded or changed to suit the interests of the teacher. The process is supported by the platform for cataloging resources and educational planning, specifically by its cataloging functionality, where the teacher records the explored resource according to the index card defined for this purpose.

B) Checking the knowledge base

In practice, planning educational activities in virtual worlds may prove difficult, as the teacher may have little teaching experience with which to tackle the design of the activities. To strengthen this process, the Acti-Plan platform allows the teacher to query a knowledge base (K-base) that contains the information about activities stored by the user and by other teachers. This process is supported by the platform's planning functionality, where the teacher can consult the database using different search criteria, such as resource type, education level, unit content and learning strategy, and also access the documentation of activities related to the search criteria.

C) Pedagogical/didactic planning

Planning activities can be very time-consuming for a teacher without appropriate assistance or instrumentation. To strengthen this process, Acti-Plan allows teachers to plan their activities in an assisted manner, reducing the time spent on the task and increasing the quality and effectiveness of the design of the activities.
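The K-base consultation in process B can be pictured as a filter over stored activity records. This Python sketch is purely illustrative: the record layout, helper name and sample data are assumptions; only the search criteria (resource type, education level, unit content, learning strategy) come from the paper.

```python
# Hypothetical sketch of consulting the K-base: filter stored activity
# records by any combination of search criteria. Record fields and data
# are invented for illustration.
def query_kbase(records, **criteria):
    """Return the records that match every supplied criterion exactly."""
    return [
        record for record in records
        if all(record.get(key) == value for key, value in criteria.items())
    ]

k_base = [
    {"activity": "Virtual art exhibit", "resource_type": "gallery",
     "education_level": "primary",
     "unit_content": "Language and Communication",
     "learning_strategy": "role play"},
    {"activity": "MindMap 3D brainstorm", "resource_type": "mind map",
     "education_level": "primary",
     "unit_content": "Language and Communication",
     "learning_strategy": "collaborative work"},
]

matches = query_kbase(k_base, learning_strategy="role play")
print([m["activity"] for m in matches])  # → ['Virtual art exhibit']
```

The point of the sketch is the interaction pattern: a teacher narrows the community's accumulated activities by any combination of criteria rather than browsing the whole catalog.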
This process is supported by the platform for cataloging resources and educational planning, specifically by its pedagogical planning functionality, where it is possible to define the entire curricular context, the fundamental steps for the development of activities, the learning strategies and the technological resources to be used.

D) Implementation of the pedagogical activity

Depending on the planned activity, the teacher runs the pedagogical action on the selected deployment platform. In the context of this research the alternatives are Second Life or OpenSim, with the possibility of integration with Moodle.

E) Recording the experience (teaching metacognition)

Since 1976, when Flavell [26] defined metacognition as the knowledge and regulation that individuals have of their own cognitive processes, various conceptual constructions have been established, none of them very distant from the original. Garrison points out that metacognition consists of two components: awareness (knowledge) and implementation strategies (control) [27]. Tovar-Gálvez [28] presents metacognition as a strategy spanning three dimensions through which the individual acts and carries out tasks:

• Dimension of reflection, in which individuals recognize and evaluate their own cognitive structures, methodological possibilities, processes, abilities and limitations.
• Dimension of administration, during which individuals, already conscious of their condition, combine these diagnosed cognitive components with the intention of forming strategies to solve the task.
• Dimension of evaluation, through which individuals assess the implementation of their strategies and the degree to which the cognitive goal is being achieved.

After the implementation of the educational activity, it is important to capture the impressions of the teachers and participating students.
To strengthen this process and systematize the information, Acti-Plan provides a module for systematically recording these perceptions. In this way the process cycle is closed: the K-base records data on the complete cycle of the teaching action, enabling analysis and the construction of knowledge. Acti-Plan also includes a function in which teachers and students can record evaluations of the educational experience, covering the evaluation dimension. Educational metacognition, with regard to the reflection and administration dimensions, must be considered according to the pedagogical model on which the pedagogical planning module is based. For students, both dimensions must be triggered by the design of the activity.

4.2 Relationship Between Problem and Solution

In relation to the issues raised in this research (namely, strengthening the professional practices of student teachers through immersive simulation environments), the conceptual and technological model TEDOMI responds to the problems identified in Table 1 below, supporting the design and deployment of pedagogical actions.

Table 1. Relationship between pedagogical problems and the solutions proposed by TEDOMI

Problem: Volume, complexity and high frequency of changes in the general definitions and the curriculum.
TEDOMI model solution: Systematize this information and keep it up to date in the K-Base database.

Problem: Teachers have few systematized knowledge bases that allow them to consult available resources relevant to their design needs.
TEDOMI model solution: The model designs and implements software called Acti-Plan, one of whose interfaces allows the cataloging of resources (or tools) available in online stores (virtual shops) and/or in the immersive environment platforms themselves (Second Life and OpenSim). Acti-Plan not only allows the documentation of ICT resources, but also associates these resources with planned activities and, at the end of their execution, provides an interface for recording the teaching experience with respect to the use of ICT and other relevant dimensions. Thanks to this, the K-Base provides information with which teachers can explore the contexts and experiences in which one or more resources have been used pedagogically.

Problem: Scarce systematization of standards for documenting educational activities, and lack of integration of these documents with the particular structures of the learning strategies included in the activity.
TEDOMI model solution: The model designs and implements a computer system, Acti-Plan, that assists the teacher in documenting and recording the activity according to a standard model, also allowing the activity to be documented according to its related learning strategies. To achieve this assistance, the K-Base systematizes the standard for documenting activities and the standard for documenting learning strategies. The activity log created by the teacher with the assistance of Acti-Plan is stored in the K-Base. The global context of good teaching practices, such as the curricular space in which the activity is located, can easily be related, given that such information is systematized in the K-Base. In this process of assisted teaching planning, it is possible to associate with an activity one or more cataloged Second Life or OpenSim resources pertinent to its development.

Source: Authors

4.3 Architecture to Support the Pedagogical Proposal

The second result of this project is the construction of a website presenting the main aspects of the project [29] (see Figure 2 below). In addition, a TYMMI island in Second Life was acquired and an environment was configured.
This environment was configured with classrooms and resources related to education which, together with the integration of Moodle, allow access to the LMS from Second Life (see Figure 3). In this virtual space there is an architecture to support the pedagogical proposal, which includes classrooms with educational situations set in the curricular subsector of Language and Communication. Here, teaching techniques such as art exhibits, communities, creation of objects, blogs, conferences, exploration and research will be implemented as simulations. The instructional model underlying the design of simulated learning environments for pedagogy students is based on the theory of meaningful learning, with a strong component of role play, collaborative work and metacognition. Second Life is currently used, for example, by enabling thematic areas in which the teacher and students work through four moments of a class: first, the teacher contextualizes the issues addressed during the session; second, students discover and work with the different resources; third (optionally), students visit other islands in Second Life; and fourth, students reflect on and socialize the results of their search. The island offers various resources, such as the Opinionator, which instantly calculates and displays votes as percentages and as a pie chart at its center, and MindMap 3D, which creates spheres (nodes) that connect to one another to represent what avatars are thinking.

Figure 2. TYMMI Project web page

Figure 3. TYMMI Project in Second Life

5. Conclusions

Teachers need to be freed from the instrumental suffocation that planning often imposes.
Given the complex curriculum framework in which teaching is situated, it is important to revive a professional identity that gives teachers the opportunity to focus their capabilities on meeting the needs of the teaching and learning process in the educational contexts in which they work. Providing teachers with access to knowledge bases of ICT resources, cataloged both by their characteristics and by user experience, allows newly initiated users to review and reflect on teaching practices already used by other teachers, and then to adapt these to their own contexts. The formation of a community of teachers that explores and catalogs ICT resources and shares experiences of teaching and learning in diverse curricular contexts consolidates a shared learning curve and sustains a constantly evolving collaboration among teachers, strengthening the use of ICT resources in their teaching practice. The platform thus offers a repository of activities and good practices for incorporating ICT in the classroom, enabling a complementary model of education and virtual learning opportunities in which the avatars of community members can meet, share and interact while assimilating teaching practices. Moreover, the use of immersive virtual worlds as a teaching resource can trigger high student motivation in the development of educational activities, maximizing learning.

Acknowledgments

This research is being carried out thanks to the support of the Comisión Nacional de Investigación Científica y Tecnológica (CONICYT), Ministry of Education, Chile, through the National Fund for Scientific and Technological Development: Fondecyt Initiation Project number 11121532.

References:

[1] E. Tenti Fanfani, La Condición Docente.
Análisis Comparado de la Argentina, Brasil, Perú y Uruguay (Siglo Veintiuno Editores, Buenos Aires, 2005).
[2] Centro Microdatos, Encuesta Longitudinal de Docentes. Informe Final (Universidad de Chile, Santiago, 2007).
[3] M. Robalino, A. Körner, Condiciones de Trabajo y Salud Docente. Estudios de Casos en Argentina, Chile, Ecuador, México, Perú y Uruguay (UNESCO, Santiago, 2005).
[4] D. Vaillant, C. Rossel, Maestros de Escuelas Básicas en América Latina. Hacia una Radiografía de la Profesión (PREAL, Santiago, 2006).
[5] A. Hargreaves, Time and teachers' work: An analysis of the intensification thesis. Teachers College Record, 94(1), 87-108 (1992).
[6] J. M. Esteve, in El Oficio de Docente. Vocación, Trabajo y Profesión en el Siglo XXI, E. T. Fanfani, Comp. (Siglo Veintiuno Editores, Buenos Aires, 2006), pp. 19-69.
[7] P. Bennell, K. Akyeampong, Teacher Motivation in Sub-Saharan Africa and South Asia. Department for International Development (DFID) (London, 2007; http://r4d.dfid.gov.uk/pdf/outputs/policystrategy/researchingtheissuesno71.pdf).
[8] X. S. Liu, J. Ramsey, Teachers' job satisfaction: Analyses of the Teacher Follow-up Survey in the United States for 2000-2001. Teaching and Teacher Education, 24(5), 1115-1400 (2008).
[9] M. Prensky, Digital Natives, Digital Immigrants. On the Horizon, 9(6) (2001).
[10] J. Hinostroza, C. Labbé, M. Brun, C. Matamala, Teaching and learning activities in Chilean classrooms: Is ICT making a difference? Computers & Education, 57(1), 1358-1367 (2011).
[11] OEI-CEPAL, Organización de Estados Iberoamericanos para la Educación, la Ciencia y la Cultura. 2021 Metas Educativas. La Educación que Queremos para la Generación de los Bicentenarios. Documento Final (Madrid, 2010; http://www.oei.es/metas2021).
[12] E. Tenti Fanfani, in El Oficio de Docente: Vocación, Trabajo y Profesión en el Siglo XXI, E. T. Fanfani, Comp. (Siglo Veintiuno Editores, Buenos Aires, 2006), pp. 119-142.
[13] E. Tenti Fanfani, Consideraciones sociológicas sobre profesionalización docente. Educación y Sociedad, 28(99), 335-353 (2007).
[14] C. Bellei, in Economía Política de las Reformas Educativas en América Latina, S. Martinic, M. Pardo, Eds. (CIDE & PREAL, Santiago, 2001), pp. 227-258.
[15] B. Avalos, B. Carlson, P. Aylwin, La inserción de profesores neófitos en el sistema educativo… ¿Cuánto sienten que saben y cómo perciben su capacidad docente en relación con las tareas de enseñanza asignadas? (2004; http://www.cepal.org/ddpe/publicaciones/sinsigla/xml/7/19597/INSERPROFE.pdf).
[16] A. E. Iribas, in IV Jornada Campus Virtual UCM: Experiencias en el Campus Virtual (Editorial Complutense, Madrid, 2008; http://eprints.ucm.es/7800/1/campusvirtual130-148.pdf), pp. 125-142.
[17] D. Maniega, P. Yànez, P. Lara, Uso de un videojuego inmersivo online 3D para el aprendizaje del español. El caso de "Lost in La Mancha". Icono14, 9(2), 101-121 (2011).
[18] J. Quinche, F. González, Entornos Virtuales 3D, alternativa pedagógica para el fomento del aprendizaje colaborativo y gestión del conocimiento en Uniminuto. Revista Formación Universitaria, 4(2), 45-54 (2011).
[19] A. L. González, G. Chávez, La realidad virtual inmersiva en ambientes inteligentes de aprendizaje. Icono14, 9(2), 122-137 (2011).
[20] M. J. Albert, La Investigación Educativa (McGraw Hill, Madrid, 2007).
[21] P. Salinas, M. Cárdenas, Métodos de Investigación Social (Intiyan, Quito, 2009).
[22] J. M. Morse, in Handbook of Mixed Methods in Social and Behavioral Research, A. Tashakkori, C. Teddlie, Eds. (Sage, Thousand Oaks, CA, 2003).
[23] B. Kaplan, D. Duchon, Combining qualitative and quantitative methods in information systems research: A case study. MIS Quarterly, 12(4), 571-586 (1988).
[24] C. Ruiz, El enfoque multimétodo en la investigación social y educativa: Una mirada desde el paradigma de la complejidad.
Revista de Filosofía y Sociopolítica de la Educación, 8, 13-28 (2008).
[25] A. Tashakkori, C. Teddlie, Handbook of Mixed Methods in Social & Behavioral Research (Sage, Thousand Oaks, CA, 2003).
[26] J. H. Flavell, in The Nature of Intelligence, L. B. Resnick, Ed. (Erlbaum, Hillsdale, NJ, 1976), pp. 231-235.
[27] D. R. Garrison, E-Learning in the 21st Century: A Framework for Research and Practice (Routledge, New York, 2011).
[28] J. C. Tovar-Gálvez, Evaluación metacognitiva y el aprendizaje autónomo. Revista Tecné, Episteme y Didaxis (TEΔ), extraordinary number, Segundo Congreso Sobre Formación de Profesores de Ciencias, Universidad Pedagógica Nacional, Bogotá D.C. (2005).
[29] TYMMI, Proyecto Fondecyt de Iniciación 11121532 (Chile, 2012; http://www.tymmi.cl).

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

PAPER

Towards a Theory of Immersive Fluency

Authors: Nicola Marae Allain1*
Affiliations: 1 SUNY Empire State College
*Correspondence to: nicola.allain@esc.edu

Abstract: The challenge of working fluently across a combination of literacies is a major hurdle in implementing immersive education. Even when students, particularly digital natives, embrace the complexities of the medium, faculty often struggle to develop the technical skills required to create immersive experiences. The theory and practice of immersion applied in the Master of Arts in Learning and Emerging Technologies (MALET) at SUNY Empire State College provides graduate students with immersion in virtual worlds learning as they acquire fluency across literacies. Learners co-create in collaborative virtual environments as designers of teaching and learning experiences. They participate in peer review in preparation for an open, juried showcase presented in virtual worlds, and design a complete learning environment for the Advanced Design Seminar.
The fluency they acquire requires moving beyond literacies as they work in environments that foster a host of experiences leading to visual, digital, media, cultural, and critical fluencies.

One Sentence Summary: Students in an immersive graduate program acquire fluency beyond literacies as they create immersive learning experiences.

Introduction

The challenge of working fluently across a combination of literacies is a major hurdle in implementing immersive education. The act of immersion in itself requires the integration of a complex set of skills, such as habituation to avatar embodiment, able navigation, handling headsets, communication etiquette, and orienting oneself to the environment. In immersive learning situations, students are also asked to interact with, and create, a variety of digital media in interdisciplinary contexts. In collaborative settings, they must also coordinate the complex logistics of teamwork and content creation as they master the new environments. Even when students, particularly digital natives, embrace the complexities of the medium, faculty often struggle to develop the technical skills required to create immersive experiences. According to the NMC Horizon Report: 2013 Higher Education Edition, “Faculty training still does not acknowledge the fact that digital media literacy continues its rise in importance as a key skill in every discipline and profession. Despite the widespread agreement on the importance of digital media literacy, training in the supporting skills and techniques is rare in teacher education and non-existent in the preparation of faculty.” [1-3] I propose that an immersive approach to education helps overcome these challenges. Just as immersive language learning provides students and faculty with opportunities to become fluent within a condensed time period, learning in immersive environments contributes to a fluid acquisition across literacies.
Regardless of entry literacy level, students and teachers in immersive environments learn to integrate skills and literacies while applying them to emergent experiences.

Scope of Work

This is an exploration of the concept of information fluency applied to immersion. I propose that immersive fluency should be the overarching goal for students and faculty engaged in immersive education. As with mastery of language, reading, and other skills essential to the learning enterprise, basic literacy is barely enough to assure an adequate functioning level in the medium. Immersive education requires participants to push beyond basic literacies. Mackey and Jacobson provide an excellent discussion of current developments in the following literacies: information literacy, media literacy, digital literacy, visual literacy, and cyberliteracy [4]. They also include an overview of the theory of information fluency, which was the focus of a 1999 report sponsored by the Committee on Information Technology Literacy, National Research Council. According to the report, “this requirement of a deeper understanding than is implied by the rudimentary term ‘computer literacy’ motivated the committee to adopt ‘fluency’ as a term connoting a higher level of competency. People fluent with information technology are able to express themselves creatively, to reformulate knowledge, and to synthesize new information. Fluency with information technology entails a process of lifelong learning in which individuals continually apply what they know to adapt to change and acquire more knowledge to be more effective at applying information technology to their work and personal lives.
Fluency with information technology requires three kinds of knowledge: contemporary skills, foundational concepts, and intellectual capabilities.” [4-5] In a later report on information fluency, Gibson stated that “Fluency conveys a dynamism in the learning process well-suited to highly mobile students who expect constant technological change… This construct of IT fluency introduced the notion of fluency itself, suggesting a dynamic, maturational aspect to acquiring technology skills.” [6] Though immersive technologies were not the focus of fluency work at the time, this framework transfers well to the mastery of the competencies, skills and knowledge required within an immersive learning environment. Deeper levels of understanding and higher levels of fluency are required for immersive fluency, much as they are for information fluency. In addition, learning within an immersive environment encourages the student to draw from, and develop, different literacy types.

Results

This immersive approach is applied in the Master of Arts in Learning and Emerging Technologies (MALET) at SUNY Empire State College. The theory and practice of immersion provides graduate students with immersion in virtual worlds and experimental environments as they acquire fluency across literacies. Learners co-create in collaborative virtual settings as designers of teaching and learning experiences. They participate in peer review in preparation for an open, juried showcase presented in virtual worlds, and design a complete learning environment for the Advanced Design Seminar. The fluency they acquire requires moving beyond literacies as they work in environments that foster a host of experiences leading to visual, digital, media, cultural, and critical fluencies.
Wankel and Blessinger propose the following benefits of learner-centered immersive environments:

1) Intragroup and intergroup dialogue and collaboration in a multiplicity of complex situations and contexts;
2) Immediacy, a sense of belonging, and group cohesiveness, which fosters shared identity and culture;
3) Mediation to facilitate learning tasks, thereby making learning more enjoyable and interesting;
4) The development of multiple perspectives and multiple modes of inquiry through role play, personal reflection, and the development of ethical reasoning skills; and
5) Individualized learning that is more personally meaningful to each student and more authentic and conducive to how today’s students experience learning in their real life-worlds. [7]

The fully online MALET graduate program provides multiple, cross-course opportunities for students to experience the benefits described above. In addition, open house events, receptions, festivals, graduate meetings and showcase events hosted within virtual worlds further enhance the student and faculty immersive experience. The learning and emerging technologies program has the following goals, which align with the benefits proposed by Wankel and Blessinger:

1. Consider the social, ethical and legal impacts of new technologies on our lives, individually and collectively.
2. Explore the multiple, unfolding political and economic impacts of digital media as a transformative agent in the global civic and market arenas.
3. Develop an understanding of how people learn in technology-mediated environments.
4. Examine and evaluate learning that occurs in technology-mediated environments, and the impact of digital tools, resources and pedagogical methods in these settings.
5. Acquire the skills and capacity to identify, employ and evaluate technologically supported tools and methodologies.
6.
Conduct original research projects, both individually and in collaborative faculty-student teams, in order to expand knowledge in the field. [8]

Many of these goals are met as students and faculty convene in immersive learning environments to communicate and collaborate while designing learning experiences.

Table 1: Year 1 Core Courses
Term 1: Learning with Emerging Technologies: Theory and Practice | New Media and New Literacies
Term 2: Design of Online Learning Environments | Social and Ethical Issues in the Digital Era
Term 3: Evaluating Learning in Participatory Learning Environments | Advanced Design Seminar: Portfolio Project

Table 2: Year 2 Electives and Research Seminars
Term 1: Elective | Elective
Term 2: Elective | Elective
Term 3: Pro-Seminar | Research or Capstone Project

Table 3: Elective Types
Program Courses: Game Based Learning; Identities and Communities in Immersive Environments; Advanced Program Planning/Systems Thinking; Advanced Evaluation and Analytics; Computers, Ethics and Society
Other Courses: Individualized Studies; selections from other graduate programs (MBA, MA in Social Policy, MLA (liberal studies), MAT (teaching), MAAL (adult learning)); selections from certificate programs
Practicum: Research | Design | Teaching

Students start their immersive experience with low-stakes requirements. They must attend an in-world open house and orientation, which allows them to familiarize themselves with navigating the virtual space while learning about program expectations. At this point in the program, they only need to create an avatar account, log into the world, teleport to the College campus, and find their way to a seat. During this initial meeting, they learn to communicate using the chat, instant messaging, group, profile management, and voice functions of the platform.

Figure 1. Initial Immersion. Immersion begins with the MALET Opening Reception.
Students receive an orientation to the program and the virtual environment.

As students begin their studies, each of the courses introduces them to aspects of virtual and immersive literacy that will assist them in acquiring competency in immersive education. They learn the theory and practice of learning with emerging technologies, and are introduced to new media and new literacies. In the second term, the students attend a second orientation. This time, they are the senior group welcoming a newer cohort and assisting their incoming peers in mastering the basic entry skills in the immersive environment. They learn to design online learning environments, and address social and ethical issues in the digital era.

Figure 2. Learning Integration. The first sessions require a convergence of digital media, technology, and communication skills as students begin to integrate their learning and prepare for complex tasks.

The third-term sequence moves the students from literacy to fluency. Students evaluate learning in participatory environments as they create within immersive virtual environments during the culminating first-year course, the Advanced Design Seminar. The advanced seminar includes multiple immersion experiences, including project creation, presentation, and peer review.

Interpretations

In this final core course of the MALET program, students continue to deepen their knowledge of theories and practices pertaining to instructional design and emerging technologies. They create a body of work that reflects the ability to integrate theory and skills of design and development, learning principles, and assessment methods. This knowledge and skill is demonstrated in the creation of a comprehensive multimedia project for their ePortfolio or their professional work environment.
This project should demonstrate the student’s growth as a specialist in emerging technologies as well as incorporate their own past skills, knowledge, and/or interests on their chosen topic. Personal reflection is used to self-evaluate their own evidence of learning and to make deeper connections between the concepts learned in the other courses. Students incorporate knowledge of instructional methods, learning theories and evaluation techniques with principles of instructional design and multimedia development to create a web-based or instructional design project. Students might choose to explore topics such as: how to apply learning theories to instructional design and assessment models; how to experiment with new technology tools to address a context-specific problem; how to implement, manage and evaluate a design project; or how to analyze the effectiveness of a project designed using a particular model but used in different educational settings. Program participants take part in a two-day immersive juried Design Showcase, where they present their design portfolios and join in peer review and feedback discussions. [9]

Figure 3: Creative Collaboration. Students co-create at a distance in collaborative virtual environments as part of their learning.

Students are evaluated on their ability to create an effective multimedia project that meets the criteria established by the student within the stated course goals. The project is expected to be completed at a graduate, professional level, peer-reviewed, framed with the appropriate theory, assessed, and evaluated. Specifically, their work is assessed using the following criteria:

1. Alignment with stated Program Goals (G) and Specific Learning Objectives (SLOs) for this course. Students must identify which goals and SLOs align with each section of their project and specify how they are met. [See S1]
2.
Alignment with NETS student standards (http://www.iste.org/standards/nets-for-students/nets-student-standards-2007.aspx). Students must identify which NETS Student standards align with each section of their project and specify how they are met.
3. Alignment with NETS teacher standards (http://www.iste.org/standards/nets-for-teachers.aspx). Students must identify which NETS Teacher standards align with each section of their project and specify how they are met.
4. Peer review and critique of projects. Students are assigned as peer reviewers to 2-3 peer projects.
5. A substantive final reflective paper used to self-evaluate the student’s evidence of learning and to make deeper connections between the concepts learned in the other courses. This is a synthesis of the project within a learning framework, and includes a discussion of each iteration of the project with a reflective introduction and conclusion.
6. Final integration of the project into the ePortfolio and participation in the Immersive Design Showcase. [10]

Conclusion

This preliminary exploration of a theory of immersive fluency attempts to demonstrate that a carefully designed program can progressively move students from an introductory literacy level to fluency in immersive technologies (and related skills, knowledge and competencies). Whereas students entering the program may have little or no experience in the immersive environment, scaffolding, peer support, and total immersion in complex, collaborative virtual spaces provide them with a gradual acquisition of immersive fluency.

References and Notes:

1. L. Johnson, S. B. Adams, M. Cummins, V. Estrada, A. Freeman, et al. NMC Horizon Report: 2013 Higher Education Edition. (The New Media Consortium, Austin, Texas, 2013)
2.
“The term ‘digital and media literacy’ is used to encompass the full range of cognitive, emotional and social competencies that includes the use of texts, tools and technologies; the skills of critical thinking and analysis; the practice of message composition and creativity; the ability to engage in reflection and ethical thinking; as well as active participation through teamwork and collaboration.”
3. R. Hobbs. Digital and Media Literacy: A Plan of Action. (The Aspen Institute, Washington, DC, 2010)
4. T. Mackey, T. Jacobson. Reframing Information Literacy as a Metaliteracy. College & Research Libraries [serial online]. January 2011;72(1):62-78.
5. National Research Council (U.S.). Being Fluent with Information Technology. (National Academy Press, Washington, DC, 1999)
6. C. Gibson. Information Literacy and IT Fluency. Reference & User Services Quarterly [serial online]. Spring 2007;46(3):23-59.
7. C. Wankel, P. Blessinger. Increasing Student Engagement and Retention Using Immersive Interfaces: Virtual Worlds, Gaming, and Simulation. (Emerald, Bingley, UK, 2012)
8. D. Gal, et al. Master of Arts in Learning and Emerging Technologies Full Program Proposal. (SUNY Empire State College, Saratoga Springs, NY, 2011)
9. Advanced Design Seminar: Portfolio Project Official Course Description. (SUNY Empire State College, Saratoga Springs, NY, 2012)
10. N. M. Allain. Advanced Design Seminar: Portfolio Project. (Online course documents. SUNY Empire State College, Saratoga Springs, NY, 2012)

Supplementary Materials

S1. Key Learning Outcomes [8]

REINF = Reinforced
SLO3: Apply an understanding of emerging technologies to the personally-valued education and social systems that they would like to develop or support from the knowledge, skills, and abilities gained within the MALET program.
SLO7: Identify and assess current uses of technology tools in learning environments relevant to one’s own context.
SLO10: Create new practices, products, and performances.
SLO11: Produce high quality, cohesive, clear and effective learning products – whether they are papers, reports, electronic multi-media or other ICT products.
SLO14: Demonstrate the ability to produce projects in cooperative teams.
SLO15: Demonstrate ability to communicate effectively in groups by supporting and/or generating consensus.
SLO19: Design and conduct effective evaluations that capture how specific ICT tools impact learning.
SLO22: Consider the ways that today’s technology implementers improve one’s own learning and understanding, by engaging in a range of activities (workshops, forums, and affinity groups).

M/A = Master/Assessed
SLO1: Understand how different learning theories inform the planning, creation, and facilitation of learning experiences with new technologies.
SLO2: Demonstrate the ability to use technology tools and skills beyond traditional modes of production (products as material artifacts and commodities) to consider them tools of mediation, collaboration, and design development.
SLO5: Create new content relevant to personal needs or professional contexts.
SLO9: Demonstrate the ability to use inquiry to critique/evaluate existing technology and digital tool use.
SLO12: Exhibit willingness to participate with, listen to, and support effectively and responsibly the participants in a learning community.
SLO17: Develop and evaluate technology tools that are effective for other learners.
SLO18: Demonstrate the ability to design, disseminate and study the usability of technology tools that will be used by learners in one’s own work environment.
SLO21: Document a critically reflective ability to learn new technology tools in an independent and self-directed way.
SLO23: Articulate a personal and original perspective on the uses and applications of emerging technologies, going beyond the ideas shared by instructors, readings, and resources.
PAPER

Immersive Learning in Education

Authors: Dr Olinkha Gustafson-Pearce (1), Dr Julia Stephenson (2)

Affiliations:
1. Dr Olinkha Gustafson-Pearce, Brunel University, School of Engineering and Design, Uxbridge, Middlesex UB8 3PH. olinkha.gustafson-pearce@brunel.ac.uk
2. Dr Julia Stephenson, Brunel University, Uxbridge, Middlesex UB8 3PH. julia.stephenson@brunel.ac.uk

Abstract: As the concept of immersive learning becomes a reality, the focus of this paper is to explore the use of Web 3D in the context of higher education and whether its integration has significant educational benefits for students. Findings from the study were categorised into five main themes: preconceptions, usability, progression, satisfaction and potential. Overall, the integration of Web 3D appears useful in supporting and increasing the quality of student learning, as it can offer a plethora of benefits. Furthermore, the authors argue that a relevant and timely introduction to new concepts and technologies is a fundamental objective in order to deflect any perceived misconceptions, thus avoiding a negative learning experience.

One Sentence Summary: This study uses a virtual world environment and computer-mediated communication to develop interactive learning environments for University Design students.

1.0 Introduction

An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage (Jack Welch).

The twenty-first century University is in a period of transition. According to Freedman (2008), “Today, higher education is at a historical juncture, transitioning from the industrial era to the information era, and from a national perspective to a globalized one”. In addition, and equally significantly, the 21st century student is evolving.
Having grown up with an assortment of technologies and ubiquitous access, combined with the explosion of Web 2.0, today’s students find the old paradigm of learning, the passive teacher-centred approach, no longer applicable. The requirement of the ‘new generation’ student is rather one that is collaborative, autonomous and exploratory. Furthermore, the presentation of information is changing as ‘new technologies’ become ubiquitous. It no longer has to be in a static sequential order or hierarchical structure, but can instead be networked, highly collaborative and community-based; an impressive example is Cloudworks (OULDI, 2010; Conole & Culver, 2009a; Conole & Culver, 2009b). Thus, in order to cope with the demands and expectations being placed on institutions, they need to adapt and cater for market forces in order to survive. For curriculum design to remain viable, there is a need to utilize technology to stay ahead of learners. By embracing innovation, institutions keep up with, and possibly exceed, learners’ expectations. Many learning centres have found that the use of Web 2.0 tools, such as social networking, micro-blogging and media-sharing, in addition to the ‘standard’ ‘Blackboard’-type tools (Gillespie, Boulton, Hramiak & Williamson, 2007), has enhanced and augmented traditional teaching methods, thus embracing the “networked world” we live in (Siemens, 2008). However, as the web moves into the arena of Web 3D, further opportunities are arising to enable the student and the learning provider to develop these new learning environments, which, if designed coherently, can provide powerful, pedagogically sound learning environments capable of covering different educational considerations (Stephenson, Murray, Alberts, Parnis, Sharma, Fraser et al., 2008; Murray, Alberts & Stephenson, 2010).
Furthermore, the authors of this paper agree with Burgess, Slate, Rojas-LeBouef & LaPrairie (2010) that constructing and managing virtual worlds differs considerably from face-to-face classrooms or Learning Management Systems. According to Messinger, Stroulia, Lyons, Bone, Niu, Smirnov et al. (2009), “[virtual world] participation has grown exponentially since 2000, due to improvements in virtual reality technology (adapted from electronic gaming), continued drops in personal computer prices, increases in computing capacity, and greater broadband network access”.

As Web 3D is not bound by the constraints of real-world physics (Wiecha, Heyden, Sternthal & Merialdi, 2010), it has the power to create ‘fantasy’ scenarios, or to replicate real life in cases that might be difficult or unachievable to replicate in the real world (Twining, 2009), whilst simultaneously providing the user with opportunities for rich, multi-sensory immersive experiences. For example, Web 3D users can attend and interact at virtual conferences, meetings, focus groups and lectures, time travel to historical enactments and participate in role plays, journey through constructs such as the human body or the DNA helix at the micro level, explore outer space, practice foreign language skills, or partake in simulations, thus encouraging experiential learning (Jarmon, Traphagan, Mayrath & Trivedi, 2009). As a result of this lack of limitations, such unconstrained environments allow participants to practice procedures or skills repeatedly in a safe and cost-effective environment, reflect on their learning, access content on demand, network, interact, communicate and collaborate, ad infinitum. Equally importantly, Web 3D lends itself to a diverse range of educational disciplines; for example, medical/healthcare (Hansen, 2008; Wiecha et al.
2010), oral production (Petrakou, 2010), chemistry (Lang & Bradley, 2009), natural sciences/ecology (Wrzesien & Raya, 2010), epidemiology/public health (Gordon, Björklund, Smith & Blyden, 2009) and clinical psychology (Gorini, Gaggioli, Vigna & Riva, 2008), to name just a few. In this paper, therefore, we conducted a conjectural analysis to explore virtual world usage from the perspective of the student and to gain an insight into whether Web 3D is being accepted and embraced by 21st century students.

2.0 Material and Methods

2.1. Introduction

The aim of this research study was to examine the effect and perceptions of an immersive 3D environment, namely Second Life, on University undergraduates. In this study we principally relied on volunteered student evaluation reports as a means of data collection (N=43). The class cohort comprised 129 Level 2 Design students, who were asked to conceive and design an exhibition in Second Life. The ‘focus’ of the brief was to present a ‘Design-orientated’ exhibition. The students were assigned to working groups of no more than 12 students, but were encouraged to share resources and information. These students are required to design, manage and present their Major Project show at Level 3, and this exhibition in Second Life was to give them practical experience of some of the factors they will need to consider to professionally present the final-year show.

2.2. The Graphic Communication Module (Second Life element)

Level 2 Graphic Communication is a 12-week course, taught with a weekly 1-hour lecture and a 2-hour lab (computer) session. At the start of this module none of the students had any previous experience of Second Life, although a few had knowledge of World of Warcraft (Blizzard Entertainment Inc). In the first lecture the students were introduced to the concept of virtual worlds, and the purpose of the ‘engagement’ and the brief was discussed.
The students were given basic training in “how to function” in Second Life (walking, talking, flying, sitting, etc.) in the first lab session, and were asked to explore the Second Life environment over the course of the following week. To aid them in this, they were given a variety of landmarks. In the following weeks, they were taken by the tutor from ‘basic’ to ‘advanced’ builder skills. This cohort of students is experienced in many advanced design (software) packages, such as the CS suite (including Flash), CAD, etc. They also have extensive physical workshop skills (metal, wood, plastics, etc.). They are therefore fully competent in many of the basic skills required to ‘build’ in a Web 3D program; in this case it was a question of ‘refining skills’ rather than teaching the building tools from basics. The weekly lecture focused on key elements of exhibition design (graphics, planning, user studies, signage, etc.), whilst the supervised tutorial lab sessions were to assist the students in the planning and building of the exhibits. Students were also expected to work unsupervised between sessions. The learning outcomes for the module fell into four categories (see Table 1), and the course was structured to enable the students to achieve these outcomes.

Knowledge and Understanding
• recognise the application of appropriate visual languages in the design process
• recognise the fundamental elements of visual communication
• identify and express the appropriateness of the graphic elements to design

Cognitive Skills
• express the variety and complexity of their visual thinking
• recognise the multi-disciplinary nature of the design process
• recognise and evaluate the role of the internet in visual communication and design

Practical Skills
• design and produce confident, competent and expressive visualisations of both two- and three-dimensional objects using both traditional and electronic media

Transferable Skills
• manage their work to a high standard with a high professional standard of presentation
• have a command of many of the processes and techniques appropriate to the internet

Table 1: The learning outcomes for the module (four categories)

The 12-week taught programme ran from January to March, with the project submission due after the Spring break. Many of the groups had incomplete projects by the end of the semester. Students then dispersed to a variety of locations, across the United Kingdom and internationally. The ability to enter the Second Life environment via broadband connections meant that students could work together, which they would have found very difficult or impossible if they had used more traditional methods.

The main submission requirement for each group (n=10 groups) was to present an exhibition in Second Life (Figures 1-4 illustrate the final exhibition spaces). When designing the exhibition, students were asked to use features of the virtual world such as 3D models, posters, information ‘packages’, ‘asset linking’, etc. In addition, groups were required to ‘link’ relevant ‘assets’ such as a website. As part of the submission materials, students were asked to produce workbooks which charted the ‘journey’ the student had undertaken throughout the module. Many of the students included a Conclusion. In order to perform quantitative analyses of the data, a series of questions was formulated and the Conclusions were interrogated to form a basis for statistical analysis.

Figure 1. The Inspire exhibition - Isambard Island

Figure 2. Picture the Music exhibition - Isambard Island

Figure 3. The Time exhibition - Isambard Island

Figure 4.
The Brunel Automotive exhibition - Isambard Island

3.0 Results and Discussion

Students provided extensive feedback (n=43 returns) about their Web 3D experiences in their evaluation reports; quantitative and qualitative data revealed that the assignment was an enjoyable experience and, equally importantly, that students were pleased with the overall outcome of their project. From the data collected, five main themes were identified: Preconceptions, Usability, Progression, Satisfaction and Potential.

Table 2: Evaluation data – preconceptions, usability, progression, satisfaction and potential

Question | 1 (no) | 2 | 3 | 4 | 5 (yes)
Preconceptions
Have you had any experience of Second Life before engaging with this module? | 100% | 0 | 0 | 0 | 0
Did you find the introduction to Second Life comprehensive? | 2% | 5% | 70% | 23% | 0
Usability
Did you find the building tools easy to use? | 0 | 0 | 5% | 16% | 79%
Progression
Do you think Second Life is a good environment to practice building skills? | 0 | 0 | 7% | 19% | 74%
Do you feel that working as a group helped you to achieve the project outcomes? | 10% | 20% | 32% | 18% | 20%
Do you think there was enough communication between group members? | 18% | 17% | 27% | 18% | 20%
Satisfaction
Overall, did you enjoy the Second Life experience? | 0 | 0 | 14% | 18% | 68%
Potential
Would you use Second Life again as a ‘Design’ medium? | 0 | 2% | 14% | 14% | 70%

3.1 Preconceptions

Several students openly conveyed their negative perceptions at the onset of the course, namely their scepticism towards using this multiuser virtual environment and its perceived lack of relevance to the module. Analysis of the data in Table 2 revealed that no students had any previous experience of SL prior to embarking on this module, which could be a contributing factor to their preconceptions.
It was interesting to witness that these attitudes subsequently transformed into appreciation over time; for example, one student commented: “At first, I was sceptical about SL and did not understand why it was included as part of the [Graphic Communication] module. However, throughout the course of the term, I have begun to appreciate how designers can use the virtual world to their benefit”. Thus, once students had embraced SL, they began to appreciate the potential it could offer: for example, creating 3D prototypes with ease, the cost-effectiveness of achieving goals that would otherwise be difficult (or impossible) in the real world, its impact on businesses and people’s lives (especially young designers), and the simplicity (and fun) of communicating with peers remotely (both in the UK and internationally). Furthermore, another student expressed: “At first we were all highly dubious of SL as a viable means of communication, but I feel that the majority of people who had this view will have had their minds changed through the course of the project”.

3.2 Usability

Students had mixed opinions about the platform. In terms of building in Second Life, comments ranged from “remarkably intuitive” to “difficult to grasp in the first instance”; however, the data in Table 2 revealed that the majority of students (79%) found the building tools easy to use. Furthermore, some regarded the software as ‘crude’ and ‘quite primitive’; it is worth noting, however, that this cohort are Level 2 Design students and are therefore used to engaging with rich, dedicated CAD software. One student commented: “Although SL can be frustrating at times, especially the build interface, I have learned that it is a fantastic medium for presenting material. The 3D areas can create excellent atmospheres and effects and can provide a real educational experience.
I have enjoyed the interactivity that is possible in SL”; this response was indeed representative of several students. For some, frustration with the build interface was evident too; students felt restricted by a limited primitive count (primitives being the basic shapes used for building; due to sim restrictions, students were limited to 450 prims per group) and, to a lesser extent, by space, thus impacting on their creative flair.

3.3 Progression

Several participants in this study valued the opportunity to integrate Web 3D into their module, as they saw it as an addition to their educational development; it allowed communication on different levels, although according to Table 2, students did not really feel that enough communication was achieved between group members. However, this was seen not as a specifically Second Life issue but rather as a ‘group working’ issue. In addition, it was felt that the platform gave them the ability and practice to develop and enhance exhibition set-up skills, stretch boundaries, and enjoy freedom of building with an easily achievable realistic feel and textures in a highly cost-effective medium. Student comments included:

• “Personally I am thankful that I have experienced using SL as [it] opens up different doors and allows us to communicate on different levels.”
• “I found this project a definite expansion of my mind I didn’t expect SL to perform in the way it did and it definitely stretched my boundaries and involved me in things that I would not have naturally part taken in. I found the building in SL remarkably intuitive and it is a quick way to display an exhibition layout and how it will flow”.
• “Learning the use of a virtual world system and using another CAD program are extra skills to my growing list”.

3.4 Satisfaction

Generally, students were unfamiliar with using Second Life, but this did not impact on their motivation to complete their projects.
Analysis of the data revealed that students appreciated the experience of learning and transferring their newfound knowledge within the Web 3D platform. Satisfyingly, students were pleased with the outcome of their projects; comments included:

• “I have thoroughly enjoyed completing this project and getting to grips with SL, a program I had not heard of or used before the beginning of this project….I look forward to continuing to use SL”.
• “The project overall was an interesting experience, the creation of a product that functioned entirely in the virtual world made the project enjoyable to work on and a good learning experience”.
• “I have fully enjoyed the experience of SL and have tried my best to get fully engaged within what was a new experience and world for me”.

3.5 Potential

There was ample evidence to support the notion that Web 3D has great potential; examples included the concept of interaction, particularly as users are not confined by geographical distance, and the idea of partaking in activities that would otherwise be inconceivable in the real world due to factors such as time, economic and infrastructural constraints. Additionally, students felt that they would use SL again as a design medium (see Table 2). Once students had engaged with the platform, they were in a position to express strong views; comments included:

• “Whilst getting to grips with the vast SL web experience, I found more and more that the possibilities of Web 3.0 interaction were huge.”
• “I did however enjoy my time using SL, and as such have realised its potential for improving the lives of people, and businesses. The fact that I could work on this project from Spain indeed proves the usefulness to internationally communicate your message, hold business conferences or just say hi to a relative from far way.
Web 3.0 even still in its early stages is a valuable tool for anyone to use, but I eagerly wait to see how it develops from here”.
• “Once the novelty of playing around in SL wore off and the work began, I began to see the possibilities that the interactive environment offered. The exhibition in SL allowed for things that would not be possible in real life, and the fact the members of the group ... could just meet up in SL rather than in real life made construction easier”.
• “There is a lot of potential with a program such as this and the concept to me is fantastic. In the future I can see myself using SL further and developing my skills on the program as it is a relatively inexpensive way to funnel viewers to your build”.

The purpose of this study was to explore the potential of engaging with an immersive virtual world environment from the perspective of the Design student, and to investigate whether such environments have a niche within the current and future educational system. Interestingly, the student feedback generated in this study indicated that, once they had engaged with the medium, the majority of students felt that the virtual world environment was beneficial to their educational development. Furthermore, our findings concretised the notion that virtual world platforms can offer experiences that would be either impossible or difficult to replicate in the real world, an opinion supported by Twining (2009).
Interestingly, however, some students expressed dislikes about the project, commenting not from a ‘tool use’ perspective but on their own lack of planning, organisation and commitment, especially as they were required to work in teams: “I did feel like I was carrying the burden of the whole group on my shoulders and I wasn’t even the group leader” and “A lot of time could have been saved by the group taking a much more structured approach to the project and more detailed planning. But through all the mistakes I have made with the planning/organisation I have learnt lessons for next time!”. Teamwork, communication, delegation and time management are examples of key soft skills that students need to be confident in, especially in preparation for the workforce. Thus, an additional ‘learning outcome’ for projects such as this would be to specifically use the ‘immersive’ qualities of the virtual world platform to enhance and encourage these transferable skills. Noor (2010) has acknowledged that virtual worlds certainly have a place in the 21st century as they convert the 2D flat Internet into a ‘fully immersive’, ‘experienced’ environment. Noor further comments, “The richness of experience in virtual worlds can be significantly enhanced, and their potential realized, through the integration of a number of technologies and facilities, some of which are currently under development”. Realising the potential of virtual worlds is very much echoed in our study, as student evaluation data revealed that numerous students recognised the importance of learning the skills and tools inherent in the virtual world interface. At the end of the project they particularly acknowledged the benefits and appreciated the opportunity to interact with the medium as part of their module.
To quote a student: “Even if one disagrees with the idea, SL is the wave of the future, one cannot deny its prominence and relevance to the Internet not to mention the business opportunities involved with selling what is essential intellectual property”. However, at the other end of the spectrum, findings from this study also revealed that students were considerably sceptical about using the platform at the onset of the project. The results suggest that students tended to have significant misconceptions about virtual world usage; however, once they engaged with the virtual platform, their attitudes tended to be encouraging. Ignorance of, or misconceptions about, virtual world use, technology and possibilities seemed to be a key factor in generating negativity. To quote another student from our study: “One of the best things about the brief was that it challenged us to approach a design problem on a platform that we had never considered, a virtual world, and though it took time to realise the full potential of this, the experience has taught me to always approach a problem with fresh eyes and not with preconceptions”.

4.0 Conclusion

This study specifically looked at the use by undergraduate Design students of the virtual world environment Second Life. These students are well used to working collaboratively in a ‘real world’ setting and are fully familiar with Design-specific software, such as CAD programmes. Initially it was felt that they would move into the virtual world setting with ease; however, the study showed that the students’ initial scepticism, and their need for enhanced project management skills, were the major factors requiring further consideration by the teaching team.
Certain moderating factors exist in the ‘real world’ setting, such as tutors, to a certain extent, managing students’ ‘work practices’, or constraints such as workshop or studio opening times; these can be seen as regulating factors for project and time management. However, when introducing a virtual world environment, which is an ‘open, any-time access’ environment in which students are required to manage their own collaborative projects with far more autonomy than in many traditional formal settings, additional ‘soft skill’ information and tutoring may be required. This may differ from previously experienced practice, and may more closely reflect a ‘real world’ work environment, in that the students need to be organised and to ‘project manage’ at a higher level than they might otherwise have needed to in the university setting. This module was Graphic Communication, and it used the virtual world as the ‘environment’ that enabled the students to realise their projects. In many cases where modules are conducted in a virtual world setting, the module leader may simply transfer the prescribed learning outcomes and structure of the module to the virtual world. In this study it was found that additional skills were needed by the students that were not ‘strictly’ within the module descriptors. However, these skills (management, organisation, planning, etc.) are soft skills that are needed in the ‘workplace’, and are potentially more accurately reflected through use of an immersive, open-access 3D virtual environment. In addition, a lack of knowledge or understanding of the ‘tool’ can lead to misconceptions, and consequently negativity, which can limit a student’s engagement with the core learning.
Therefore, when introducing a new tool, it is important to fully explain and demonstrate the concept and its relevance to the student; otherwise, a lack of knowledge of new technologies or software can result in a failure by the student to acknowledge potential benefits. Subsequent to this study, Second Life has been introduced to the students as ‘a stadium in which the game is played’, and it is highlighted that the medium is not ‘the game’. This clear separation between the tool and the required outcomes is especially important when the tool is as complex and immersive as a virtual world.

PAPER

Making it Real: The Cosmic Background Radiation Explorer App

Author: R. Blundell1

Affiliations: 1 Macquarie University. *Correspondence to: richard.blundell@mq.edu.au

Abstract: The Cosmic Background Radiation (CBR) is a keystone piece of consilient evidence in support of modern cosmology and is critical to understanding the subsequent evolution of the universe. Much of what we know about the Big Bang and the subsequent evolution of the universe is inferred from the study of the anisotropic fluctuations recorded in the microwave radiation represented in the CBR image.
While the CBR is one of the most fundamental and empirically supported phenomena in all of cosmology, widespread public perception and student understanding of the CBR remain disproportionately weak. In this article, we argue for new phenomenological approaches to teaching abstract subjects by building conceptual understanding on top of real-world, phenomenological experience. We designed a mobile app to implement this approach specifically for the CBR. The app does this by simulating the CBR as it would be observed if one could fine-tune one’s eyes to see in the microwave spectrum. While conventional means of educating about the CBR have failed to capture and convey its essential relevance to daily experience, the Cosmic Background Explorer App addresses this problem by providing a simulated, geospatial, interactive, and phenomenological experience of the CBR. This paper reports on the theoretical, pedagogical, and technical aspects of the project.

One Sentence Summary: This paper describes the philosophical, pedagogical, and technical aspects of an educational mobile app for simulating an in-situ experience of the CBR.

Introduction

Any attempt to teach what we know about the early universe, or the history of the universe in its entirety (such as in Cosmic Evolution or Big History courses), should begin with the beginning of our knowledge. This means starting with the physics of the Big Bang. Much of what we know about the Big Bang and the subsequent evolution of the universe is inferred from the study of the Cosmic Background Radiation (CBR). The CBR is one of the most fundamental and empirically supported phenomena in all of cosmology [1]. The entire modern cosmological model rests on the CBR because everything that follows it is rooted in the primordial temperature fluctuations represented by the splotches of blue, turquoise, orange and red (see Fig. 1).
But the true significance of these quantum-level perturbations is impossible to grasp unless one holds an overall understanding of how these minute temperature differences (or densities) have evolved into the large-scale galactic structure we see today. Subsequent topics, such as the formation of stars and galaxies, the accretionary physics of planetary systems, the emergence of complexity, and the mega-scale structure of the universe, depend on integrating the physics and spatial ubiquity of the CBR.

Figure 1. The Cosmic Background Radiation (ESA)

It is common for educators to introduce the CBR by telling the wonderful, serendipitous story of how Penzias and Wilson discovered it. They include language like: “The CBR is the echo of the Big Bang” or “The CBR is the leftover radiation from the Big Bang.” These conversations are typically supported using images of the CBR produced by NASA and the European Space Agency [2]. Published surveys [3,4] and our own teaching experience suggest that while most students can readily grasp the ideas of ‘echo’ and ‘residual’ radiation, they are unable to explain how the evolution of these fluctuations is the source of the mega-scale structure of the universe, or even to locate where the CBR actually is. While the NASA and ESA images are excellent resources for visualizing the CBR in formal learning environments, and for discussing the finer points of cosmic microwave anisotropy, we argue that these images alone routinely fail to place the CBR into a real-world, and thus appropriately meaningful, context. Another way that educators, including ourselves, have tried to bring relevance to the CBR is to explain to students that they can experience the CBR today by ‘un-tuning’ an old-style television set and observing that about 10% of the static seen is caused by the CBR.
While there is nothing factually incorrect in the content of these approaches, they are technologically outmoded and inadequate for communicating the immense importance of the CBR as a cornerstone idea of the Big Bang model. There are a handful of projects dealing with the conceptual understanding and visualization of the CBR [5–8]. All of these endeavors rely on traditional desktop or other large-format (e.g. stationary) monitors and aim to develop sophisticated conceptual understandings. More recently, the popular GoogleSky platform includes a “microwave” overlay that maps the CBR, with user-adjusted opacity, onto a real-time image of the night sky [9]. While this feature effectively superimposes the CBR onto the present-day structure of the universe, it does so out of any real-world context (note: the GoogleSky app for Android does not include the CBR feature found in the desktop version). While GoogleSky represents a substantial interface improvement, it still lacks the required phenomenological component. It does not provide an adequate lived experience upon which to build a more thorough conceptual understanding. It is also limited by requiring a large monitor to be useful. Finally, the CBR overlay in GoogleSky is limited to very large scales, and the flat mapping produces indecipherable results when zoomed all the way out. Because of these limitations, GoogleSky seems inadequate to provide the kind of primary learning the CBR requires and deserves.

Research Problems

The CBR remains under-appreciated and misunderstood relative to its importance, its ubiquity, and its being a rare example of scientific consilience. While the abstract notions and the complex physics of the early universe are known contributors to this low engagement [4,6], we argue that the initial introduction of the CBR needs to be a more phenomenological encounter.
If students can be introduced to the CBR as a lived experience first, then they will be better equipped, and more motivated, to engage with it intellectually when the material becomes more sophisticated. While we are arguing here, in general, for a more relevant, phenomenological way of encountering the subject of the CBR, another, more specialized, problem also arises in the context of narrative-based courses that include the CBR. In courses such as Big History, where the overall structure is sequential and the complexity of one chronological phase builds upon the one preceding it, keeping concepts linked in a congruent way is especially important. Because the CBR, and the cognitive disconnect associated with it, comes early in these types of courses, it can severely disrupt future learning. We have observed that once students lose their orientation, it is often very hard for them to regain it in an adequately meaningful way. By failing to communicate the genuinely critical aspects of the location and relevance of the early universe, Big History educators risk losing many students from the outset. This deprives students not only of a full understanding of the Big Bang model, but also of the truly awe-inspiring and holistic reality of the rise of complexity in the universe – a reality of which they are a part and which they can experience with a little technological help and imagination. In a constructivist paradigm of learning, it is illogical to expect students to link the phases of cosmic evolution into a logical sequence without a primary understanding of how the CBR predicts the future evolution of the universe. In a very real and debilitating way, they can get lost as soon as they leave the gate. The idea proposed in this article is that there is an under-realized way of perceiving the CBR, and that learners who can incorporate this perception will undergo a sort of transformative relationship with the world.
We call this basic transformation in thinking an epiphany, as in: the usually sudden manifestation or perception of the essential nature or meaning of something [10]. The specific epiphanies provided by this app are that the CBR is real, that it is out there in the sky, and that the structure of the sky today is a real-world embodiment of the initial temperature fluctuations recorded in the CBR. By the rule of transitive relation, then, the user is a real-world embodiment of the cosmos. We believe that these new perceptions of the CBR, which are facilitated by this app, constitute an epiphany of the scientific type. Other topics of concern (or epiphanies) for Big History courses that could be addressed in future apps include abstract and difficult ideas such as: vast temporal and spatial scales; emergence and complexity; improbability and anthropocentrism; uniformitarianism; the plausibility of the origins of life; evolution; and alternative historical and cultural interpretations.

Scope of this Work

This project first argues for an innovative approach wherein a real-world context is used as the environment for a phenomenological first encounter. We then propose a specialized learning app to provide this experience, using the CBR as its subject. The key innovation here is to provide a phenomenological base upon which more formal in-class conceptual understandings can later be built. We do not aim to provide an in-depth examination of the science of electromagnetism or the CBR. We emphasize this limit in defining the scope of this app to the initial part of this aim. Filling in the details of concepts such as cosmic inflation, differentiation, decoupling, and the behavior of electromagnetic radiation, for example, still requires formal learning efforts and environments. This limitation notwithstanding, we also hope and expect that this app may find extended application in broadly diverse, informal settings.
In any case, this project contributes to the general trend of integrating educational media and software (such as apps) into the classroom [11].

Theoretical Underpinnings

The arguments and claims of this project emerge largely from our own observations in teaching and assessing students’ understanding of the CBR. There also exists a rich body of published literature and empirical research in support of phenomenological education. What is new is the ability to implement well-crafted educational experiences through affordable technological means. Currently, the emerging technological paradigm presents new opportunities to apply time-tested educational approaches. If the mark of a good idea is indeed timelessness, then the ideas of the American educational philosopher John Dewey most certainly qualify. Beginning in the 1890s, Dewey thought deeply, and wrote widely, on the relationship between human experience and learning. He anticipated the constructivist theory of learning, in which becoming educated means more than just one’s cognitive accrual or rote learning. Broadly speaking, this app aims to reconnect learning to everyday life. Dewey held deep, intuitive understandings of the power of personal experiences and made explicit connections to the formal educational endeavor. In 1909 he was writing about the importance of “organizing education to bring all its various factors together… into organic union with everyday life” [12, p. 35]. Dewey also stressed the reciprocal relationship between experience and learning. Dewey’s idea is explained by Pugh: “Just as experience is a means for enriching and expanding learning, so learning is a means for enriching and expanding experience” [13, p. 109]. This reciprocal synergy between the lived experience and learning constitutes the phenomenological argument for this app. Dewey also maintained a dual commitment to the power of aesthetic experience while remaining a staunch pragmatist.
He reminded us to measure the value of things insofar as they impact everyday life. Toward the end of his career he displayed this pragmatism when he wrote that the value of any philosophy would rest on the answer to this question: “Does it end in conclusions which, when they are referred back to ordinary life-experiences and their predicaments, render them more significant, more luminous to us, and make our dealings with them more fruitful?” [14, p. 7]. Finally, Dewey [15] offers insight into this project by making distinctions between more traditional and progressive definitions of learning. For example, he discusses the difference between learning concepts (as static knowledge to be acquired) and encountering ideas (as opportunities to consummate new potentials). Dewey argued that when we think in terms of ideas, as opposed to concepts, we open up whole new ways of relating to the material to be learned. In other words, he frames educational opportunities as experiences to be had, as opposed to concepts to be merely learned. In this way, he distinguishes between ordinary experience and “an” experience (the latter being a richer event, imbued with opportunities for exploration) as having the potential to change the way we perceive, interact, and participate with the world. It is fitting that in this technological age Dewey’s ideas would find new contemporary application. Handheld, mobile, and multimedia technologies are providing a new platform on which to build innovative learning experiences.

Implementation

This section describes the design of the graphical interface, a typical user progression through the app, technical considerations, evaluation, and the social networking aspects of this project. As this is the first in a series of ‘epiphany’ apps, we also explain the project’s place within a larger educational framework.

User Progression

The app follows a guided, sequential progression.
Each numbered entry represents a discrete step (or page), while the lettered sections represent modes, as follows:

1. Introductory video vignette: The introductory video has the feel of enthusiastic exploration suitable for all ages. A human character acts as a guide and introduces the CBR as the earliest known photograph of the infant universe. The guide briefly tells the story of the CBR’s discovery, explains what the colors represent, and explains that the oval shape arises because the CBR should be imagined projected onto the inside surface of a sphere in order to see its true shape (the video will demonstrate this effect). How scientists have been able to take the picture using a series of satellites with cameras tuned to just the right frequency of light (the video will show a wave-spectrum animation) is also covered. This discussion explains that we can simulate being able to ‘see’ in microwave-frequency light by adjusting the frequency of our camera. Finally, the guide invites the user to “cross the threshold” and enter the realm of the CBR. Note: each screen will display icons of various human forms that can be tapped for context-sensitive guidance (text and audio).

2. Spectrum slider with integrated waveform graphics as a way to adjust the camera’s light sensitivity (Figure 2). The word “SIMULATION” appears onscreen throughout these pages to indicate that the camera is not actually detecting the various wavelengths. At the bottom of the screen is a slider graphic for adjusting the (simulated) wavelengths that the camera can see in. An audio tone plays in proportion to the wavelength (low to high).

Figure 2. The wavelength slider graphic

The wavelength slider has click-stops at the following modes (see Figure 3):

A. X-Ray (the screen shows the user’s hand with an X-Ray filter applied)
B. Ultraviolet (the screen shows a flower in the foreground with a UV filter)
C. Visible Light (the screen shows the raw image produced by the camera – no filter)
D. Infrared (the screen shows simulated infrared on a house)
E. Microwave radiation

Figure 3. Progression through preliminary simulation modes (A-D)

3. When the “Microwave” wavelength is selected, the CBR appears as a background to the camera image. In this mode, the real-time 3D mapping feature is activated, simulating an augmented reality experience of the CBR (Figure 4).

Figure 4. CBR Microwave Mode

4. In the Microwave mode, an icon appears to “play the search pattern game” (Figure 5A). In this search mode, red arrows guide the user toward the predefined pattern (patch) that triggers the CBR evolution movie.

Figure 5A (left) and 5B (right). CBR Search Game and Cosmic Evolution Movie

5. The “CBR Evolution Movie” (Figure 5B) illustrates the evolution of the galactic mega-structure of the particular predefined pattern (patch). The movie is triggered when the user points the camera directly overhead and holds the device level (screen facing downward). This position activates a feedback vibration, and audio accompaniment begins while the CBR image slowly transitions into a map of the actual night sky above the user. Through this final exercise the user experiences how the CBR is the blueprint for the overall structure of the universe.

Evaluation and Dissemination

Evaluation is part of the overall project plan and is being conducted by our university’s in-house learning and teaching group. Dissemination is also an integrated goal, so the CBR Explorer app will be made available for download free of charge to high schools and colleges worldwide. It will also be available through commercial channels for a nominal fee. Built-in social networking functionality will encourage users to share liberally.
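To make the progression concrete, here is a minimal Python sketch of the logic described above: slider click-stops snapping to the five simulated wavelength modes, the microwave stop enabling the CBR overlay, a screen-down orientation test for triggering the evolution movie, and an equirectangular lookup into the all-sky CBR image. Every function name, threshold, and axis convention here is our own illustrative assumption, not the app's actual implementation.

```python
# Illustrative sketch only: the app's real code and APIs are not published,
# so all names and conventions below are hypothetical.

CLICK_STOPS = ["xray", "ultraviolet", "visible", "infrared", "microwave"]


def mode_for_slider(position):
    """Snap a continuous slider position in [0, 1] to the nearest click-stop."""
    n = len(CLICK_STOPS) - 1
    index = min(range(len(CLICK_STOPS)), key=lambda i: abs(position - i / n))
    return CLICK_STOPS[index]


def overlay_active(mode):
    """The CBR sphere overlay is mixed into the camera view only in microwave mode."""
    return mode == "microwave"


def movie_triggered(mode, gravity_z, g=9.81, tolerance=1.0):
    """Trigger the evolution movie when the device is held level with the screen
    facing downward (camera pointing directly overhead). Gravity-sensor axis
    conventions vary by platform; here +g on the z-axis means screen-down."""
    return mode == "microwave" and abs(gravity_z - g) < tolerance


def cbr_texel(yaw_deg, pitch_deg, width, height):
    """Map a view direction (compass yaw, pitch above the horizon) to pixel
    coordinates in an equirectangular all-sky CBR image, i.e. the 'oval' map
    imagined as unwrapped from the inside surface of a sphere."""
    u = (yaw_deg % 360.0) / 360.0       # azimuth wraps around the sphere
    v = (90.0 - pitch_deg) / 180.0      # pitch in [-90, +90] maps to rows
    return int(u * (width - 1)), int(v * (height - 1))
```

For example, a slider position of 0.55 snaps to the "visible" stop, and sliding fully to 1.0 enters microwave mode, at which point the overlay becomes active and the face-down gesture can trigger the movie.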
Technical considerations

The Cosmic Background Explorer App creates simulations of the various electromagnetic wavelengths through real-time filtering of the image captured by the smartphone camera (Android or Apple iOS). Three of the primary modes (x-ray, ultraviolet, and infrared) each employ a separate effect filter to simulate what the user would see if the phone’s camera could detect the specific wavelength. We colorize each of the simulated image streams by filtering the RGB channels from the CCD. The visible-light mode applies no filter and displays raw camera data. The Microwave mode simulation employs more complex processing, wherein we map the ESA CBR image (see Figure 1) onto a very large virtual 3D sphere. The point of view of the user looking at the mobile’s screen is at the center of the sphere. We mix the geospatial data, the CBR image, and the camera’s view by taking into account the brightness and contrast of the image elements. We use the phone’s gyroscopic sensors to adjust the CBR image when the camera is moved. During the pattern-search game, we detect the horizontal position of the mobile with the screen facing downward (phone directly overhead) to trigger the video media (a 3D animation explaining the evolution in time of the CBR).

Conclusions

The CBR Explorer app uses the latest high-resolution ESA imagery mapped onto the interior of a 3D virtual sphere to simulate an in-situ lived experience of the CBR. The specific aim is to improve conceptual understanding by first providing a self-contained phenomenological experience. By guiding users toward a lived experience of the CBR through a step-wise progression of simulated visualizations, the app highlights the everyday relevance of an otherwise abstract and static visual image. Several Deweyan educational philosophies are also implemented.
They are: the distinction between concepts and ideas; the distinction between experience and “an” experience; the role of aesthetic experience in transformative learning and meaning-making; and the power of lived experience to bring everyday relevance to understanding. This app serves the research problems by enhancing phenomenological and conceptual understanding of the CBR. By applying accepted educational theory to emerging, technologically facilitated learning scenarios, this app addresses the lack of student engagement with, and appreciation of, the Cosmic Background Radiation. While the trend toward integrating apps into educational curricula is not new, the CBR Explorer is the first in a series of “Epiphany” apps that enhance the education and communication of abstract and latent ideas. These apps may stand alone as informal educational experiences or be used to supplement learning across a wide range of subjects.

Acknowledgments: Funding for this project is provided by the Macquarie University Innovation Scholarship Program. User interface design and programming coordination by Fred Adam at http://www.gpsmuseum.eu/.

References and Notes:

[1] Weinberg S., 2008, Cosmology, OUP Oxford.
[2] European Space Agency, 2013, “Planck reveals an almost perfect Universe / Planck / Space Science / Our Activities / ESA.”
[3] Fraknoi A., 2004, “Insights from a survey of astronomy instructors in community and other teaching-oriented colleges in the United States,” Astronomy Education Review, 3(1), pp. 7–16.
[4] Wallace C. S., Prather E. E., and Duncan D. K., 2011, “A study of general education astronomy students’ understandings of cosmology. Part I. Development and validation of four conceptual cosmology surveys,” Astronomy Education Review, 10(1), pp. 010106–010106.
[5] Van der Veen J., 2010, “Planck Visualization Project: Seeing and Hearing the Cosmic Microwave Background,” Science Education and Outreach: Forging a Path to the Future, p. 295.
[6] Van der Veen J., McGee R., and Moreland J., 2012, “The Planck Visualization Project: Immersive and Interactive Software for Astronomy and Cosmology Education.”
[7] McGee R., Van der Veen J., Wright M., Kuchera-Morin J., Alper B., and Lubin P., 2011, “Sonifying the Cosmic Microwave Background,” Proc. 17th Int. Conf. on Auditory Display (ICAD-2011), June, pp. 20–24.
[8] Dekker G., Moreland J., and Van der Veen J., 2011, “Developing the Planck Mission Simulation as a multi-platform immersive application,” Proceedings of the ASME 2011 World Conference on Innovative Virtual Reality, WINVR2011.
[9] Google Inc., “Google Sky” [Online]. Available: http://www.google.com/sky/. [Accessed: 30-Mar-2013].
[10] Merriam-Webster, “Epiphany,” (Def. 3a), Merriam-Webster Online (2012).
[11] Bonnington C., 2013, “Can the iPad Rescue a Struggling American Education System?,” Wired.
[12] Dewey J., 2010, The School and Society and The Child and the Curriculum, Digireads.com.
[13] Pugh K. J., 2011, “Transformative Experience: An Integrative Construct in the Spirit of Deweyan Pragmatism,” Educational Psychologist, 46(2), pp. 107–121.
[14] Dewey J., 1958, Experience and Nature, Courier Dover Publications.
[15] Dewey J., 1934, Art as Experience.
PAPER

Classroom Group-directed Exploratory Learning through a Full-body Immersive Digital Learning Playground

Authors: Chin-Feng CHEN1*, Chia-Jung WU1, De-Yuan HUANG2, Yi-Chuan FAN1, Chi-Wen HUANG1, Chin-Yeh WANG1, and Gwo-Dong CHEN1

Affiliations: 1 Department of Computer Science and Information Engineering, National Central University, Taiwan, Republic of China; 2 Research Center for Science and Technology for Learning, National Central University, Taiwan, Republic of China

Abstract: This paper presents an immersive gaming exploratory learning platform, the Digital Learning Playground (DLP), set in a classroom-based collaborative environment. DLP consists of a digital shared tabletop that serves as a game board for team planning and a shared Kinect-embedded display that not only casts students into a virtual world but also enables them to interact with virtual events through gestures and speech commands. The L-shape layout of DLP forms a theatre-like learning environment that provides a clear focus on knowledge demonstration in a context-relevant immersion through peer collaboration and observation. Current school learning has fallen short of supporting self-exploratory learning due to time and resource constraints. This study investigates whether learning effectiveness and time efficiency could be improved through group-directed exploratory learning experiences. Forty-four (44) Taiwanese college students were engaged in an English learning task. The students were randomly assigned to one of two groups: a group-directed exploratory learning group (the experimental group) or a self-directed exploratory group (the control group).
The experimental data include the results of a vocabulary knowledge test used for assessing learning outcomes, a time record of task completion for time efficiency, and non-participant observation for further detail with respect to the learning experience.

One Sentence Summary: The purpose of this research is to assess whether learning effectiveness and time efficiency could be improved through a group-directed exploratory learning experience.

Introduction

Exploratory learning and experiential learning have been pursued for many years in an effort to achieve relevant learning, but to date they have not been easy to implement in formal education settings. Classroom learning has certain characteristics that are difficult to break through or change. Our research indicates that the instructional design of an immersive environment can be formed to fit into the classroom mechanism and, at the same time, enhance the quality of learning. The immersive environment that we built, the Digital Learning Playground (DLP), has been adapted in several ways for educational purposes. Our research has addressed some critical characteristics of the classroom mechanism and how current technology and common classroom resources can cooperate to improve the quality of classroom-based learning.

1. Background and Motivation

1.1 Practical Concerns and Solutions for Classroom Exploratory Learning

From a constructivist’s view, learners are like knowledge explorers, in that the essence of learning is self-discovered and self-inspired through the journey. That is, authentic learning values the individual’s mind processing and learning experience. These two essential elements of learning are fraught with practical difficulties in the formal curriculum owing to its conventions and restrictions: (1) set learning objectives; (2) time frames; and (3) a preference for teacher-centered approaches as an easy classroom management strategy.
Thus, the formal curriculum usually sets learning objectives for each level of learners, together with assessments that mainly measure scaled levels of understanding of the knowledge (rather than the process of knowledge exploration or discovery). Owing to this, the hypothesis-generation-testing cycles of self-directed exploratory learning are hard to achieve in conventional school settings[1, 2]. Additionally, it takes time to train learners' skills in generating questions and learning by doing. The time requirement for induction is unpredictable because instructors typically deal with a class full of students. By the same token, a key challenge posed in implementing discovery learning is classroom management. Instructors may find it difficult to manage and monitor each student's progress in a free-exploring environment in comparison to a dominant approach, such as a show-and-tell lecturing strategy. Ordinary classrooms thus have rigid mechanisms in terms of knowledge domain instruction, time restrictions, and the pursuit of strategies for easy classroom management. This reality must be acknowledged, yet the quality of learning shouldn't be sacrificed. In order to meet common classroom ecology and, at the same time, improve the overall quality of learning, our research indicates that classroom exploratory learning should shift from self-directed to guided group-directed collaborative learning, and from a free-exploring environment to task-oriented exploration. Additionally, the design implementation of the research made use of existing and potential future classroom resources and technologies, such as projectors, interactive whiteboards (IWBs), and tablets, in order to maintain and enhance contemporary classroom learning.
1.2 Background of the Digital Learning Playground

In the first study, we created a theatre-like immersive learning setting, the Digital Learning Playground (DLP), by combining vertical and horizontal displays to provide students with a knowledge-in-use platform with rich contexts. Several educational applications for different student contexts were implemented via DLP. In the early stage, we used a robot as a learning agent on the stage table. The robot successfully attracted primary school learners to keep their eyes on the targeted knowledge[3]. In the second study we transformed classroom flash cards into augmented reality (AR) code cards that triggered virtual objects on the screen. In this study, we noted that gathering young learners' attention was not easy, especially with multiple foci: cards, a robot, and virtual events[4]. In the third study, conducted in a high school setting, we applied board-game mechanics on the interactive tabletop and situated the game-card scenarios on the screen[5]. We observed that learning effectiveness outperformed traditional classroom learning, and learners were engaged. However, there is a rising concern that a board game with authentic situations, such as an educational simulation, usually takes significant time for learners to explore and wrestle with the problems. Additionally, we must take instructional efficiency into consideration. Building on the accumulated experience of the previous studies, the following sections elaborate optimal instructional designs for classroom exploratory learning.

2. Instructional Designs of Implementation

2.1 Instructional Designs of the Classroom Immersive Learning Playground

The key elements of instructional design and technology-supported classroom immersive learning are detailed as follows.
2.1.1 Task-oriented immersive environment with a shared Kinect-supported display

Contemporary learning emphasizes relevant learning, in which students are able to connect or even apply knowledge learned at school to daily life events[6]. Although today's learning content providers highlight that their materials are context-relevant, this content reform has hardly changed the way learners passively receive knowledge and has done little to provide a knowledge-in-action environment. In fact, a context-rich immersive learning environment is achievable through supporting interactive computing technologies. As we mentioned earlier, the challenge comes not from the technology but from the conventional classroom mechanism. To incorporate learning exploration into classroom ecology, we suggest that the designed immersive environment be task-oriented and consistent with the learning objectives. Rieman likewise indicated that task-oriented exploration is easily accepted in pedagogical practice[7]. The emergence of gestural technology tools, such as the Nintendo Wii and Microsoft Kinect, brings the sensation of immersion to a new level by reproducing motor actions in a virtual world[8-10]. Supported by gestural technology, knowledge with context is transformed into a task-based environment in which students learn by exploring and manipulating objects. This immersion can promote more active construction of meaning. With respect to the display for simulated situations, we chose a screen and a projector as the shared display, not only to leverage commonly available hardware at schools but also to demonstrate that instructors can monitor these activities with ease.

2.1.2 Learners' mirror images forming peer-model influence and an engaging atmosphere

From a constructivist perspective, learners are active agents of knowledge acquisition who themselves actively assemble the puzzle of knowledge[11, 12].
However, classroom management is not easy when students take control of the learning. Common classroom instruction typically revolves around showing, telling and doing. Showing is knowledge demonstration via textbooks, blackboards, and videos; telling comes from the teacher's instruction; and doing is the practice that involves individual or group work. Our solution was to use peer influence in group-directed learning, where the students were in charge of showing and telling. We adopted Kinect sensors to embed live student performances into simulated plots. The Kinect-embedded display not only casts the students into a virtual world, it also enables them to interact with virtual events through gestures and speech commands. The performer's mirror image within an interactive context serves as the knowledge demonstration. In this sense the performer is like a magician on the screen, and the peer audience can see the entire competency performance. A performer is also a knowledge-in-use presenter, a task taker, and an observer (of the environment and of himself). We expected that the performer would perceive himself as a model and would intend to perform at his very best in front of his peers. As for the peer audience, the active context with their peer's image in it serves as an attentive agent that sustains focus on the learning content. In addition, the task-oriented mechanic drives peer observers to closely assess others' performances and to shape their own later performances. We have thus fully connected showing, telling and doing on an immersive show stage and facilitated learning through shared group experience.

2.1.3 Knowledge exploration under group-directed collaboration

Discovery learning encourages students to learn by trial-and-error[13].
Actual school practice may not provide enough time for individuals to properly experience trial-and-error learning. Reinforcing a key classroom characteristic, group dynamics, the trial-and-error experience is shared and guided through group-directed exploration. The beauty of such collaboration is that it facilitates the combining of individuals' experiences with their background knowledge to accomplish the learning tasks. We assert that group-directed strategies such as this make exploratory learning more time efficient.

2.1.4 Shared collaborative work table with shared task experience

Classroom activities often involve group work such as group discussions, project assignments, and role-playing. Thus, a collaborative worktable was needed. Several researchers have reported that interactive tabletops facilitate more collaborative interactions and awareness of group tasks[14, 15]. Furthermore, games frequently serve as intrinsic motivators (stimulators) that provide challenge, control, and fantasy[16]. This was implemented through a simulation setting via a work tabletop on which game materials (such as resources and rewards) could be sorted and manipulated. Arranging learning content and gaming resources separately gave the vertical display a clear presentation for immersion and avoided interruption or distraction from pop-up game information on the vertical display. In terms of focus management for instructors, management became easier by clearly dividing knowledge content on the screen from gaming elements on the tabletop.

2.2 System Architecture

We incorporated current classroom equipment, such as a projection screen and interactive whiteboards, with educational applications and technology, such as Kinect and tablets, to construct this classroom immersive environment (Figure 1).
The user interface of the full-body immersive Digital Learning Playground consisted mainly of a shared display and a shared tabletop that were used to interact with authentic learning experiences embedded in a game flow. The shared display presented situated scenarios through a projected screen. Microsoft Kinect was used for embedding the users' images and for generating interactive situations through skeleton analysis, gesture analysis, and body tracking. The shared tabletop allowed direct on-screen manipulation by multiple users. We implemented two different interfaces as the shared tabletop: (1) a lying-down IWB and (2) a jigsaw tablet. Learners performed object manipulation and discussion logging on the IWB. The jigsaw tablet combined a collection of individual tablets. Groups of personal tablets communicated with each other via G-sensor; this included moving and sending objects to one another's tablets and broadcasting events to the shared display. Windows Presentation Foundation (WPF) was used to develop the shared display, the IWB, and the Android tablet applications. Communications between the shared display and the IWB were conducted through WPF parameters, while the tablets communicated with the shared display through network sockets.

The database of the system included input-output and system information. Input-output information included icons, pictures, sound, dialogue, and animation. System information included video files, system setting files, and positioning coordinates. The environment setup is illustrated in Figure 2.

Figure 1. System architecture

Figure 2. a) Setup of DLP and Kinect, b) Two ways to implement the shared tabletop

3. Experiment & Methods

3.1 Hypothesis

It is the hypothesis of this study that learning effectiveness and time efficiency can be improved through a group-directed exploratory learning experience.
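As a rough illustration of the tablet-to-display messaging described in Section 2.2, consider the sketch below. The frame format, field names, and functions are our own illustrative assumptions, not the authors' WPF/socket implementation:

```python
import json

def encode_event(source, action, payload):
    """Serialize one tabletop event as a newline-delimited JSON frame."""
    message = {"source": source, "action": action, "payload": payload}
    return (json.dumps(message) + "\n").encode("utf-8")

def decode_event(raw):
    """Parse a received frame back into an event dictionary."""
    return json.loads(raw.decode("utf-8"))

# A tablet broadcasting a "send object" event toward the shared display
# (the command and object names here are hypothetical):
frame = encode_event("tablet-3", "send_object",
                     {"object": "game_token", "target": "shared_display"})
event = decode_event(frame)
```

In such a design, each tablet would write frames like this to a socket connected to the shared-display host, which would dispatch them to the projected scene.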
3.2 Instruction and Task

The designed exploratory learning system targeted second language learning, as EFL (English as a Foreign Language) and ESL (English as a Second Language) learners have little exposure to the non-native language in their home countries. The target knowledge was daily-life vocabulary, and the topic was Food and Recipes. The instructional design centered on acting out corresponding activities and experiences in relevant contexts. There were 26 new words and phrases to learn, embedded in the recipe instructions (such as patty, grill, push against, and fill in). Learners were immersed in a kitchen scene and acted out the role of a kitchen cook. The kitchen cook was required to carefully read the recipe instructions and prepare the food ordered by the guests/customers (see Figure 3). The instructions were written in English with animated picture graphs. Learners needed to work out the meaning and explore the right way to make the food.

Figure 3. Exploratory learning activity

3.3 Procedure

The subjects were forty-four (44) college students who were learning English as a foreign language. The experiment was held at National Central University in Taiwan. A month before the experiment, all of the participants were given a pre-test adapted from the Vocabulary Knowledge Scale, as described in the next section, in order to assess their prerequisite knowledge. The homogeneity of the results was tested, and the test showed that the subjects' prerequisite knowledge of the target learning objectives was about the same (Pearson chi-square = 320.833, df = 306, p = .27 > .05; Table 1). We implemented the English activity via DLP. There was no time limitation for the entire activity. We separated the participants equally into (1) an experimental group (EG), the group-directed exploratory learning group, and (2) a control group (CG), the self-directed exploratory learning group.
Students in the experimental group (EG) explored in teams of five (5), while students in the control group (CG) explored individually, one (1) student at a time. After the learning activity, there was an immediate post-test.

Table 1. Test of Homogeneity (N = 22)

                     Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square   320.833a   306   .27

3.4 Method

The assessment tool for learning effectiveness was the Vocabulary Knowledge Scale (VKS) (see Figure 4). Developed by Wesche and Paribakht, the VKS is a self-reported assessment that scales the level of vocabulary comprehension[17]. To assess the transition in comprehension level of the learned vocabulary, the subjects took a pre-test and a post-test before and after the learning exploration. As for time efficiency, a time-recording tool was built into DLP to record the time spent on each task. We also used three (3) cameras to collect learners' behaviors as observational data.

Sample test item: Patty
1. I don't remember having seen this word before.
2. I have seen this word before, but I don't know what it means.
3. I have seen this word before, and I think it means ______. (synonym or translation)
4. I know this word. It means ______. (synonym or translation)
5. I can use the word in a sentence: ________________ (if you do this, please also do item 4.)

Meaning of scores (possible score):
1: The word is not familiar at all.
2: The word is familiar but its meaning is not known.
3: A correct synonym or translation is given.
4: The word is used with semantic appropriateness in a sentence.
5: The word is used with semantic appropriateness and grammatical accuracy in a sentence.

Figure 4. Sample test item and explanation of scores

4. Result Analysis

To investigate time efficiency between group-directed and self-directed exploratory learning, the average time spent on the whole activity (6 tasks) by each group was recorded.
As seen in Table 3, the experimental group's average time spent on task completion was 25.14 minutes, while the control group's average was 21.08 minutes. The data show that the group-directed teams spent more time than the individual self-directed learners. Taking available school time and class size into consideration, a common class period is 40-50 minutes and a class size is about 30 students. To let every learner take a turn at exploring (self-directed), it would take approximately 632 minutes, about 16 regular class periods, for the whole class to complete the activity. Arranging students in groups (5-6 people), it would take about 125.7 to 150.8 minutes, or 3 to 4 regular class periods. We therefore concluded that the overall time spent in group learning was considerably more efficient than in self-directed learning, despite the fact that each group-directed exploratory session took slightly more time than an individual session.

To investigate learning effectiveness, the difference between the pre-test and the post-test yielded a progress score for each participant. We conducted a t-test to determine whether there was a significant difference between the subjects' prior understanding and their learned level of the target knowledge after the hands-on experience. The t-test results show that knowledge acquisition after the activity was significant in both groups (p < .005) (Table 2); that is, both groups improved significantly after the learning activity. One aim of the experiment was to assess whether participants in group-directed learning could acquire a certain amount of knowledge through observation, without full participation in task operation. The results indicate that the expected learning outcome was achieved.

Table 2. Average scores attained in group and individual

                       N    Mean    SD      Correlation   t        Sig. (2-tailed)
Group Pre Test         22   64.70   11.28   .81           -13.84   .000*
Group Post Test        22   88.25   13.44
Individual Pre Test    22   67.43   10.29   .67           -11.62   .000*
Individual Post Test   22   86.84    8.37

*significant at p < .005

Table 3. Average time attained in group and individual

                          N    Mean    SD
Group Average Time        5    25.14   4.12
Individual Average Time   22   21.08   3.31

5. Conclusion and Discussion

The purpose of this research was to assess whether learning effectiveness and time efficiency could be improved through a group-directed exploratory learning experience. The findings show that the participants' comprehension level after the knowledge-in-action learning improved significantly regardless of whether the learning activities occurred in a group or individually. We found that relevant hands-on experience had a profound influence on learners. Additionally, we observed that learners can acquire knowledge through peer observation (as seen in the learning-outcome data of the group-directed learning group). Furthermore, we observed that learning in a group formed a supportive learning atmosphere that was not possible in individual learning. In self-directed exploration, many of the participants attempted to finish the tasks as quickly as possible. We inferred that many individual learners viewed the tasks as games; numerous errors occurred because of careless reading of the instructions. In contrast, students in the group-directed teams were suggesting, discussing, and error-correcting. These interactions could also explain the longer time spent by students who participated in group-based learning. Learning by doing in a context-relevant environment can seem like an unreachable goal in formal educational settings. We have integrated existing classroom equipment with other affordable technology tool sets in order to make such scenarios more accessible to conventional education settings.

6. Acknowledgements

With special thanks to: You-Chih Liu for his help with graphic design.
Thanks to freshman class B and sophomore class B in the Department of Computer Science and Information Engineering at National Central University for their support. This work was partially supported by the National Science Council, Taiwan, under grant number NSC 102-2511-S-008-001-MY3.

References:

1. C. A. Chinn, W. F. Brewer, The role of anomalous data in knowledge acquisition: A theoretical framework and implications for science instruction. Review of Educational Research 63, 1 (1993).
2. M. Njoo, T. De Jong, Exploratory learning with a computer simulation for control theory: Learning processes and instructional support. Journal of Research in Science Teaching 30, 821 (1993).
3. C.-Y. Wang et al., in Proceedings of the 18th International Conference on Computers in Education. (2010), pp. 121-128.
4. J. S. Jiang, G.-D. Chen, C.-J. Wu, W.-J. Lee, in Proceedings of the 19th International Conference on Computers in Education. (2011).
5. K.-C. Chen, C.-J. Wu, G.-D. Chen, in Advanced Learning Technologies (ICALT), 2011 11th IEEE International Conference on. (IEEE, 2011), pp. 25-29.
6. K. Illeris, Towards a contemporary and comprehensive theory of learning. International Journal of Lifelong Education 22, 396 (2003).
7. J. Rieman, A field study of exploratory learning strategies. ACM Transactions on Computer-Human Interaction (TOCHI) 3, 189 (1996).
8. D. DePriest, K. Barilovits, in 16th Annual TCC Worldwide Online Conference, Hawaii. (2011).
9. H.-m. J. Hsu, in 2nd International Conference on Education and Management Technology. (2011), vol. 13, pp. 334-338.
10. M. Worsley, P. Blikstein, in Annual Meeting of the American Education Research Association (AERA). (2012).
11. M. G. Brooks, J. G. Brooks. (1999).
12. C. E. Hmelo-Silver, H. S. Barrows, Facilitating collaborative knowledge building. Cognition and Instruction 26, 48 (2008).
13. S. D. Whitehead, D. H. Ballard, Learning to perceive and act by trial and error.
Machine Learning 7, 45 (1991).
14. P. Dietz, D. Leigh, in Proceedings of the 14th annual ACM symposium on User Interface Software and Technology. (ACM, 2001), pp. 219-226.
15. Y. Rogers, S. Lindley, Collaborating around vertical and horizontal large interactive displays: which way is best? Interacting with Computers 16, 1133 (2004).
16. T. W. Malone, Toward a theory of intrinsically motivating instruction. Cognitive Science 5, 333 (1981).
17. M. Wesche, T. S. Paribakht, Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types. (1994).

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

PAPER

The Vari House: Digital Puppeteering for History Education

Authors: J. Jacobson1, D. Sanders2

Affiliations: 1 PublicVR 2 Learning Sites, Inc.

*Correspondence to: PublicVR, 333 Lamartine St., Jamaica Plain, MA 02130

Abstract: The Vari House virtual world is a reconstruction of an ancient Greek farmhouse excavated fifty years ago in southern Greece. Implemented in Unity3D, the virtual world features a digital puppet, an avatar representing the teenage son of the farming family that lives there. We project Vari House onto a large screen, so that the house and puppet are life-sized to enhance audience engagement. Under the control of a teacher or puppeteer, the avatar communicates through voice and gesture, moving freely through the virtual space. He discusses the house itself and daily life, but any relevant topic is accessible. For depth of conversation, a human puppeteer is superior to any artificial intelligence. This version of Vari House works well for museum audiences of all ages and fits the ancient history curriculum mandated in most states for middle school.

One Sentence Summary: A life-sized, human-controlled digital avatar/puppet of a Greek farmer gives tours of his house, a powerful technique for educational presentations.
Introduction

The Vari House virtual world is a reconstruction of a Greek farmhouse excavated fifty years ago in southern Greece, near the shoreline [23-24]. The construction, plan, and siting of the Vari House follow the teachings of the ancient writers. For example, the house should be open and face toward the sun, closed off from cold north winds. Also, perishable building materials should be placed between imperishable materials for protection from the weather; thus, in the Vari House, the mud brick walls and the wood ceiling framing and columns are set between a stone foundation and a ceramic tile roof. In 1996, Learning Sites, Inc., developed the original VRML model with linked curriculum materials as an online educational tool to help teach the use of archaeology to investigate life in the Hellenic world [35]. In 2012, Learning Sites and PublicVR reimplemented the house using the Unity 3D authoring environment (http://unity3d.com) and greatly increased the instructional power of the virtual world by adding an ancient Greek farming family. With a moderately fast computer, anyone can explore the house through a Web browser. Unity also supports immersive presentations, like the one we describe in this paper. Vari House is not a film or a set presentation. It is an interactive three-dimensional space, now greatly enhanced by the addition of an ancient Greek farming family, most of whom are represented as automated characters going about their business. The star of the show is an avatar representing a young teenage farmer (Orestes), controlled by a live puppeteer. The operator sees the audience through a webcam and can move the avatar freely through the virtual space. The avatar addresses the audience directly, in character, according to a flexible script developed for the particular lesson, audience, or situation. The avatar puppet can communicate through voice, gesture, and visual context.
He can describe the architectural and archaeological features of his environment, pointing to individual aspects of the house to address specific curriculum needs. He can then use the context to talk about ancient daily life, history, religion, or economics. The action and dialogue can follow a prescribed storyboard or can be extemporaneous, responding to audience feedback.

Figure 1. An audience member converses with the digital puppet, Orestes, "in" the courtyard of the Vari House. His mother is visible in the background.

Optionally, a teacher or second actor can mediate between the audience and the avatar. The teacher/mediator can explain things that would be awkward for the avatar to say in character and can also disambiguate spatial references (e.g., whom the avatar is pointing to). This is important, because we want to present the illusion that the audience and the avatar occupy a contiguous space that extends from the theater or classroom into the virtual house. We are not trying to fool audience members with an optical illusion, but we do want to make it natural for them to play along with the audio-visual narrative. The depth and flexibility of conversation of the human-controlled avatar are superior to those of any artificial intelligence. The show can change every day and develop complex narratives, allowing teachers to develop curricula flexibly. That said, artificially intelligent characters are low-cost (once developed) and deployable in situations where a human puppeteer is not available. We do not seek to replace automated characters, but to complement them to round out a better learning experience. In this article, we summarize the implications of digital puppeteering in education and cultural heritage, provide the historical underpinnings of the Vari House, describe how the application works, and explore a sample narrative.
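To make the puppeteering control loop concrete, here is a minimal sketch of how a puppeteer's console inputs might be dispatched to avatar actions. The command names and the dispatcher itself are illustrative assumptions on our part; the actual Vari House application is built in Unity3D and its code differs:

```python
# Hypothetical mapping from puppeteer console inputs to avatar actions;
# these command names are illustrative, not taken from the Vari House code.
AVATAR_ACTIONS = {
    "wave":     {"animation": "wave_arm", "duration_s": 1.5},
    "point_at": {"animation": "point",    "duration_s": 2.0},
    "walk_to":  {"animation": "walk",     "duration_s": None},  # until target reached
}

def dispatch(command, target=None):
    """Translate one puppeteer command into an action record for the renderer."""
    if command not in AVATAR_ACTIONS:
        raise ValueError("unknown puppeteer command: %r" % command)
    action = dict(AVATAR_ACTIONS[command])
    action["target"] = target  # e.g., a named prop such as "hearth" (hypothetical)
    return action

# The puppeteer points the avatar at a feature of the house while narrating:
act = dispatch("point_at", target="hearth")
```

In the real system the webcam view of the audience closes this loop: the puppeteer watches audience reactions and chooses the next command, which is what makes extemporaneous dialogue possible.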
While the Vari House model is proprietary, an executable will be available free of charge to the public through the PublicVR and Learning Sites websites. The code is based on the open-source code produced by the Egyptian Oracle project (http://publicvr.org/html/pro_oracle.html), which the reader can download to make his or her own applications.

Digital Puppetry

Today, the game industry has provided a host of online virtual environments where each player controls an avatar. Depending on the structure of the game, the avatars interact with each other, with artificially intelligent "bots," with the virtual environment itself, and with the narrative generally. The most famous example is World of Warcraft, but there are many other Massively Multiplayer Online Role-Playing Games (MMORPGs) and a host of acronyms to describe them. When an avatar is visible to other players, the avatar is literally a digital puppet. The one thing that all puppets have in common is control by one or more human beings at performance time. This is profound, because puppetry is an ancient art with highly developed techniques and technologies for representation, illusion, the evoking of emotion, and the conveying of information. And yet, most avatars are very primitive puppets indeed, controlled through a keyboard and mouse and having a very limited range of expression. If we can convey even a fraction of the expertise of traditional puppeteering to the game industry, then we will really have made a contribution. Jim Henson created the first modern digital puppet, Waldo, in 1988. It interacted directly with physical puppets. Today, most advanced digital puppets are used for film production, as in the series "Sid the Science Kid" [19]. The Henson Company and other practitioners have developed a range of innovative control devices, most of them involving the manipulation of physical models.
Most highly realistic or lifelike digital puppets, such as Gollum from The Lord of the Rings, are driven by live motion capture of an entire human figure. For Gollum, the actor's movements were directly translated to the virtual body, which was digitally captured and merged with the live footage. The effect is expressive and human, perfect for detailed humanoid figures. However, it is not the ultimate control interface. The reason the Henson Company and others use more abstract control mechanisms is the power of caricature: they deliberately exaggerate or abstract puppet motions to achieve dramatic effects in a way that would not be efficient or possible with full-body motion capture. Unfortunately, the great majority of digital puppets, the avatars in games and online virtual environments, are controlled by keyboard and mouse, which is limiting. One of our long-term goals is to make better low-cost control schemes accessible to the public.

Puppetry has a close connection with shamanic ritual. The shaman who enters a trance state of consciousness uses a ritual object, such as a mask or puppet. This results in a performance of great excitement, public engagement, and reflection of community [33]. Our goal is to capture some of the excitement and meaning of the original ritual. Beginning with the Egyptian Oracle, we are attempting to enact and further "extend" the meaning of ritual for the digital age, exploring the psychical and technical dimensions of the virtual. The puppet is a transformative vehicle for both the performer and the community, in this case bridging an imagined ancient past with the real world of today.

Digital Puppets for Educational Theater

A small number of dramatic productions use a sophisticated avatar/puppet for direct viewing by an audience; in one example [32], a performer and her digital puppet performed a shamanistic drama for a live audience.
Andreadis and his colleagues [1] created a live performance by avatars/puppets in a virtual Pompeii, which was projected onto a large screen for a live audience. Anstey et al. [2] staged a number of dramas with a mixture of virtual and live actors. As with a traditional play, the audience is "along for the ride." Vari House, by contrast, is an interactive performance, where audience members may communicate directly with the puppet. There are other notable examples. In "Turtle Talk with Crush," at Disney theme parks, children see and converse with a virtual digital puppet projected onto the glass of an aquarium [37]. In a cultural sensitivity training scenario employing the Gepetto system [28], a trainer-puppeteer controls virtual Arabs interacting with a single user/audience member. In the TeachMe™ system, used to help middle school children resist peer pressure, a single puppeteer controls five virtual characters, which interact directly with the user/audience [43].

People in Virtual Heritage

"Virtual Heritage" is the use of interactive electronic media to recreate culture and cultural artifacts as they might have been, or to interpret them as they are today [29][32]. The central element is usually a three-dimensional computer model of a person, place, or thing, especially an ancient site, building, or event. Applications that include people fall into four categories:

• Virtual people are simply there in the environment, going about their business. They could be simple crowds [38] or participants in a complex drama [1].
• The virtual people interact with the user in some meaningful way [8-9].
• In online worlds such as Second Life, users represent themselves as ancient peoples and interact with each other and with artificial people [4].
• The experience is personal, as the user interacts with a single complex virtual person [20][30][22][15].

These same categories of virtual people can be implemented in augmented reality, which is some mixture of the physical world and VR.
For example, the LIFEPLUS EU IST project [31] describes a proposed, and later built, hybrid system in which virtual Romans are visible to observers in the physical ruins of Pompeii. The augmented reality also includes reconstructions of some of the architecture but is primarily focused on the people.

Figure 2. Orestes' family (his mother, father, and younger brother). They don't do much, yet, but just their presence provides a scale and social context for the architecture (from the Learning Sites virtual reality model; © 2011 Learning Sites).

Educational Theater and Cultural Heritage

Dramatic productions for educational purposes have a long and productive history. Today, many large science museums have small theaters and workstations where educators give demonstrations and talks, often with audience participation. Children's museums stage puppet shows, introducing children to science topics and social issues. Theater is also educational for student actors in K through 12 schools and higher education. Since the 1920s, students have learned a wide range of subjects and developed their personalities by learning stagecraft [41]. Games, improvisation, and role-play foster communication skills, problem solving, social awareness, and a positive self-image. One of the most widely used forms of theater in education is reenactment of scenes from a historical time period: "An enactment may be cast in the past, the present, or the future, but happens in the 'now of time'" [42]. This strategy encourages students to interact with the material, challenging them to take on the viewpoint of a character. For example, undergraduate students perform rituals documented in Egyptian religious texts from the Late or Graeco-Roman periods [13], the Mysteries of Osiris in the Month of Khoiak [17], and the Confirmation of Power in the Egyptian New Year ceremonies [18].
These enactments provide a powerful learning experience for the students and reveal aspects of the ceremonies not easily evident in the text. A much less structured approach is the Living Museum [17]. Actors and the reconstructed or restored historical architecture together simulate a community from the past, which visitors can explore and with which they can interact. The actors perform an interesting balancing act between staying in character and recognizing the reality of the modern person talking with them. Examples include Historic Williamsburg (http://www.history.org), Fort Snelling (http://www.mnhs.org), Plimoth Plantation (http://www.plimoth.org), and Old Sturbridge Village (http://www.osv.org). The live actress in PublicVR's Oracle performance also does this. The public wants to see history acted out [17].

Artificially Intelligent Agents and Hybrid Systems

Similar work is being done with artificially intelligent human figures that interact with the audience/user. These are neither puppets nor avatars, but agents or bots. Sophisticated agents require a great deal of skill and expense to program but have the obvious advantage of being portable, tireless, and potentially connected to databases not directly accessible to human readers. Kenny [26] and Swartout et al. [36] describe their own cultural sensitivity trainer, an AI-driven direct competitor to the puppeteer system, Geppetto, described in Mapes [28]. The Intelligent Virtual Environments group at Teesside University developed another sophisticated AI system. In their "Madame Bovary" simulation, the user takes the role of a major character, the foil for the protagonist [7]. Each of these systems is intended for a single user in a CAVE-like [12] display. Good AI-driven interactive storytelling can be developed without extreme cost. For example, Anstey and Pape [2] have also staged interactive psychodramas that respond to the emotional state of the user.
Ryu [33] developed a shadow puppet that responds directly to the user. In both systems, the programming for the artificial intelligence is relatively simple, but the artistic and narrative design makes for a powerful experience. Importantly, the TeachMe™ system is a hybrid [43]. When the puppeteer is not directly controlling an avatar, it acts according to a set of preprogrammed rules and behaviors. In this way, all five avatars remain active throughout the scenario. Many adventure games allow a single user to control a group of characters using a similar strategy (e.g., EverQuest).

The Vari House Project

Project History

In the mid-1990s, Learning Sites, Inc., began to pursue the concept (new at the time) of using online virtual reality environments to change the way history is taught in schools. There clearly were exciting possibilities for presenting students with immersive visualizations of places, peoples, and events of the past, along with linked information, lessons, and problem-solving tasks. By 1998, Learning Sites had designed and released the Ancient Greece: Town & Country educational package, which combined and contrasted their virtual reality models of the Vari House and the House of Many Colors (from Olynthus; both Hellenistic in date). The package guides the student-reader through a game-like activity in which they play the role of an archaeologist. Armed with questions they need to answer, students gather some of the answers by directly examining the virtual worlds. They can also learn from the linked text, which contains information on relevant curriculum-focused topics, such as history, culture, and geography, that helps them interpret the significance of their discoveries [35]. In 2012, the authors translated the Vari House model into Unity, a game development software platform that has become popular among archaeologists and other academics (http://unity3d.com).
Then we added automated models of the family members going about their business, and the digital puppet representing their teenaged son. The programming for the puppet is based on the open source code for the Egyptian Oracle project [21], accessible from http://publicvr.org/html/pro_oracle.html (follow the link that says "Application and Source Code"). Eventually we will make this version accessible from the web as well, but our current focus is on educational theater, as described in this article.

The Building Itself

The Vari House is located in southeastern Greece, about 4 km (2.5 miles) from the coast of the Aegean Sea, along a rocky spur of the southernmost foothills of the Hymettus Mountain range. The site is about 18 km (11 miles) southeast of Athens and about 2 km (1.2 miles) from the ancient town of Vari. The site was excavated in the summer of 1966 by British archaeologists [23-24], whose final report is the basis for our reconstruction. The basic structure as excavated is a rectangle with external dimensions of 13.7-13.85 m x 17.6-17.7 m (about 45' x 58'). The house is set on a stone terrace, which projects 0.5 m along the south side. There are four rooms arrayed across the back of the house, a large central open courtyard, and two additional spaces in the front corners of the building. One is a tower, and the other is an extension of the side workspace. Each room has only one doorway, and that doorway opens only into the central courtyard; the building itself appears to have had only one entrance, facing south. Thus, the whole house has a southern orientation, which was common for the period. No evidence was found for any other buildings in the immediate vicinity.
Based on the surviving bits of pottery and a few coins found, the small amount of wear on the thresholds, and the lack of long-term renovations, the house seems to have been occupied only for a short time, from the second half of the 4th century to the first half of the 3rd century BCE.

Figure 3. Vari House, aerial view (from the Learning Sites virtual reality model; © 2010 Learning Sites).

A verandah roof shades the Vari House, which faces toward the south, and the rooms are laid out along the north side. This layout follows closely what ancient Greek and Roman writers tell us about the ideal setting for a dwelling: it should be open and face toward the sun, it should be closed off from cold north winds, and it should capture the heat and light of the sun appropriately for the seasons. Further, the distribution of materials follows the teachings of the ancient writers: place perishable building materials (in the Vari House, the mud brick of the walls and the wood ceiling framing and columns) between imperishable materials (in the Vari House, between a stone foundation and a ceramic tile roof) to protect the perishable materials from the weather [23-24]. The walls between the rooms at the back of the house are thin, since they support only their own weight. The corner room's walls are so thick that archaeologists believe they supported a two-story tower.

Evidence of Beekeeping

There is no evidence that Vari House was a farm in the traditional sense, or that special work was carried out at the house; there are no livestock enclosures or other purpose-built outbuildings. However, many clay vessels were found around the site, with lids and extension rings consistent with bee skeps of the time. They are specially designed and constructed to encourage honeybees to build their honeycombs inside and to allow people to remove the combs and retrieve the honey easily.
Wall paintings and carvings at ancient Egyptian sites show very similar vessels for beekeeping and depict the process of removing the honeycombs for their honey. Through chemical analysis, researchers found microscopic traces of honey inside some potsherds [14]. All this evidence demonstrates that this was a beekeeper's house, and that honey was harvested here on a large scale [11]. Honey was in ancient Greece, as it still is today, a favorite food topping, ingredient, and dessert [11]. Aristotle mentioned that keeping bees was part of farming activities. Other ancient writers commented that the best honey in the Greek world came from the Hymettus Mountain range, the location of the Vari House [25].

Perception, Interaction, and Hardware

Vari House is projected life-sized onto a screen or a blank wall in a museum, school theater, or classroom. This effectively extends physical space into the virtual world of the Greek farmhouse, giving the audience greater empathy for the material and a more direct interface with the virtual world. Under the control of a teacher or professional puppeteer, the avatar addresses the audience, communicating through voice and gesture. Optionally, a teacher or second actor can assist communication between the audience and the avatar. She can disambiguate spatial references, such as when the avatar indicates a member of the audience by pointing or looking. Another option is to use a stereographic (3D) projector, which greatly enhances the illusion of contiguous virtual space. Today, low-cost stereographic projectors are readily available, although members of the audience have to wear shutter glasses, which become expensive if the audience is more than a few people. Passive stereo projectors, the kind used in 3D movies, require only low-cost plastic foil glasses, but the projector itself is much more expensive. We refer to the Vari House as three-dimensional, but that is really a construction in the cooperating participants' minds.
The narrative and the graphics help their imagination, and the better the 3D cues are, the easier it is. The better the hardware and supporting software, the better the visual cues can be. The demonstration we describe here uses low-cost hardware: just a laptop, a digital projector, and a commodity sound system. We wish to present methods that are accessible to everyone. The operator sees the audience through a webcam, which he can reorient by remote control. The webcam image is streamed to a computer monitor, while a real-time duplicate of the audience view appears on a second computer monitor. He controls the viewpoint using the keyboard and mouse and controls the puppet with an Xbox 360 controller. Some of the controls are continuous, such as controlling the puppet's head with the POV hat (a miniature joystick used with one finger). Others are discrete gestures, which can be blended together, emphasized, or minimized. We will post a more detailed description along with the executable when it is ready. Until then, the documentation for the Egyptian Oracle priest, which is fairly similar, is available online (http://publicvr.org/html/pro_oracle.html). The first time we implemented the puppet for Vari House, we reused the gestures for the Egyptian priest, because they came with the open source code. The movements worked well enough, but we had to make changes, and more are needed. Many of the priest's gestures are appropriate for an imperious official, but not quite right for our young tour guide. As with traditional puppetry, a digital puppet's form and movements are crafted to support the narrative. We used a surround-sound system for our first demonstration of the Vari House, the same one described by Nambiar [44]. The house has period background music, which the audience hears while they are "in" the courtyard. When the audience viewpoint is outside, they hear wind and birds, bees near the hives, and so on.
The puppeteer speaks into a microphone so his voice comes through the surround-sound system. Optionally, the voice of the mediator can also be fed through the sound system, which guarantees that she will be heard and makes her voice more similar to the avatar's.

Navigation

We employed three basic navigation modes in our demonstration to show their relative advantages:

Jump Cut: The operator presses a key on the keyboard, which instantly moves the avatar to a specific location in the virtual world. It also moves the audience viewpoint to a nearby point, facing the avatar. "Jump cut" is a term from film, where the location and orientation of the view change suddenly, but there is always some narrative connection between the two points of view. Most audiences have become used to them from a lifetime of movies and television. Figure 4 shows a map of the Vari House with six positions. For each one, the camera represents the location of the audience view, and the direction of the camera lens indicates the direction of that view. The overhead schematic of the human figure shows the location of the puppet, facing the direction the human figure is pointing. Changing the narrative requires minor changes in the code to accommodate new jump points, although a little creative coding could make defining jump points easy for an operator. At the first four locations, the puppet stands still and converses with the audience. Position five is the starting point for free walk, fixed camera mode, while position six is the starting point for intelligent following.

Figure 4. Jump Cut locations (excavator's schematic plan view of the Vari House; final image © 2012 Learning Sites).

Free Walk, Fixed Camera: Initially, the puppet jumps to position five, and the camera is placed much farther away than with the jump cuts, facing back toward the house.
The puppeteer can then walk the avatar across the screen, or back and forth, pointing at and describing features or objects. As with the jump cuts, changing the narrative may require changing the initial location of the puppet and the camera. The puppeteer uses the arrow keys or the w, a, s, and d keys on the keyboard to move the avatar, using the venerable golf-cart navigation scheme: up arrow is forward, down arrow is backward, left arrow rotates him left, and right arrow rotates him right.

Intelligent Following: This navigation mode will be familiar to people who have played avatar-based computer games; it is an old idea that works well. The avatar walks, and the camera stays behind him, attempting to maintain a set distance while avoiding objects. The puppeteer uses the arrow keys to navigate the puppet, just as in free-walk mode. Additionally, the puppeteer can use the mouse to rotate the view around the puppet at any time. The camera always looks toward the puppet, so rotating it far enough will bring it around to face him. The puppeteer uses this maneuver when he needs to stop the avatar's walk and have the avatar dialogue with the audience. Intelligent following generally works and could theoretically accommodate any narrative; many major video games use it to control the player's avatar. However, it has to be handled skillfully in life-size theater to look good. For example, the act of swinging the camera around to the front of the avatar takes time in our current implementation. We will need to experiment with it some more.

Sample Narrative

This is a short, flexible narrative that we used in a recent demonstration. It does not define a fixed dialogue. Instead, the storyboard presents material that the actor/puppeteer works with in his conversation with the audience. Much more complex and lengthy narratives are possible with little or no extra programming.
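Before turning to the narrative itself, the mechanics of the three navigation modes described above can be made concrete in code. The following is a minimal illustrative sketch in Python (the actual system is implemented in Unity; all names, coordinates, and speeds here are hypothetical, and obstacle avoidance is omitted):

```python
import math

# Hypothetical jump points: (x, z, heading in degrees) for the avatar and
# the audience camera. The six poses of figure 4 would be entered here.
JUMP_POINTS = {
    1: {"avatar": (0.0, 0.0, 180.0), "camera": (0.0, -4.0, 0.0)},
    5: {"avatar": (8.0, -6.0, 0.0), "camera": (8.0, -14.0, 0.0)},
}

def jump_cut(point_id):
    """Jump Cut: instantly move avatar and camera to a predefined pose pair."""
    pose = JUMP_POINTS[point_id]
    return pose["avatar"], pose["camera"]

def golf_cart_step(x, z, heading, key, move=1.5, turn=90.0, dt=1 / 30):
    """Free Walk: 'golf cart' keys -- up/down translate along the current
    heading, left/right rotate in place. The camera stays fixed."""
    if key == "up":
        x += move * dt * math.sin(math.radians(heading))
        z += move * dt * math.cos(math.radians(heading))
    elif key == "down":
        x -= move * dt * math.sin(math.radians(heading))
        z -= move * dt * math.cos(math.radians(heading))
    elif key == "left":
        heading = (heading - turn * dt) % 360
    elif key == "right":
        heading = (heading + turn * dt) % 360
    return x, z, heading

def follow_camera(ax, az, heading, orbit=0.0, dist=3.0):
    """Intelligent Following (simplified): keep the camera a set distance
    behind the avatar; a mouse-driven orbit angle of about 180 degrees
    swings it around to face him from the front."""
    angle = math.radians(heading + 180.0 + orbit)
    cx = ax + dist * math.sin(angle)
    cz = az + dist * math.cos(angle)
    look = math.degrees(math.atan2(ax - cx, az - cz)) % 360  # face the avatar
    return cx, cz, look
```

With the avatar at the origin facing heading 0, `follow_camera(0, 0, 0)` places the camera the set distance behind him, looking forward; passing `orbit=180` swings it around to his front, the maneuver used to stop and address the audience.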
The narrative comes in three phases, each with a different navigation mode (defined above):

I. Dialogue in Jump-Cut Mode
II. Dialogue in Free Walk, Fixed Camera Mode
III. Dialogue in Intelligent Following Mode

Following is an overview of each of the three narrative phases.

I. Dialogue in Jump-Cut Mode

Introduction

Figure 5. Screen shot from the Vari House virtual world showing the initial position of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- not too far away, looking at the avatar with some courtyard in view.
• Avatar position -- looking at the audience (at the camera; figure 5 and #1 in figure 4).
• Avatar actions -- gestures to the audience and then around the interior of the house as he talks.
• Avatar talking -- "Yah-sue....Hello. My name is Orestes; welcome to my farmhouse. What do you think we farm here?" <<pointing to and speaking directly to someone in the audience, identifying the person by an item of clothing or some other feature>> "You in the blue shirt! Do you have an idea? We keep bees, and we farm honey. That's all we do, and we here on the slopes of the Hymettus Mountains produce the best honey in all of Greece. My mom's on her way to Vari town to buy food, and my father is out tending to the bees. My kid brother is outside mending pots, so if you follow me, I'll tell you more about our life." <<avatar and camera jump to position 2>>

House Construction & Function

Figure 6. Screen shot from the Vari House virtual world showing position two of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- not too far away, looking at the avatar and across the courtyard.
• Avatar position -- looking at the audience (at the camera; figure 6 and #2 in figure 4).
• Avatar actions -- points to various parts of the house as they are described.
• Avatar talking -- "Ours is a simple house. We built it ourselves from local materials.
Rocks, like for the foundation, the courtyard paving stones, and the bases for the posts, were all found out along the hillside. We made the sun-dried mud brick for the walls and cut trees to make the posts, the ceiling beams, and the doors. The rooms are naturally cooled by the breezes coming down the hill behind the house, and natural light from this central courtyard spreads everywhere to help us work." <<avatar and camera jump to position 3>>

Dining Room

Figure 7. Screen shot from the Vari House virtual world showing position three of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- in the doorway, to see both the avatar and plenty of the room.
• Avatar position -- looking at the audience (at the camera; figure 7 and #3 in figure 4).
• Avatar actions -- gestures and points to items there, as described.
• Avatar talking -- "This is our Andron (dining room to you)," <<avatar points out features and gestures to other rooms>> "where we lie down on these couches to eat and use the side tables for our dishes. We keep our dried food in jars; other food is prepared fresh each day or smoked and salted for later use. Next door is where we sleep...that is, if we aren't enjoying the evening out there in the courtyard. We go to bed when it gets dark and get up for work at first light. That central courtyard provides access to all the other rooms of the house. It's also our main workspace and where some of the small animals we keep are housed overnight. Over on the side, we have a workplace for repairing the beehives and other stuff. Over in one corner is our kitchen. Now, unlike where most of you live, we don't have any running water, so we take turns fetching fresh water from the streams in the valley nearby." <<avatar points to someone in the audience, and mentions something they are wearing or where they are sitting>> "You get to carry the next batch of water in an amphora.
As for a bathroom; well, we don't have that either, but there are plenty of trees in the woods outside. We don't even have heat in the house except for the small fire here in the kitchen." <<avatar and camera jump to position 4>>

Look Outside Open Front Door

Figure 8. Screen shot from the Vari House virtual world showing position four of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- a bit farther away, to see both the avatar and a view out the front door.
• Avatar position -- facing the audience beside one door leaf (figure 8 and #4 in figure 4).
• Avatar actions -- points to items there, as described.
• Avatar talking -- "I think we picked an ideal location for our house. Besides having the breezes from the hillside, we are also sheltered from winter storms--our house has its back to the north winds, just as Aristotle and Xenophon taught us. And we have a great view" <<look outside>> "all the way down to the coast; that's where my mom needs to go to buy supplies--it's about an hour's walk. Out there, in front of the house, is where we keep the active bee skeps (that's what we call the hives). Hey, Dad, someone to see you....Oh, well, he must still be busy tending to the honey collection. Do you want to go outside?" <<avatar turns to move outside; camera jumps to position 5>>

II. Dialogue in Free Walk, Fixed Camera Mode

Figure 9. Screen shot from the Vari House virtual world showing position five of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- facing the house, near the low stone wall, with a view of the entire house.
• Avatar position -- facing away from the audience (figure 9 and #5 in figure 4).
• Avatar actions -- gestures to various house parts as he walks from the right side to the left.
• Avatar talking -- "Here we are outside; I'm under our shaded porch. We like to sit out here and admire our view.
Outside you can see the same local construction materials I mentioned before. And over there on the corner of the house is our tower, where we have our locked storeroom and lookout. We keep our valuables safe there and can use the tower to watch for dangers and hide if need be, not that we've ever needed it. Up the hill, about half an hour behind the house, is a cave of Pan, where he lives and where those of us who live in this area, plus many visitors, go to offer him food and drink so that he continues to protect our bees and the flocks of other farmers."

III. Dialogue in Intelligent Following Mode

Figure 10. Screen shot from the Vari House virtual world showing position six of the avatar (© 2013 PublicVR and Learning Sites).

• Camera position -- near the beehives (starting position 6 in figure 4).
• Avatar actions -- gestures to and points to skeps as features are described.
• Avatar talking -- "Here are some of our bee skeps, or beehive containers. The bees won't hurt you, really. Don't mind my dad as he makes sure the containers are working properly. Each skep is made of clay; we make them right here (well, actually, me and my brother make them). Each skep has a ceramic lid that's tied to the main skep body by cords wrapped around a forked stick and attached to grooves on the side of the skep. The stick's forked end surrounds a notch in the lid (can you see it?) that acts as a doorway for bees coming in and going out of their hives. The inside walls of the skep body are roughened so the honeycombs can more easily attach themselves. We sometimes remove the lids to check on progress inside; that's probably what my father is doing now. We have dozens of bee skeps scattered across the grounds. Hey, thanks for letting me talk to you about my home. I hope you enjoyed learning about us."

Figure 11.
Screen shot from the Vari House virtual world showing another view of position six of the avatar (© 2013 PublicVR and Learning Sites).

Conclusion

Applications like Vari House are relatively rare but are rooted in several converging trends:

• Online virtual worlds, with their human-controlled avatars (puppets).
• Game technology and game-like narrative structures for education.
• Immersive displays for a variety of uses.
• Home theater.

Learning through constructive play is powerful, especially when history can be personified in this way. History itself is terribly important, but students gain even more from an interactive virtual world experience. Experiences like this show them how to have empathy with other cultures. The software and animation for Vari House can always be improved, but it is now sufficient for deployment in K-12 schools and in museums. The next stage is to develop the curricula and supporting materials, but that depends on the context. We are currently seeking partners interested in working with schools or museums who wish to work with us to develop Vari House and similar projects further.

References and Notes:

[1] Andreadis, A.; Hemery, A.; Antonakakis, A.; Gourdoglou, G.; Mauridis, P.; Christopolis, C.; and Anon. "Training and Conservation in Luxor," ARCE Conservation 2011 [Report]: 1-6.
[2] Anstey, J.; Patrice Seyed, A.; Bay-Cheng, S.; Pape, D.; Shapiro, S. C.; Bona, J.; and Hibit, J. (2009). "The Agent Takes the Stage," International Journal of Arts and Technology, Vol. 2, No. 4, 277-296.
[3] Biers, William R. (1996). The Archaeology of Greece: An Introduction, Ithaca: Cornell University Press.
[4] Bogdanovych, A.; Rodriguez, J. A.; Simoff, S.; Cohen, A.; and Sierra, C. (2009).
"Developing Virtual Heritage Applications as Normative Multi-agent Systems," in Proceedings of the Tenth International Workshop on Agent Oriented Software Engineering (AOSE 2009) at the Eighth International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2009), Budapest, Hungary, May 10-15, 2009.
[5] Bowra, C. M. (1965) and the Editors of Time-Life Books, Classical Greece, NY: Time Inc.
[6] CAA (2009). Computer Applications and Quantitative Methods in Archaeology, Williamsburg, VA, USA, March. http://www.caa2009.org/CAA2009_CompletePrelimProgram030409.pdf
[7] Cavazza, M.; Lugrin, J. L.; Pizzi, D.; and Charles, F. (2007). "Madame Bovary on the Holodeck: Immersive Interactive Storytelling," in Proceedings of the ACM Multimedia 2007 Conference, ACM Press, Augsburg, Germany, 2007.
[8] Champion, E. (2008a). "Game-based historical learning," in Rick Ferdig (ed.), Handbook of Research on Effective Electronic Gaming in Education, Information Science Reference, Florida, USA, 219-234. ISBN 978-1-59904-808-6 (hardcover), ISBN 978-1-59904-811-6 (e-book).
[9] Champion, E. (2008b). "Otherness of Place: Game-based Interaction and Learning in Virtual Heritage Projects," International Journal of Heritage Studies, 14(3), 210-228. http://www.tandfonline.com/doi/abs/10.1080/13527250801953686#preview
[10] Connolly, P. and Dodge, H. (1998). The Ancient City: Life in Classical Athens & Rome, Oxford: Oxford University Press.
[11] Crane, E. (1983). The Archaeology of Beekeeping, London: Duckworth. (esp. Chapter 4)
[12] Cruz-Neira, C.; Sandin, D.; and DeFanti, T. A. (1993). "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," SIGGRAPH '93.
[13] Derchain, P. (1965). Le papyrus Salt 825 (BM 10051): Rituel pour la conservation de la vie en Égypte, Brussels: Académie Royale de Belgique.
[14] Evershed, R. P.; Dudd, S. N.; Anderson-Stojanovic, V.
R.; and Gebhard, E. R. (2003). "New Chemical Evidence for the Use of Combed Ware Pottery Vessels as Beehives in Ancient Greece," Journal of Archaeological Science 30.1: 1-12.
[15] Economou, D.; Mitchell, W.; Pettifer, S.; Cook, J.; and Marsh, J. (2001). "User Centered Virtual Actor Technology," Conference on Virtual Reality, Archaeology, and Cultural Heritage (VAST). ISBN 1-58113-447-9, 2001.
[16] Garland, R. (2008). Daily Life of the Ancient Greeks, 2nd edition, Westport, CT: Greenwood Press.
[17] Gillam, R. (2005). Performance and Drama in Ancient Egypt, London: Duckworth.
[18] Gillam, R.; Innes, C.; and Jacobson, J. (2010). "Performance and Ritual in the Virtual Egyptian Temple," Computer Applications and Quantitative Methods in Archaeology (CAA), Granada, Spain, April 2010.
[19] Henson, B. (2009). "Keeping It Real," American Theatre, ISSN 8750-3255, 26.7.
[20] Hulusic, V. and Rizvic, S. (2011). "The use of live virtual guides in educational applications," Third International Conference on Games and Virtual Worlds for Serious Applications, Athens, Greece, May. DOI 10.1109/VS-GAMES.2011.31
[21] Jacobson, J. (2011). Egyptian Ceremony in the Virtual Temple; Avatars for Virtual Heritage; Whitepaper and Final Performance Report to the National Endowment for the Humanities, Digital Startup Grant #HD5120910, 2010-2011 academic year. http://publicvr.org/publications/EgyptianOracleFinalReport.pdf
[22] Jacobson, J.; Handron, K.; and Holden, L. (2009). "Narrative and content combine in a learning game for virtual heritage," Computer Applications and Quantitative Methods in Archaeology, Williamsburg, VA.
[23] Jones, J. E.; Graham, A. J.; and Sackett, L. H. (1973). "An Attic Country House below the Cave of Pan at Vari," Annual of the British School at Athens 68: 355-452.
[24] Jones, J. E. (1975). "Another Country House in Attica," Archaeology 28.1: 6-15.
[25] Jones, J. E. (1976). "Hives and Honey of Hymettus: Beekeeping in Ancient Greece," Archaeology 29.2: 80-91.
[26] Kenny, P.; Hartholt, A.; Gratch, J.; Swartout, W.; Traum, D.; Marsella, S.; and Piepol, D. (2007). "Building interactive virtual humans for training environments," in Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) 2007.
[27] Ling, R. (1976). The Greek World, NY: E. P. Dutton.
[28] Mapes, D. P.; Tonner, P.; and Hughes, C. E. (2011). "Geppetto: An Environment for the Efficient Control and Transmission of Digital Puppetry," HCI (14), LNCS 6774, 270-278, Springer.
[29] Moltenbrey, K. (2001). "Preserving the Past," Computer Graphics World, September 2001.
[30] Neto, J. N.; Silva, R. S.; Neto, J. P.; Pereira, J. M.; and Fernandes, J. (2011). "Solis' Curse - A Cultural Heritage Game Using Voice Interaction with a Virtual Agent," Third International Conference on Games and Virtual Worlds for Serious Applications, Athens, Greece, May. DOI 10.1109/VS-GAMES.2011.31
[31] Papagiannakis, G.; Schertenleib, S.; O'Kennedy, B.; Arevalo-Poizat, M.; Magnenat-Thalmann, N.; Stoddart, A.; and Thalmann, D. (2005). "Mixing Virtual and Real Scenes in the Site of Ancient Pompeii," Journal of Computer Animation and Virtual Worlds, 16(1), 11-24.
[32] Roehl, D. B. (1997). "Virtual archeology: Bring new life to ancient worlds," Innovation, 28-35.
[33] Ryu, S. (2005). "Virtual Puppetry and The Process of Ritual," Computers and Composition (C&C), Elsevier, 2005.
[34] Ryu, S.; Faralli, S.; Bottoni, P.; and Labella, A. (2008). "From Traditional to Virtual Interactive Puppetry: A Comprehensive Approach," Proceedings of ISEA 2008, 14th International Symposium on Electronic Arts, Singapore, July 25-August 3, 2008.
[35] Sanders, D. and Gay, E. (1997). "Exploring the Past," Hypernexus 7(3), 29-31.
[36] Swartout, W.; Gratch, J.; Hill, R.; Hovy, E.; Marsella, S.; Rickel, J.; and Traum, D. (2006). "Toward Virtual Humans," AI Magazine, 27(1).
[37] Trowbridge, S. and Stapleton, C. (2009).
“Melting the Boundaries Between Fantasy and Reality,” Computer, v. 42, n. 7, 57-62, July 2009, doi:10.1109/MC.2009.228. [38] Ulicny, B. and Thalmann, D. (2002). Crowd Simulation for Virtual Heritage. First International Workshop on 3D Virtual Heritage, Geneva. [39] VAST (2009). VAST International Symposium on Virtual Reality, Archaeology and Cultural Heritage. Malta. [40] VSMM (2009). Conference on Virtual Systems and MultiMedia (Dedicated to Digital Heritage). [41] Ward, W. L. (1957). Playmaking with Children from Kindergarten through Junior High School, New York, NY: Appleton-Century-Crofts. [42] Wilhelm, J. D. (2002). Action Strategies for Deepening Comprehension, New York, NY: Scholastic Professional Books. [43] Wirth, J.; Norris, A. E.; Mapes, D. P.; Ingraham, K. E.; and Moshell, J. M. (2011). “Interactive Performance: Dramatic Improvisation in a Mixed Reality Environment for Learning,” HCI (14): 110-118. [44] Nambiar, A. (2011). Sound Spatialization for the Egyptian Oracle, Master's Thesis for a Degree in Professional Studies, Department of Digital Media, Northeastern University. Acknowledgements Donald Sanders, Ph.D., is an archaeologist, architect, and architectural historian who has been developing virtual worlds for cultural heritage for over 20 years. As the principal investigator of the Vari House project, he set all priorities and determined requirements. He also directly supervised the artwork for Vari House itself. Jeffrey Jacobson, Ph.D., is an educational researcher specializing in visually immersive virtual reality for education. He developed the interaction strategy for the Vari House, which is based on the Egyptian Oracle, also his work. He supervised the development of the digital puppet. Geoff Kornfeld did all of the artwork for Vari House itself and for the farm family members, except for the avatar.
Tatania Roykova created Orestes, the avatar/puppet, modeling his body and look around the bones of the priest from the Egyptian Oracle project. She also improved the animations, which were also carried over from the Oracle project. Jonathan Hawkins composed and recorded all the music and sound effects. Cody Johnson programmed the waypoints and the freewalk and greatly refined the intelligent following. He fixed many bugs and generally upgraded Vari House to a professional-level application. Siddhesh Pandit imported and refined the new animations and models and programmed the first version of the intelligent following. iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org OUTLIER Human Identity in the New Digital Age: The Distributed Self and the Identity Mapping Project Authors: Richard L. Gilbert, PhD 1,2,3 Affiliations: 1 Professor of Psychology, Loyola Marymount University, Los Angeles and Director of The P.R.O.S.E. (Psychological Research on Synthetic Environments) Project. 2 Co-Chair, The Psychology of Immersive Environments Technology Working Group (PIE.TWG), Immersive Education Initiative. 3 Chair, Western USA Chapter of Immersive Education (iED West) * Correspondence to: richard.gilbert@lmu.edu. Abstract: The advent of cloud computing and the proliferation of digital platforms, including immersive digital platforms, have begun to reshape human identity (defined as a person’s conception and expression of his or her individuality).
Just as modernist conceptions of the self as stable, unique, and internal (“The Psychological Self”) were displaced by postmodern conceptions of identity as fragmented and constantly shifting (“The Multiple Self”), the post-millennial rise of a multi-platform and 3D digital environment is ushering in a new stage of identity called “The Distributed Self.” In this conception, consciousness and aspects of the self are increasingly externalized and distributed into both 2D and 3D digital personas (e.g. Facebook, Twitter, 3D virtual worlds) reflecting any number or combination of now malleable traits of race, gender, age, style, body type, personality, and physical health. Within this new model, the source of identity remains internal and embodied, but the expression or enactment of this consciousness becomes increasingly external, disembodied, and distributed on demand. In this way, the operation of The Distributed Self is analogous to that of “cloud computing,” where digital resources are stored on the Internet (analogous to our central consciousness) and distributed on demand to multiple digital platforms and devices (similar to the distribution of multiple personas across a range of 2D and 3D environments). As multiple 2D and 3D digital platforms continue to proliferate, creating and coordinating a diverse identity system involving a physical self and multiple online identities across a variety of 2D and 3D virtual platforms will increasingly become a normative process in human development and personality. The newly formed Identity Mapping Project at Loyola Marymount University has begun to gather descriptive data on global patterns of distributed identity. One Sentence Summary: This work considers how the advent of cloud computing and the proliferation of digital platforms, including immersive digital platforms, is changing the nature of human identity.
Introduction The history of human identity (defined as a person’s conception and expression of his or her individuality) is typically divided by psychologists, intellectual historians, and philosophers of mind into three stages: the “Pre-Modern Stage,” the “Modern Stage,” and the “Post-Modern Stage.” Each stage is closely tied to revolutionary developments in culture and technology. After reviewing these stages and their associated cultural and technological context, the current work proposes that the rise of 3D virtual worlds and multiple 2D digital platforms, along with the development of cloud computing, is ushering in a fourth form of human identity called “The Distributed Self.” Once again, our conception and experience of the self is undergoing radical change. Conceptions of Human Identity: A Brief History During the Pre-Modern Stage, a person’s identity (The Social Self) was based upon his or her roles and standing in the social order rather than on a unique organization of behavior and internal experience. In this era, a person was a peasant, merchant, landed gentry, or a lord; he was a farmer, a blacksmith, a soldier, or a knight – not merely someone who happened to occupy one of these occupations and social statuses. Moreover, this defined place in the structuralized whole was fixed and unquestioned. As depicted in Figure 1, the socially embedded self is described, not by a unique and distinctive pattern of feelings and thoughts, but by a set of status identifiers that reflect the person’s roles or position within the social order, including his or her birth order, child or parental status, generational group, and function or service within the community. Figure 1.
The Pre-Modern Social Self The Pre-Modern conception of identity as external (outside of the individual) and contextual (rooted in the individual’s place in the social order) gave way to a “Modernist” view of identity as personal, unique, and internal (The Psychological Self) following major historical developments that occurred in the 17th through 19th centuries. These included the Enlightenment (with its emphasis on internal processes of reason and cognition) and the rise of Industrial Capitalism (with its emphasis on private property, individual rights, and its fluid social and occupational system that enabled greater personal change relative to the old feudal order). In the modernist conception, identity is viewed to be:
• internally located
• unique
• divided between conscious and unconscious awareness
• largely continuous but subject to change
• consistent with developmental experience
• unitary or singular in its organization
• physically embodied
The foundational logic of modernist conceptions of psychological identity is closely aligned with the natural science fields of geology and archeology. An individual’s developmental experiences are presumed to leave unconscious byproducts (however they are conceived) in the interior of the person, analogous to fossils deposited within layers of sedimentary rock. Later on, through the painstaking work of psychotherapy, it is assumed that these hidden elements can be brought to light and the history and organization of the personality can be objectively described, much in the way an archeologist works to unearth fossils in order to reconstruct the history and development of a physical region. Perhaps owing to this analogy, Freud, the author of the original modernist conception of human identity, considered himself to be an archeologist of the mind.
This close relationship between developments in geology and archeology and modernist conceptions of a stratified mind containing imprints of the past raises a point that is relevant to all perspectives on human identity: Psychological conceptions of the self are strongly influenced by, and often directly modeled from, prevailing perspectives in the natural sciences and the operation of dominant forms of technology. Figure 2 below depicts the modernist conception of human identity. Figure 2. The Psychological Self of Modernity The classic theories of personality all reflect these modernist assumptions regarding the nature of identity and the self. Each assumes that developmental experiences interact with inborn tendencies to create a relatively stable internal organization that is unique to the individual and subject to accurate description. Where they differ is in their specification of the nature and timing of the formative developmental experiences and especially the type of internal byproduct that is the psychological legacy of these experiences. For example, Freud believed that the primary internal consequence of external experience was a set of resolutions of conflicts between biologically-based longings and the demands of culture (e.g. the Oedipal-Electra conflict) [1], whereas Erikson focused on unconscious ratios of competing attitudes about the self and the world (e.g., Trust vs. Mistrust; Autonomy vs. Shame and Doubt, etc.) [2], and theorists working within Object Relations and Self-Psychology [3, 4, 5] emphasized the formation of internal models of self, other, and “self-in-relation-to-other,” as the essential byproducts of development.
Outside of the psychodynamic tradition, the more empirically oriented models of trait theory [6, 7] and cognitive social learning theory [8] proposed that the primary internal consequences of social experience involved either a collection of stable psychological attributes or traits, cognitive schema about the self and the world, or entrenched patterns of behavior. In the latter half of the 20th century dramatic advances in transportation and telecommunications turned the world into a global village where individuals were constantly exposed to new and different perspectives, narratives, styles, and psychological realities. The sheer volume and diversity of these voices and perspectives challenged the very notion of objective truth. How could one believe in the truth of any position, including propositions regarding human identity, when a range of counterpositions was constantly available? In this broad context of intellectual doubt and uncertainty, modernist conceptions of an essential, singular, objectively describable self came under attack by post-modern theorists who argued that there is no stable, consistent, coherent individual identity [9, 10]. Instead they maintained that human identity is fragmented, multiple, and constantly shifting (The Multiple Self), no less than the chaotic and contradictory mix of voices that reverberate through the external world. In essence the multiple self represents a qualitative shift in how human identity is viewed because unlike prior conceptions it does not assume the existence of any structure that constrains the range of self-expression – either an external structure like the fixed social order of pre-modernity, or an internal, psychological structure at the heart of modern conceptions of personality. Just as modernist conceptions of the self are closely aligned with the fields of geology and archeology, the post-modern, multiple self can be understood with reference to developments in the area of computer science [11]. 
Like a windowed computer operating system that serves as the executive controller for many functionally different applications (only one of which is active and displayed on the screen at any time, while many others are available but in an inactive or latent position), the self is conceived as an entity capable of “maximizing” or “minimizing” multiple aspects of identity according to personal desires and the demands of a particular context without regard to whether they form a consistent, coherent structure. Figure 3 depicts the Multiple Self as an operating system managing a shifting collection of active and waiting processes. Figure 3. The Post-Modern Multiple Self Human Identity in the New Digital Age: The Rise of the Distributed Self The Distributed Self: An Overview The rise of 3D virtual worlds and the introduction of avatar-mediated forms of expression and interaction have the potential to once again reshape humanity’s conception and experience of the self and usher in a fourth stage of identity, one which can be termed “The Distributed Self” [12]. In this conception, consciousness and aspects of the self will be increasingly externalized and distributed into 3-dimensional digital forms (i.e. avatars) reflecting any number of combinations of age, gender, body type, race, ethnicity, style, personality, and physical health. Several studies have shown, for instance, that participants in 3D virtual environments such as Second Life often create avatars that, relative to their physical self, are younger, have a better body or physical appearance, and are ascribed a more positive or idealized personality [12, 13, 14]. Less frequently, they choose avatars with a different gender, ethnicity, or skin color [15, 16]. Moreover, it is estimated that among the overall user base of Second Life, 18% have a single “alt” (i.e.
a secondary or alternative avatar) in addition to their primary avatar; 32% operate two or more alts; and that in approximately 70% of these cases one function of the alt or alts is to experiment with different aspects of identity or personality [12]. Taken as a whole, these data indicate that about half of the active users of Second Life are coordinating a multiple personality system consisting of a physical self plus two virtual identities (a primary avatar and one alt) and about a third are coordinating an identity system involving 4 or more components (a physical self and three or more virtual identities). As depicted in Figure 4, when the allocation of consciousness to non-immersive digital forms such as email, Facebook, Twitter, or LinkedIn is added to the physical and 3D virtual components of the overall identity system, the structure of the self becomes more like an organizational flowchart than the singular entity of modernist conceptions, or the diverse, but device-constrained, model of post-modernism. Figure 4. The Distributed Self Whereas the windowed operating system serves as the technological analog to the post-modern conception of The Multiple Self, the rise of “cloud computing” (i.e., the virtualization of computational resources on the Internet so that they can be allocated, replicated, or distributed on demand across multiple platforms and devices) forms the technical basis for the new, distributed conception of identity and self. Within this new model, the source of consciousness remains internal and embodied (in the “cloud” of consciousness), but the expression or enactment of this consciousness becomes increasingly external, disembodied, and distributed on demand to multiple digital forms. Reflecting on this trend and growth projections for 3D virtual worlds, Gilbert, Foss, & Murphy [12 p.
232] note: “As 3D virtual worlds and the global population of avatars continue to grow, creating and coordinating a system composed of a physical self and a diverse set of online identities will increasingly become a normative process in human development. Individuals will manage their multiple personality orders in a manner akin to a choreographer managing a company of dancers or a conductor leading an orchestra and the operation of personality will take on a quality of performance art.” [12]. Next Steps: The Identity Mapping Project (IMP) and Acquiring Descriptive Data Having set forth the concept of The Distributed Self, the next step will be to acquire descriptive data through a process that has been named the “Identity Mapping Project.” The project will involve the additional collaboration of LMU Computer Science professors John Dionisio and Philip Dorin. Currently, the web domain “mapmyidentity.com” has been secured, and an online survey is being built to gather data about participants’ 1) Physical Self (e.g. their age, sex, education, and country of residence) and 2) their presence in each of 6 digital domains: Email, Blogs, On-Line Forums, Social Networks, Digital Gaming, and 3D Virtual Worlds. After participants complete the survey, a graphical representation of their current identity will automatically be generated that can be downloaded for free. The hope and intention of the IMP is to gather a large number of identity maps from a global pool of participants in order to gain a better understanding of how human consciousness is being distributed and coordinated across multiple digital platforms. References: 1. Freud, S. (1962). Three Essays on the Theory of Sexuality, trans. James Strachey. New York: Basic Books. 2. Erikson, E. (1950). Childhood and Society. New York: Norton. Revised paperback edition, 1963. 3. Bowlby, J. (1982). Attachment and loss: Retrospect and prospect.
American Journal of Orthopsychiatry, 52 (4), 664-678. 4. Mahler, M., Pine, F., & Bergman, A. (1975). The psychological birth of the human infant: Symbiosis and individuation. New York: Basic Books. 5. Kohut, H. (1977). The restoration of the self. New York: International Universities Press. 6. Allport, G. (1961). Pattern and Growth in Personality. Harcourt College Publishing. 7. Eysenck, H. (1991). Dimensions of personality: 16, 5 or 3? Criteria for a taxonomic paradigm. Personality and Individual Differences, 12, 773–790. 8. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall series in social learning theory. Englewood Cliffs, NJ: Prentice-Hall, Inc. 9. Gergen, K. (1991). The Saturated Self: Dilemmas of identity in contemporary life. New York: Basic Books. 10. Lifton, R. (1993). The Protean Self: Human resilience in an age of fragmentation. New York: Basic Books. 11. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Touchstone Press. 12. Gilbert, R., Foss, J., & Murphy, N. (2011). Multiple Personality Order: Physical and personality characteristics of the self, primary avatar, and alt. In A. Peachey & M. Childs, Eds., Reinventing Ourselves: Contemporary Concepts of Identity in Online Virtual Worlds. London: Springer Publishing. 13. Au, W. J. (2007, April 30). Surveying Second Life [Web log message]. Retrieved from http://nwn.blogs.com/nwn/2007/04/second_life_dem.html 14. Bessiere, K., Seay, A.F., & Kiesler, S. (2007). The ideal self: Identity exploration in World of Warcraft. Cyberpsychology and Behaviour, 10, 530-535. 15. Wallace, P., & Marryott, J. (2009). The impact of avatar self-representation on collaboration in virtual worlds. Innovate, 5. 16. Ducheneaut, N., Wen, M., Yee, N., & Wadley, G. (2009). Body and mind: A study of avatar personalization in three virtual worlds. CHI, April, 4-9.
iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org OUTLIER iAchieve, Empowering Students through Interactive Educational Data Exploration Authors: Josephine Tsay1, Zifeng Tian1, Ningyu Mao1, Jennifer Sheu1, Weichuan Tian1, Mohit Sadhu1, Yang Shi1 Affiliations: 1 Carnegie Mellon University’s Entertainment Technology Center. *Correspondence to: josephinetsay@cmu.edu, ztian@cmu.edu, nmao@cmu.edu, jennifersheu@cmu.edu, weichuat@andrew.cmu.edu, mohitmax@cmu.edu, yangshi@andrew.cmu.edu Abstract: iAchieve is an iPad application developed for students to be able to access their own educational data, expressed through a tree metaphor. Students can browse their own educational data by subject and by grade. The student's information is pulled from a data server containing progress in subjects covered by the United States’ Common Core State Standards. This information is customized per student and includes test scores, as well as game scores. By exposing students to their own educational data, we empower them to take control of their own education. Our application is not meant to be stand-alone; it aims to facilitate a conversation between student and parent and/or educator about current educational testing practices and focus, as related to each individual student. Instead of the school system holding the information or report card, it sits with the student throughout his or her K-12 learning development. One Sentence Summary: iAchieve empowers students to take control of their own educational data through an iPad application that is plugged into scores or other learning data generated by the individual student and/or the student’s academic record and activities; data is visually expressed by subject, as well as grade level, through an interactive tree metaphor.
Introduction By exposing students to their own educational data, iAchieve empowers students to take control of their own educational progress. This information, traditionally expressed in the form of a report card or permanent record, is no longer withheld from the student, and instead follows the student through his or her own educational journey. In support of formative assessment in K-12 United States education, iAchieve enhances progress towards a student’s learning goal by facilitating student record-keeping [1]. Educational institutes, parental and educational figures, as well as students themselves, can input data into the iAchieve application to track educational progress towards a desired learning output. The goal is to move away from expressing isolated test scores the way current student dashboards and report cards represent educational data. Instead of presenting data in the form of pure numbers and missed marks, the application engages students through visuals by bridging the gap between the functional and aesthetic aspects of data visualization. Students interact with the educational data they have generated over time towards an understanding or mastery of a particular subject or subjects. The iAchieve iPad application allows for direct student and parental access to and control of the student’s educational data. Instead of the information sitting on the school’s servers, through the flexibility of the data structure, a student’s data is pulled, stored, and expressed locally on the student’s device. By presenting data through an interactive visual scene, iAchieve helps shorten the feedback loop between a student and his or her own educational mastery. To achieve this proof of concept, the application currently targets the middle school student demographic, ages 11-13, or 6th to 8th grade. The application uses the Common Core State Standards [2] and educational game data to create three student scenarios: low, general, and high academic achievers.
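The three scenarios above draw on per-student records pulled from the data server. As an illustrative sketch only (written in Python rather than the app's Objective-C; all field names and sample values below are hypothetical, not taken from the actual server schema), one plausible shape for such a record is:

```python
# Hypothetical per-student record of the kind iAchieve pulls from its server:
# progress by Common Core subject and grade, holding both test and game scores.
from dataclasses import dataclass, field

@dataclass
class SubjectRecord:
    subject: str                                     # e.g. "Math"
    grade: int                                       # 6-8 in the current build
    test_scores: dict = field(default_factory=dict)  # sub-subject -> score
    game_scores: dict = field(default_factory=dict)  # game id -> score

@dataclass
class Student:
    name: str
    records: list = field(default_factory=list)

    def records_for(self, subject, grade):
        """Return this student's records for one subject at one grade level."""
        return [r for r in self.records
                if r.subject == subject and r.grade == grade]

# A "general achiever" scenario, named after one of the paper's fictional students
mary = Student("Mary", [SubjectRecord("Math", 6, test_scores={"Geometry": 88})])
print(len(mary.records_for("Math", 6)))  # 1
```

Each fictional student (Mary, Sean, Rocky) would simply be a `Student` with different score distributions, which is what drives the low/general/high tree appearances.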
Development time for this iPad application was 4 months, or 16 weeks, from conception to its current state. Two months’ dedication to the tree design ensures the clarity of the visualization for our initial target of 6th to 8th grade students. Scope The iPad application, developed with Xcode, the cocos2d engine, and the Objective-C programming language, uses customized individual student data to produce varied visualizations and interactions associated with a tree metaphor, based on the following representations:
• Each Tree represents a specific subject, and contains educational data for that subject for grades 6-8 in its current state.
• The bird’s eye view of the data consists of multiple trees with different subjects. When the student clicks on the bird icon, it reveals a dashboard view, or whole world view, of tree subjects and grade levels representing the capacity of the application to contain 12 grades and ‘n’ subjects.
• Branches represent sub-subjects. For instance, if Math is a subject tree, then Geometry is a branch.
• Flowers on the trees represent progress for specific subjects. Flowers in this case refer to objects that appear on a tree based on educational data. In a scores-driven scenario, a colored flower represents scores of over 85%.
• The style is based on a 2.5D [3] style, with layers to represent the current, or most recent, data in the foreground. Using the swipe gesture, students navigate through different rows of trees associated with other grade levels.
The tree can support different types of data and will look different accordingly. For example, an English tree with game data only, as compared to an English tree with school data only, as juxtaposed with an English tree with game and school data, all look different. A. Tree Logic Each subject tree is procedurally generated based on specific data sets generated by the student, as seen in Figures 1-2.
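The representations above amount to a simple mapping from score data to tree elements. A minimal sketch follows (a Python stand-in for the app's Objective-C; only the over-85% flower rule and the branch-per-sub-subject rule come from the text, while the data shapes and function names are assumed):

```python
# Minimal sketch of the tree grammar: one tree per subject, one branch per
# sub-subject, and a flower on a branch when its score exceeds 85%
# (the scores-driven rule described above). Data shapes are hypothetical.
FLOWER_THRESHOLD = 85

def build_tree(subject, scores_by_subsubject):
    """Map a {sub-subject: score} dict to a tree description."""
    branches = []
    for sub, score in scores_by_subsubject.items():
        branches.append({
            "sub_subject": sub,                      # e.g. Geometry under Math
            "flower": score > FLOWER_THRESHOLD,      # colored flower if > 85%
        })
    return {"subject": subject, "branches": branches}

tree = build_tree("Math", {"Geometry": 92, "Statistics": 71})
flowered = [b["sub_subject"] for b in tree["branches"] if b["flower"]]
print(flowered)  # ['Geometry']
```

In the application itself these branch descriptions would then be rendered from sprite-sheet pieces rather than printed.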
Pieces of the tree are provided through graphical sprite sheets [4]. The tree grammar follows consistent rules involving tree trunks, splits, branches, and canopies [5]. Figure 1. First generation tree logic based on different scores. Figure 2. First generation tree logic based on different sprite sheets. A specific graphic style enhances the visualization from the simple first generation tree logic. Figure 3 shows how the procedurally generated graphical tree utilizes data input in its formation. Figure 3. Tree canopy and branch length indicates how long it took to complete a particular sub-subject. Tree trunk indicates current month in a 12-month school year. B. Tree as Visual Representations A key lesson in the development of a tree data visualization centers on whether this type of data can be a truly interactive experience, with no shiny buttons required, with every part of the tree used to represent information [6]. Figure 4 presents a first attempt at tree data visualization through a cherry blossom, where the branches indicate a sub-subject, and additional branches from the split are sections of that particular sub-subject. Flowers, when tapped, display information related to a student’s progress or mastery in a specific sub-subject (for example, a competency score, within the context of this learning branch). The entire tree is interactive and contains educational data. Figure 4. First generation tree visualization based on tree logic. The next iteration simplifies the interaction and moves from a literal tree representation to a figurative one. Rather than simply translating data, the subject tree encapsulates a student’s academic history at a moment in time.
To accomplish this, reference is made to key inspirations in timeline data visualization, such as the Tree View of European History: “A novel design is used in the Geschichtsbaum Europa (author unknown) to show a thematic view of the entire history of Europe in a tree structure. With the base of the trunk at 0 A.D., branches for different countries Germany, France, Spain, ... and domains of thought (politics, church, culture) grow upward and outward. Dated events appear as the leaves of the tree. Events B.C. are shown under the roots of the tree” [7]. To bridge the gap between the functional and aesthetic goals behind expressing educational data for middle school students, Figure 5 provides further clarity of the visualization based on the lessons learned from previous tree generations. Figure 5. First zoom level of the application showing subjects by trees, progress by flowers/fruit, grade in layers, and navigation via wooden tags in the top left-hand corner. C. User Interaction A student interacting with a tree in the iPad application environment navigates via built-in gestures available on an iPad device. Because the iAchieve application allows free-form exploration at each zoom level, the gestures implemented must be as intuitive as possible.
1. Swipe – Directional arrows at the top and bottom of the iPad screen suggest the swipe gesture, allowing students to move between layers of grade levels. Each row of subject trees belongs to a specific grade level.
2. Tap – Similar to a computer mouse click for a website, the tap gesture indicates selection of a particular tree, branch, or flower.
3. Pinch – The pinch gesture allows a student to zoom in and out of a given area.
4. Shake – An additional feature allows students to log out by shaking the iPad. Shaking the iPad extends the tree metaphor by alluding to shaking a tree and having petals or leaves fall away.
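The four gestures reduce to a dispatch from gesture to navigation action. A hypothetical sketch (Python rather than iOS gesture-recognizer code; the state fields and the single-direction swipe are illustrative simplifications, since in the app a swipe moves between grade layers in either direction):

```python
# Illustrative dispatch table: the four gestures above mapped to the
# navigation actions they trigger. All names here are hypothetical.
def handle_gesture(gesture, state):
    actions = {
        "swipe": lambda s: {**s, "grade": s["grade"] + 1},  # next grade row
        "tap":   lambda s: {**s, "selected": "flower"},     # select an element
        "pinch": lambda s: {**s, "zoom": s["zoom"] + 1},    # change zoom level
        "shake": lambda s: {**s, "logged_in": False},       # log out
    }
    return actions[gesture](state)

state = {"grade": 6, "zoom": 0, "selected": None, "logged_in": True}
state = handle_gesture("swipe", state)   # move up one grade layer
state = handle_gesture("shake", state)   # shake the iPad to log out
print(state["grade"], state["logged_in"])  # 7 False
```

On iOS the same mapping would be expressed with `UISwipeGestureRecognizer`, `UITapGestureRecognizer`, and `UIPinchGestureRecognizer` targets rather than a dictionary.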
‘Media S1’ from the ‘Supplementary Materials’ shows an example of the interactions in the iPad through our project team website. D. Tree Data Flexibility of the iAchieve application tool allows for data from a variety of sources to be visualized. 1. Data Structure Starting with hypothetical test score data, based on the Common Core State Standards, the Math and English subject trees currently express progress from Mary, Sean, and Rocky, who are fictional students of varying competencies. To view the example data set, please refer to ‘External Database S1’ in the ‘Supplementary Materials’ section at the end of the manuscript. Rather than hard-coding individual students’ data, the application includes a database structure that supports connection to an outside database server. To ensure scalability for ‘n’ number of students and ‘n’ subjects, the database structure pulls student data only when specifically called. In addition, the application supports visualization based on existing data [8]. 2. Data Dimensions Each zoom level of our application allows the student to access a particular set of data dimensions. Students can access grade level, subject, sub-subject, time it took to complete a sub-subject or section, current month of a 12-month school year, and when the student was first exposed to a subject, whether it was through taking a test or playing a game. Figure 6 gives an example of the zoom levels. Additionally, students may upload personal artifacts to a subject tree they have created. Figure 6. Example zoom levels. E. User Showcase Between February and May 2013, the team showcased the application on 10 separate occasions to solicit feedback and observe user interaction. The showcases included not only our target demographic but also our peers, faculty advisors, client sponsors, and EA employees.
During the showcases in April 2013, all users preferred our educational data visualization over a traditional dashboard [9] or spreadsheet, such as the one shown in Figure 7. Figure 7. Example of an existing dashboard. Results Students find the application “organized” and “pretty” [10]. While all students understood the tree, grade, and subject metaphor, they asked for more labels. At the time of this manuscript, labels stay on until the user taps a snail in the upper right-hand corner to turn off the help labels. We also included a tutorial section, as indicated in Figure 8, at the application startup to help orient our users and let them know that the tree is built based on their educational data. Figure 8. iAchieve tutorial. Based on these responses, the next generation of the tree becomes iconic and less realistic. Even students who do not own an iPad do not have trouble picking up the gestures through trial and error. Interpretations During a showcase of the iAchieve application with Girl Scouts touring the Electronic Arts headquarters campus, we take note of students’ reactions when logging into the application under different usernames associated with low, general, or high achieving data. Upon login, students do not notice a difference with their own trees until they look over at what is on their neighbors’ iPads. The interpretation is that our application does not give students direct negative feedback. Rather, our visualization inspires curiosity: what do these trees mean? Why are there flowers on some trees and not others? Students are not immediately discouraged by the view of their own trees, regardless of actual academic progress. Yet, students have the ability to share as much or as little as they want to with their peers. Once the peer comparison aspect is introduced, students start wondering how they can affect their own trees.
Many of the students we showed the application to also initially thought it was a game. We interpret this as a success, because they were pulled into the application without knowing exactly what it was. Parents also asked whether the application was available on the App Store, and shared their own stories of frustration at not being able to own the data from parent-teacher conferences.

Conclusions

The students and parents we sampled, particularly the parents, are ready for an engaging way to own students’ educational data. The grade range of our demographic serves as a transitional period between the student being entirely dependent on parental and educational guidance in elementary school and the student studying largely independently or with peers in high school. In addition to games and learning applications, iPad development can support data visualization efforts. Students and parents are equipped and ready to use applications that are not stand-alone and can be plugged into other systems, while taking advantage of the interaction language already established by gesture-based touch devices. To develop for smartphones and touch devices in an unconventional way, it is essential to keep in mind the existing associations users may have. Due to its screen size and ease of transport, the iPad is ideal for this type of project, where students can carry their educational data with them and explore while on the go.

References and Notes:

20. Catherine Garrison and Michael Ehringhaus. Formative and Summative Assessments in the Classroom. (2007). http://www.amle.org/portals/0/pdf/publications/Web_Exclusive/Formative_Summative_Assessment.pdf.

21. Common Core Standards. (2012). http://www.corestandards.org/.

22. Note: Definition of ‘2.5D’ as used in the video game industry, http://en.wikipedia.org/wiki/2.5D.

23. 
Note: Definition of a sprite sheet, a method commonly used in the video game industry, http://en.wikipedia.org/wiki/Sprite_(computer_graphics).

24. Hugh Smith. Procedural Trees and Procedural Fire in a Virtual World. (2012). http://software.intel.com/en-us/articles/procedural-trees-and-procedural-fire-in-a-virtual-world.

25. Edward Tufte. “Presenting Data and Information with Edward Tufte.” Phoenix Sheraton Crescent Hotel, Phoenix, Arizona. 1 February 2013. One-day workshop.

26. Michael Friendly. “Geschictesbaum Europa – Tree view of European history.” Timelines and Visual Histories. York University. 2007. http://www.datavis.ca/gallery/timelines.php.

27. Note: Anonymous educational game data provided by Pearson, for the 7th grade level.

28. Note: Students were shown our application and one pulled from an existing educational company. Please also reference Figure 7.

29. Note: Feedback based on a showcase to middle school boys touring Electronic Arts on March 19, 2013.

Acknowledgments: Faculty sponsors include Jiyoung Lee and Carl Rosendahl of Carnegie Mellon University’s Entertainment Technology Center. Academic client sponsors include GlassLab, the Games Learning and Assessment Lab, from the Institute of Play, in partnership with Electronic Arts, Spring 2013. Games data provided by Pearson.

Supplementary Materials:

Materials and Methods: Software – Xcode, Cocos2D engine. Peripherals (other than the usual keyboard/mouse) – 4 Mac minis, 4 PC/Mac switchers, an iPad 1 and an iPad 4. Version control – SVN.

References (01–10):

Garrison, C., & Ehringhaus, M. (2007). Formative and summative assessments in the classroom. Retrieved from http://www.amle.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx

Smith, Hugh. (2012). Procedural Trees and Procedural Fire in a Virtual World. Retrieved from http://software.intel.com/en-us/articles/procedural-trees-and-procedural-fire-in-a-virtual-world.

Tufte, Edward. 
“Presenting Data and Information with Edward Tufte.” Phoenix Sheraton Crescent Hotel, Phoenix, Arizona. 1 February 2013. One-day workshop.

Friendly, Michael. “Geschictesbaum Europa – Tree view of European history.” Timelines and Visual Histories. York University. 2007. http://www.datavis.ca/gallery/timelines.php.

Media S1: For a video demonstration of the application, feel free to visit the iAchieve website at http://www.etc.cmu.edu/projects/iachieve/?page_id=191.

External Databases S1 (excerpt of data):

All rows in this excerpt are for StudentID 52734 (Sean), Subject Math. The full column set is: StudentID, Name, Grade, Subject, SubSubject, SubSubjectLabel, Chapter, Test1–Test5, StartDay, EndDay. Test scores below are listed in the order recorded; a Test5 column and a few additional score values lost their row alignment in extraction and are omitted here.

Grade 6:
RP (Ratios)        6.RP.A.1  tests 60, 50, 50, 80   days 1–10
RP (Ratios)        6.RP.A.2  tests 90, 70, 80, 80   days 10–18
RP (Ratios)        6.RP.A.3  tests 80, 90, 90       days 18–23
NS (Number System) 6.NS.A.1  tests 80, 100          days 45–49
NS (Number System) 6.NS.B.2  tests 80, 70           days 50–61
NS (Number System) 6.NS.B.3  tests 60, 90           days 62–65
NS (Number System) 6.NS.B.4  tests 100, 100         days 66–68
NS (Number System) 6.NS.C.5  tests 80, 50, 70       days 69–78
NS (Number System) 6.NS.C.6  tests 70, 90, 80       days 79–85
NS (Number System) 6.NS.C.7  tests 67, 73, 83       days 104–115
NS (Number System) 6.NS.C.8  tests 90               days 130–132
EE (Equations)     6.EE.A.1  tests 50, 60, 70       days 133–139
EE (Equations)     6.EE.A.2  tests 70, 70, 70       days 170–180
EE (Equations)     6.EE.A.3  tests 90, 90, 100      days 201–207
EE (Equations)     6.EE.A.4  tests 100, 100         days 208–213
EE (Equations)     6.EE.B.5  tests 70, 100          days 214–217
EE (Equations)     6.EE.B.6  tests 80, 90, 90       days 218–227
EE (Equations)     6.EE.B.7  tests 80, 80, 100      days 228–231
EE (Equations)     6.EE.B.8  tests 84, 67, 74       days 230–240
EE (Equations)     6.EE.C.9  tests 87, 89           days 241–248
G (Geometry)       6.G.A.1   tests 100              days 249–252
G (Geometry)       6.G.A.2   tests 80, 100          days 253–258
G (Geometry)       6.G.A.3   tests 90, 90, 100      days 259–267
G (Geometry)       6.G.A.4   tests 80, 70, 90       days 268–274
SP (Statistics)    6.SP.A.1  tests 50, 60, 60       days 275–283
SP (Statistics)    6.SP.A.2  tests 60, 70, 80       days 284–289
SP (Statistics)    6.SP.A.3  tests 70, 70, 90       days 290–298
SP (Statistics)    6.SP.B.4  tests 70, 80, 90       days 299–308
SP (Statistics)    6.SP.B.5  tests 80               days 309–315

Grade 7:
RP (Ratios)        7.RP.A.1  tests 60, 50, 50       days 1–12
RP (Ratios)        7.RP.A.2  tests 80, 90, 90       days 13–20
RP (Ratios)        7.RP.A.3  tests 90, 70, 80, 80   days 42–56
GG (Glasslab Game) 7.GG.A.1  tests 93, 89, 80, 89   days 22–124
NS (Number System) 7.NS.A.1  tests 80, 100          days 57–60
NS (Number System) 7.NS.A.2  tests 90               days 116–121
NS (Number System) 7.NS.A.3  tests 70, 90, 80       days 121–128
EE (Equations)     7.EE.A.1  tests 50, 60, 70       days 129–139
EE (Equations)     7.EE.A.2  tests 70, 70, 70       days 139–146
EE (Equations)     7.EE.B.3  tests 90, 90, 100      days 147–156
EE (Equations)     7.EE.B.4  tests 70, 100          days 157–162
G (Geometry)       7.G.A.1   tests 100, 80, 80, 89  days 170–176

End of database excerpt. Contact the authors for the complete database (full data set).

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

POSTER

Mobile Assignment at the Museum: How a Location-based Assignment Can Enhance the Educational Experience

Authors: Amir Bar 1

Affiliations: 1 University of Houston

One Sentence Summary: An electronic location-based assignment at a museum of fine arts resulted in higher student engagement and performance.

Abstract: This presentation will discuss and demonstrate a pioneering mobile application envisioned and designed by a graduate student. The museum app provides a visual analysis assignment, as well as relevant information, to students who visit the Houston Museum of Fine Arts to fulfill an Art History course assignment. The museum app was developed last year and went through several improvement cycles.
It was finally used last summer in a large lecture class, where it created high engagement, high assignment participation, and better student performance. The mobile application is designed to provide several crucial elements. It gives students structure for a type of assignment that is new to them. It is an immediately accessible means to complete an assignment. Most importantly, while students do not have a professor physically present, they have access to one through a video showing the professor modeling the assignment with a painting near the target painting to be analyzed. Results show that the museum app helps students in the large lecture class (where it can be difficult to foster engagement) to be more engaged learners and to view the museum more positively, even as a destination they want to revisit.

Introduction

This project explores the effectiveness of using a location-based digital assignment for non-major undergraduate art students who were sent to the museum. This class was face-to-face, but location-based assignments could be used to enhance both face-to-face and online classes. While many courses are moving online, the significance of sending students to an actual museum is not fading. Student assignments and feedback on the museum experience show that using this app increased engagement with the assignment, the course, and the museum. Getting students who are not traditionally museumgoers to appreciate art in the way these students did creates a level of engagement that goes well beyond the stated objective of the class. Although this is only one project, the results show some promise for the potential of this approach.

The audience

The Visual Analysis assignment is given as part of a non-major art history class at the University of Houston. The students taking this class are usually sophomores who must take one art class as part of the core requirements for their degree.
The class is currently offered in two venues: one a completely online course, the other a face-to-face lecture (about 150-200 students per class). Many of the students who take this class have never visited the Museum of Fine Arts, Houston (MFAH) or any other fine arts museum.

The problem

The visual analysis assignment was designed to make students analyze the visual aspects of a painting by evaluating the many choices the artist made while creating it (i.e., color, line, shading, meaning, etc.). Until the development of the mobile assignment, the visual analysis assignment included a visit to the museum where students were encouraged to bring an essay question with them, take notes, and go home to write a 2-3 page essay. Students for whom this was their first museum visit found the assignment difficult because their essay was written at home from their notes, without access to the artwork itself.

The solution

The main goal of the mobile assignment was to bring the assignment to the museum “LIVE,” so students can complete it while standing in front of the painting. The mobile assignment allows students to complete the assignment at the museum using the free Wi-Fi connection or cellular access from their mobile device’s Internet provider. It also allows the instructor to deliver specific instructions on how to complete the assignment, as well as a sample video of the instructor modeling the analysis of a painting very near the target painting. The mobile assignment is thus a combination of assignment and mobile course material delivery. Students now complete the assignment at the museum and can enjoy the rest of the visit without the stress of needing to write an essay at home.
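As an illustration of what one item of such a location-based assignment could look like as structured data, here is a minimal Python sketch. The field names, sample prompt, and URLs are hypothetical; the deployed assignment was delivered as a Survey Monkey form rather than a custom data store.

```python
# Illustrative sketch of one scaffolded question paired with its model
# answer and helper video. All prompt text, field names, and URLs below
# are invented placeholders, not the actual assignment content.
question_item = {
    "number": 3,
    "prompt": "How does the artist use line to direct your eye?",  # invented example
    "model_answer_text": "In the nearby example painting, the diagonals ...",
    "model_answer_video_url": "https://example.org/model-answer-3",  # placeholder
    "stuck_help_video_url": "https://example.org/extra-help-3",      # placeholder
}

def scaffolding_channels(item):
    """List the channels through which this question offers its model answer
    (multiple means of representation, in the spirit of UDL principle I)."""
    channels = []
    if item.get("model_answer_text"):
        channels.append("text")
    if item.get("model_answer_video_url"):
        channels.append("video")
    return channels

print(scaffolding_channels(question_item))  # ['text', 'video']
```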
The design

The design of the mobile assignment has three main parts.

Part 1 provides two motivational videos with an opportunity for extra credit. The two videos contain segments from the audiobook Moon Palace (1). The book, by Paul Auster, tells the story of an older man who hires a younger man to help him write his life story. The older man, who used to be a painter, sends the younger man to the museum to conduct a visual analysis of a painting very similar to the analysis required in the UH art class. The use of a scene from a novel seems to draw the students into the process, judging from the references they make to it in their feedback. This more creative introduction appears to get students thinking about art better than instructions from their professor, the previous introduction to the assignment. In addition to watching the videos, students need to answer two questions about them in order to get the extra credit points (another aspect of increasing motivation).

Part 2 breaks the initial single essay question into 21 smaller questions. This approach was taken to make it easier for students to write their answers on a smartphone: typing an essay on a phone is obviously more complicated than writing 21 sentences or short paragraphs. Also, by breaking the essay question into subject-oriented questions, the student can focus on each aspect in turn.

Part 3 provides a “model” answer for each of the 21 questions, created using a similar, nearby painting as the example. An example answer is given for each question via text and video, so students can see both the depth and length of answer the professor expects. In addition, a short video with additional information is provided for students who get “stuck.” This approach was guided by UDL principle I [1] of offering information via multiple channels (visual, audio, text), which offers scaffolding for each question.
The entire assignment was developed on the Survey Monkey platform. This platform allows a variety of options, including the use of images and videos. In addition, it automatically adapts to the device the student uses, something a dedicated mobile app does not do. Survey Monkey also allows the results to be collected in an Excel file, which makes the grading process much more efficient.

The Results

The museum mobile assignment was tested as an extra credit opportunity during the Spring 2012 semester, and in Summer 2013 it was given for the first time as an official assignment for a grade. Sixty-six students (a smaller number because of summer school) took the class and completed the assignment. Students were given the option of taking the assignment as a paper-based assignment (writing the answers on paper and submitting them via the app at home) or accessing the mobile assignment via laptop, tablet computer, or smartphone. Thirty-nine students chose to use a laptop, three used a smartphone, one used a tablet computer, two used the printed assignment, and 21 did not answer that (optional) question on the survey.

Figure 1.

Students were also asked to give feedback on the assignment. The specific question was: “Please write us about any positive or negative experience you had with regard to this assignment.” The results: 69% of the students gave only positive feedback, 14% gave positive and negative feedback, 11% gave only negative feedback, and 6% skipped the question.

Figure 2.

From the students who gave negative or mixed feedback, 15 comments were collected.
Of these 15 comments, five concerned the Wi-Fi connectivity at the MFAH; five concerned not having a place to sit (the assignment takes more than an hour to complete); two were about the assignment design; one was about the difficulty level; and one was about the length of the assignment.

Figure 3.

The positive feedback included 44 comments demonstrating that the students enjoyed the technical aspect of the assignment as well as the design, which included the videos as an additional tool to help students. Students also seemed to enjoy the overall experience of visually analyzing an artwork. In addition, an interview with the grader for this class (a graduate student from the art department) revealed that grading the assignment took her five minutes per student, rather than 20 minutes for the essay assignment. She mentioned that the overall number of words was similar (and in some cases higher with the new assignment), but the fact that the answers were already organized in an Excel file, with each answer in its own column, made the process more efficient.

Figure 4.

Outcome

The visual analysis assignment was presented at a conference (Fostering Deep Learning) at the University of Houston, and as a result many professors showed interest in using location-based digital assignments for their courses. Today there are four different mobile assignments being used in five courses at the University of Houston. The courses are in the Art Department (undergraduate) and in the Human Development & Consumer Science Department (graduate). In addition, four more designs will be completed for the Fall 2013 semester.

References and Notes:

1. CAST. (2012). Universal design for learning guidelines version 2.0. 
Retrieved on February 10, 2013, from http://www.cast.org/publications/UDLguidelines/version1.html

iED 2013 PROCEEDINGS ISSN 2325-4041 ImmersiveEducation.org

POSTER

Serious Games: A Comprehensive Review of Learning Outcomes

Authors: Kathrine Petersen

Affiliations: 1 Center for Computer Assisted Learning (CoCo Lab), University of Sydney

*Correspondence to: Kathrine Petersen at kpet6924@uni.sydney.edu.au

Abstract: A comprehensive review of learning outcomes in the area of “serious games” research demonstrates that learning outcomes can be gained from the virtual experience. Games and simulations have shown positive learning outcomes for content knowledge, reasoning, and accuracy. However, these outcomes depend on how motivating and engaging the games are to students, on the application of successful design elements, and on applying the appropriate learning method to complement the “serious game.” It was found that specific descriptions, the development of in-game quests, prior game-play experience, the ability to engage with virtual peers and mentors, and the level of self-direction in learning played important roles in these successful learning outcomes.

One Sentence Summary: Overall, research into “serious games” for learning indicates that learning outcomes can be gained through the implementation of well-designed virtual environments that use curricular content and appropriate learning methods.

Overview

A review of recent epistemic studies of “serious games” shows gains in learning outcomes when games and simulations are used for learning content knowledge [10]; [24]; [9]; [16]; [1]; [20]. Several results also indicated that “significant gains” in learning outcomes can be obtained using “serious games” for learning [17]; [4]; [2]; [6].
Furthermore, some study results demonstrated significant gains in reasoning [18] and significant gains in accuracy [12]. However, Clark et al. [5] noted that significant gains existed only as long as a complete interface for learning was implemented. The motivation and engagement of students when learning with games and simulations was recorded as a positive result by many researchers [14]; [13]; [22]; [19]. Klopfer et al. [14] indicated that students engaged in self-directed learning, and Ketelhut et al. [13] noted that students' behavior improved while they engaged in learning with games and simulations. Furthermore, Squire [23] mentioned that games and simulations encouraged academic practices, and that underachieving students were motivated and enjoyed the experience. Similarly, students were able to develop new vocabularies and better understanding [22]. When interviewing students about their experience with games and simulations, Klopfer et al. [14] found that students rated the technology highly and felt that it positively impacted their learning. Likewise, Ketelhut et al. [13] and Rosenbaum et al. [19] observed that games and simulations used for learning were good for developing inquiry skills. Furthermore, Clark et al. [3] observed that the technology was able to create a real experience, and Dieterle [6] found that this ‘real’ experience helped students connect with and visualize the profession emulated within the learning space.

Serious Games: Epistemic Study Results

Greenfield et al. [8] studied university students in the U.S. and Rome to test real-world transfer of skills: after playing a video game for 2.5 hours, participants took a test measuring their ability to generalize and apply principles from demonstrated examples (e.g., images of components in an electronic circuit). Results showed that video game performance significantly correlated with improvement in scores on the electronic circuit test.
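A correlation of this kind is conventionally measured with Pearson's r. The sketch below illustrates the computation; the data points are invented for illustration and are not Greenfield et al.'s results.

```python
# Illustrative only: a Pearson correlation of the kind used to relate
# game performance to test-score improvement. The numbers are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

game_scores = [12, 30, 45, 51, 60]   # hypothetical video-game performance
test_gains  = [2, 5, 9, 8, 12]       # hypothetical circuit-test improvement
print(round(pearson(game_scores, test_gains), 2))
```

An r near +1 on data like this is what "significantly correlated" summarizes; the studies reviewed here additionally report a p-value for the correlation.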
Another study, conducted by McClean et al. [16] with 238 college students playing “Virtual Cell” and 273 students playing “Geography Explorer” (http://geographyexplorer.com/geography_links), found that students playing “Virtual Cell” showed 40% gains over lecture, and students playing “Geography Explorer” showed 15-40% gains over lecture.

Figure 1. Virtual Cell
Figure 2. Geography Explorer

Rosenbaum et al. [19] studied 21 urban high school students playing “Outbreak @ the Institute”, an augmented reality game in which players take on the roles of doctors, technicians, and public health experts trying to contain a disease outbreak. As part of the study, students interacted with virtual characters and employed virtual diagnostic tests and medicines while moving across the “real” university campus with handheld devices. Results from video, surveys, and interviews showed that students perceived the game as authentic, felt embodied in the game, engaged in the inquiry, and understood the dynamic nature of the model in the game.

Figure 3. Outbreak @ the Institute
Figure 4. Outbreak @ the Institute

In Ketelhut et al.'s [13] “River City” (http://muve.gse.harvard.edu/rivercityproject/) study, students collaboratively solved a 19th-century city's illness problems by interacting with each other, with digital artifacts in the game, with computer agents in the game, and with various data collection tools. Approximately 2000 adolescent students were tested; results showed that students learned biology content, that students and teachers were highly engaged, that student attendance improved, and that disruptive behavior dropped. Furthermore, it was found that students were building 21st-century skills in virtual communication and expression, and that using this type of technology in the classroom can facilitate good inquiry learning.
Another “River City” study, conducted by Dieterle [6] with 574 students who were required to complete the curriculum, demonstrated significant differences from pre-test to post-test in terms of Science Content Understanding (CONTENT) and Self-Efficacy in Scientific Inquiry (SEISI) (p < .01). Results showed that students who created and shared artifacts on the Internet were well suited to problem solving around disease transmission. Meanwhile, students who felt highly connected to the media, tools, and people they used for communication, expression, and problem solving in the curriculum were more likely to believe they could successfully complete the activities a scientist might engage in. Another “River City” study, conducted by Clark et al. [3], found that over 1000 of the participating students indicated that they felt like real scientists for the first time.

Figure 5. River City
Figure 6. River City

Neulight et al. [18] conducted a study involving 46 sixth-grade students from two separate classes, investigating their understanding of virtual infectious diseases from participatory simulations in the game-like world “Whyville” and its epidemic “Whypox” (http://i.whyville.net/smmk/virus/whypoxLab). Neulight et al. [18] found a significant shift in students’ responses between pre-test and post-test from pre-biological to biological explanations (t(44) = -3.496, p = .001), demonstrating that twice as many students reasoned about natural infectious disease with biological reasoning by the end of the curriculum.

Figure 7. Whyville

Steinkuehler et al. [25] studied scientific habits of mind in discussion forums around the commercial massively multiplayer online role-playing game “World of Warcraft” by analysing 1,984 posts by users in 85 different discussion threads.
While the game involves fantasy themes, they found that 86% of the posts involved social knowledge construction and more than 50% of the posts evidenced systems-based reasoning. They also found that roughly 10% evidenced model-based reasoning, and 65% displayed evaluative epistemologies supportive of argumentation as a means of knowledge construction.

Figure 8. World of Warcraft

In a study of “Taiga” (Quest Atlantis) (http://www.questatlantis.org/), Barab et al. [2] compared the learning outcomes of 51 undergraduate participants in four conditions: electronic book-based content, a “simplistic framing” of content presented as a web-based 2D curriculum, an immersive world-based (pair-based) condition, and a single-player immersive world-based condition. The results showed that learners in either of the virtual world-based conditions significantly outperformed learners in the electronic book group (p = .01), and outperformed both the book and ‘simple-framing’ groups on a transfer test (p = .01). In another “Quest Atlantis” study, Hickey et al. [9] looked at the impact of a 15-hour “Taiga” ecological sciences curriculum in “Quest Atlantis”, assessing gains in individual science content and socio-scientific inquiry with a performance assessment that presented related problems in a related context. The study used randomly sampled release tests to assess the gains and involved a 6th-grade teacher using the curriculum with two classes. Results showed that students using “Quest Atlantis” obtained greater gains (0.3 and 0.2 SD) than students in classes that used expository text on the same concepts. After formative feedback and development of the in-game quest, the same teacher used the “Taiga” curriculum in all four of his classes, resulting in substantially higher gains in understanding and achievement (1.1 SD and 0.4 SD).

Figure 9. Quest Atlantis

Jacobson et al.
[10] investigated 10th-grade students in Singapore under two conditions, using the “NIELS” model simulation, which teaches concepts about electricity. The first condition involved “productive failure”, in which students worked on a problem for each model followed by structured problem activities specified on worksheets, while the “non-productive failure” condition involved worksheets for both activities. Interestingly, the results showed that the “productive failure” group performed better on the post-test assessments of declarative and conceptual understanding. In another study, Son et al. [21] conducted three experiments examining the influence of different descriptions and perceptual instantiations of the scientific principle of competitive specialization. The studies demonstrated that intuitive descriptions led to enhanced domain-specific learning but also deterred transfer. The studies also indicated that “idealized” graphics were more effective than concrete graphics, even when unintuitive descriptions were applied to them; however, when graphics are concrete (e.g., definite, specific), learning and transfer largely depend on the particular description. Annetta et al. [1] conducted a study of 74 fifth-grade students playing “Dr. Fiction”, a teacher-created Multiplayer Educational Gaming Application (MEGA), using a pre-/post-test design. They found that students overall did significantly better on the post-test (p < .001, effect size .65). The study also looked at gender differences and found no significant performance differences between genders. However, when Annetta et al. [1] studied 66 students playing a teacher-created game on genetics, results showed no significant post-test improvement compared to 63 students who did not play the game.
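Effect sizes like the .65 reported for pre/post comparisons are conventionally Cohen's d. The sketch below illustrates the computation with invented sample data; these are not Annetta et al.'s scores.

```python
# Illustrative only: Cohen's d with a pooled standard deviation.
# The pre/post samples are invented for demonstration purposes.
import math

def cohens_d(pre, post):
    """Cohen's d: standardized mean difference between two samples."""
    n1, n2 = len(pre), len(post)
    m1, m2 = sum(pre) / n1, sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

pre_scores  = [55, 60, 62, 58, 65, 70]   # hypothetical pre-test scores
post_scores = [63, 66, 70, 64, 72, 77]   # hypothetical post-test scores
print(round(cohens_d(pre_scores, post_scores), 2))
```

By convention, d near 0.2 is a small effect, 0.5 medium, and 0.8 large, so the .65 reported above sits between medium and large.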
In Squire's [23] study using the game “Sick at South Beach”, which involved 55 students using the augmented reality game-based curriculum, results showed that fictional elements of the augmented reality game situated the learning experience and encouraged academic practices. Furthermore, Squire [23] found that student-created inscriptions influenced students’ emerging understanding, that the game-based curriculum’s design enhanced students’ conceptual understandings, and that learning through a technology-enhanced curriculum encouraged students’ identities as independent problem solvers.

Figure 10. Sick at South Beach

Meanwhile, Kafai et al. [12] conducted a study using “Whypox” that focused on students’ use of two simulations of epidemic outbreaks, one fast and one slow. During the “Whypox” outbreaks, more than 1400 simulations were performed by 171 players at the peak of the study. Kafai et al. [12] also found that 68% of players conducted some form of systematic investigation by running the simulations three or more times. They also found that 49% of those players demonstrated significant improvements in the accuracy of their predictions, and that 70% of players pursued engineering-type goals rather than scientific strategies, as indicated by the relationship between independent variables and the accuracy of users’ predictions. In a more recent qualitative study, Falcao et al. [7] explored digital technologies and learner focus in technology-based learning domains. Falcao et al. [7] found that the technology encouraged a more purposeful, conceptual exploration among learners: learners not only focused their attention on the learning activities within the technology-based domains, they also explored and tried to understand how the technology-based learning domain worked and how the artifacts affected the technology.
Summary of Epistemic Studies Table

Researcher-designed “Serious Games”

The “Urban Science” Study

Shaffer [20] conducted a mixed-methods (qualitative and quantitative) study using the “Urban Science” computer game. In evaluating “Urban Science”, designed for learning complex ecology planning problems, it was found that players were able to successfully problem solve and develop relevant and innovative solutions. Shaffer [20] thus deemed “Urban Science” a valuable educational tool for starting a discussion about the interconnectedness of urban systems. Furthermore, Shaffer [20] suggested that research with “Urban Science” demonstrates that novices benefit from working on problem solving, and from reflecting on problem solving through working with peers and mentors.

Figure 11. Urban Science

The “Omosa” Study

In a more recent study, Jacobson et al. [11] conducted research involving the development of an agent-based virtual world (ABVW): an immersive virtual world for experiencing and exploring a “virtual” biology fieldwork ecosystem. The system incorporated computational biology models of predator-prey interactions that were linked to a 2D agent-based modeling tool. The study involved grade 9 students engaging in computational scientific inquiry using the ABVW over a two-week period. Jacobson et al. [11] incorporated pre-/post-testing of biology content knowledge (the dependent variable) and students’ attitudes toward science inquiry. They used multiple-choice questions and open-ended short-answer questions to assess content knowledge, while a Likert scale was used to measure attitudes toward science inquiry. Twelve items were used for the attitudinal scale, as factor analysis and reliability tests suggested that eight items were not reliable.
The "Omosa" study results indicated no significant correlations between the grouping variables (gender, class type, and treatment condition). However, class type was found to produce differential results: selective class students' attitudes towards science inquiry were higher than comprehensive class students', although this was not a significant difference. Jacobson et al. [11] found no significant correlations between attitudes towards scientific inquiry and content knowledge results, as total attitudinal scores and total content knowledge scores showed no significant difference. Although the selective class had a higher score on the post-test than the comprehensive class, this was not a significant difference (F(1,25) = .629, p = .435). However, for the combined scores of both classes on these items, a 1-tail repeated measures ANOVA showed that the post-test score was significantly higher than the pre-test (F(1,25) = 3.661, p = .03) (Jacobson et al., 2012, p. 10).

Figure 12. Omosa VWorld

Jacobson et al. [11] expressed interest in extending the approach in their study to foster learning of historical and cultural knowledge, including linking virtual learning experiences in informal settings, such as museums, with formal settings in science and history classes (Jacobson et al., 2012, pp. 12-13). Jacobson et al. [11] also noted that the size of the study was a limitation, and that the results are therefore provisional. Furthermore, they could not confirm whether learning gains on the post-test were due to the learning activities in the "Omosa" ABVW, the learning activities in the "Omosa NetLogo" simulation, or a combination of both. To overcome this, Jacobson et al. [11] recommend that future research explore the learning efficacy of these two types of learning experiences separately, and they expressed a need to extend the research to include more students and schools. Similarly, future research could explore the use of computational scientific inquiry in regular science classes involving different science content and grade levels. Overall, Jacobson et al. [11] demonstrated that ABVWs show potential for enhancing learning of scientific knowledge and skills, and are likely to enhance more general interest in science as well. They also showed that virtual world and serious game approaches for creating learning experiences show potential to convey both a deeper sense of the wonders of the natural world and an understanding of the scientific enterprise at its core (Jacobson et al., 2012, p. 13), highlighting the like-minded opinion of Young et al. [26].

Conclusion

"Serious games" that are designed well around school curricula can produce positive learning outcomes for students. Research to date has demonstrated that serious games and simulations help students learn scientific knowledge and urban science skills, apply medical knowledge, and learn history curricula. However, these positive results did not occur in poorly designed virtual environments, where concrete graphics were used over "idealized" graphics, where 2D reading books were used instead of the virtual environment, and where static learning was used instead of situated learning. Overall, there is much to be gained from reviewing achievements in "serious games" research and what produces positive learning outcomes for students learning standards-based curricula.

References and Notes:

1. L. Annetta, J. Mangrum, S. Holmes, K. Collazo & M. T. Cheng, Bridging reality to virtual reality: Investigating gender effect and student engagement on learning through video game play in an elementary school classroom. International Journal of Science Education, 31(8), 1091-1113 (2009a,b).
DOI: 10.1080/09500690801968656
2. S. A. Barab, B. Scott, S. Siyahhan, R. Goldstone, A. Ingram-Goble, S. Zuiker & S. Warrant, Transformational play as a curricular scaffold: Using videogames to support science education. Journal of Science Education and Technology, 18, 305-320 (2009).
3. J. Clarke & C. Dede, Making learning meaningful: An exploratory study of using multi-user environments (MUVEs) in middle school science. Paper presented at the American Educational Research Association Conference, Montreal, Canada (2005).
4. D. Clark & D. Jorde, Helping students revise disruptive experientially supported ideas about thermodynamics: Computer visualizations and tactile models. Journal of Research in Science Teaching, 41(1), 1-23 (2004). DOI: 10.1002/tea.10097
5. D. B. Clark, B. Nelson, C. M. D'Angelo, K. Slack & M. Menekse, Intended and unintended learning in SURGE. Results presented at the Technology Enhanced Learning in Science 2009 meeting, Minneapolis, MN. Published in Proceedings of the 9th International Conference of the Learning Sciences, Volume 2, pp. 230-231 (2009).
6. E. Dieterle, Neomillennial learning styles and River City. Children, Youth and Environments, 19(1), 245-278 (2009b).
7. T. P. Falcao & S. Price, Where the attention is: Discovery learning in novel tangible environments. Interacting with Computers, 23(5), 499-512 (2011).
8. P. M. Greenfield, L. Camaioni, P. Ercolani, L. Weiss & B. A. Lauber, Cognitive socialization by computer games in two cultures: Inductive discovery or mastery of an iconic code. Journal of Applied Developmental Psychology, 15, 59-85 (1994).
9. D. Hickey, A. Ingram-Goble & E. Jameson, Designing assessments and assessing designs in virtual educational environments. Journal of Science Education and Technology, 18(2), 187-208 (2009).
10. M. Jacobson, B. Kim, S. Pathak & B. Zhang, Learning the physics of electricity with agent-based models: Fail first and structure later? Paper presented at the Annual Meeting of the American Educational Research Association (San Diego, CA: April 5, 2009).
11. M. Jacobson, L. Markauskaite, N. Kelly, P. Stokes & P. Kuan Ling Lai, Agent-based models and learning about climate change: An exploratory study. The University of Sydney, Australia (2012).
12. Y. Kafai, M. Quintero & D. Feldon, Investigating the 'Why' in Whypox: Casual and systematic explorations of a virtual epidemic. Games and Culture (January 2010). DOI: 10.1177/1555412009351265
13. D. Ketelhut, C. Dede, J. Clarke & B. Nelson, A multi-user virtual environment for building higher order inquiry skills in science. Paper presented at the 2006 AERA Annual Meeting (San Francisco, CA, 7 to 11 April 2006) (http://muve.gse.harvard.edu/rivercityproject/documents/rivercitysympinq1.pdf).
14. E. Klopfer, S. Yoon & J. Perry, Using palm technology in participatory simulations of complex systems: A new take on ubiquitous and accessible mobile computing. Journal of Science Education and Technology, 14(3), 285-297 (2005).
15. E. Klopfer, S. Yoon & L. Rivas, Comparative analysis of palm and wearable computers for participatory simulations. Journal of Computer Assisted Learning, 20, 347-359 (2004) (http://web.media.mit.edu/~mikhak/courses/tsr/readings/klopfer-04.pdf).
16. P. McClean, B. Saini-Eidukat, D. Schwert, B. Slator & A. White, Virtual worlds in large enrollment science classes significantly improve authentic learning. In J. A. Chambers (Ed.), Selected Papers from the 12th International Conference on College Teaching and Learning (pp. 111-118). (Jacksonville, FL: Center for the Advancement of Teaching and Learning, 2001).
17. E. Meir, J. Perry, D. Stal, S. Maruca & E. Klopfer, How effective are simulated molecular-level experiments for teaching diffusion and osmosis? Cell Biology Education, 4, 235-248 (2005). DOI: 10.1187/cbe.04-09-0049
18. N. Neulight, Y. Kafai, L. Kao, B. Foley & C. Galas, Children's participation in a virtual epidemic in the science classroom: Making connections to natural infectious diseases. Journal of Science Education and Technology, 16(1), 47-58 (2007).
19. E. Rosenbaum, E. Klopfer & J. Perry, On location learning: Authentic applied science with networked augmented realities. Journal of Science Education and Technology, 16(1), 31-45 (2006). DOI: 10.1007/s10956-006-9036-0
20. D. Shaffer, How Computer Games Help Children Learn. Palgrave Macmillan (2006).
21. J. Son & R. Goldstone, Fostering general transfer with specific simulations. Pragmatics and Cognition, 17, 1-42 (2009).
22. K. Squire, Changing the game: What happens when video games enter the classroom? Innovate: Journal of Online Education, 1(6) (2005) (http://website.education.wisc.edu/kdsquire/tenure-files/manuscripts/26-innovate.pdf).
23. K. Squire, From information to experience: Place-based augmented reality games as a model for learning in a globally networked society. Teachers College Record, 112(10), 2565-2602 (2010) (http://website.education.wisc.edu/kdsquire/tenure-files/01-TCR-squire-edits.pdf).
24. K. Squire, M. Barnett, J. Grant & T. Higginbotham, Electromagnetism Supercharged! Learning physics with digital simulation games. In Y. B. Kafai, W. A. Sandoval, N. Enyedy, A. S. Nixon & F. Herrera (Eds.), Proceedings of the 6th International Conference on Learning Sciences (pp. 513-520). (Los Angeles: UCLA Press, 2004).
25. C. Steinkuehler & S. Duncan, Scientific habits of mind in virtual worlds. Journal of Science Education and Technology, 17(6), 530-543 (2008).
26. M. Young, S. Slota, A. Cutter, G. Jalette, G. Mullin, B. Lai, Z. Simeoni, M. Tran & M. Yukhymenko, (2012).
Our Princess Is in Another Castle: A review of trends in serious gaming for education. Review of Educational Research, 82(1), 61 (originally published online February 2012). DOI: 10.3102/0034654312436980 (http://rer.sagepub.com/content/82/1/61).

Acknowledgments: Professor Michael Jacobson for sharing his insight and knowledge.

Supplementary Materials:

Table 1. Summary of Epistemic Research into "Serious Games" (researcher/s, methodology, study, and results)

Son & Goldstone (2009) — Qualitative. Conducted three experiments to examine the influence of different descriptions and perceptual instantiations of the scientific principle of competitive specialization. Their studies demonstrated that intuitive descriptions led to enhanced domain-specific learning but also deterred transfer, while "idealised" graphics were more effective than concrete graphics even when unintuitive descriptions were applied to them; when graphics were definite and specific, learning and transfer largely depended on the particular description.

Hickey, Ingram-Goble & Jameson (2009) — Mixed Methods (Qualitative & Quantitative). Studied the impact of a 15-hour "Taiga" ecological sciences curriculum in "Quest Atlantis" by assessing gains in individual science content and socio-scientific inquiry with a performance assessment that presented related problems in a related context; randomly sampled release tests were used to assess the gains. A sixth-grade teacher used the curriculum with two classes. Students obtained greater gains (0.3 and 0.2 SD) than students in classes that used expository text on the same concepts. After formative feedback and development of the in-game Quest, the same teacher used the "Taiga" curriculum in all four of his classes, resulting in substantially higher gains in understanding and achievement (1.1 SD and 0.4 SD).

Barab, Scott, Siyahhan, Goldstone, Ingram-Goble, Zuiker & Warrant (2009) — Mixed Methods. Another "Taiga" (Quest Atlantis) study that compared the outcomes of 51 undergraduate participants in four conditions: electronic book-based content, "simplistic framing" of content presented as a web-based 2D curriculum, immersive world-based (pair-based), and single-player immersive world-based. Learners in either of the virtual world-based conditions significantly outperformed learners in the electronic book group (p = .01), and outperformed the book and "simple-framing" groups on a transfer test (p = .01).

Ketelhut, Dede, Clarke & Nelson (2006) — Mixed Methods. In the "River City" simulation, approximately 2000 adolescent students collaboratively solved a 19th-century city's problems with illness by interacting with each other, digital artifacts in the game, computer agents in the game, and various data collection tools. Results showed that students learned biology content; that students and teachers were highly engaged; that student attendance improved and disruptive behavior dropped; that students were building 21st-century skills in virtual communication and expression; and, importantly, that using this type of technology in the classroom can facilitate good inquiry learning.

Dieterle (2009b) — Mixed Methods. Tested 574 students who were required to complete a curriculum using "River City". Results showed significant differences from pre-test to post-test in Science Content Understanding (CONTENT) and Self-Efficacy in Scientific Inquiry (SEISI) (p < .01). Students who created and shared artifacts on the internet were well suited to problem solving disease transmission, while students who felt highly connected to the media, tools and people they used for communication, expression and problem solving in the curriculum were more likely to believe that they were able to successfully complete the activities that a scientist might engage in.

Neulight, Kafai, Kao, Foley & Galas (2007) — Mixed Methods. Investigated 46 sixth-grade students from two separate classes for their understanding of virtual infectious diseases from participatory simulations in a game-like world ("Whypox" & "Whyville"). Found a significant shift in students' responses between pre-test and post-test from pre-biological to biological explanations (t(44) = -3.5, p = .001; t(44) = -3.496, p = .001), demonstrating that twice as many students reasoned about natural infectious disease with biological reasoning by the end of the curriculum.

Squire (2005) — Qualitative. A comparative case study involving "Civilization III" being used for activities exploring world history. Two high school sites were chosen based on teachers' interest in using games as supportive learning tools, reflecting a concern about finding ways to engage kids who felt alienated from school. In the urban school, teachers expressed a need to help students who had little interest in history; the second site was an after-school context where educators hoped to engage students with school-based learning in general, including the development of historical understanding. Squire found that most urban middle school students required considerable game play prior to learning in order to develop basic game concepts. After-school students showed more motivation, while 25% of urban middle school students complained that the game was too hard, complicated or uninteresting. However, 25% of mostly underachieving students loved playing the game and thought it was a "perfect" way to learn history, some considering the experience the highlight of their year. Underachieving students tended to consider school-mandated history to be propaganda, while they tended to enjoy the scenarios that enabled them to explore historical conditions and helped them develop new historical vocabularies as well as a better understanding of geography and world history.

Clarke & Dede (2005) — Qualitative. Study in "River City" including over 1000 students. Found that, of all the students who participated in the study, over 1000 indicated that they felt like real scientists for the first time.

Squire (2010) — Mixed Methods. "Sick at South Beach" simulation study with 55 students using the augmented reality game-based curriculum. Showed that fictional elements of the augmented reality game situated the learning experience and encouraged academic practices; student-created inscriptions influenced students' emerging understanding; the game-based curriculum's design enhanced students' conceptual understandings; and learning through the technology-enhanced curriculum encouraged students' identities as independent problem solvers.

Shaffer (2006) — Mixed Methods. "Urban Science" computer game. While evaluating "Urban Science", designed for learning complex ecology planning problems, it was found that players were able to successfully problem solve and develop relevant and innovative solutions, deeming "Urban Science" a valuable educational tool for starting a discussion about the interconnectedness of urban systems. Furthermore, research with "Urban Science" suggests that novices also benefit by working on problem solving, and reflecting on problem solving, through working with peers and mentors.

Kafai, Quintero & Feldon (2010) — Mixed Methods. Studied "Whypox", focusing on students' use of two simulations of epidemic outbreaks, one fast and one slow. During the "Whypox" outbreaks, more than 1400 simulations performed by 171 players occurred at the peak of the study. They found that 68% of players conducted some form of systematic investigation by running the simulations 3 or more times; that 49% of those players demonstrated significant improvements in the accuracy of their predictions; and that 70% of players pursued engineering-type goals in the process rather than scientific strategies, as indicated by the relationship between independent variables and the accuracy of users' predictions.

Rosenbaum, Klopfer & Perry (2006) — Qualitative. Studied 21 urban high school students playing "Outbreak @ the Institute", an augmented reality game where players take on the roles of doctors, technicians and public health experts trying to contain a disease outbreak. Students interact with virtual characters and employ virtual diagnostic tests and medicines while they move across the 'real' university campus with handheld devices. Results using video, surveys and interviews showed that students were able to perceive the game as authentic, felt embodied in the game, engaged in the inquiry and understood the dynamic nature of the model in the game.

Holbert (2009) — Mixed Methods. Collected data during ethnographic observations of, and individual clinical interviews with, children playing popular video games ("Mario Kart", "Wii" and "Burnout Paradise"). Results showed that children's intuitive schema of velocity, acceleration and momentum were at play while they were playing these games.

Annetta, Mangrum, Holmes, Collazo & Cheng (2009a) — Mixed Methods. 74 fifth-grade students playing "Dr. Fiction", a teacher-created Multiplayer Educational Gaming Application (MEGA), using a pre-post test design. Students overall did significantly better on the post-test (p < .001, effect size .65). The study also looked at gender differences and found no significant differences between genders based on performance results.

Annetta, Mangrum, Holmes, Collazo & Cheng (2009b) — Mixed Methods. Studied 66 students playing a teacher-created game on genetics. Showed no significant improvement on the post-test when compared to 63 students not playing the game.

Greenfield, Camaioni et al. (1994) — Mixed Methods. University students in the U.S. and Rome were studied to test real-world transfer of skills; after playing a video game for 2.5 hours, they were asked to take a test measuring their ability to generalize and apply principles from demonstrated examples (images of components in an electronic circuit). Results showed that video game performance significantly correlated with improvement in scores on the electronic circuit test.

McClean, Saini-Eidukat, Schwert, Slator & White (2001) — Mixed Methods. Study of 238 college students playing "Virtual Cell" and 273 students playing "Geography Explorer". Students playing "Virtual Cell" showed 40% gains over lecture, and students playing "Geography Explorer" showed 15-40% gains over lecture.

Steinkuehler & Duncan (2008) — Qualitative. Studied the scientific habits of mind in discussion forums around the commercial massively multiplayer online role-playing game "World of Warcraft", analysing 1,984 posts by users in 85 different discussion threads. While the game involves fantasy themes, they found that 86% of the posts involved social knowledge construction; more than 50% of the posts evidenced systems-based reasoning, while roughly 10% evidenced model-based reasoning; and 65% displayed evaluative epistemologies supportive of argumentation as a means for knowledge construction.

Falcao et al. (2011) — Qualitative. Digital technologies and learner focus research using technology-based learning domains. Researchers found that the technology encouraged 'a more purposeful, conceptual exploration' amongst learners; learners not only focused their attention on the learning activities within the technology-based domains, they explored and tried to understand how the technology-based learning domain worked and how the artifacts affected the technology.

Jacobson, Taylor & Richards (2012) — Mixed Methods. This study involved the development of "Omosa", an Agent-Based Virtual World (ABVW) consisting of an immersive virtual world for experiencing and exploring a complex ecosystem as part of "virtual" biology fieldwork. The system incorporated predator-prey interactions based on computational biology modeling techniques that were linked to a 2D agent-based modeling tool. Grade 9 students used the ABVW over a two-week period to engage in computational scientific inquiry. Forty-seven students in total participated: 28 boys (59.6%) and 19 girls (40.4%), with 22 students (46.8%) in the comprehensive class and 25 students (53.2%) in the selective class. The main dependent measures on the pre-test and post-test were biology content knowledge and attitudes toward science inquiry. Results showed no significant correlations between grouping variables (gender, class type, and treatment condition); however, class type was a significant variable, with selective class students having significantly higher knowledge scores on both pre-test and post-test, a correlation that became higher on the post-test (r = 0.741, p < 0.01). There were no significant correlations between total attitudinal scores and total content knowledge scores. Results showed a significant decrease in attitudes about science from pre-test to post-test (t(23) = 3.20, p < 0.05), with no variables in the dataset (i.e., gender, class type) contributing to this significance. Although the selective class had a higher score on the post-test than the comprehensive class, this was not a significant difference (F(1,25) = .629, p = .435). However, for the combined scores of both classes on these items, a 1-tail repeated measures ANOVA showed that the post-test score was significantly higher than the pre-test (F(1,25) = 3.661, p = .03).

Author Notes: This study is a prelude to exploring research into using "serious games" for teaching literature curricula by exploring successful research outcomes in other learning domains.
POSTER

Testing Manual Dexterity in Dentistry using a Virtual Reality Haptic Simulator

Authors: G. Ben Gal [1], A. Ziv [2,3], N. Gafni [4], E. Weiss [1]

Affiliations:
[1] Department of Prosthodontics, Faculty of Dentistry, Hebrew University-Hadassah, Jerusalem, Israel
[2] Department of Medical Education, Sackler School of Medicine, Tel Aviv University, Israel
[3] Israel Center for Medical Simulation (MSR), Chaim Sheba Medical Center, Israel
[4] National Institute for Testing & Evaluation (NITE), Jerusalem, Israel

Overview

The dentist's work requires extensive and current theoretical knowledge, as well as fine manual dexterity for performing delicate procedures in the patient's mouth. Dental schools traditionally use a simulative environment in which students carry out procedures on a mannequin with acrylic teeth and jaws. Evaluation of performance by instructors lacks consistency, both within and between raters, as the small dimensions and angles make human assessment difficult. Virtual reality simulators (VRS), which provide real-time performance feedback, have been used in dental education for more than a decade and are increasingly recognized as a powerful training tool. However, their scoring and evaluation have not been validated.

A newly developed haptic simulator (IDEA Dental®, USA) was used in the experiments described here (Figure 1). The user is required to perform virtual drilling tasks in various geometric shapes, while receiving force feedback based on three-dimensional images displayed on a screen (Figure 2). The tasks approximate those encountered in the preparation of dental cavities. The simulator registers working time and accuracy for each task.

Figure 1. Phantom Omni®, Sensable™ Technologies, MA

Figure 2. Simulation as viewed on the computer screen, straight line task.
To test the psychometric properties of this assessment, a series of experiments was carried out, examining content validity, internal reliability, predictive validity and construct validity.

Initially, we examined the relevance of the simulator's use and its haptic proximity to the real professional environment, using questionnaires administered to 21 dentists and 12 dental students after short encounters with the system. Participants rated the simulator as having significant potential for the teaching and assessment of manual dexterity in dentistry, and affirmed that the skills it simulates are relevant to the field. Table 1 shows the questionnaire results regarding the estimated benefit of the simulator as a teaching and evaluation tool. The difference in ratings between the two groups was not statistically significant, but a clear trend toward higher evaluation by the students was observed.

Table 1. Potential benefit of the simulator as estimated by dental educators and students on a scale of 1 to 7; values are mean (SD).

The extent to which the simulator can be useful in:   Total (N=33)   Dental educators (N=21)   Students (N=12)
1. Teaching manual skills.                            5.24 (1.33)    5.05 (1.53)               5.58 (0.76)
2. Self-learning of manual skills.                    5.42 (1.26)    5.33 (1.43)               5.58 (0.86)
3. Evaluation of manual skills.                       4.75 (1.60)    4.50 (1.77)               5.17 (1.14)

Given the positive results, it was decided to compile a simulated test. A pilot test consisting of 12 tasks was given to 106 participants. Based on the results of the pilot test, a final test, also consisting of 12 tasks, was prepared. The test was given to 142 participants, divided into three groups according to their dental experience: 88 dental students (first, fourth and sixth year), 37 dentists and 17 participants not in the dental field. A scale of 0 to 200 was chosen for the test.
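The internal reliability of a multi-task test like this one is conventionally estimated with Cronbach's alpha, which compares the sum of per-task score variances to the variance of total scores. A minimal sketch with a small invented score matrix (illustrative only, not the study's data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])  # number of items (tasks)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_var([row[i] for row in scores]) for i in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented ratings: 4 respondents x 3 items (not data from this poster).
data = [
    [2, 2, 3],
    [4, 4, 4],
    [3, 3, 2],
    [5, 4, 5],
]
print(round(cronbach_alpha(data), 3))  # 0.916
```

Higher alpha indicates that the tasks vary together across respondents, i.e. they appear to measure a common underlying skill.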
The mean score and standard deviation of the three groups are shown in Figure 3.

Figure 3. Mean score and standard deviation in the VR test by group (dentists, dental students, not in dental field).

The test's reliability and the correlation between quality of performance and experience in dentistry were calculated. A high internal reliability was found: Cronbach's alpha = 0.82. The Spearman rank correlation between performance on the test and dental experience was 0.47. The correlation between the preliminary simulator test and a practical drilling standard test, performed after one semester's training, was calculated for 46 freshman students. The Pearson correlation (predictive validity) was 0.33 (p < .05).

Summary

The experimental results strongly support the content, construct and predictive validity of manual dexterity tests using the simulator. It appears that the simulator can be used as an objective, standardized, reliable and fair testing tool for manual dexterity in dentistry.

PRESENTATIONS, DEMOS AND WORKSHOPS

Contact the Director of the Immersive Education Initiative to obtain an expanded rendition of these proceedings that contains presentations, demos and workshops: http://ImmersiveEducation.org/director