LexiConversion: A Chat Application With An Inline Translator
The Final Prototype

Team Members:

Felicia Chen
TA08: I wrote the code for this. I also consulted with my teammates as to what to implement.
TA09: I helped explain our application to our users and took notes/video during some of the sessions. I also wrote several sections of the report and helped proofread it.
TA10: I gave feedback on the script. I also acted as the narrator and as an onscreen personality.

Krithi Ramaswamy
TA08: I reviewed the application for correctness of functionality through UI testing.
TA09: I created the pre- and post-study surveys, took notes during the user testing session, and wrote multiple sections of the report.
TA10: I worked on the story idea and script. I acted in the film and filmed some scenes.

Eyobe Bisrat
TA08: I helped give feedback on the app and also provided evaluative testing.
TA09: I assisted in conducting user testing, moderating, taking video, and taking notes. I also helped proofread and revise the report.
TA10: I worked on the script, was the cameraman and an actor, and did all of the video editing.

Section 1: Abstract

The primary concept for our application, LexiConversion, is to provide an inline translator for a mobile chat application that can facilitate communication between speakers of different languages and also aid in the process of learning other languages. Our three primary tasks for LexiConversion were translating a message, translating a phrase outside of the messages, and creating and sending a new chat. We evaluated the results of user testing by aggregating all of our notes, selecting key points, and discussing among ourselves which points were most pertinent.
From this testing, we found that users universally expected to be able to exit a popup keyboard by pressing anywhere on the screen outside of it, that their first instinct was always to press the message bubble, and that users wanted important screen elements to remain visible while typing. For future revisions, this implies that we will have to study the default Android and iPhone chat applications to make LexiConversion function the same way. It also implies that we will need to adjust the layout for better visibility.

Section 2: Prototype Implementation

Our prototype was an extension of an open-source application called Kontalk. We chose it because its source code was free and readily available, and it had a clean aesthetic close to the standard Android messaging application. Kontalk was also written purely in Android Java, which one of our group members was familiar with, so it seemed the easiest base to build on. We also considered using Xabber, a Jabber wrapper for Android, but building on it would have required a complete graphical overhaul. Because Kontalk was Android Java, we extended it in Android Java as well. LexiConversion's concept is a chat application with an inline translator, so we used an online translation API, Microsoft Translate. Because this API is not natively written in Java, we used a Java wrapper for it available on Google Code. Our design decisions were informed by feedback from the mid-fidelity and low-fidelity prototyping. People tended to prefer the top navigation bar, so we used that; they also stated that they preferred to have navigation to other states accessible from every state.
Thus, we designed it so that either the navigational tabs were accessible in every state that would not be cluttered by them (meaning all but the new chat and the chat threads, which were both text-heavy), or there were several ways to reach the home screen, using the back button built into Android or an extra button built into the state. Users also preferred the fastest, fewest-clicks way of translating, and suggested that translation be a mere tap of the message bubble. Furthermore, our users stated that since our functionality was so simple, instructional labels like a dedicated translate button would be unnecessary. Thus, we implemented the application with reachability and a very clean interface in mind. We also knew that the design principle at the core of LexiConversion, efficiency, was at stake if we made the process of translating a message anything more than one tap, and that efficiency and familiarity were at stake if we did not include a means of exiting a popup keyboard.

As we built the system, we rapidly iterated and tested, as Android can be very finicky, especially with layout. After implementing each feature, we tested it and checked whether the rest of the software would break afterwards. We also periodically checked in with each other to confirm that we all approved of the implementation. We left out the ability to actually send messages through the New Chat screen and also the functionality for viewing all contacts. This is because of the underlying implementation of the application we built on: the navigational layout we added was not included in the Android Fragment that the contact access list was originally part of. As this was difficult to work around and we do not consider searching through contact profiles a primary task, we omitted it from the prototype.
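The one-tap translate/hide interaction we settled on can be sketched as a small state toggle over a pluggable translation call. This is a hedged sketch: `Translator`, `MessageBubble`, and all method names here are illustrative, not from the Kontalk codebase, and in the real prototype the translation call went over the network through the Microsoft Translate Java wrapper.

```java
// Sketch of the one-tap translate/hide behavior (illustrative names only).
@FunctionalInterface
interface Translator {
    // In the real app this would call the Microsoft Translate Java wrapper.
    String translate(String text);
}

class MessageBubble {
    private final String original;
    private final Translator translator;
    private String translation;          // cached after the first translation
    private boolean showingTranslation;

    MessageBubble(String original, Translator translator) {
        this.original = original;
        this.translator = translator;
    }

    /** Toggles the inline translation, as a tap on the bubble would. */
    String tap() {
        showingTranslation = !showingTranslation;
        if (showingTranslation && translation == null) {
            translation = translator.translate(original); // translate lazily, once
        }
        return render();
    }

    /** Original text alone, or original and translation separated by a bar. */
    String render() {
        return showingTranslation ? original + "\n----\n" + translation : original;
    }
}
```

Caching the translation after the first tap means subsequent taps only show or hide it, keeping the interaction at the one-tap cost the design demanded.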
Similarly, a working function like our New Chat was not available in the original Kontalk, and it was too difficult to implement fully for this prototype.

Section 3: Task Descriptions

Our three primary tasks in this assignment are based around what LexiConversion aims to do: chat and translate. Our first task is to translate a received message. Our second task is to translate a word not in a message. Finally, our third task is to compose a new message, or start a new chat thread. We felt that these tasks combined the most common functions of online translators and chat applications like KakaoTalk and the Android SMS messenger.

Task 1: Translate a message. The user selects a message from a chat thread and translates it inline by tapping on the message bubble. The translated version of the message then appears underneath the original, separated by a bar. Tapping the bubble again hides the translation, allowing the user to better learn the language and also to clean up the screen. This functionality spares the user the slow process of copying the message, opening a separate translation site or application on their phone, and then going back to the messenger.

Task 2: Translate a word. The user translates a single word from one language to another using the Dictionary. The user accesses the dictionary tab at the top of the screen and then selects the appropriate languages from the dropdown menus. From here, the user can enter text in the "Input" box. The translation occurs in real time as the user types, so the user does not have to press any additional buttons. This is similar to the way Google Translate's online site works. It is intended for when a user is trying to type in the language he/she is learning and is unsure of just a few terms.
This can also be used when a user wants to translate only part of a message and thus wants to avoid the inline message translator, which translates the entire message text.

Task 3: Send a message from the new message template (a.k.a. "Start New Chat"). The user accesses the new chat/compose message button in the action bar at the top right corner of the screen. The user then begins typing the name of his/her intended recipient and selects the recipient from an autocompleting dropdown menu. This allows the user both to chat with a new person and to extend chats without searching through disorganized chat logs. From this point, the user enters the message in the text field below and presses the "Send" button underneath it to send the message.

Section 4: Usability Tests

Pilot Testing. We pilot tested the interactive prototype amongst the group by meeting up and also distributing the APK. We interacted with the prototype and discussed what we could change to make it better, particularly graphically. Here, we decided to make the logo bolder. We also decided for sure that we wanted navigation at the top rather than at the side, and that we should change the color of the Android bars to go with the "international" theme. Finally, we decided to change the dividing bar between the original and translated message, as the original looked unprofessional.

Participant Recruiting. We recruited our classmates through Piazza, for convenience. We assumed that our participants, as Computer Science and Computer Engineering majors, would be in our target demographic anyway: we sought users who interact with smartphones, and such users are common in Computer Science classrooms. Demographically, they were all in our HCI class, and all owned smartphones. All participants stated an interest in learning other languages, which is critical to wanting to use LexiConversion.
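The recipient autocomplete in Task 3 amounts to matching the typed prefix against the contact list. In the actual Android app this would likely be handled by a widget such as `AutoCompleteTextView`; the standalone helper below is an illustrative sketch of the matching logic only, with all names invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Sketch of the recipient autocomplete: suggest contacts whose names
// start with what the user has typed so far (case-insensitive).
class RecipientAutocomplete {
    private final List<String> contacts;

    RecipientAutocomplete(List<String> contacts) {
        this.contacts = contacts;
    }

    /** Returns the contacts matching the typed prefix, in list order. */
    List<String> suggest(String typed) {
        String prefix = typed.toLowerCase(Locale.ROOT);
        List<String> matches = new ArrayList<>();
        for (String name : contacts) {
            if (name.toLowerCase(Locale.ROOT).startsWith(prefix)) {
                matches.add(name);
            }
        }
        return matches;
    }
}
```

For example, typing "ey" would narrow the dropdown to "Eyobe", so the user never has to scroll through the full contact list.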
Study Method. We set up our study with members of GNOTES and TerpGenius in an empty CSIC classroom. We performed the study by first pitching our application to potential participants. Volunteers were then asked to fill out a pre-experiment survey form for additional filtering. Our accepted participants then signed our consent forms. Following this, we gave them Felicia's phone with LexiConversion on it, as it is a popular Android phone model (Samsung Galaxy S4) with the "standard" bottom-right back button. We then went through our three tasks in this order: translate a message, translate a word, start a new chat, with one of us recording video and another taking notes from behind the participant. Each study lasted approximately ten minutes. Following each study, each participant was given a paper survey to fill out. Notes were taken by hand in a notebook, and we recorded video with an HTC One cell phone zoomed in on the Android screen so that we could trace the participant's movements closely. The paper post-study survey asked our users how they felt about the application, what they would like to see improved, and whether they considered the application useful and pleasant enough to use.

Analytical Method. We analyzed the data by all re-reading the collection of notes we had taken, then discussing at length what we wanted to use for final changes to the prototype before we submitted it. We picked elements that we thought would be easiest to implement, and also those stated with the most frequency; in particular, exiting a soft keyboard by tapping elsewhere on the screen was deemed important.

Section 5: Results from User Testing

A discovery we made during the user testing session was that the idea of "starting a new chat" was confusing for users.
We found unanimously that the users had trouble performing the third task, which was to send Eyobe a new message via the compose message button found in the top right corner of the main (chat logs) screen. All of the users first attempted to open Eyobe's contact from the existing message log and send him a message from inside the message thread. We had to rephrase the instructions to say that they needed to start a new chat. Even so, all of the users had trouble locating the compose message button. While we thought this design was standard, as it is the default position and shape of this button in Android's built-in SMS text messenger, perhaps the size, color, and/or positioning of the button made the task hard to complete. It is also possible that we did not describe the task accurately: we used the same descriptor for the task as in previous tests ("start a new chat"), when in those tests the button for "new chat" was labeled with text rather than a picture. Perhaps this would have gone better had we stated "compose a message to Eyobe."

Another discovery we made during the user testing session was that users had difficulty maneuvering around the keyboard when entering information. They all gave feedback that they sometimes had trouble viewing the text boxes they were typing into because the keyboard was blocking them. To improve this design, we could have the text boxes move above the keyboard when the keyboard appears.

Other than these two issues, all of the users reported that they had a relatively easy time navigating the application. When asked to translate a message, they knew intuitively to tap the chat bubble to make the translation appear. During the earlier stages of the design process, we were concerned that without a translate button on every bubble, the users would not know how to translate the message.
But during the user testing session we saw that users intuitively understood that to translate the message they simply had to expand the bubble. The users also knew intuitively, for the task of translating a word from Spanish to English, to navigate to the dictionary tab. Perhaps the positioning of the dictionary tab made the task easier to find and execute. Other than the two issues regarding starting a new chat and maneuvering around the keyboard, all of our user testers said that the application was easy to use. They found that many of our design choices aided their ability to interact intuitively with the UI.

Section 6: If We Had To Do It All Over Again...

We definitely feel that the iterative design process, combined with multiple user testing sessions, helped us select the best combination of design choices from our list of options. Our final application contained features from both prototypes 1 and 2. The user testing sessions helped us better understand the mindset of the user when analyzing and reflecting on our own design choices. For example, during the mid-fidelity prototype phase, we found that a vertical alignment of tabs was difficult for users to understand without a designated three-line swipe-motion icon that would also collapse the tabs. We did not realize that this arrangement would be difficult to understand until we were able to see it from a user's standpoint. In each phase of the design process, our application underwent a variety of enhancements and began to look more and more like the final result. Therefore we feel that each step in the design process was essential to producing an effective final design. One thing that could have been done differently is to start coding earlier in the design process, perhaps at the mid-fidelity prototyping stage.
By the mid-fidelity stage, most of the design choices would be finalized, and only minor tweaks would be necessary to enhance the application based on user feedback. The end result would likely have been more functional if more time had been afforded to coding and revising the application code.

Section 7: Video Making Process

The video making process included story development and script development. Once the script was written, we had to figure out what shots we needed to capture for each scene. For example, in certain scenes, we had a long shot of someone typing on their phone, then a zoom in on the message they were typing. In another scene, in which Felicia and Krithi are translating the text message from Eyobe on their computers, we needed a front-view shot of them first sitting down in the airplane chairs and taking out their laptops, then a shot from behind the two, facing the computer screen, to show the translation in progress. A shot from the side is then used when the air host/hostess arrives to tell them to put away their computers. We had to plan out exactly what shots were required for each scene to make it easiest to understand and to display the necessary information about our application. We gave a lot of importance to set layout as well as camera angles. We also had to plan other details, including the specific messages we would send during the film, appropriate translations for messages sent in other languages, and song selections. For the first section of the film we wanted to present a more frantic tone and chose to play "The Barber of Seville" in the background. We all worked together in the story and script development process, each contributing ideas about what scenes and shots should be included. We all acted in the film and each took turns filming. We all had roles in the voiceover, and we worked together on developing the script for the narration.
Felicia did the main voiceover narration, and Eyobe and Krithi did the voiceovers for the messages they sent through the LexiConversion application. Eyobe was tasked with editing the video. For the editing process we used a combination of two programs: Lightworks, a free professional video editor, and Windows Live Movie Maker. Lightworks was used for its ability to create specialized video effects with a wide range of customizability. However, Lightworks made it difficult to export at lower frame rates and smaller file sizes, so we used Windows Live Movie Maker to compress the video to video-sharing standards.

We decided on this particular approach to presenting our application by brainstorming story ideas. We considered a variety of scenarios in which having the LexiConversion application would be more beneficial than existing technologies. Among the advantages of LexiConversion are the short time it takes to complete a translation and the fact that messaging and translating are all in one place. When in a hurry, one does not have time to open a separate translating tool and type the message into it. Another benefit of LexiConversion is that the user does not have to worry about translating their response for the recipient. They can send the message in the language most comfortable for them and assume that the recipient will be able to conveniently translate it on their end. In our story we wanted to show the additional effort and time it takes when a person has to translate received messages, as well as the messages they plan to send, using an external translating tool. We chose to develop a story in which Felicia and Krithi are traveling from foreign countries to the US and don't speak enough English to quickly communicate their arrival times to Eyobe.
We felt that trying to catch a flight would be a frantic situation in which communication and translation would need to happen as quickly as possible. In the film, Felicia and Krithi are faced with the time constraint of completing the translation before the PA on the airplane instructs everyone to turn off cell phones and all mobile devices. In doing so we were able to fully present a problem for which LexiConversion provided the best solution.

Section 8: Project Webpage Making Process

We chose the WordPress hosting site to develop our project webpage, because of the variety of features WordPress allowed us to incorporate in our design as well as the shorter amount of time necessary to build a webpage with it. The webpage making process involved deciding on the information to present, deciding how to lay out this information, making the site consistent and aesthetically pleasing, and creating a logo that clearly represents our application. We wanted our design process to be given special focus on the site so that viewers could understand the process we followed to design and build LexiConversion, but at the same time we didn't want to overwhelm the viewer with an excess of information on a single page. On the front page we added images of the prototype from each phase, each linking to the summary page for that phase. Since images are more visually appealing than words, users would be more likely to click on an image, which would lead them to the page containing the report summary for that prototype. Once on the summary page, the viewer has the option to learn more by reading our full report, which we attached inside the summary page. In designing the logo for the website, we wanted to express LexiConversion's function as a chatting application as well as a translator. To do this, Krithi created a logo positioning two chat bubbles in a yin-yang formation.
One of the bubbles contains text in the Courier New font, and the other in Samarkan, a font that mimics Hindi script but uses English lettering. The Samarkan font would lead the user to understand that LexiConversion allows for messaging between languages. For the rest of our webpage text we chose the Segoe UI font, which we felt has a professional yet modern appeal. We chose soothing colors like gray and black for the text and background, and a secondary color scheme for the header and logo components, in shades that would not be visually overwhelming. We felt that this combination of soothing and energetic colors would give the site the greatest aesthetic appeal, in that the blacks and grays give a sense of professionalism, while the color expresses the fun nature of the LexiConversion application.

Appendix

Appendix A: Study Plan
1. Recruit participants using Piazza (for the sake of time and ease).
2. Meet in CSIC and move into a quiet classroom.
3. Give out the pre-experiment survey.
4. Introduce consent forms and explain LexiConversion.
5. Give the participant a phone with LexiConversion on it, and 3 messages from Brian, all in Korean. Begin recording video and taking notes on participant behavior.
6. Have the participant translate the latest message from Brian. After the participant finishes the three tasks, delete the message (the collapsing function is not implemented yet).
7. Have the participant translate a word (using the dictionary).
8. Have the participant try sending a new message to Eyobe (using the button at the top right of the screen; we must explain that it uses a blank message option instead of extending an existing conversation).
9. Give the participant the post-study survey to fill out; collect the surveys.
10. Repeat steps 5-9 with the other participants.
11. Aggregate the data and discuss amongst group members what we should take into consideration.

Appendix B: Raw Notes from User Testing Sessions

Appendix C: Pre-Study and Post-Study Surveys: