Appendix E: Recommender Systems Postclass Knowledge Test and Survey

Transcription

Q0 Part 1: This Recommender Systems Knowledge Test is part of a research study that examines the
relationship among online learning environments, teaching and learning practices, student engagement,
and learning outcomes. You were selected as a possible participant because you have enrolled in a
Recommender Systems course delivered by the University of Minnesota as a Massive Open Online
Course (MOOC). The survey consists of twenty (20) multiple-choice questions about Recommender
Systems and should take only 4 or 5 minutes to complete. This study is being conducted by Drs. J.D.
Walker and Paul Baepler from Academic Technology Support Services and Dr. Joseph A. Konstan of the
Department of Computer Science and Engineering at the University of Minnesota. You may ask any
questions you have by contacting Dr. Walker (Email: jdwalker@umn.edu; Phone: 612.624.1097) or Dr.
Konstan (Email: konstan@cs.umn.edu; Phone: 612.625.1831). By completing this survey, you express
your consent to participate in the study. Thank you.
Q22 Please enter the email address that you use to access Coursera. (If you are a University of
Minnesota student, please enter your U of M email address.)
Q1 What is the difference between a weighted hybrid recommender and a mixed hybrid recommender?
 A mixed hybrid is a weighted hybrid where each underlying recommender is given the same weight. (1)
 Each recommendation from a weighted hybrid is based on combined scores of different
recommenders; recommendations from a mixed hybrid can come from a single recommender. (2)
 In a weighted hybrid, recommenders are run in parallel; in a mixed hybrid, the output of one is fed as
input into the other. (3)
 There is no difference. They are two names for the same type of hybrid. (4)
 I have no idea. (5)
Q2 In a TFIDF algorithm, IDF stands for “Inverse Document Frequency.” What does that mean?
 IDF means that more popular documents are given a lower weight so they don’t get recommended
too often. (1)
 IDF means that terms count for more in a profile or query if they don’t appear in very many
documents. (2)
 IDF means that the optimal number of times for a term to appear in a document is one. Any more is
penalized by the algorithm. (3)
 IDF is a technique that attempts to recommend each document equally often. (4)
 I have no idea. (5)
Q3 Which of these is NOT an advantage of SVD-based algorithms compared with user-user collaborative
filtering?
 Faster time for building the model. (1)
 Faster time for computing individual recommendations. (2)
 Potential to avoid overfitting and resulting odd recommendations. (3)
 Potential to get recommendations based on users with similar tastes who don’t have items in
common. (4)
 I have no idea. (5)
Q4 Which of these examples illustrates the difference between a decision-support metric and an accuracy
metric for predictions?
 Mean square error (MSE) is on a different scale than MAE or RMSE. (1)
 Breese score and MRR both give more weight to the items at the top of the list. (2)
 Precision focuses on the items returned in the list; recall focuses on the correct items whether or not
returned. (3)
 MAE doesn’t tell you whether an error of 1.5 stars is between 0.5 and 2 stars, or between 3 and 4.5
stars. Only the latter would count as a reversal. (4)
 I have no idea. (5)
Q5 Which of these is an example of a product association recommender?
 Amazon.com’s “people who bought this” recommender. (1)
 MovieLens’ “Top Movies for You” recommender. (2)
 The Zagat Guide’s scores for food, service, and decor. (3)
 The Entree Restaurant Guide’s “find me a restaurant that’s cheaper” recommender. (4)
 I have no idea. (5)
Q6 Which of these is NOT an example of an implicit rating?
 The number of times you listen to a song. (1)
 The fact that you labelled the movie Star Wars with five stars. (2)
 The fact that you printed out an article from an online newspaper. (3)
 The fact that you purchased a Waring Blendor on Amazon.com. (4)
 I have no idea. (5)
Q7 Cosley reported on an experiment where he deliberately showed incorrect prediction scores to users.
What was the result of the experiment?
 User ratings weren’t affected by the errors, and users didn’t notice the change. (1)
 User ratings weren’t affected by the errors, but user opinion of the system was hurt. (2)
 User ratings were shifted, but users didn’t notice the change. (3)
 User ratings were shifted, and user opinion of the system was hurt. (4)
 I have no idea. (5)
Q8 Which of these best describes a trust-based recommender?
 A recommender that users have enough experience with to automatically purchase the items it
recommends. (1)
 A recommender that lets you view and edit the user profile it builds. (2)
 A recommender that uses user-user collaborative filtering, but replaces historic ratings similarity with
a different measure -- a “trust score.” (3)
 A recommender that rejects or ignores ratings from users who have proven to be unreliable. (4)
 I have no idea. (5)
Q9 What is a unary rating?
 A rating on a scale between 0 and 1. (1)
 A rating where the only possibilities are like, don’t like, and don’t know. (2)
 A temporal rating that is monotonic, i.e., it can never go down as time moves forward. (3)
 A rating with only two possibilities, “liked/consumed/purchased” and “don’t know.” (4)
 I have no idea. (5)
Q10 Which of these is an example of ephemeral personalization?
 Showing a set of the products that are most popular right now. (1)
 Showing a set of products commonly purchased together with the products in your shopping cart. (2)
 Showing products based on the recent purchases of users in your neighborhood. (3)
 Asking the customer for ratings of products so you can personalize recommendations later. (4)
 I have no idea. (5)
Q11 Which of the following is a correct example of collaborative filtering?
 Sharing mail filtering rules with your friends to help filter out junk email. (1)
 Building and using a profile of music genres you like to play music for you and your friends. (2)
 Choosing songs to listen to based on how much other people with similar tastes like those songs. (3)
 Building a list of users from whom you want to reject e-mail. (4)
 I have no idea. (5)
Q12 Which of these is NOT likely to be an effective means for explaining user-user recommendations to a
user, if your goal is to get the user to believe the recommendation?
 Giving a brief overview of the data -- e.g., 20 ratings, 18 of them liked it. (1)
 Giving a graph showing each neighbor’s rating against your similarity with that neighbor and number
of items rated in common. (2)
 Showing how often similar recommendations were correct in the past. (3)
 Giving related supporting information about the user’s preference for product features (e.g., movie
actors, product brands). (4)
 I have no idea. (5)
Q13 What is the core idea behind dimensionality reduction recommenders?
 To reduce the computation from polynomial to linear. (1)
 To strip off any product attributes so products appear simpler. (2)
 To reduce the computation time from O(n^3) to O(n^2). (3)
 To transform a ratings matrix into a pair of smaller taste-space matrices. (4)
 I have no idea. (5)
Q14 Which of the following features adds the most value to a user-user collaborative filtering algorithm?
 Weighting the neighbors by the correlation with the target user. (1)
 Decreasing the weight of neighbors that have few items in common. (2)
 Capping the neighborhood size at a modest number such as 50-200. (3)
 Using normalized ratings and de-normalizing to the target user’s rating distribution. (4)
 I have no idea. (5)
Q15 When using an SVD-based algorithm, what is “folding in”?
 Mapping values outside the normal range to minimum or maximum values. (1)
 Adding new users or items to the system without computing a new factorization. (2)
 Factoring the ratings matrix into two matrices multiplied by a matrix of eigenvalues. (3)
 Directly predicting a rating by taking the dot product of a user-vector and an item-vector. (4)
 I have no idea. (5)
Q16 What’s the difference between a recommendation and a prediction?
 Only a prediction involves an explicit estimate of “how much you like it.” (1)
 Recommendations only come in lists -- say top 5 or top 10. (2)
 Predictions focus on whether you’ll like it in the future, while recommendations are about liking in the
past. (3)
 Recommendations come from another user; predictions come from the computer. (4)
 I have no idea. (5)
Q17 Which of these best describes a case-based recommender?
 A recommender that provides recommendations for large sets of products sold together. (1)
 A recommender that uses correlations among users to predict which items each user would enjoy. (2)
 A recommender that uses product ratings to build a profile of attribute interests. (3)
 A recommender that uses a database of examples and forms queries from user requests to explore
items that meet user criteria. (4)
 I have no idea. (5)
Q18 In dialog- and critique-based recommenders, how does the system figure out what to show the user?
 Based on the correlations with other users of the system. (1)
 The user presents an attribute-based query or an attribute-based refinement to an existing product or
product set. (2)
 The system has a long-term profile of user content preferences. (3)
 The user selects their own set of neighbors -- people who they trust. (4)
 I have no idea. (5)
Q19 Which of these statements is true specifically of a situation where there are more items (or products)
than users in a ratings matrix?
 An item-based recommender algorithm is less likely to be a sensible choice. (1)
 The user-user algorithm should use cosine similarity instead of correlation. (2)
 Dimensionality reduction will probably not work. (3)
 Content-based algorithms are more likely to be effective. (4)
 I have no idea. (5)
Q20 Which of the following metrics is commonly used to measure recommender system accuracy?
 P-value (1)
 Statistical significance (2)
 Mean absolute error (3)
 Reciprocal precision (4)
 I have no idea. (5)
Q26 PART 2: Post-class survey. This survey asks about your experiences, reactions, and opinions
regarding the Recommender Systems course. It should take only 2 or 3 minutes to complete.
===============================
Q1 Some students enroll in Coursera courses planning to complete the entire course, while others plan
only to complete certain parts of the course. Thinking back to your plans when you enrolled in this class,
which of the following best describes your experience?
 I completed more of this course than I planned to (1)
 I completed about as much of this course as I planned to (2)
 I completed less of this course than I planned to (3)
Q2 Did you find the course useful even though you completed less of it than you planned to?
 Yes (1)
 No (2)
Q3 What factors prevented you from completing this course? (Please rate how much you agree or
disagree with each statement.)
Scale: Strongly Disagree (1) / Disagree (2) / Neither Agree nor Disagree (3) / Agree (4) / Strongly Agree (5)
 Time commitment exceeded my ability (1)
 Lost interest on account of subject matter (2)
 Lost interest on account of presentation and assessment style (3)
 Got behind and could not catch up (4)
 Began taking another course (5)
Q4 Would the following have made you more likely to complete the class? (Please rate how much you
agree or disagree with each statement.)
Scale: Strongly Disagree (1) / Disagree (2) / Neither Agree nor Disagree (3) / Agree (4) / Strongly Agree (5)
 Reducing the weekly time commitment needed to take the course (1)
 Making the course material easier (2)
 Making the course material more difficult (3)
 Making the credential more valuable (4)
 Making the course shorter (5)
Q5 Many different factors can affect a student's level of participation in an online course. To what degree
did each of the following negatively impact your participation in this course?
Scale: Not at all (1) / Small degree (2) / Moderate degree (3) / Large degree (4)
 Unfamiliarity with technology used in the course (1)
 Problems with my internet connection (2)
 Problems with my computer (3)
 Time zone issues (4)
 Lack of time due to family responsibilities (5)
 Lack of time due to work responsibilities (6)
Q6 To what degree do you agree or disagree with the following statements about this course and
instructor?
Scale: Strongly Disagree (1) / Disagree (2) / Somewhat Disagree (3) / Somewhat Agree (4) / Agree (5) / Strongly Agree (6)
 The instructor presented the subject matter clearly. (1)
 The instructor(s), course staff, and/or automated course materials provided feedback intended to improve my course performance. (2)
 I have a deeper understanding of the subject matter as a result of this course. (3)
 My interest in the subject matter was stimulated by this course. (4)
Q7 Approximately how many hours per week did you spend working on homework, reading, and projects
for this course?
 0-2 hours per week (1)
 3-5 hours per week (2)
 6-9 hours per week (3)
 10-14 hours per week (4)
 15 hours per week or more (5)
Q8 Compared to other MOOCs/online courses I have taken, the amount I learned in this course is:
 Less (1)
 About the same (2)
 More (3)
 I have not taken other MOOCs/online courses (4)
Q9 Compared to other MOOCs/online courses I have taken, the difficulty of this course is:
 Less (1)
 About the same (2)
 More (3)
 I have not taken other MOOCs/online courses (4)
Q10 To what degree did each of the following features/components of this course contribute to your
learning?
Scale: Not at all (1) / Small degree (2) / Moderate degree (3) / Large degree (4) / Did not use (5)
 Video lectures (1)
 Video interviews (2)
 Taped on-campus discussion sessions (3)
 Assigned readings (4)
 Written homework assignments (5)
 Programming assignments (6)
 Exams (7)
 Interactions with instructor/TAs (8)
 Interactions with classmates in the class forum (9)
 Feedback from classmates through peer grading (10)
 Seeing classmates' solutions to assignments through peer grading (11)
Q26 How important were the following factors in helping you decide which course activities to spend time
on?
Scale: Not at all important (1) / A little important (2) / Moderately important (3) / Very important (4)
 Your own assessment of their value in helping you learn the course material (1)
 Whether they were marked as "required" by the course instructors (2)
 How much they counted towards achieving a statement of accomplishment (3)
 How interesting the activity seemed (4)
 Whether other students seemed to be doing those activities (5)
Q25 With respect to overall academic quality, what is your impression of the Computer Science
department at the University of Minnesota?
 One of the top 2 or 3 in the world (1)
 One of the top 10 in the world (2)
 One of the top 25 in the world (3)
 One of the top 50 in the world (4)
 One of the top 100 in the world (5)
 Don't know (6)
Q29 Which track did you take in this course? (Please see https://www.coursera.org/course/recsys for a
description of the tracks.)
 Programming track (1)
 Concepts track (2)
 Neither, I did not complete assignments in the course (3)
 Not sure (4)
Q11 What feature/component of this course do you think would benefit most from improvement and
specifically what improvement(s) would you suggest?