Material and Detailed Results of Experiment on Model Comprehension
Janet Feigenspan
University of Magdeburg
feigensp@ovgu.de
Don Batory
University of Texas
batory@cs.utexas.edu
Taylor Riché
National Instruments
riche@cs.utexas.edu
February 2, 2012
1 Introduction
When learning equations in physics, we found that understanding a derivation is easier than memorizing the final equation. In conversations with physics teachers, we also learned that equations are usually explained by means of their derivation. In software engineering, we can present architectures as directed graphs. Such graphs can be complex, but they, too, can be derived step by step from simple to complex. In an experiment, we evaluated whether deriving a graph increases its understandability. This paper presents the material we used in the experiment and the detailed results.
2 Pilot Study
We used Gamma and Upright in our pilot study. One group worked with the derivations, the other with the complete models. In Figure 1, we present the slides for the complete models of Gamma and Upright; in the experiment, each slide was printed on a separate sheet of paper. In Figure 2, we show the slides for the derivation of the models.
[Slide residue omitted. Part I of the slides shows the parallel implementation of the hash join in Gamma, built from HSPLIT, BLOOM, BFILTER, HJOIN, and MERGE boxes: BLOOM marks the hashed join keys of the A tuples in a bit map M, BFILTER discards B tuples whose hashed join key is not marked in M, HJOIN joins the surviving substreams, and MERGE combines the results. Part II shows the crash fault tolerance server without single points of failure, with clients C1 to Cn, routing boxes Rt, agreement boxes A, quorum boxes QA and QS, and server replicas S1 to Sk, followed by a legend explaining each diagram element.]

Figure 1: Slides with complete models.
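For readers who want to see the behaviour of these boxes spelled out, the following is a minimal Python sketch of the sequential BLOOM/BFILTER/HJOIN pipeline described on the slides; it is not part of the experiment material, and the bit-map size, hash function, and dict-shaped tuple representation are our own illustrative assumptions.

```python
# Minimal sketch of the BLOOM / BFILTER / HJOIN pipeline from the Gamma slides.
# Bit-map size, hash function, and dict-shaped tuples are illustrative assumptions.

BITS = 1024  # assumed size of the bit map M

def bloom(a_tuples, key):
    """BLOOM: hash the join key of every A tuple and mark the corresponding bit in M."""
    m = [False] * BITS
    for t in a_tuples:
        m[hash(t[key]) % BITS] = True
    return m

def bfilter(b_tuples, m, key):
    """BFILTER: discard B tuples whose hashed join key is not marked in M."""
    return [t for t in b_tuples if m[hash(t[key]) % BITS]]

def hjoin(a_tuples, b_tuples, key):
    """HJOIN: pair every A tuple with every B tuple that has the same join key."""
    index = {}
    for a in a_tuples:
        index.setdefault(a[key], []).append(a)
    return [(a, b) for b in b_tuples for a in index.get(b[key], [])]

# Usage: join two small streams on the join key "k".
A = [{"k": 1, "x": "a1"}, {"k": 2, "x": "a2"}]
B = [{"k": 2, "y": "b1"}, {"k": 3, "y": "b2"}]
M = bloom(A, "k")
print(hjoin(A, bfilter(B, M, "k"), "k"))  # only the tuples with k == 2 are paired
```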
[Slide residue omitted. The first derivation slides for Gamma explain the HSPLIT, BLOOM, BFILTER, HJOIN, and MERGE boxes, introduce the relational join of streams A and B ("Hash Join in Gamma"), expose the use of BLOOM and BFILTER on the complete streams, and then parallelize each box in turn: HSPLIT hash-splits a stream into substreams, one BLOOM, BFILTER, or HJOIN box is created per substream, and MERGE and MSPLIT boxes recombine or redistribute the substreams and the bit maps M1 to Mn.]

Figure 2: Slides with derivation of model
[Slide residue omitted. The derivation continues: substituting (inlining) the parallel implementation of each box yields a larger diagram, to which three optimizations apply. A MERGE followed by an HSPLIT, and an MMERGE followed by an MSPLIT, are identity mappings, so these box pairs are removed, producing the final architecture of the parallel hash join in Gamma. Part II then begins the derivation of the crash fault tolerance server: clients C1 to Cn send messages to a server S, and a single agreement node A, an ordered queue that passes messages one at a time, is exposed as the first refinement.]

Figure 2: Slides with derivation of model (continued)
[Slide residue omitted. The derivation concludes: the server box S is parallelized into replicas S1 to Sk behind a quorum box QS with reliable broadcast, and the agreement node A is parallelized into copies A1 to Am behind a router Rt and a quorum box QA. The remaining single points of failure are removed by swapping the order in which boxes are encountered, replacing Rt with per-client routers Rt1 to Rtn, QA with quorum boxes QA1 to QAk, and QS with quorum boxes QS1 to QSn, which yields the final version of the synchronous crash fault tolerance server without single points of failure.]

Figure 2: Slides with derivation of model (continued)
[Slide residue omitted. The closing slide of the derivation is a legend that explains each diagram element: client, server replica, routing box Rt, agreement box A, the two quorum boxes, unserializer/demultiplexor, serializer/multiplexor, and reliable broadcast.]

Figure 2: Slides with derivation of model (legend of diagram elements)
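To make the behaviour that the legend ascribes to the quorum boxes (QA and QS) concrete, here is a small Python sketch of a quorum box that counts identical messages from different replicas and forwards the agreed-upon message exactly once when a configurable threshold is reached; the threshold value, the message representation, and the class interface are our own assumptions, not taken from Upright.

```python
# Illustrative quorum box: forward a message once a sufficient number of
# identical copies (votes) has been received. Threshold and interface are assumptions.

class QuorumBox:
    def __init__(self, threshold, forward):
        self.threshold = threshold  # how many identical copies count as "sufficient"
        self.forward = forward      # callback that delivers the agreed-upon message
        self.counts = {}            # message -> number of copies seen so far
        self.delivered = set()      # messages that have already been forwarded

    def receive(self, message):
        if message in self.delivered:
            return                  # deliver each agreed-upon message only once
        self.counts[message] = self.counts.get(message, 0) + 1
        if self.counts[message] >= self.threshold:
            self.delivered.add(message)
            self.forward(message)

# Usage: with threshold 2, the second identical vote triggers delivery,
# and the third changes nothing.
qs = QuorumBox(threshold=2, forward=lambda m: print("deliver:", m))
for vote in ["reply-42", "reply-42", "reply-42"]:
    qs.receive(vote)
```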
To measure the comprehensibility of Gamma, we distributed the questions presented in Figure 3. This figure also includes the questionnaire we gave subjects to evaluate the tasks. Finally, in Figure 4, we show the questions for Upright in the form of the test booklet we used for the two experimental runs.
[Questionnaire residue omitted. The Gamma task sheet contains: Task 1, redraw the graph showing how Gamma computes parallel joins; Task 2, five multiple-choice comprehension questions (2.1 to 2.5), each with exactly one correct answer, on why stream B is split into substreams, the purpose of BLOOM and BFILTER, the purpose of the bit map of hashed A keys, whether both HSPLIT boxes must hash on the same key, and why tuples are not fed directly into HJOIN after HSPLIT; and Task 3, modify the model such that the BLOOM box is no longer part of the join. The sheet closes with five-point rating scales for motivation (very unmotivated to very motivated) and difficulty (very difficult to very easy) for each task, with an overview of all tasks on the back side.]

Figure 3: Tasks for Gamma
[Questionnaire residue omitted. The first pages of the Upright test booklet ("Experiment on Model Comprehension", with "Stop! Please do not turn the page!" separators) contain: Task 1, redraw the graph of the asynchronous crash fault tolerance server as precisely as possible; and Task 2, eight multiple-choice comprehension questions (2.1 to 2.8), each with exactly one correct answer, on why the agreement nodes communicate amongst themselves, why there are k server replicas, why the Rt nodes replicate incoming client messages, why servers reply via quorum nodes rather than directly to clients, the difference between a broadcast and a demultiplexor, why servers send messages to the agreement nodes, why they do so via quorum nodes, and why the servers communicate amongst themselves.]

Figure 4: Testbooklet for Upright
[Questionnaire residue omitted. The remaining pages of the Upright test booklet contain: Task 3, add a server Sk+1 to the asynchronous crash fault tolerance server such that it can tolerate one more server crash; five-point rating scales for motivation and difficulty for each task; questions on age, gender, and experience with pipe-and-filter architectures, crash fault tolerance servers, and modelling; a field for remarks about the experiment; an overview of all tasks on the back side; and a closing thank-you note.]

Figure 4: Testbooklet for Upright (continued)
3 Experimental Run 1
In Tables 1 to 4, we present the results of the first experimental run.
Group              Age    Gender   Experience with Crash     Experience
                                   Fault Tolerance Servers   with Modelling
BigBang            22     male     1                         1
BigBang            20     male     1                         2
BigBang            30     male     2                         3
BigBang            20     male     3                         4
Central tendency   23.0   –        1.5                       2.5

Derivation         21     male     1                         4
Derivation         23     male     1                         1
Derivation         22     male     2                         4
Derivation         36     male     2                         3
Derivation         23     female   2                         1
Derivation         21     male     2                         3
Derivation         34     male     3                         3
Central tendency   25.7   –        2                         3

Central tendency: arithmetic mean for age, median for experiences. Scales for experiences: 1:
very inexperienced; 2: inexperienced; 3: medium; 4: experienced; 5: very experienced.

Table 1: Experimental Run 1: Overview of subjects' background.
Group          Task 1   Task 2   Task 3
BigBang        5        2        1
BigBang        3        5        1
BigBang        3        5        0
BigBang        2        3        0
Median         3        4        0.5

Derivation     4        –        0
Derivation     4        4        0
Derivation     4        5        0
Derivation     4        4        0
Derivation     3        5        1
Derivation     4        4        0
Derivation     2        4        0
Median         4        4        0

U value        7        10       19
significant?   no       no       no

Task 1 and 3: Number of removed, added, or replaced elements; Task 2: Number of correct answers; –: Subject gave no answer.

Table 2: Experimental Run 1: Correctness of tasks.
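The U values in Table 2 stem from Mann-Whitney U tests comparing the BigBang and Derivation groups. As a hedged illustration, the snippet below recomputes the Task 2 comparison with SciPy, using the correctness scores from Table 2 and excluding the one missing answer; how ties and missing answers were treated in the original analysis is an assumption on our part, so the numbers need not match the table exactly.

```python
# Sketch: Mann-Whitney U test for Task 2 (number of correct answers, Run 1).
# Scores are taken from Table 2; the subject who gave no answer is excluded,
# which is our assumption about how missing data were handled.
from scipy.stats import mannwhitneyu

bigbang = [2, 5, 5, 3]             # BigBang group
derivation = [4, 5, 4, 5, 4, 4]    # Derivation group (one missing answer dropped)

u, p = mannwhitneyu(bigbang, derivation, alternative="two-sided")
print(u, p)  # U statistic for the BigBang sample and the two-sided p-value
```

With these values the p-value lies well above the usual 0.05 level, consistent with the "no" entries in the significance row.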
Group        Task 1   Task 2.1   Task 2.2   Task 2.3   Task 2.4   Task 2.5   Task 3
BigBang      2        3          3          3          3          3          3
BigBang      4        3          3          3          3          3          2
BigBang      3        4          4          4          4          4          4
BigBang      4        3          3          3          3          3          4
Median       3.5      3          3          3          3          3          3

Derivation   3        –          –          –          –          –          –
Derivation   2        2          2          2          2          2          2
Derivation   4        2          2          2          2          2          3
Derivation   4        4          4          4          4          4          4
Derivation   4        4          4          4          4          4          5
Derivation   3        4          4          4          4          4          4
Derivation   5        5          5          –          4          5          5
Median       4        4          4          4          4          4          4

1: very unmotivated; 2: unmotivated; 3: medium; 4: motivated; 5: very motivated; –: Subject gave no answer.

Table 3: Experimental Run 1: Overview of subjects' motivation.
Group        Task 1   Task 2.1   Task 2.2   Task 2.3   Task 2.4   Task 2.5   Task 3
BigBang      2        2          2          2          2          2          1
BigBang      2        4          4          4          4          4          –
BigBang      2        4          5          5          3          3          5
BigBang      4        4          4          4          4          3          4
Median       2        4          4          4          3.5        3          3

Derivation   3        4          4          3          4          4          3
Derivation   4        4          4          4          4          4          4
Derivation   2        4          4          3          4          2          5
Derivation   2        3          3          3          3          3          3
Derivation   2        4          4          3          4          2          3
Derivation   2        4          4          4          4          4          3
Derivation   2        3          5          4          4          3          2
Median       2        4          4          3          4          3          3

1: very difficult; 2: difficult; 3: medium; 4: easy; 5: very easy; –: Subject gave no answer.

Table 4: Experimental Run 1: Overview of subjects' estimated difficulty.
4 Experimental Run 2
In Tables 5 to 8, we present the results of the second experimental run. Note that for the estimation of motivation and difficulty we have no responses for the sixth and seventh comprehension tasks, because we neglected to adjust the questionnaire accordingly.
Group              Age    Gender   Experience with Crash     Experience
                                   Fault Tolerance Servers   with Modelling
BigBang            22     male     2                         3
BigBang            21     male     2                         2
BigBang            22     female   1                         3
BigBang            22     male     1                         1
Central tendency   21.8   –        1.5                       2.5

Derivation         22     male     2                         3
Derivation         30     male     1                         4
Derivation         21     male     1                         2
Derivation         23     female   1                         3
Derivation         22     male     2                         3
Derivation         28     male     1                         3
Central tendency   24.3   –        1                         3

Central tendency: arithmetic mean for age, median for experiences. Scales for experiences: 1:
very inexperienced; 2: inexperienced; 3: medium; 4: experienced; 5: very experienced.

Table 5: Experimental Run 2: Overview of subjects' background.
Group          Task 1   Task 2   Task 3
BigBang        2        7        3
BigBang        3        7        3
BigBang        3        2        3
BigBang        1        3        1
Median         2.5      5        0.5

Derivation     4        –        3
Derivation     4        4        1
Derivation     4        5        3
Derivation     4        4        3.5
Derivation     3        5        4
Derivation     4        4        3
Median         2.5      5        3

U value        10.5     11.5     8
significant?   no       no       no

Task 1 and 3: 4-point scale: 1: no clue; 2: some idea; 3: almost correct; 4: correct. Task 2: Number of correct answers; –: Subject gave no answer.

Table 6: Experimental Run 2: Correctness of tasks.
Group        Task 1   Task 2.1   Task 2.2   Task 2.3   Task 2.4   Task 2.5   Task 3
BigBang      5        4          4          4          4          4          3
BigBang      2        4          4          4          4          4          5
BigBang      4        4          4          4          4          4          4
BigBang      1        1          1          1          1          1          1
Median       3        4          4          4          4          4          3.5

Derivation   4        4          4          4          4          4          4
Derivation   4        4          4          4          4          4          4
Derivation   4        3          3          3          3          3          4
Derivation   3        3          3          3          3          3          3
Derivation   3        3          3          3          3          3          3
Derivation   4        4          4          4          4          4          4
Median       4        3.5        3.5        3.5        3.5        3.5        4

1: very unmotivated; 2: unmotivated; 3: medium; 4: motivated; 5: very motivated.

Table 7: Experimental Run 2: Overview of subjects' motivation.
Group        Task 1   Task 2.1   Task 2.2   Task 2.3   Task 2.4   Task 2.5   Task 3
BigBang      3        4          4          4          3          5          –
BigBang      –        4          4          3          4          3          3
BigBang      2        2          2          2          2          2          4
BigBang      1        1          1          1          1          1          1
Median       2        3          3          2.5        2.5        2.5        3

Derivation   2        4          4          4          2          4          4
Derivation   1        2          2          2          2          2          3
Derivation   3        3          3          3          3          3          4
Derivation   3        3          3          3          3          3          2
Derivation   2        5          5          5          2          5          5
Derivation   3        3          3          3          3          3          5
Median       2.5      3          3          3          2.5        3          4

1: very difficult; 2: difficult; 3: medium; 4: easy; 5: very easy; –: Subject gave no answer.

Table 8: Experimental Run 2: Overview of subjects' estimated difficulty.