As you point out, “this is America”
The goal is to teach, but we also want to be able to evaluate learning and ability. It seems like the ideal system would be one that chooses the best pedagogy for learning, and maps an effective method of evaluation on top of it. I am under the impression that group work is a better pedagogy than individual work (or at least that mixed group and individual work is better than individual work alone).
I think people say this more as a way to motivate team members. Oneness with the group is a powerful mindset, and it enables many actions that aren’t justified on purely individualistic grounds. But there are still individuals, and the best players can still be identified more or less reliably.
I’m worried this might be the case, but I’m hopeful too. We have a few variables to play with: n (the size of the class), k (the size of the group), and p (the number of projects).
We also have the option of using non-random groups to get better information. One way might be to keep the assignment random but add restrictions (e.g., random for the first project, no overlap between first- and second-project groups, no overlap between second- and third-project groups, and so on). Another might be to intentionally group together or keep apart certain students based on their grades on previous projects, in order to distinguish among them. It might be that putting the best students with the worst students tends to differentiate them more, or that intentionally balancing the groups tends to show which students are pulling extra weight and which are not.
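The “random with restrictions” option is straightforward to implement with rejection sampling: keep reshuffling until no pair of students from the previous project lands in the same group again. A minimal sketch (`nonoverlapping_groups` is a hypothetical helper, and it assumes the group size divides the class size):

```python
import random

def nonoverlapping_groups(students, k, prev_groups):
    # Collect every pair of students who shared a group last project.
    prev_pairs = set()
    for g in prev_groups:
        for a in g:
            for b in g:
                if a < b:
                    prev_pairs.add((a, b))
    # Rejection sampling: reshuffle until no pair from last project recurs.
    while True:
        pool = list(students)
        random.shuffle(pool)
        groups = [pool[i:i + k] for i in range(0, len(pool), k)]
        clash = any((min(a, b), max(a, b)) in prev_pairs
                    for g in groups for a in g for b in g if a != b)
        if not clash:
            return groups

# Demo: 25 students, groups of 5; the second project's groups share
# no pair of students with the first project's groups.
students = list(range(25))
first = nonoverlapping_groups(students, 5, [])
second = nonoverlapping_groups(students, 5, first)
print(second)
```

With 25 students in groups of 5, a random shuffle avoids all 50 previous pairs only rarely, so the loop may run a few thousand iterations, but each iteration is cheap and it finishes in well under a second.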
I’ve written a short program in Python to generate test data. I’ll add it here if anyone wants to play with it, or mock me for how badly I program.
[tab][code]import random

random.seed()

students = list()
# Create 25 'students', each a two-item list:
# - an index number (as an identifier), and
# - a random integer between 0 and 100 (their 'contribution').
for i in range(25):
    students.append([i, random.randint(0, 100)])

# Print the students, numbered 0 to 24 (Python starts list indexes at zero).
for i in students:
    print('Student %s: %s' % (i[0], i[1]))

projects = list()
projStudents = list()
# Create 10 projects, in which the students will be grouped randomly into groups of 5.
for i in range(10):
    for j in students:
        projStudents.append(j)
    random.shuffle(projStudents)
    # Each project will generate a list of groups with the scores they receive.
    grouplist = list()
    while projStudents:
        # There has to be a better way to do this, but this is a quick and
        # dirty way to pick 5 random students.
        group = [projStudents.pop() for _ in range(5)]
        # The group's grade is the average contribution of its members.
        grade = 0
        for j in range(5):
            grade += group[j][1]
        grade = grade / 5
        # The grade is added to the group as the 6th value.
        group.append(grade)
        grouplist.append(group)
    # Add this grouplist to the projects list, and go to the next project.
    projects.append(grouplist)

print('\nFinal outcomes : \n')
x = 0
for i in projects:
    # Print the projects, numbered 0 through 9.
    print('project %s:' % x)
    for h in i:
        # Print the group's student indexes, then its score.
        printlist = list()
        for idx, item in enumerate(h):
            if idx != 5:
                printlist.append(item[0])
        print(printlist)
        print('Score = ' + str(h[5]))
        print('')
    x += 1[/code][/tab]
And here’s some sample output:
[tab][code]Student 0: 86
Student 1: 91
Student 2: 8
Student 3: 10
Student 4: 15
Student 5: 33
Student 6: 82
Student 7: 17
Student 8: 99
Student 9: 18
Student 10: 24
Student 11: 20
Student 12: 95
Student 13: 35
Student 14: 90
Student 15: 91
Student 16: 35
Student 17: 99
Student 18: 40
Student 19: 13
Student 20: 25
Student 21: 8
Student 22: 3
Student 23: 15
Student 24: 80
Final outcomes :
project 0:
[16, 8, 2, 7, 9]
Score = 35.4
[18, 14, 23, 6, 1]
Score = 63.6
[17, 15, 10, 24, 4]
Score = 61.8
[0, 19, 13, 12, 3]
Score = 47.8
[11, 5, 20, 21, 22]
Score = 17.8
project 1:
[21, 8, 13, 5, 3]
Score = 37.0
[16, 12, 6, 19, 17]
Score = 64.8
[0, 24, 1, 22, 15]
Score = 70.2
[10, 7, 11, 4, 9]
Score = 18.8
[18, 23, 2, 20, 14]
Score = 35.6
project 2:
[11, 2, 20, 18, 5]
Score = 25.2
[21, 14, 12, 10, 19]
Score = 46.0
[8, 24, 1, 6, 0]
Score = 87.6
[7, 22, 13, 23, 16]
Score = 21.0
[17, 9, 15, 4, 3]
Score = 46.6
project 3:
[11, 1, 20, 10, 2]
Score = 33.6
[14, 17, 22, 24, 12]
Score = 73.4
[0, 5, 13, 19, 23]
Score = 36.4
[16, 21, 15, 8, 7]
Score = 50.0
[18, 4, 6, 3, 9]
Score = 33.0
project 4:
[1, 0, 24, 21, 22]
Score = 53.6
[2, 8, 16, 18, 23]
Score = 39.4
[12, 6, 7, 9, 17]
Score = 62.2
[11, 15, 3, 10, 5]
Score = 35.6
[4, 19, 13, 14, 20]
Score = 35.6
project 5:
[11, 15, 13, 7, 10]
Score = 37.4
[16, 5, 9, 4, 12]
Score = 39.2
[2, 18, 22, 3, 0]
Score = 29.4
[24, 1, 20, 21, 8]
Score = 60.6
[14, 19, 23, 17, 6]
Score = 59.8
project 6:
[23, 12, 3, 6, 20]
Score = 45.4
[14, 15, 2, 24, 16]
Score = 60.8
[19, 9, 22, 11, 8]
Score = 30.6
[21, 13, 7, 0, 18]
Score = 37.2
[4, 10, 17, 1, 5]
Score = 52.4
project 7:
[16, 5, 17, 13, 3]
Score = 42.4
[23, 6, 21, 2, 1]
Score = 40.8
[7, 20, 12, 8, 14]
Score = 65.2
[15, 0, 19, 11, 4]
Score = 45.0
[10, 18, 24, 9, 22]
Score = 33.0
project 8:
[15, 24, 23, 21, 5]
Score = 45.4
[18, 19, 4, 14, 16]
Score = 38.6
[13, 8, 1, 3, 17]
Score = 66.8
[0, 12, 7, 10, 9]
Score = 48.0
[6, 11, 20, 22, 2]
Score = 27.6
project 9:
[23, 13, 21, 4, 19]
Score = 17.2
[24, 9, 12, 10, 5]
Score = 50.0
[15, 22, 16, 8, 6]
Score = 62.0
[17, 11, 18, 3, 1]
Score = 52.0
[2, 14, 0, 20, 7]
Score = 45.2[/code][/tab]
This output lists the 25 students (numbered 0 through 24) and their ‘contribution’ scores. Grades are just the average of contribution scores (a gross simplification, but useful for present purposes). It then lists the 10 projects, numbered 0 through 9, showing the students in each group followed by the group score (which should be the average of those students’ contribution scores).
It’s an open question whether this models actual group interaction, but my approach right now is to see how much I can determine from a simple model, and then add complexity and tweak the solution to account for it.
The usefulness of this simple model is that we know each student’s contribution score, so we can see how well we can approximate that score by looking only at their group scores. If that’s possible, it should alleviate some of the complaints voiced here so far that looking at group scores alone is ‘vulgar’ or ‘unfair’. If we can closely approximate an individual character trait while looking only at group work, we can fairly evaluate individuals without sacrificing pedagogy.
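In this simple model the recovery step is just linear algebra: each group score is the average of its members’ contributions, so the group scores form a system of linear equations in the unknown contributions, and least squares solves it. A sketch using numpy (an assumption on my part; the script above is plain Python):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 25, 5, 10          # students, group size, projects

# Hidden 'true' contributions, 0-100, as in the script above.
contrib = rng.integers(0, 101, size=n).astype(float)

# One row per group: row[s] = 1/k if student s is in the group,
# so that row @ contrib is exactly the group's score (the members' average).
rows, scores = [], []
for _ in range(p):
    order = rng.permutation(n)
    for g in range(0, n, k):
        members = order[g:g + k]
        row = np.zeros(n)
        row[members] = 1.0 / k
        rows.append(row)
        scores.append(contrib[members].mean())
A, b = np.array(rows), np.array(scores)

# Least-squares estimate of each contribution from group scores only.
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print('max recovery error:', np.abs(est - contrib).max())
```

With 10 projects we get 50 group equations in 25 unknowns, which (for generic random groupings) pin down every contribution exactly in this noiseless model; the interesting question is how recovery degrades once noise or interaction effects are added.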