“When you’re taking Building Virtual Worlds, every two weeks we get peer feedback. We put that all into a big spreadsheet and at the end of the semester, you had three teammates per project, five projects, that’s 15 data points, that’s statistically valid. And you get a bar chart telling you on a ranking of how easy you are to work with, where you stacked up against your peers. Boy, that’s hard feedback to ignore.” - Randy Pausch, Last Lecture
Everyone has difficulty hearing certain types of feedback. Whether a review is effective and accepted depends on the review itself, its giver, and its receiver, so the question of objectivity comes up often.
360 reviews add more people to the mix to give a more legitimate view of a person's performance or fit. But this puts a heavy burden on everyone involved, and distinctive writing styles mean the responses are hardly anonymous.
Instead, performance evaluations can be simpler: a list of traits everyone gets rated on.
A system like this would need:
- an understanding of the minimum number of participants required for results to be meaningful
- strong support from HR and managers in interpreting results
- collection methods that remove the burden of articulating qualities from the reviewer, resembling popular trait tests that produce the final list of traits for them.
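The aggregation such a system implies can be sketched minimally in Python. Everything here is an illustrative assumption, not a prescribed design: the trait names, the 1–5 scale, and the minimum-reviewer threshold are all hypothetical.

```python
from statistics import mean

# Hypothetical input: each reviewer submits {trait: score on a 1-5 scale}.
ratings_for_alice = [
    {"easy_to_work_with": 4, "communication": 5},
    {"easy_to_work_with": 5, "communication": 4},
    {"easy_to_work_with": 3, "communication": 4},
]

MIN_REVIEWERS = 3  # assumed threshold below which results are withheld

def trait_averages(ratings, min_reviewers=MIN_REVIEWERS):
    """Average each trait's scores, but only if enough reviewers responded."""
    if len(ratings) < min_reviewers:
        return None  # too few data points to be meaningful
    traits = ratings[0].keys()
    return {t: mean(r[t] for r in ratings) for t in traits}

def percentile_rank(my_score, peer_scores):
    """Fraction of peers scoring at or below my_score --
    the 'where you stacked up against your peers' bar in Pausch's chart."""
    return sum(s <= my_score for s in peer_scores) / len(peer_scores)

averages = trait_averages(ratings_for_alice)
print(averages)  # e.g. {'easy_to_work_with': 4, 'communication': 4.33...}
print(percentile_rank(averages["easy_to_work_with"], [3.2, 3.8, 4.0, 4.5]))
```

Withholding results below a participant threshold is what keeps individual responses anonymous and the averages meaningful; the percentile ranking is what turns raw scores into the peer comparison described in the opening quote.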
With enough input, as Pausch says, that's hard feedback to ignore.