How does it work?

Learn how Cocertify measures student performance.


The grading phase is anonymous to avoid biases.


Each student grades several answers, and each answer is graded by several students.


Part of the student score depends on grading accuracy.


Other factors are analysed to detect abnormal behaviour.


To avoid biases

Cocertify redistributes answers anonymously for peer-grading. In other words, the students do not know the authors of the answers they’re grading.

Cocertify randomly selects which answers each student grades, making it very hard to devise a strategy that defeats the anonymity.


To get various points of view

Each student grades 5 papers, and each paper is graded between 3 and 5 times.

These multiple points of view are a factor of reliability (see reflexive).
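The redistribution described above can be sketched in a few lines. This is a minimal illustration, not Cocertify's actual algorithm: the greedy scheme, the function name, and the use of answer ids as stand-ins for anonymous papers are all assumptions.

```python
import random

def assign_answers(student_ids, grades_per_student=5, seed=None):
    """Hypothetical sketch of anonymous peer-grading redistribution.

    Each student grades up to `grades_per_student` answers, never their
    own and never the same answer twice. Graders only see answer ids,
    not authors, so the assignment stays anonymous.
    """
    rng = random.Random(seed)
    assignments = {s: [] for s in student_ids}
    # Put every answer in the pool `grades_per_student` times, so each
    # answer can be graded several times (3 to 5 in the text above).
    pool = [a for a in student_ids for _ in range(grades_per_student)]
    rng.shuffle(pool)
    for answer in pool:
        # Pick a random grader who is not the author, has remaining
        # capacity, and has not already been assigned this answer.
        candidates = [s for s in student_ids
                      if s != answer
                      and len(assignments[s]) < grades_per_student
                      and answer not in assignments[s]]
        if candidates:
            assignments[rng.choice(candidates)].append(answer)
    return assignments
```

Because the selection is random, no student can predict whose answer they will receive, which is what makes counter-strategies against the anonymity impractical.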


To motivate peer evaluation

Because each answer is graded several times, Cocertify can assess the overall consensus among the grades.

If a student's grades are consistently close to the consensus, they receive a better score.

If no consensus appears after several corrections, Cocertify gathers more grades to ensure reliability.
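The consensus mechanism above can be illustrated with a short sketch. The median as the consensus, the standard-deviation cutoff, and the linear accuracy score are assumptions chosen for clarity, not Cocertify's published formulas.

```python
from statistics import median, pstdev

def consensus_feedback(grades, spread_threshold=1.0):
    """Estimate the consensus for one answer's grades.

    Returns (consensus, needs_more): the median grade, and a flag that
    is True when the grades disagree too much (spread above an assumed
    `spread_threshold`), meaning more grades should be gathered.
    """
    consensus = median(grades)
    needs_more = pstdev(grades) > spread_threshold
    return consensus, needs_more

def grading_accuracy(student_grade, consensus, scale=5):
    """Score a grader by closeness to the consensus: 1.0 when exact,
    falling linearly to 0.0 at the far end of the grading scale."""
    return max(0.0, 1.0 - abs(student_grade - consensus) / scale)
```

A tight set of grades like [4, 4, 5] yields a clear consensus, while a scattered set like [1, 3, 5] triggers the request for additional grades.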


To prevent abnormal behaviour

An analyst tracks peer evaluation in Cocertify, monitoring behavioural alerts such as time spent or lack of consensus.

If a signal catches their attention, they can investigate further and take corrective action (gather more grades, contact a student, adjust a score, etc.).
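The kind of signals surfaced to the analyst could look like the sketch below. The session fields and thresholds are invented for illustration; the source does not specify Cocertify's actual alert rules.

```python
def behavioural_alerts(session, min_seconds=30, max_distance=2.0):
    """Hypothetical alert rules on one grading session.

    `session` is an assumed dict with 'seconds_spent' (time spent on
    each graded answer) and 'distance_to_consensus' (gap between each
    grade given and the consensus). Thresholds are arbitrary examples.
    """
    alerts = []
    if min(session["seconds_spent"]) < min_seconds:
        alerts.append("answer graded suspiciously fast")
    if max(session["distance_to_consensus"]) > max_distance:
        alerts.append("grade far from consensus")
    return alerts
```

Automated rules like these only raise flags; as described above, a human analyst decides whether to act on them.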


The data shows that, naturally, students' scores are correlated with staff scores. We show that this correlation increases when using Cocertify.
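The correlation referred to here is the standard Pearson correlation between the peer-assigned and staff-assigned scores, which can be computed as follows. This is a plain textbook formula, not a claim about Cocertify's internal tooling.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two score lists, e.g. student-given
    scores and staff-given scores for the same set of papers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)
```

A value near 1.0 means peer scores rank papers the same way staff scores do, which is the reliability claim being made.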

Thanks to the anonymity of the gradings, the multiplicity of points of view, and the accounting for grading accuracy, Cocertify is able to reflect student understanding accurately.