Our APIs learn from your existing content and create or classify the rest.
Our model is designed for the typical three-layer structure of an online educational curriculum: chapter → sub-chapter → concept / learning outcome. With accuracy above 95% at the sub-chapter level, we achieve accuracy above 70% at the concept level even when there's just one previous example of content tagged to it.
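The three-layer cascade above can be sketched as a routing problem: content is matched to a chapter, then a sub-chapter, then a concept. The snippet below is purely illustrative, with a toy curriculum and naive keyword overlap standing in for our learned classifier; the names and structure are assumptions, not our production API.

```python
# Illustrative sketch of chapter -> sub-chapter -> concept routing.
# Keyword overlap is a stand-in for the actual learned model.
CURRICULUM = {
    "Mechanics": {                                   # chapter
        "Newton's Laws": ["F = ma", "inertia"],      # sub-chapter -> concepts
        "Energy": ["kinetic energy", "potential energy"],
    },
}

def tag(content: str, curriculum=CURRICULUM):
    """Route content down the hierarchy to the best-matching concept."""
    words = set(content.lower().split())
    best, best_score = None, 0
    for chapter, subs in curriculum.items():
        for sub, concepts in subs.items():
            for concept in concepts:
                score = len(words & set(concept.lower().split()))
                if score > best_score:
                    best, best_score = (chapter, sub, concept), score
    return best
```

For example, `tag("A ball's kinetic energy doubles when its speed increases")` routes to the `("Mechanics", "Energy", "kinetic energy")` node.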
Right now, you either have to hire a large team of SMEs to grade open responses, or reduce the number of open-response questions you ask. Most exam questions are open-response, so cutting them makes the questions students practice on less representative of the questions asked on the exam.
But auto-grading MCQs is easy. So we designed models that convert any open-response question (math problems, physics explanations, chemistry equations, etc.) into a multiple-choice question by generating several relevant distractors. We measure the similarity of each distractor, and any that fall below our thresholds are rejected, assuring you a high-quality conversion of exam questions to MCQs.
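One plausible reading of this reject-by-similarity gate is sketched below. Token overlap stands in for our learned similarity metric, and the two-sided band (too dissimilar is implausible, too similar is effectively correct) plus all function names and thresholds are illustrative assumptions, not the production pipeline.

```python
# Illustrative sketch: filter candidate distractors by similarity to the
# correct answer. Jaccard token overlap is a stand-in for a learned metric.
def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def filter_distractors(answer, candidates, min_sim=0.1, max_sim=0.8):
    """Keep distractors that are plausible (similar enough to the answer)
    but not so similar that they are effectively correct."""
    return [c for c in candidates
            if min_sim <= jaccard_similarity(answer, c) <= max_sim]
```

Here a near-miss like "the force equals mass times velocity" survives as a distractor for "the force equals mass times acceleration", while an exact duplicate and an unrelated sentence are both rejected.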