There has been little evidence to suggest that adaptive learning (machine-guided learning that adapts to a student's observed strengths and weaknesses) is much better than traditional approaches. A new (2015) study continues this trend, finding no evidence that adaptive learning provides a benefit. (pdf)
The article does not identify the publisher or product, but it appears to be LearnSmart from McGraw-Hill. (pdf)
First, the authors provide a useful taxonomy of the adaptive-software market, summarized in the graphic below. The taxonomy is one of the more useful pieces of the article and may eventually deserve its own page:
The paper then details the results of a two-semester study of the performance of a publisher-supplied adaptive learning solution. The methodology uses a basic randomized controlled trial structure, with most variables held constant. The material was for a digital literacy course, and the example interaction shown suggests the outcomes targeted the lower levels of Bloom's taxonomy.
The results did not indicate any benefit to the adaptive condition. Here is a chart of the grade distributions in the two conditions:
Some caveats: the modules created seem to be neither difficult nor conceptual in nature. Nor was there room for a self-pacing benefit: once a student had mastered the material, they were done. Finally, the domain does not seem to require strict sequencing of concepts, which is a common use case for adaptive systems.
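The headline comparison here, whether the grade distributions differ between the adaptive and control conditions, is the kind of thing a chi-square test of homogeneity checks. A minimal sketch with placeholder counts (these are not the study's data, and the paper may well have used a different test):

```python
# Hypothetical sketch: comparing grade distributions between an adaptive
# condition and a control condition with a chi-square statistic.
# All counts below are invented placeholders, NOT the study's data.

def chi_square(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Placeholder grade counts per condition: A, B, C, D, F
adaptive = [12, 18, 20, 8, 4]
control  = [11, 19, 21, 7, 4]
stat = chi_square([adaptive, control])
# Degrees of freedom = (rows - 1) * (cols - 1) = 4; compare the statistic
# to the chi-square critical value (9.49 at alpha = 0.05). A small
# statistic, as here, means no detectable difference between conditions.
print(round(stat, 3))
```

With nearly identical distributions the statistic stays well below the critical value, which is the "no benefit" pattern the study reports.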
A blog post on the results (post)