Studiosity AI-driven Writing Feedback+

AI has been making great strides over the past 18 months, and it seems we’re about to experience another example of its capabilities via the advent of Studiosity’s AI-driven feedback service, which they’ve called Writing Feedback+. UNE is currently exploring the possibility of trialling Writing Feedback+; ongoing use will come at an added cost [TBD], but the details are yet to be approved/confirmed. Studiosity’s current Writing Feedback service draws on the expertise of 1100 graduates from across the world to provide feedback on drafts of student assessment work. At UNE, roughly 10% of our students currently use Studiosity, many of them from Health and Education.

Studiosity’s Writing Feedback+ AI system undertakes similar work. It captures a mass of data, but filters it by priority [based on student requests] to provide key pieces of feedback to students, so as not to overwhelm them. That is to say, it might capture 100 errors/concerns throughout a paper, but it will only show students 20 of those.
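
To make that capture-then-filter idea concrete, here’s a minimal, purely illustrative sketch in Python. Studiosity hasn’t published how Writing Feedback+ works internally, so every name here (FeedbackItem, select_feedback, the severity scale, the limit of 20) is a hypothetical stand-in; the point is simply that a large pool of detected issues gets ranked against the student’s requested priorities and capped at a manageable number.

```python
# Purely illustrative -- Studiosity's actual implementation is not public.
# All names and rules below are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    category: str   # e.g. "spelling", "grammar", "structure", "analysis"
    severity: int   # 1 (minor) to 5 (major), as judged by the detector
    comment: str

def select_feedback(items, requested_categories, limit=20):
    """Rank detected issues so that categories the student asked about
    come first, then by severity, and return at most `limit` of them."""
    ranked = sorted(
        items,
        key=lambda i: (i.category not in requested_categories, -i.severity),
    )
    return ranked[:limit]

# A paper might yield ~100 detected issues; the student only sees 20.
issues = [FeedbackItem("spelling", 2, f"Possible typo #{n}") for n in range(80)]
issues += [FeedbackItem("structure", 5, f"Paragraph {n} lacks a topic sentence")
           for n in range(20)]
shown = select_feedback(issues, requested_categories={"structure"})
print(len(shown))            # 20
print(shown[0].category)     # "structure" items rank first
```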

It’s also very fast! Whereas human feedback via Writing Feedback took on average 3 hours and 24 minutes, the AI takes on average just 1 minute and 5 seconds. I’ve had the opportunity to review the work that Writing Feedback+ can do and contrast it with some typical responses from humans in the Writing Feedback service [we were even able to compare feedback on the same papers]. My sense is that the AI is very similar in its feedback on fundamentals like spelling, grammar and structure [several people in the room mistakenly thought the AI feedback was the human feedback!], but I remain unconvinced about the quality and accuracy of its feedback on critical thinking and analysis. That said, improvements are being made continually, and the 1100-member staff of Writing Feedback are also working to provide quality assurance for the new system.

As noted, the system captures a mass of data, and students can tailor the feedback they request from the AI to their needs. In the long term, staff may be able to use this to gather data on the type and number of concerns appearing in draft work. For example, Unit Coordinators might draw on that mass of data to identify broader issues across a cohort, such as a pattern of students within their unit needing help structuring their essays [a sketch of this kind of reporting follows below]. In the trials they’ve run thus far, Studiosity are receiving almost identical feedback from students regarding feelings of confidence and satisfaction with the feedback/help received: student satisfaction with Writing Feedback is 94%, and student satisfaction with Writing Feedback+ is also 94%. That said, while student satisfaction is one measure of success, I’m not convinced it’s the best way to measure the quality of the feedback provided, and I believe more qualitative testing and analysis is needed [and is ongoing].
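
As a rough illustration of the cohort-level reporting mentioned above, here’s a short, self-contained Python sketch. Again, nothing here reflects Studiosity’s actual data model; the data and names are entirely hypothetical. The idea is just that counting feedback categories across a unit’s drafts would let a Unit Coordinator spot a widespread issue like essay structure.

```python
# Purely illustrative sketch of cohort-level reporting; all data and
# names are hypothetical, not Studiosity's actual data model.
from collections import Counter

def cohort_concerns(drafts):
    """Given one list of feedback-category labels per student draft,
    return categories ranked by how often they occur across the cohort."""
    return Counter(cat for draft in drafts for cat in draft).most_common()

drafts = [
    ["structure", "grammar", "structure"],
    ["structure", "spelling"],
    ["structure", "analysis"],
]
print(cohort_concerns(drafts))
# [('structure', 4), ('grammar', 1), ('spelling', 1), ('analysis', 1)]
# A count like this would flag structure as a unit-wide concern.
```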

Amid all of this, however, one of the biggest concerns is the loss of the human element. It has been hinted that Studiosity will eventually replace the bulk of their Writing Feedback work with Writing Feedback+, and it seems possible that they’ll cut their 1100-strong graduate staff down to 400. For those of you interested in the impact of AI on academia and its environs, here is a clear example of AI seemingly set to actively replace human work, and it may be worth paying close attention to what happens in this space.