SDC 2024 – Finalists

Islam Barchouch (INSA, Rennes – France)

Interpretation of semi-structured strokes for the design of a pen-based intelligent tutoring system

Abstract: This work presents a significant advance in the design of IntuiSketch, an intelligent tutoring system for learning through drawing on pen-based tablets. The system is specifically aimed at producing anatomy sketches to help students in health-related fields (medicine, physiotherapy, etc.). The main objective of this research is twofold. For the recognition phase, the system must be capable of interpreting semi-structured handwritten sketches in real time. To achieve this, it relies on a recognition engine based on a two-dimensional representation of the document, using the CD-CMG visual grammar rules. To help the student learn, the system must provide corrective or guidance feedback based on what the student has produced (in student mode) and in relation to what is expected. This feedback is generated by an intelligent tutor based on a knowledge graph coherently related to the properties of the anatomy sketches produced by the teacher (in author mode).
The work presented in this abstract aims to make the recognition of handwritten anatomical strokes more flexible, in order to introduce more tolerance and to defer feedback to key stages of the drawing process. This flexibility is needed because of recognition difficulties caused by elements that depend on a reference element drawn incorrectly by the user: even minor errors can lead to inaccuracies in the recognition process. To achieve this, we have extended the CD-CMG visual grammar language by introducing two categories of constraints into its interpretation rules. This extension classifies the shape, positioning, and geometric relationship constraints of the CD-CMG rules into two distinct categories: primary constraints, which are essential for a basic minimum interpretation of handwritten anatomical strokes and must be validated to ensure recognition of the produced drawing, and secondary constraints, which allow the intelligent tutor to evaluate the quality of the student's work. This enables the intelligent tutor to provide real-time and/or delayed feedback that is both qualitative and accurate, based on the drawing's properties.
This new approach allows IntuiSketch to recognise anatomy drawings efficiently, even when secondary constraints are not fully satisfied. The system can thus provide real-time feedback when primary constraints are not respected, allowing students to correct their work immediately and continue, as well as delayed feedback at key stages of the drawing process when secondary constraints are not respected, thereby avoiding interruptions to the student's production. This feedback is adapted to each student's individual achievements, promoting better and more effective learning experiences.
The development of our intelligent tutoring system was based on two specific anatomy drawing exercises, focusing respectively on the spine and the eye. These were chosen for their representativeness of different anatomical structures and for the specific challenges they present in recognising drawing details. Experiments are being carried out by the LP3C laboratory, a partner of the ANR SKETCH project (through a companion thesis in psychology and usage studies), and will be conducted with students to evaluate the performance of the IntuiSketch system. The objective is to analyse both the effectiveness of the system in recognising anatomical strokes and detecting errors, and the acceptability of the two-level feedback: immediate feedback for major errors and delayed feedback for minor errors. This methodological approach will enable an in-depth evaluation of the relevance and usefulness of the IntuiSketch system in the specific context of learning anatomy through drawing.
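The primary/secondary constraint scheme described above might be sketched as follows. This is a minimal illustration only, with all names hypothetical; the actual CD-CMG engine and its rule language are not reproduced here. The key idea is that violated primary constraints block recognition and trigger immediate feedback, while violated secondary constraints leave recognition intact and only queue delayed, quality-oriented feedback.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Constraint:
    name: str
    check: Callable[[dict], bool]  # evaluates a shape/positioning predicate
    primary: bool                  # True = required for basic recognition

@dataclass
class InterpretationResult:
    recognised: bool
    immediate_feedback: List[str] = field(default_factory=list)  # primary violations
    delayed_feedback: List[str] = field(default_factory=list)    # secondary violations

def interpret(stroke: dict, constraints: List[Constraint]) -> InterpretationResult:
    """Classify constraint violations into immediate vs. delayed feedback."""
    result = InterpretationResult(recognised=True)
    for c in constraints:
        if c.check(stroke):
            continue
        if c.primary:
            # A failed primary constraint blocks recognition entirely.
            result.recognised = False
            result.immediate_feedback.append(f"Fix now: {c.name}")
        else:
            # A failed secondary constraint is only reported at a key stage.
            result.delayed_feedback.append(f"Review later: {c.name}")
    return result

# Toy example: a vertebra outline must be closed (primary), and should be
# roughly symmetric (secondary: graded for quality rather than blocking).
constraints = [
    Constraint("closed outline", lambda s: s["closed"], primary=True),
    Constraint("left-right symmetry", lambda s: s["symmetry"] > 0.8, primary=False),
]

r = interpret({"closed": True, "symmetry": 0.6}, constraints)
print(r.recognised, r.delayed_feedback)
```

Here the sketch is still recognised because the only failed constraint is secondary, so the tutor can defer the symmetry remark to a key stage instead of interrupting the student.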

Rob Schmidt (Athabasca University – Canada)

Authorship Forensics Portal

Abstract: It has become increasingly important for many facets of society to be able to distinguish documents written by humans from those generated by generative artificial intelligence platforms such as OpenAI's ChatGPT. The Authorship Forensics Portal (AFP) is a web-based suite of automated tools that uses Convolutional Neural Networks (CNNs) based on image recognition technology, in combination with Statistical Natural Language Processing (SNLP), to create predictive models. These models can help determine whether future documents were written by ChatGPT or by a human. Preliminary research using the same methodologies as the AFP has demonstrated a significant ability to distinguish between human-written and ChatGPT-written text. In an experiment consisting of 212 human-written documents and an equal number of documents written by ChatGPT 3.5 and ChatGPT 4, synthesized from the original paper titles, the models successfully distinguished human from ChatGPT authorship with a top precision of 0.9806 (F0.5 score of 0.96). The AFP not only lets us perform these types of experiments faster and more efficiently, but could also help democratize access to advanced authorship attribution tools, making them accessible to users without extensive backgrounds in machine learning or programming. This capability could significantly broaden the scope and scale of authorship verification efforts, enhancing integrity in digital content creation.
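The F0.5 score reported above is the general F-beta measure with beta = 0.5, which weights precision more heavily than recall; a quick sketch of the standard formula is below. The recall value used in the example is inferred for illustration only, since the abstract does not report it.

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Standard F-beta measure: F_b = (1 + b^2) * P * R / (b^2 * P + R)."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With the reported precision of 0.9806, an F0.5 of ~0.96 corresponds to a
# recall of roughly 0.886 (inferred, not stated in the abstract).
print(round(f_beta(0.9806, 0.886), 2))  # → 0.96
```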