Each year in the UK alone, it is estimated that over 150,000 people suffer a stroke. Of these, a significant number will experience Apraxia or Action Disorganisation Syndrome (AADS). Because this impairs their ability to complete simple daily tasks, such as making a cup of tea or cleaning their teeth, it is a major obstacle to independent living. In the EU CogWatch project, researchers in EECE are working with psychologists and engineers to develop interactive technology that retrains patients to overcome this problem. Sensors such as accelerometers, gyroscopes and force-sensitive resistors are attached to the objects involved in these tasks, and their outputs are combined with hand coordinates obtained from video processing so that a patient’s actions can be recognised automatically. This information is then used to monitor the patient’s progress through the task. If the patient makes an error, or hesitates for too long, the system detects this and provides appropriate cues and feedback.
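To make the pipeline concrete, the sketch below shows one plausible way of combining object-mounted sensor readings and video hand coordinates into a feature vector, and of flagging a hesitation when no action is recognised within a timeout. This is an illustration, not the project's actual code; the data layout, names and timeout value are all assumptions.

```python
import time
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorFrame:
    """One time-aligned snapshot of the sensor channels (hypothetical layout)."""
    accel: Tuple[float, float, float]   # object accelerometer (x, y, z)
    gyro: Tuple[float, float, float]    # object gyroscope (x, y, z)
    fsr: Tuple[float, float, float]     # force-sensitive resistor readings
    hand_xy: Tuple[float, float]        # hand coordinates from video processing

def to_feature_vector(frame: SensorFrame) -> List[float]:
    """Flatten one frame into the feature vector passed to the action recogniser."""
    return [*frame.accel, *frame.gyro, *frame.fsr, *frame.hand_xy]

def hesitated(last_action_time: float, timeout_s: float = 10.0) -> bool:
    """True if no action has been recognised within the timeout, so a cue is due."""
    return time.time() - last_action_time > timeout_s
```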

The key computer algorithms for recognising a patient’s actions and tracking his or her progress through a task draw on EECE’s long-standing expertise in automatic speech recognition and language processing. The interaction between a patient and the CogWatch system closely resembles that between a user and a spoken dialogue system. As in speech recognition, the raw data to be recognised is sequential and highly variable, so statistical methods based on hidden Markov models are applicable. However, there are significant differences between action recognition and speech recognition. In speech, words occur in sequences constrained by grammar, but if the patient is using both hands, actions may occur simultaneously, or at least overlap in time, and the constraints on the order of actions are not well understood. For these reasons the CogWatch action recogniser is realised as an array of detectors, each responsible for recognising a single action. The outputs from these detectors are passed to a Task Model, which keeps track of the patient’s progress through the task. In the CogWatch system, the Task Model is a statistical dialogue model based on a Markov Decision Process.
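A minimal sketch of this architecture is given below, under some simplifying assumptions: the HMM scoring inside each detector is replaced by an injected callable so the example stays self-contained, and the MDP policy is replaced by a stand-in that simply prompts for any outstanding action. All class and action names are illustrative.

```python
from typing import Callable, Iterable, Optional

class ActionDetector:
    """Detects one action (e.g. 'pour water into mug') from the feature stream.

    In the real system the scorer would be an HMM log-likelihood; here it is
    injected as a plain callable to keep the sketch runnable.
    """
    def __init__(self, action: str, scorer: Callable[[list], float], threshold: float):
        self.action = action
        self.scorer = scorer
        self.threshold = threshold

    def detect(self, features: list) -> Optional[str]:
        return self.action if self.scorer(features) > self.threshold else None

class TaskModel:
    """Tracks progress through the task; a stand-in for the MDP dialogue model."""
    def __init__(self, required_actions: Iterable[str]):
        self.remaining = set(required_actions)

    def update(self, detections: Iterable[Optional[str]]) -> None:
        for action in detections:
            if action is not None:
                self.remaining.discard(action)

    def next_cue(self) -> str:
        # A real MDP policy would choose the cue maximising expected reward;
        # this stand-in just prompts for any action not yet completed.
        return f"Next: {next(iter(self.remaining))}" if self.remaining else "Task complete"

# Usage: every detector runs over each incoming feature vector in parallel,
# so overlapping (two-handed) actions can be recognised independently.
detectors = [
    ActionDetector("lift kettle", scorer=lambda f: f[0], threshold=0.5),
    ActionDetector("pour water into mug", scorer=lambda f: f[1], threshold=0.5),
]
task = TaskModel(d.action for d in detectors)
task.update(d.detect([0.9, 0.1]) for d in detectors)
print(task.next_cue())  # -> "Next: pour water into mug"
```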

Although EECE’s focus is on the technological aspects of the CogWatch project, the key research questions are psychological. In particular, can interacting with the CogWatch system improve a patient’s long-term ability to carry out everyday tasks independently? The answer will come from trials currently being conducted with patients and control subjects in the UK and Germany.

An article describing the CogWatch project won the Wellcome Trust Science Writing Prize 2013 and was recently featured in The Guardian.

In addition to the School of Electronic, Electrical and Computer Engineering and the School of Psychology at the University of Birmingham, the partners in the CogWatch project are Technische Universität München, Universidad Politécnica de Madrid, The Stroke Association, Headwise, RGB Medical Devices and the BMT Group.


Figure 1: A milk jug fitted with a CogWatch Instrumented Coaster (CIC) (left). The CIC (right) consists of a three-axis accelerometer to detect motion, three force-sensitive resistors to detect changes in weight, and a Bluetooth module to communicate the sensor outputs to the CogWatch system.
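Purely for illustration, a packet from such a device might be unpacked as follows. The wire format shown is an assumption made for the example, not the CIC's documented protocol.

```python
import struct

# Hypothetical CIC payload layout (an assumption, not the real format):
# three little-endian int16 accelerometer axes, then three uint16 FSR readings.
CIC_FORMAT = "<3h3H"

def parse_cic_packet(payload: bytes) -> dict:
    """Decode one 12-byte sensor packet received over Bluetooth."""
    ax, ay, az, f1, f2, f3 = struct.unpack(CIC_FORMAT, payload)
    return {"accel": (ax, ay, az), "fsr": (f1, f2, f3)}

# Example: a packet captured while the jug rests on the table.
print(parse_cic_packet(struct.pack(CIC_FORMAT, 0, 0, 1024, 300, 310, 305)))
```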



Figure 2: Example outputs from sensors attached to a mug (top two graphs) and kettle (bottom two graphs) for the action “pour water into mug”. The data from the force-sensitive resistors (FSRs) attached to the mug (second graph from top) show the increase in the weight of the mug as it is filled. The data from the kettle FSRs (bottom graph) clearly show the points where the kettle is lifted from, and then returned to, the table.
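The lift and return points visible in the bottom graph can be recovered from the summed FSR signal with a simple threshold crossing, as sketched below. This is an illustrative assumption; the real system may filter or debounce the signal first.

```python
def lift_and_return_points(fsr_weight, lift_threshold):
    """Find sample indices where an object leaves and returns to the table.

    fsr_weight: sequence of summed FSR readings (proportional to the weight
    resting on the table); a drop below lift_threshold means the object was
    lifted, and a rise back above it means the object was set down again.
    """
    events = []
    on_table = True
    for i, w in enumerate(fsr_weight):
        if on_table and w < lift_threshold:
            events.append(("lifted", i))
            on_table = False
        elif not on_table and w >= lift_threshold:
            events.append(("returned", i))
            on_table = True
    return events

# Example: the kettle weighs less on return because water was poured out.
print(lift_and_return_points([820, 815, 5, 3, 4, 640, 650], lift_threshold=100))
# [('lifted', 2), ('returned', 5)]
```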
