By now, most of us are aware of the sequences of rules (known as algorithms) that determine what we see (and don’t see) when we use social media and other online services. While much of the public debate on social media algorithms focuses on their possible effects on our politics, research by Professor Sylvie Delacroix at the University of Birmingham is revealing effects that may run much deeper than that: damage to our individual ‘intuitive intelligence’.
Professor Delacroix, a professor of law and ethics and a member of the Institute for Interdisciplinary Data Science at the University of Birmingham, recently spoke to the online publication CScience about her forthcoming book, Habitual Ethics?, whose final chapter she recently outlined as part of the Montreal Speaker Series in the Ethics of AI.
Intuitive intelligence is the intelligence we rely on before we even begin to think about a problem or challenge. For example, a firefighter may intuitively sense the right moment to enter a burning building. This is different from ‘deliberative intelligence’, whereby a person weighs different forms of evidence before reaching a reasoned decision.
According to Professor Delacroix, we hone the habits that underlie our intuitive intelligence in part through fortuitous, novel encounters. While these encounters provide us with an opportunity to re-examine our habits, the algorithms we encounter when using online services have the opposite effect. This is not only because they serve up familiar content, which reinforces rather than challenges the intuitive intelligence built on our habits. It is also because these algorithms are not designed to allow for constructive feedback loops. In ‘offline’ environments, the way we shape, and are shaped by, the environment we inhabit allows us to learn from that environment.
This is not the case in online environments that are optimised to maximise user engagement: not only is the way these environments shape us opaque, but there is also no room for a ‘return movement’ that would allow us to shape, and experiment with, these environments. Given the importance of such experimentation for the process that shapes our intuitive intelligence, this is bad news.
Fortunately, Professor Delacroix believes there are practical things we can do to protect our intuitive intelligence from the deadening effects of everyday algorithms, and to keep developing it. The first is to build tools that highlight results from ‘phantom’ optimisation systems and allow us to experiment with such alternative systems in contexts that are not value-laden. This could be as simple as showing viewers how their Netflix recommendations are compiled and which titles alternative recommendation systems would highlight. In value-laden contexts, giving end-users insight into the outputs that differently trained algorithms would generate matters if we are serious about nurturing the long-term contestability of systems deployed in areas such as health, justice or education. This is outlined in more detail in ‘Diachronic Interpretability and Machine Learning Systems’.
Professor Delacroix’s second area of focus is ‘data trusts’. These trusts would serve as trusted intermediaries between us and the online services we rely on in our everyday lives. Allowing users to pool the rights they hold over their data within data trusts could give us all greater power to set the terms and conditions that govern service providers’ data collection, and to choose what data we share and for what purposes. By empowering increasingly disenfranchised groups to choose a data trust that matches their attitude to risk and their aspirations (and to change trusts as those evolve), data trusts have the potential to reverse our deeply ingrained ‘habit of passivity’ when it comes to data governance. The resulting opportunities for renewed experimentation and debate are key to supporting the intuitive intelligence on which many of our attitudes to data rely.