Imagine walking into a crowded room and recognising everyone gathered there – even though you’ve never met them before. How great would it be to know who was a lottery winner and who was a convicted fraudster?
Thanks to the rapid advancement of technology, this isn’t a far-fetched scenario. Face recognition systems are already in use and the first version of Google Glass ‘spectacles’ is due out soon. But is it a scenario we want?
Even now, we live in a world where computers record our movements and actions. Our mobile phones track where we go and who we speak to; CCTV cameras capture us as we go shopping. With more sensors and devices springing up all the time, ordinary people’s privacy is being increasingly invaded.
Before long, everything we say and do will be recorded. As a society, we haven’t yet decided what we want to do with this mounting collection of data. The general consensus is that our right to privacy should be balanced with law enforcers’ right to uncover crime and terrorist activity.
‘Clearly we should be willing to give up a bit of privacy for security,’ says Mark Ryan, Professor of Computer Security at the School of Computer Science. ‘We are already used to doing that to a degree, such as having our bags searched at airports; but how much privacy are we prepared to give up? If someone said “we’re going to put internet-connected cameras in your bedroom”, most of us would consider that unreasonable. So it’s a question of balance.’
Our computer scientists, led by Mark, are developing technologies that would support this balance. For example, we are working on a system that would allow the security services to read a proportion of people’s emails, but without knowing who wrote them. Only if they spotted something sinister would they be able to trace the sender and recipient. The number of emails read, the number of times the sender and recipient were uncovered, and the crimes solved as a result would all be publicly verifiable.
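The ‘publicly verifiable’ counts are the accountability idea at the heart of this scheme. The sketch below is our own illustration of that idea only, not the team’s actual design (class and field names are invented): accesses are recorded in a toy append-only hash chain, so anyone holding the log can verify how many emails were read and how many senders were traced. A real system would additionally use cryptography to force the authorities to log every access.

```python
import hashlib
import json

class PublicAuditLog:
    """Toy append-only hash chain: each entry commits to all previous
    entries, so the number of recorded accesses is publicly checkable."""

    def __init__(self):
        self.entries = []
        self.head = "genesis"

    def record(self, event):
        # Each entry points at the current head; the new head is the
        # hash of the entry, chaining everything together.
        entry = {"event": event, "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        # Recompute the chain from scratch; any tampering breaks it.
        head = "genesis"
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            head = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return head == self.head

log = PublicAuditLog()
log.record({"action": "read_email", "id": 17})   # content read, sender unknown
log.record({"action": "deanonymise", "id": 17})  # sender traced after suspicion

reads = sum(1 for e in log.entries if e["event"]["action"] == "read_email")
print(reads)        # 1
print(log.verify()) # True
```

Because every entry commits to the one before it, the authorities cannot quietly delete or rewrite an access after the fact without the public copy of the log failing verification.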
In this way, we believe our work will prove invaluable in helping to solve crime and prevent terror attacks, while at the same time limiting the authorities’ ability to ‘spy’ on their citizens.
What we still need to decide, as a society, is whether we want a stranger to know everything about us before we’ve even said ‘hello’. ‘My work is to develop privacy-balancing technologies,’ says Mark, ‘but in order to do this, we need to figure out what sort of technologies we want. Otherwise, there’s a danger that we’ll end up developing and building the wrong things, because we haven’t struck the right balance between security and privacy. So the discussion and the development of these technologies have to be done in tandem.’
A team led by Mark has devised an e-voting system, called Caveat Coercitor, that can identify and monitor votes that may have been cast under coercion, and would alert the authorities to such votes. For instance, a coercer might change a legitimate vote by installing malware on the victim’s computer, or vote on their behalf by stealing their voting password from the post. The system would enable the authorities to analyse the impact of such votes on the election result; the coercers could then be traced and prosecuted.
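One published idea behind Caveat Coercitor is that voters may re-vote, and if distinct votes appear under the same voting credential, that credential is flagged as possible evidence of coercion. The toy below is our simplified reading of that idea (credential names and data shapes are invented), showing how flagged credentials could be identified so their impact on the result can be assessed:

```python
from collections import defaultdict

# Ballots as (credential, vote) pairs. Two *different* votes under the
# same credential suggest someone other than the voter also used it.
ballots = [("cred1", "yes"), ("cred2", "no"),
           ("cred3", "yes"), ("cred3", "no")]

by_cred = defaultdict(set)
for cred, vote in ballots:
    by_cred[cred].add(vote)

# Credentials with more than one distinct vote are coercion evidence.
coerced = {c for c, votes in by_cred.items() if len(votes) > 1}
print(sorted(coerced))  # ['cred3']
```

Having identified the suspect credentials, the authorities can check whether excluding those votes could change the outcome, which is how the system lets them ‘analyse the impact’ of coercion on the result.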
‘With e-voting, you want your vote to be private,’ explains Mark. ‘For an election to be free and fair, it’s important no one knows how you voted. But, obviously, your vote isn’t completely unknown, otherwise it couldn’t be counted. So we want the data to be readable in some ways, but not in others. We want the vote to be counted, but not associated with the voter.’
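The property Mark describes, a vote that is counted but not associated with the voter, is achieved in real systems with cryptographic mix networks or homomorphic encryption. The fragment below is only a plain-language illustration of the unlinkability idea, not a secure design: identities are stripped from the ballots and the remainder shuffled, so the tally is public while the voter–vote link is broken.

```python
import random
from collections import Counter

cast = [("alice", "yes"), ("bob", "no"), ("carol", "yes")]

ballots = [vote for _, vote in cast]  # strip voter identities
random.shuffle(ballots)               # break any link via ballot order

tally = Counter(ballots)
print(tally["yes"], tally["no"])      # 2 1
```

In a genuine mixnet the ballots are encrypted and re-randomised by several independent servers, so no single party ever sees both an identity and a readable vote, but the end result is the same: the data is ‘readable in some ways, but not in others’.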
Making smart technology a whole lot smarter
We have proved that a lot of ‘smart’ technology is, in fact, not-so-smart because it isn’t secure: Dr Flavio Garcia has used reverse engineering and cryptanalysis to crack the codes of smart cards used for controlling access to official buildings and transport systems around the world, thereby demonstrating they could be cloned by criminals.
Dr Tom Chothia has used similar methods to demonstrate that the Basic Access Control protocol on e-passports was flawed, leaving it vulnerable to a replay attack that makes it possible to trace a passport.
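The traceability attack works because a flawed passport answers a replayed message differently depending on whether the message was built for its own key. The simulation below is a heavily simplified sketch of that behaviour (real BAC derives 3DES keys from the passport’s machine-readable zone; the class, byte lengths, and error strings here are all invented), capturing only the distinguishable-error flaw:

```python
import hmac, hashlib, os

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

class FlawedPassport:
    """Checks the MAC before the nonce, and leaks which check failed."""

    def __init__(self):
        self.key = os.urandom(16)     # stands in for the BAC session key
        self.current_nonce = None

    def challenge(self):
        self.current_nonce = os.urandom(8)
        return self.current_nonce

    def respond(self, message):
        nonce, tag = message[:8], message[8:]
        if not hmac.compare_digest(tag, mac(self.key, nonce)):
            return "error: bad MAC"    # message was for a different passport
        if nonce != self.current_nonce:
            return "error: bad nonce"  # our key, stale session -> traceable!
        return "ok"

def is_target(passport, recorded_msg):
    passport.challenge()  # start a fresh session, then replay the old message
    return passport.respond(recorded_msg) == "error: bad nonce"

# Eavesdrop one legitimate exchange with the target passport.
target, other = FlawedPassport(), FlawedPassport()
nonce = target.challenge()
recorded = nonce + mac(target.key, nonce)

print(is_target(target, recorded))  # True  -> this is the target
print(is_target(other, recorded))   # False -> a different passport
```

Because the two error responses are distinguishable, an attacker who once recorded a message near a victim can later test any passport that comes into radio range and learn whether it is the victim’s, which is exactly what makes the passport traceable.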
The main reason a lot of smart technology is insecure is that manufacturers are loath to make the designs of their ciphers public, in order to prevent rival companies from copying their products. But keeping the designs secret doesn’t prevent them from being hacked. And if ‘goodies’ – such as university computer scientists – can do it, so too can ‘baddies’.
As an indirect result of the work done by Flavio and Tom, more companies are willing to put their designs in the public domain, thus allowing security weaknesses to be identified and put right.
We are also developing more secure access control systems (ACSs) than the ones currently in place. ACSs are used to regulate access to resources such as files, database entries and web pages. Because computers are increasingly interconnected, it’s important to prevent malicious users from gaining access to these resources.
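At its simplest, an ACS is a policy mapping principals and resources to permitted actions, with everything else denied by default. The sketch below is a minimal illustration of that check (the names and resources are invented, and real ACSs add roles, delegation, and audit on top):

```python
# Policy: (principal, resource) -> set of permitted actions.
ACL = {
    ("alice", "/payroll.db"): {"read", "write"},
    ("bob",   "/payroll.db"): {"read"},
}

def is_allowed(principal, resource, action):
    # Default deny: anything not explicitly granted is refused.
    return action in ACL.get((principal, resource), set())

print(is_allowed("bob", "/payroll.db", "read"))   # True
print(is_allowed("bob", "/payroll.db", "write"))  # False
```

The default-deny rule is the important design choice: a malicious user probing for resources that the policy never mentions gets nothing, rather than falling through to some permissive default.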