Season Four: Brain and Mental Health
Season Three: The History of Pandemics
Season Two: Climate Change
Season One: Artificial Intelligence
Special Episode: A brief history of Quantum Computing
Copyright: © University of Oxford
Once we believed that the world around us behaved according to the laws of classical mechanics, and it took us hundreds of years to work out that actually something else was going on. Quantum computing offers what we believe to be the best way to process information based on the laws of physics as we now know them. But how did we discover that quantum mechanics could offer such developments in computing? And why did this realm remain hidden for so long?
For this special episode of Futuremakers, Peter Millican, Professor of Philosophy, set out to discover the truth about a global race to develop the world’s first scalable quantum computer. He met a diverse range of researchers, who gave him their thoughts on the powerful next realm of computation their work opens up, via the fundamental building blocks, to the ultimate goal of a truly universal quantum computer. Keep listening to find out why there's a race to create this technology, if Oxford's researchers believe we'll ever achieve our goal, and what it could mean for society if we did.
In the final episode of our series, we’re looking back at the themes we’ve discussed so far, and forward into the likely development of AI. Professor Peter Millican will be joined by Professor Gil McVean, to further investigate how big data is transforming healthcare, by Dr Sandra Wachter, to discuss her recent work on the need for a legal framework around AI, and also by Professor Sir Nigel Shadbolt on where the field of artificial intelligence research has come from, and where it’s going. To conclude, Peter will be sharing some of his views on where humanity is heading with AI, when you’ll also hear from his final guest, Azeem Azhar, host of the Exponential View podcast.
Futuremakers will be taking a short break now, but we’ll be back with series two in the new year, when we’ll be taking on another of society’s grand challenges: building a sustainable future. Before then we’ll also be publishing a special one-off episode on Quantum Computing and the global opportunities, and risks, it could present.
To read more about some of the key themes in this episode, you can find Sandra Wachter’s recent papers below.
A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829
Explaining Explanations in AI: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3278331
Counterfactual explanations without opening the black box: automated decisions and the GDPR: https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf
In the penultimate episode of series one, we’re looking at the development of AI across the globe. China has set itself the challenge of being the world’s primary innovation centre by 2030, a move forecast to generate a 26% boost in GDP from AI related benefits alone, and some claim they’re already leading the way in many areas. But how realistic is this aim when compared to AI research and development across the world? And if China could dominate this field, what are the best, and worst, case scenarios for China itself, for AI technology, and for the rest of the planet?
Join our host, philosopher Peter Millican, as he explores this topic with Mike Wooldridge, Head of Oxford’s Department of Computer Science; Xiaorong Ding, a post-doctoral researcher who’s studied and worked at several of China’s leading universities and companies; and Sophie-Charlotte Fischer, a visiting researcher at the Future of Humanity Institute, and a PhD Candidate whose dissertation project focusses on the development of AI in China and the US.
So far in the series we’ve heard that artificial intelligence is becoming ubiquitous and is already changing our lives in many ways, from how we search for and receive information, to how it is used to improve our health and the nature of the ways we work. We’ve already taken a step into the past and explored the history of AI, but now it’s time to look forward. Many philosophers and writers over the centuries have discussed the difficult ethical choices that arise in our lives. As we hand some of these choices over to machines, are we confident they will reach conclusions that we can accept? Can, or should, a human always be in control of an artificial intelligence? Can we train automated systems to avoid catastrophic failures that humans might avoid instinctively? Could artificial intelligence present an extreme, or even an existential threat to our future?
Join our host, philosopher Peter Millican, as he explores this topic with Allan Dafoe, Director of the Centre for the Governance of AI at the Future of Humanity Institute; Mike Osborne, co-director of the Oxford Martin programme on Technology and Employment, who joined us previously to discuss how AI might change how we work; and Jade Leung, Head of Partnerships and researcher with the Centre for the Governance of AI.
Around the world, automated bot accounts have enabled some government agencies and political parties to exploit online platforms: dispersing messages, using keywords to game algorithms, and discrediting legitimate information on a mass scale. Through this they can spread junk news and disinformation; exercise censorship and control; and undermine trust in the media, public institutions and science. But is this form of propaganda really new? If so, what effect is it having on society? And is the worst yet to come as AI develops?
Join our host, philosopher Peter Millican, as he explores this topic with Rasmus Nielsen, Director of Oxford’s Reuters Institute for the Study of Journalism; Vidya Narayanan, post-doctoral researcher in Oxford’s Computational Propaganda Project; and Mimie Liotsiou, also a post-doctoral researcher on the Computational Propaganda project who works on online social influence.
Many developments in science are achieved through people being able to ‘stand on the shoulders of giants’, and in the history of AI two giants in particular stand out: Ada Lovelace, who inspired visions of computer creativity, and Alan Turing, who conceived machines which could do anything a human could do. So where do their stories, along with those of calculating engines, punched card machines and cybernetics, fit into where artificial intelligence is today?
Join our host, philosopher Peter Millican, as he explores this topic with Ursula Martin, Professor at the University of Edinburgh and a member of Oxford's Mathematical Institute, Andrew Hodges, Emeritus Fellow at Wadham, who tutors for a wide range of courses in pure and applied mathematics, and Jacob Ward, a historian of science, technology, and modern Britain and a Postdoctoral Researcher in the History of Computing.
As chatbots and virtual assistants become an ever-present part of our world, and algorithms increasingly support decision-making, people working in this field are asking questions about the bias and balance of power in AI. With the make-up of teams designing technology still far from diverse, is this being reflected in how we humanise technology? Who are the people behind the design of algorithms, and are they reinforcing society’s prejudices through the systems they create?
Join our host, philosopher Peter Millican, as he explores this topic with Gina Neff, Senior Research Fellow and Associate Professor at the Oxford Internet Institute, Carissa Véliz, a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities, and Siân Brooke, a DPhil student at the Oxford Internet Institute focussed on the construction of gendered identity on the pseudonymous web.
With AI algorithms now able to mine enormous databases and assimilate information far quicker than humans can, we’re able to spot subtle effects in health data that could otherwise have been easily overlooked. So how are these tools being developed and used? What does this mean for medical professionals and patients? And how do we decide whether these algorithms are making things better or worse?
Join our host, philosopher Peter Millican, as he explores this topic with Alison Noble, Technikos Professor of Biomedical Engineering in the Department of Engineering Science, Paul Leeson, Professor of Cardiovascular Medicine at the University of Oxford, and a Consultant Cardiologist at the John Radcliffe Hospital, and Jessica Morley, a Technology Advisor to the Department of Health, leading on policy relating to the Prime Minister's Artificial Intelligence Mission.
AI is already playing a role in the finance sector, from fraud detection, to algorithmic trading, to customer service, and many within the industry believe this role will develop rapidly within the next few years. So what does this mean for both the people that work in this sector, and for the role banking and finance plays in society?
Join our host, philosopher Peter Millican, as he explores this topic with Professor Stephen Roberts, Royal Academy of Engineering and Man Group Professor of Machine Learning, Professor Nir Vulkan, a leading authority on e-commerce and market design, and on applied research and teaching on hedge funds, and Jannes Klaas, author of 'Machine Learning for Finance: Data algorithms for the markets and deep learning from the ground up for financial experts and economics'.
Our lives are increasingly shaped by automated decision-making algorithms, but do those have in-built biases? If so, do we need to tackle these, and what could happen if we don’t?
Join our host, philosopher Peter Millican, as he explores this topic with Dr Sandra Wachter, a lawyer and Research Fellow at the Oxford Internet Institute, Dr Helena Webb, a Senior Researcher in the Department of Computer Science, and Dr Brent Mittelstadt, a philosopher also based at the Oxford Internet Institute.
In 2013 two Oxford academics published a paper entitled “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, estimating that 47% of U.S. jobs were at risk of automation. Since then, numerous studies have emerged, arriving at very different conclusions. So where do these estimates diverge, and where do we think the automation of jobs might be heading?
Join our host, philosopher Peter Millican, as he explores this topic with one of the authors of that paper, Professor Mike Osborne, Dr Judy Stephenson, an expert on labour markets in pre-industrial England, and Professor David Clifton from our Department of Engineering Science.
Down winding streets, beyond the dreaming spires, inside the college walls, debates are happening - in every study room and lecture theatre - about the future of society.
Futuremakers, from the University of Oxford, invites you to that debate. Join your host, philosopher Peter Millican, and three experts as we discuss the movements that are shaping the future of our society.
Our first series is all about Artificial Intelligence, and we’ll explore topics from the inherent bias of algorithms to the future automation of jobs. That’s Futuremakers – available to download now.