09:00 Mark Sprevak (University of Edinburgh)
Being good to robots
It is wrong to hurt another human being, not merely for instrumental reasons (e.g. they might hurt you back), but because it is morally wrong. We stand in certain moral relations to other human beings (and at least some non-human animals) that make some of our actions towards them morally good or bad. We owe other humans (and animals) moral duties. This differs from the relationship we have with other objects in the world (e.g. video recorders, calculators, cars). Damage to these systems may have moral consequences for human beings and animals, but those systems are not capable of suffering moral harms in themselves. We are under no moral duties to them. What about robots and AI systems? Do we owe them moral duties? If so, what are they? If not, what would need to change about the underlying technology for us to owe them moral duties? How should these considerations interact with engineering and design priorities? This talk introduces ways to answer these questions, drawing on how non-human animals are already treated within two moral frameworks: utilitarian ethics and Kantian ethics.
10:00 Coffee break
10:15 Aurora Voiculescu (Westminster University)
Ethical and Regulatory Challenges for the New Technological and Social Frontiers in AI and Robotics
In the past decades, an increasing number of human intellectual activities have been replicated through Artificial Intelligence (AI) technologies. AI ‘actions’ based on such intellectual undertakings have led to these technologies being used in a multitude of support activities in businesses and services throughout the economy and society. While big data and machine learning have driven progress in AI ‘cognitive insight’, intelligent machines now also increasingly share physical space with humans. Automated vehicles, care robots, surgical robots, drones, hotel receptionists and shopping assistants have all become common encounters. While the support that such AI and robotics technologies can bring to human activities is expanding at an ever-increasing rate, the normative (ethical and regulatory) environment needed to welcome such technologies is evolving at a much slower pace and, with few exceptions, mostly in a reactive rather than a proactive manner. What are the key ethical and regulatory challenges that stem from this deep encounter between human and machine, and what new ‘division of labour’ should we envisage between the various actors in AI and robotics? What is the place of ethical reflection in all this? What should the dynamics be between the ethical and the legal/regulatory parameters required here? What are the key parameters of good governance and good practice when we speak of AI R&D? And, last but not least, how can law and ethics be brought to bear in supporting socially-mindful AI and robotics technologies? Mapping out these key normative questions, this talk highlights the need for transformative normative thinking that matches the emergence of AI and robotics as fundamentally transformative technologies.
11:15 Tom Sorell (University of Warwick)
Robot Carers
Robot carers are often conceived of as high-functioning social robots. What, if anything, makes them preferable to simpler and cheaper non-robotic technology (which can be used in many more types of domestic setting than smart homes)? Care robots add something, but perhaps not enough to justify their moral (and monetary) costs.
12:15 Lunch
13:00 David Gunkel (Northern Illinois University)
How To Survive the Robot Apocalypse
Whether we recognize it or not, we are in the midst of a robot invasion. The machines are now everywhere and doing everything. We chat with them online, we play with them in digital games, we collaborate with them at work, and we rely on their capabilities to manage many aspects of our increasingly complex lives. Consequently, the “robot invasion” is not something that will transpire as we have imagined it in our science fiction, with a marauding army of evil-minded androids descending from the heavens. It is an already occurring event, with artifacts of various configurations and capabilities coming to take up positions in our world through a slow but steady incursion. It looks less like Battlestar Galactica and The Terminator and more like the Fall of Rome. What matters most in the face of this incursion is not resistance, insofar as resistance already appears to be futile, but how we decide to make sense of and respond to the social opportunities and challenges that these increasingly capable devices make available. In this presentation, Prof. David Gunkel, author of The Machine Question (MIT Press 2012) and Robot Rights (MIT Press 2018), investigates the significance and consequences of the robot invasion in an effort 1) to chart the increasingly complicated moral terrain of the 21st century; 2) to assist students, teachers, and researchers in their efforts to understand and make sense of a changing world; and 3) to provide critical perspective for grappling with and responding to these ethical challenges.
14:00 Group discussions
14:45 Plenary reports from groups
15:15 Closing remarks