
The E.A.T. Series


Several times each semester, the Parr Center for Ethics hosts an hourlong lunchtime discussion featuring a faculty member or practitioner speaking about an ethical issue related to their work. At a typical E.A.T. event, our guest speaker talks for 30-40 minutes, and the audience asks questions for the remainder of the hour, though other formats are possible. Recent topics of discussion include ethical questions about artificial intelligence, college athletics, environmental policy, health care policy, mass incarceration, political protest, and voting rights.

The E.A.T. series is free and open to the public, and we provide lunch for all registered attendees. See our schedule for the next event in this series.

Event Summary

On September 12, 2019, interested UNC students and faculty gathered to enjoy kabobs and gain perspective on the weighty question, “Could artificial intelligence replicate the complexity of human moral decision-making?” Professor Veljko Dubljević began his talk by cautioning that everything we were about to hear is very much “under construction,” and that extensive research remains to be done in many areas before any definitive conclusions can be drawn.

Dubljević offered examples of where morality in AI is needed, such as self-driving cars encountering problems on the road and Carebots. A self-driving car must be equipped with something like moral intuition, the ability to weigh the life of the driver against pedestrians, animals, and others on the road, if it is to be considered safe for everyday use. Carebots, stuffed-animal-like robots, can serve as a form of therapy for the elderly; morality in artificial intelligence is necessary in a situation like this so that the bot can provide basic care without becoming neglectful.

Dubljević then laid out his view that it may be possible to replicate human moral decision-making by examining the different aspects of our moral intuition and breaking them down into a scientific equation. He is striving to do just this with his “ADC” model, which breaks human moral intuition into three individual components: agent, deed, and consequence. If we understand all the parts of human moral intuition, the thinking goes, we can reconstruct them in artificial intelligence. This was shown to be an exhaustive process, one that requires much revision and testing. Still, Dubljević expressed that with further programming and development, the ADC model could be implemented as a solution to some of the world’s most pressing concerns about the advancement and safety of artificially intelligent technology.
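The core idea of the ADC approach, that an overall moral judgment can be composed from separate evaluations of the agent, the deed, and the consequence, can be sketched in code. The sketch below is purely illustrative: the `Scenario` class, the weighted-average combination, and all the numbers are our assumptions for exposition, not Dubljević’s actual formulation, whose functional form and weights are empirical questions under active study.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Ratings on a -1.0 (morally negative) to 1.0 (morally positive) scale."""
    agent: float        # evaluation of the actor's character and intentions
    deed: float         # evaluation of the act itself
    consequence: float  # evaluation of the outcome

def adc_judgment(s: Scenario, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three component intuitions into one acceptability score.

    A simple weighted average, chosen here only to make the idea concrete;
    the real model need not combine the components linearly.
    """
    w_a, w_d, w_c = weights
    return (w_a * s.agent + w_d * s.deed + w_c * s.consequence) / (w_a + w_d + w_c)

# Example: a well-intentioned agent performs a prohibited deed with a good outcome
mixed = Scenario(agent=0.8, deed=-0.6, consequence=0.7)
print(round(adc_judgment(mixed), 2))  # 0.3
```

The mixed case illustrates why the decomposition matters: a judgment driven only by consequences, or only by the deed, would come out very differently from one that weighs all three components.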

About our Speaker

Veljko Dubljević is an Assistant Professor of Philosophy and Science, Technology & Society (STS) at NC State. Before arriving in Raleigh, he spent three years as a postdoctoral fellow at the Neuroethics Research Unit at the IRCM and McGill University in Montreal, Canada. He serves as co-editor of the book series Advances in Neuroethics and as Associate Editor of Frontiers in ELSI in Science and Genetics. Since 2017, he has also been part of EthOS, the NCSU campus-wide study group on the ethics of AI.

Dubljević’s Presentation


Fall 2015

September 22 | Amina White, “Does No Really Mean No in Trauma-informed Medical Care?”

October 22 | Marcia Van Riper

November 17 | Deborah Gerhardt

Spring 2016

January 21 | Barry Maguire

February 16 | Michael Christian

March 29 | Pete Andrews

Fall 2014

October 1 | Steve Swartzer, “Should Criminals Have the Right to Vote?”

October 14 | Johnathan Anomaly, “Why We Should Legalize All Recreational Drugs”

Spring 2015

February 19 | David Schmidtz, “The Pretense of Consent”

March 17 | Kurt Gray, “Mind Perception and Morality”

March 26 | Ann Cudd, “What is Equality in Higher Education?”

April 1 | Benjamin Meier, “Realizing the International Human Right to Health through U.S. Health Care Policy”

April 14 | Deborah Weissman, “The Moral and Legal Principles of the Right to an Apology” | For more information, visit NC Stop Torture Now

April 29 | Kim Strom-Gottfried, “Am I My Colleague’s Keeper?”