Models of what constitutes good argument are therefore extremely diverse, whereas models of what constitutes good debate amount to little more than formalized intuitions (although disciplines in which the goodness of debate is codified, such as law and, to a lesser extent, political science, are ahead of the game on this front). It is therefore no wonder that Project Debater’s performance was evaluated simply by asking a human audience whether they thought it was “exemplifying a decent performance”. For almost two thirds of the debated topics, the humans thought that it did.


Is there any reason to think such a tool wouldn’t come into existence?

This post was prompted by a recent article in Nature by Chris Reed about the work of Noam Slonim (IBM), Yonatan Bilu (KI Institute), and Ranit Aharonov (IBM) to develop an autonomous computer system, Project Debater, that can argue with and debate humans (shared with me by Tushar Irani of Wesleyan), as well as by the progress made with the language and communication skills of artificial intelligence, as demonstrated by GPT-3. (Also see the entry, “Computational Philosophy,” at the Stanford Encyclopedia of Philosophy.)
Perhaps the weakest aspect of the system is that it struggles to emulate the coherence and flow of human debaters — a problem associated with the highest level at which its processing can select, abstract and choreograph arguments. Yet this limitation is hardly unique to Project Debater. The structure of argument is still poorly understood, despite two millennia of research. Depending on whether the focus of argumentation research is language use, epistemology (the philosophical theory of knowledge), cognitive processes or logical validity, the features that have been proposed as crucial for a coherent model of argumentation and reasoning differ wildly.
Individual philosophical works that pose questions, develop arguments, justify premises, and explore the implications of positions make small maps of small bits of the vast terrain of the unknown, and often provide “directions” to others about how to navigate it.
While we have seen increased use of computing in philosophy over the past two decades, the continued development of computational sophistication and power, artificial intelligence, machine learning, and associated technologies suggests that philosophers in the near future could do more philosophy through computers, or outsource various philosophical tasks to them. Should they? Would they? And if so, what should we be doing now to prepare for this?
This kind of technology may not work flawlessly, may need substantial contributions from human philosophers to do its job well, and certainly won’t do everything that everyone thinks philosophy should do. But it will nonetheless be a very useful tool for philosophers, and may open new philosophical territory to explore.
One way to understand the body of knowledge philosophy generates is as a map of the unknown, or set of maps. Philosophical questions are points on the maps. So are premises, assumptions, principles, and theories. The “roads” on the maps are the arguments, implications, and inferences between these points, covering the ground of necessity and possibility.
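Read that way, the map is close to a familiar data structure: a directed graph whose nodes are questions, premises, and theories, and whose edges are arguments and implications. Here is a minimal, illustrative sketch; every position, connection, and label in it is invented for the example.

```python
# A toy version of the "map": positions as nodes, arguments as edges.
from dataclasses import dataclass, field

@dataclass
class PhilosophyMap:
    # adjacency list: position -> list of (target position, argument label)
    edges: dict = field(default_factory=dict)

    def add_argument(self, premise: str, conclusion: str, label: str) -> None:
        self.edges.setdefault(premise, []).append((conclusion, label))

    def routes(self, start: str, goal: str, path=None):
        """Yield every chain of positions (a "road") from start to goal."""
        path = (path or []) + [start]
        if start == goal:
            yield path
            return
        for target, _label in self.edges.get(start, []):
            if target not in path:  # skip circular routes
                yield from self.routes(target, goal, path)

m = PhilosophyMap()
m.add_argument("consequentialism", "cross-temporal value aggregation",
               "rightness depends on outcomes at other times")
m.add_argument("cross-temporal value aggregation", "eternalism",
               "aggregating value across times arguably presupposes those times are real")
for route in m.routes("consequentialism", "eternalism"):
    print(" -> ".join(route))
```

A “Sophi”-style assistant, like the one imagined in the exchange below, would in effect be running route queries like this over a vastly larger, professionally curated graph.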
“Hey Sophi”
“Yes, Justin?”
“If I’m a consequentialist about ethics, how can I argue for eternalism about time?”
“There are a number of routes. Would you care to narrow them down?”
“Yes. Eliminate routes with supernatural and non-naturalist metaethics.”
“Current mapping is using over 100 variants of consequentialist ethics. Would you care to specify this factor?”
“Not at this time.”
“Some routes are blocked by your logic settings.”
“That’s fine.”
“Here are the top 10 routes on screen, ranked by estimated profession-wide average support for premises.”
“Re-rank according to compatibility with my philosophical and empirical presets.”
“Here you go, Justin.”
“Annotate routes 1, 2, 3, 6, and 8 with objection alerts, up to objection-level 3.”
“Done.”
“Thanks, Sophi.”
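The exchange imagines a definite query pipeline: filter candidate routes by metaethical constraints and logic settings, rank them by estimated profession-wide support for their premises, re-rank by the user’s presets, and annotate the results with objections. As a hedged sketch of what such a pipeline might look like (every field, function, and setting below is invented):

```python
# Hypothetical sketch of a Sophi-style route-ranking pipeline.
from dataclasses import dataclass

@dataclass
class Route:
    premises: list      # premise statements along the route
    metaethics: str     # e.g. "naturalist", "non-naturalist", "supernatural"
    logic: str          # e.g. "classical", "paraconsistent"
    avg_support: float  # estimated profession-wide support for the premises
    preset_fit: float   # compatibility with the user's presets

def rank_routes(routes, settings):
    candidates = [
        r for r in routes
        if r.metaethics not in settings["excluded_metaethics"]
        and r.logic in settings["allowed_logics"]  # "blocked by your logic settings"
    ]
    # default ranking: estimated profession-wide average support for premises
    candidates.sort(key=lambda r: r.avg_support, reverse=True)
    return candidates[:10]

def rerank_by_presets(routes):
    # "re-rank according to compatibility with my ... presets"
    return sorted(routes, key=lambda r: r.preset_fit, reverse=True)

settings = {"excluded_metaethics": {"supernatural", "non-naturalist"},
            "allowed_logics": {"classical"}}
routes = [
    Route(["outcomes matter", "all times are real"], "naturalist", "classical", 0.42, 0.9),
    Route(["divine commands ground value"], "supernatural", "classical", 0.31, 0.1),
]
top_routes = rerank_by_presets(rank_routes(routes, settings))
```

The bookkeeping here is trivial; the hard part is building the underlying map and estimating premise support in the first place.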

When computers get a bit better at understanding language, or adequately simulating an understanding of language, and better at understanding the structure of argument, they will be able to do a lot of this map-making work. They will also be able to provide directions for philosophers. How far into the future is an exchange like the one imagined above?
All of this raises a question: what should we be doing now in regard to the development of such technology, or other prospects for the integration of computing into philosophy?
Another thing to do would be to start thinking about the kinds of training philosophers of the near future might need in order to help create, improve, and work effectively with these technologies. In the recent past, people have argued that some philosophy graduate students may have good reason to learn statistics or other formal research methods. Many philosophers of science think that training in the relevant science is extraordinarily useful for doing philosophy of science well. Perhaps we can add computer programming to the list of skills one may opt for as part of one’s philosophical training (I believe some PhD programs already allow students to satisfy their language requirement by demonstrating competence in a programming language).
In a series of outings in 2018 and 2019, Project Debater took on a range of talented, high-profile human debaters, and its performance was informally evaluated by the audiences. Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.

Related: The Distant Future of Philosophy, Will Computers Do Philosophy?
[Image: Obvious, “Duc de Belamy” and “Edmond de Belamy”]
Many technologies face the paradoxstacle (“it needs to be used in order to become good, but needs to be good in order to be used”) and overcome it. But philosophers’ reluctance to cooperate and limited demand for more “efficient” philosophy could be a formidable barrier. That would be a pity. (Think of how the integration of computing into mathematics has made research on more kinds of mathematics possible, and how computing has brought about advances in many other disciplines.)
Here’s a little about Project Debater from the Nature piece:
As I tell my students, philosophy isn’t debate (the former is oriented towards understanding, the latter towards winning). But some of the work that goes into debate is similar to the work that goes into philosophy. What’s provocative to me about Project Debater, GPT-3, and related developments is that they suggest the near-term possibility of computing technology and language models semi-autonomously mapping out, in natural language, the assumptions and implications of arguments and their component parts.
It brings together new approaches for harvesting and interpreting argumentatively relevant material from text with methods for repairing sentence syntax (which enable the system to redeploy extracted sentence fragments when presenting its arguments…). These components of the debater system are combined with information that was pre-prepared by humans, grouped around key themes, to provide knowledge, arguments and counterarguments about a wide range of topics. This knowledge base is supplemented with ‘canned’ text — fragments of sentences, pre-authored by humans — that can be used to introduce and structure a presentation during a debate…
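The passage describes, in outline, a pipeline: harvest argumentatively relevant snippets from text, repair their syntax so extracted fragments can be redeployed, and assemble speeches from a mix of harvested material and pre-authored “canned” scaffolding. As a toy illustration of that flow only (nothing here resembles IBM’s actual code, and every function is a stand-in):

```python
# Toy stand-in for the pipeline described above, not IBM's implementation.
def harvest_evidence(topic: str, corpus: list) -> list:
    """Pull sentences that mention the topic and contain argument markers."""
    markers = ("because", "therefore", "evidence", "studies show")
    return [s for s in corpus if topic.lower() in s.lower()
            and any(m in s.lower() for m in markers)]

def repair_fragment(fragment: str) -> str:
    """Stand-in for syntax repair: recase and punctuate an extracted fragment."""
    s = fragment.strip().rstrip(".")
    return s[0].upper() + s[1:] + "."

def opening_speech(topic: str, corpus: list, canned: dict) -> str:
    """Assemble a speech: canned intro, repaired evidence, canned outro."""
    points = [repair_fragment(s) for s in harvest_evidence(topic, corpus)]
    return " ".join([canned["intro"].format(topic=topic), *points, canned["outro"]])

canned = {"intro": "Today we debate {topic}.", "outro": "That is our opening case."}
corpus = ["we should subsidize preschool because studies show large long-run benefits"]
print(opening_speech("preschool", corpus, canned))
```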
I think the most likely reason it may not come into existence is that philosophers themselves don’t cooperate with its development. As Reed notes in his summary, “the structure of argument is still poorly understood,” and philosophers might be integral to making the varieties of argument intelligible to and operationalizable by the technology (or its makers). Perhaps they won’t choose to do this kind of work. Or the philosophical profession may not recognize work done to create or assist with the creation of this technology as philosophical work, thereby institutionally discouraging it. Further, some work on making philosophical content more machine-intelligible would likely be necessary, either directly or through feedback on beta-testing, and philosophers might be reluctant to do that work or provide that guidance.
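What might “machine-intelligible” philosophical content look like? One existing approach is structured argument markup; Reed himself has worked on standards like the Argument Interchange Format. Here is a made-up, minimal schema, far simpler than any real standard, just to fix ideas:

```python
# Invented minimal markup for a single argument; real standards such as the
# Argument Interchange Format are far richer.
argument = {
    "id": "arg-001",
    "conclusion": "Eternalism is true.",
    "premises": [
        {"id": "p1", "text": "Special relativity implies the relativity of simultaneity."},
        {"id": "p2", "text": "If simultaneity is relative, no unique present is privileged."},
    ],
    "scheme": "modus-ponens-chain",        # inference pattern from a controlled vocabulary
    "objections": ["arg-014", "arg-027"],  # cross-references to counterarguments
}
```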
I’m sure there is a lot more here and I encourage those more knowledgeable to share examples in the comments. (Maybe the philosophers involved with the International Association for Computing and Philosophy could help out?)


Your thoughts and suggestions welcome.
One thing to do would be for us all to become more aware of existing projects that involve computers, in some form or another, taking on some of the tasks involved in philosophizing, or projects that are relevant to this. I’m not just talking about computer-based philosophy-information aggregators of various kinds, such as PhilPapers, the Stanford Encyclopedia of Philosophy, the Internet Encyclopedia of Philosophy, and InPhO, but also about philosophers’ use of various computing tools in their research, as with corpus analysis, topic modeling, computer simulations, and network modeling, as well as relevant work in the philosophy of computer science.
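For a flavor of the corpus-analysis end of that list, here is a toy topic-modeling run; the three “abstracts” are placeholders standing in for a real corpus, such as a large set of journal abstracts:

```python
# Toy topic-modeling example with scikit-learn; the abstracts are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "Consequentialism evaluates actions by their outcomes and aggregate welfare.",
    "Eternalism holds that past, present, and future times are equally real.",
    "Welfare aggregation across agents raises problems for consequentialism.",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in topic.argsort()[-4:]])
```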
