Herman Cappelen
Description: Herman Cappelen is a Professor and the Chair of Philosophy at the University of Hong Kong. His research explores a variety of topics centered around conceptual engineering and the interplay of linguistics, philosophy, and societal perceptions. In this episode we talk about language, linguistic norms, translation, global language unity, and the essence of intention in communication. We also touch on the idea of original thought and discuss how artificial intelligence plays into our understanding of cognition and ethics.
Websites:
Publications:
Resources:
Show Notes:
[0:00] Introduction
[10:04] The Start of Conceptual Engineering
[20:52] The Impact of Language Changes
[24:17] Philosophy of Language vs. Linguistics
[26:11] Challenges in Language Improvement
[32:13] The Role of Translation in Conceptual Engineering
[37:15] The Nature of Original Thoughts
[38:25] The Nature of Original Thought
[39:57] Understanding the Human Brain
[41:25] Unveiling the Black Box in AI
[44:24] Ethical Considerations in AI
[49:25] Addressing the Fear of AGI
[51:09] AI's Role in Philosophy and Ethics
[57:08] Assessing AI's Cognitive Capabilities
[1:01:51] AI and Conceptual Engineering
[1:05:11] Depth of Understanding in AI
[1:11:19] Moral and Political Choices with AI
[1:15:57] Diversity in Language Models
Unedited AI Generated Transcript:
Brent:
[0:01] Welcome, Professor Herman Cappelen. Thank you for coming on today.
Herman:
[0:04] Thanks for having me.
Keller:
[0:06] We'd love to start off by hearing a little bit more about your story, how you got into philosophy, and how you ended up at the University of Hong Kong.
Herman:
[0:12] So, a lot of people who work in philosophy have these long, torturous stories about how they went from one thing to another, and then eventually found philosophy somewhere on that road. But for me, it was much simpler. I started reading philosophy. I think I can even remember the very first philosophy book that I read, and I must have been 13 or 14, really young, and just thinking, wow, these are amazing questions, and I love thinking about them. And then I just kept going. There was nothing complicated or torturous about it. As I was going through my academic career and studies, I just kept thinking, no, these are the coolest questions, the most important questions, the hardest questions to keep thinking about. I had fun working on them. I liked writing philosophy. I really liked talking to philosophers. I always thought they were like the smartest people in the university, maybe because I didn't talk to that many other people, but I loved being in that community. So that's how I started and then continued doing philosophy.
Herman:
[1:25] In terms of how I ended up in Hong Kong, that was a longer journey. My first job was in New York, at a liberal arts college, and I was right out of college. Then, briefly, I was back in Norway, which is where I'm from originally. Then I was in Oxford for a while, not very long. Then I got a job in St. Andrews, then I went back to Norway, and then I came to Hong Kong. So there's been a lot of back and forth. But I love being here. It's a great city, and it's a great university.
Brent:
[1:59] Do you remember what that first book was, or what some of those first questions were?
Herman:
[2:03] Yeah, I remember everything about that first book. I remember that it was yellow, and it was by a Norwegian philosopher called Sheldrup, and it was just a history of philosophy. I remember finding it sort of randomly on my parents' bookshelf. It wasn't particularly original or anything like that. It was just an introduction to philosophy through the history of philosophy.
Brent:
[2:29] What were some of those questions that really captivated you at first?
Herman:
[2:35] Yeah. The part that was captivating about it was that all of the questions that philosophers had been thinking about just struck me as so obviously fundamental to everything else. I grew up in a fairly intellectual environment with lots of people who were talking about important issues all the time. But a lot of it, in my sense of it, was a bit too much on the surface, and philosophers really went to the more foundational questions. And that's where I just felt at home intellectually.
Herman:
[3:15] We can talk about whether particular actions are good or bad. We can talk about whether particular political decisions are good or bad, whether particular personal choices should be made in one way or another. But then there are these much more fundamental questions: What is morality in the first place? What is the foundation? Is there anything that's good or bad? Are there any moral judgments that we should adhere to, or that we have to? Those questions, it was obvious to me, are ones you have to think about first, before you think about any of these more particular cases. And when you think about politics, which lots of people love to do, there's a bunch of presuppositions you always bring into it, assumptions about what a good society is, what the basic structure of a society should be. And as I was getting older, I think my sense just strengthened that people's way of thinking about things was usually wrapped up in a bunch of frameworks and presuppositions that they didn't know how to argue for.
Herman:
[4:15] Bertrand Russell has this wonderful passage where he talks about how philosophy is a kind of freedom, where you start seeing that everyone else is just trapped in habits, in presuppositions, in assumptions that they take over from other people. And then what philosophers do is they question everything, and they tear everything down, and they try to start from scratch. And if that's your intellectual attitude, if that's what you enjoy doing, then philosophy is the perfect place for you to be. And it's really hard to get away from it. Because as soon as you see that all these other judgments that people make are kind of surfacy, and they're not really touching the fundamental issues, then whenever you have those conversations you just want to bring it back to those more fundamental questions again. And that's what I've always felt like doing.
Keller:
[5:04] You mentioned how you enjoy talking with philosophers the most out of any academics, any people maybe. Have you noticed any differences between working in the US, in Europe, and in Asia, and how that philosophic discourse is structured?
Herman:
[5:21] The thing about philosophy that's striking, when you think both throughout history and also just cross-culturally today, is that when you meet someone who is really good at thinking deeply and fundamentally, we all kind of converge on the same kinds of questions, the same kinds of basic vocabulary, to some extent. So to some extent, I think philosophers, people who are genuinely philosophical, sort of feel at home in each other's company, no matter where they meet each other, no matter which time period. I can sort of imagine meeting Hume and just talking to him about the stuff he was interested in. I mean, one of the amazing things about reading history is that you read at least some of the people from the history of philosophy, and you're like, yeah, that's great, I'd love to talk to this guy; that's exactly the stuff I want to think about. And it feels completely alive and contemporary.
Herman:
[6:16] So that's just looking back in time. And then when you do it cross-culturally now, of course, everyone has read a lot of the same things. They talk about the same problems. They read the same journals. Most of the people here in Hong Kong have gone to school either in Europe or North America. So there's very much a shared conceptual and theoretical framework, even a cultural framework. I mean, Hong Kong is this obvious intersection between China and the rest of the world. And at least in academia, it still very much has that feel, where there's a lot of people here at Hong Kong University.
Herman:
[6:59] Well, first of all, it's an English-speaking university, so everything takes place in English. And so it has this anglophone influence, both historically and in terms of the education of the people who are here. Then we're also trying to more and more integrate people from mainland China so that those conversations can take place.
Brent:
[7:19] If you could get dinner with some of your favorite historical philosophers, who would be the top people you'd really want to sit down and talk to?
Herman:
[7:31] That's a great question. I mean, it would be fun to talk to Nietzsche. He sounds like he was a crazy and interesting character. But just in terms of having a philosophical conversation, I think Jung is the sort of philosopher, based simply on the writings and the set of issues that he cared about, that I would have loved to have a conversation with, to try to update him about what people have said about his work over the last few centuries and hear what he had to say about it. That would be an amazing experience.
Brent:
[8:01] Yeah.
Keller:
[8:01] Are there any modern philosophers that you would recommend people look into? Because I think, like with Nietzsche and Carl Jung, they seem to have a larger-than-life, I guess, view, or when we hear about them, they seem like these different entities almost. Is there anyone in the modern era that you think carries that same weight?
Herman:
[8:19] In short, no, I don't think we have those kinds of characters so much anymore. But philosophy still has this feature, which I find fascinating and which I think is somewhat unique to the academic side of philosophy, which is that we have people who are very, very good at talking philosophy, and they're in part famous for how good they are at talking about philosophy. There are people I know, people I've worked with, who are just my favorite people to sit down with. They'll just talk: you have a problem in philosophy, you can just go to them, and they'll be like, okay, let's start from the beginning, and they'll work their way into your mind and then try to build the solution up from scratch with you. There's this kind of unique verbal engagement ability that some philosophers have that I really, really like. And I could mention a few people who can do that. I mean, my favorite person, just to be explicit about this, is a guy called John Hawthorne. Everyone who knows John will know that he's literally the best person in the world to talk to about just anything. And he'll blow your mind no matter how hard you've worked at something.
Brent:
[9:39] Is he here in Hong Kong?
Herman:
[9:40] No, no. He's at USC. So he's in California. He's coming here in March. Sorry. I haven't seen him for a while. We wrote a book together. We wrote a bunch of papers together. We used to be colleagues at Oxford.
Brent:
[9:54] Okay. That makes sense. Well, let's jump into some of your work more specifically. Could you start off by describing what is conceptual engineering?
Herman:
[10:05] Well, just actually stepping back a little bit more. So conceptual engineering is one part of what I do, and I'm happy to talk about that part. It sprang out of a book I wrote called Fixing Language. And stepping back even a little bit more than that: a lot of my work has profited massively from having a group around me, a research group. In a period when I was at the University of Oslo, we had a research group that was really well funded over a six-to-seven-year period, called the Concept Lab, and I've now moved the Concept Lab here to Hong Kong. What that research group did was try to bring together philosophers from many parts of philosophy, and also people outside philosophy, who had one kind of activity in common, and that was the following: they spent time thinking about how the words and concepts we use can be improved.
Herman:
[11:12] So the thought was that in a lot of intellectual domains, and just ordinary life domains, the language we have is insufficient. At least it could be better; it could have some defects. And so what we did was try to think more systematically: what would it be for a conceptual framework, for the language that we speak, to be defective? How could we discover it? And if we discover defects, how could we make it better? And then, if you're really ambitious, you could go on and say, hey, I have an idea for how to make it better, and I'm actually going to try to make it better. So that's kind of the implementation stage. So for me, conceptual engineering has these three parts. It's got a critical initial phase, where you look at a conceptual framework or a language that people are using, and you investigate it and you try to find defects of various kinds. Then, having found some defects, you try to think about ameliorations, ways to make it better. And then, having a proposal for amelioration, you think, how can I actually implement this? So those are the three stages of conceptual engineering. And you can do this in many different ways. What I did in that book called Fixing Language was to create a very general framework for how to think about that, and to connect it to other issues that people are familiar with from 20th-century philosophy.
Herman:
[12:35] You could also, and this is, I think, one reason it has become a fairly popular research field, come to it from particular case studies. So you can care about the language we use for talking about gender, or the language we use for talking about democracy, or the language we use for talking about illness or mental illness, and so on and so forth. And then you can do a bunch of case studies. And the hope is that eventually all the people who work on case studies and the people who work on the general framework can get together, and there can be a kind of synergy between what we're doing.
Keller:
[13:08] And with that distinction between the case studies and the generalization, when it comes to the proposal phase, how does that work? After you've discovered a shortfall and discovered a supposed, you know, solution or improvement, where does that go? Who does that go to?
Herman:
[13:25] That was a great question. If I had the solution to that, I would feel like my work in that area had been much more successful. So the ideal thing would be to have, here are the implementers; but A, we philosophers aren't particularly well trained to do that. That's just not the kind of thing that we're good at. At least none of the philosophers that I know are very good at doing that. And it's not even clear that anyone is particularly good at that. It's not something that we know how to do, to implement revisions to a language. That said, there are a bunch of case studies that you can think about, about how to do the implementation. And sometimes philosophers play a role in that.
Herman:
[14:18] So one of the people I worked with in Oslo, her name is Camilla Serck-Hanssen, and she was on the ethics committee for the Norwegian defense, the entire military. And they had this award, like the Purple Heart, I think you call it in the U.S., for being injured, for war injury; the Norwegian equivalent of the Purple Heart would be awarded. And she was in these meetings with the Minister of Defense and so on, and said, look, you give this medal to people who lose arms and legs, but not to the people who have mental trauma afterwards. And that notion of injury is too restricted. And so you need to expand your notion of what counts as an injury during wartime.
Herman:
[15:11] It's a good example of conceptual engineering. It used to mean just physical damage, and then Camilla was arguing, no, it should include mental problems or injuries or damages of various kinds. And she won. I mean, she wrote letters, and now you can actually get this Purple Heart equivalent thing for a more expanded set of injuries. So it illustrates, I think, something: it's sort of random just who can make these differences. So Camilla, by virtue of being in a certain position, could do that.
Herman:
[15:51] I'll give you a couple of other examples. So I told you I had this big project on conceptual engineering. I tried to make it even bigger, but they didn't want to give me more money to do it. But they should have, because it was a great idea. And one of the people we worked with was from the American Psychiatric Association, on the committee that decides how to classify psychiatric conditions. This manual is developed by the American Psychiatric Association once in a while, and if you're on that committee, you have massive conceptual power. You get to decide what counts as a condition that qualifies you for medical support, and so on and so on. So at some point homosexuality was in there, and then they decided, no, no, no, that's definitely wrong. And then you could have
Herman:
[16:44] Asperger's, which was in for a period and then was out. And you could have all kinds of conceptual qualifications and things that could happen along the way. So if you happen to be on a committee like that, again, you'd have a lot of power. But those are very unique things, very hard to predict, and you'd have to actually engage with decision makers like that in order for it to really be implemented, I think. And then let me try one more thing, just quickly. I guess the most famous or popularly well-known issue in this area is the notion of marriage, where there's been incredible debate in many, many countries. It's been a moral debate, a political debate, a legal debate. And that is: should this concept be something that applies only to couples of opposite sex, or can they be same sex, or can it be all kinds of mixtures of various kinds? Just how should we shape the notion of marriage? And there you can see how unpredictable it is, but at the same time, it's something that can be the subject, in some sense, of a social movement. You see people fighting for an expansion of the notion of marriage, and in many places they won. And some people think of that as progress, and you can think of that as progress at a conceptual level.
Brent:
[18:09] And then more recently, I think people have really started to experience shifting the meaning of words in everyday language. How do you think about, especially the last few years where people are changing the meaning of words or saying, okay, that word is no longer allowed, and then also the pushback against some of those changes?
Herman:
[18:33] I think that's conceptual engineering in practice. That's real people engaging in what I call conceptual engineering.
Brent:
[18:42] Do you think those are harder to implement? Because it seems that it's not a committee making that choice. It's just, you get some news article over here, or you hear someone is like, oh, you can't say that. But then other groups are like, you can say that. What are you saying? How do you balance what's currently going on right now?
Herman:
[19:05] Yeah, that's a great question. And I think the cases I gave you initially are just too simple, because those are cases where you can legislate and there's some kind of legal power, the power of the American Psychiatric Association or the power of a committee to give out medals. But the case that you're talking about is much harder than that.
Herman:
[19:27] It's a familiar kind of difficulty, though. What you're really asking about is how you change social norms in general, and it's an incredibly complex interaction of millions of people over time. And honestly, my sense of that is that it's probably too hard for humans to understand. It's just unbelievably complicated and unpredictable. We can maybe come up with some generalizations here and there, but it's not the sort of thing that we're really capable of grasping and doing anything predictive or clear on. I think here is something where we could look to the future a little bit. If we manage to develop artificial intelligence that's significantly smarter than us and can do many more calculations much quicker, can deal with much higher levels of complexity, then those kinds of use patterns, the kinds of interactions and events that could shape social norms and linguistic norms, could be calculated by those AIs, potentially, in the future. And that, of course, also raises the specter that maybe those AIs could intervene
Herman:
[20:47] and contribute to the changes in ways that we don't understand because we're not smart enough.
Brent:
[20:53] Do you think our ability to communicate actually decreases when we are constantly trying to change words, and the meanings of words, or which words are acceptable?
Herman:
[21:07] Not necessarily. In some cases, it will. In some cases, when there's too much fighting, too much conflict, too much disagreement, people will be talking past one another, and they'll get stuck just on the linguistic level. They'll be spending all their time talking about how to talk, rather than talking about the issues they really want to talk about. So that's clearly not a good thing. It would be great if we could use language to not just talk about language. But since I'm someone who thinks the correct structures of language, finding good linguistic norms, is important, I think it's worth having those discussions. But you have to move on from them at some point, and they have to be put in the right place.
Keller:
[21:56] Do you think intuition of word choice matters? Because when you're trying to balance the social norms of a word versus the strict definition of a word and the linguistic aspect of a word, I don't know how much, at least with our generation and younger students, they're thinking about the social norms behind a word every time they say something, or how much they're really having any back thought about anything they're saying. They're more just trying to communicate a thought quickly.
Herman:
[22:26] You're describing something that sounds like the philosopher Peter Ludlow's picture of language. So Ludlow's picture, in a book called Living Language, I think that's what it's called, is something he calls the dynamic lexicon. His picture is one where you don't really think of language as having fixed meanings. We're just creating meanings on the fly. In every particular context, we are just making up meanings without any commitment to a previous definition or anything fixed, any norm that's already there. We have a lot of freedom, and there's a kind of negotiation in the conversational context. And I think that picture is attractive in some settings, maybe, and in some special cases it seems very plausible. But on the whole, I'm certainly not a supporter of that. And I think, for example, our conversation right now is a pretty good example of why that doesn't seem all that plausible.
Herman:
[23:30] We're not really negotiating too much about the meanings of my words right now. I think we're presupposing that you speak English and I speak English, and I come to this conversation with a set of meanings that you guys can grasp. And then, you know, if someone decides to listen to it, they'll understand it, hopefully. So there seem to be some fixed meanings that we are relying on in order to get this thing going. So while I think the Ludlow picture of the dynamic lexicon is really fascinating and incredibly useful for thinking about particular weird cases, as a larger picture of how language works, I'm not that attracted to it. There's a chapter, or even two chapters, about that in the book called Fixing Language.
Brent:
[24:12] And then how does your work differ from linguistics?
Herman:
[24:18] In general, there's a pretty close connection between issues in what we call philosophy of language and linguistics. Philosophy was the discipline that linguistics grew out of. Basically, what linguists are doing is following up on work done by Frege and Russell and philosophers of the late 19th and early 20th century. But they're going on to do a lot of other things that we philosophers aren't all that interested in. And in terms of exactly the difference between conceptual engineering and linguistics, I think the easiest way to think about it is that the part of linguistics that is closely connected to meaning is called semantics. So there's a field called formal semantics and a field called syntax.
Herman:
[25:09] But when you talk to the really good linguists, they're not all that interested in assessing language. They associate that with naive early versions of linguistics, you know, where you talked about speaking proper grammar, and they think it's like that, which of course it isn't at all. But it's really a deeply descriptive discipline that tries to just describe what's already there, not assess it. They don't see their job as making it better. I guess I think that's a mistake, for linguists to not care about that. It would be great if linguists spent more time thinking about these normative linguistic questions. But my sense is that the cutting edge in that discipline isn't particularly amenable to any of this.
Brent:
[25:57] Yeah. What are some examples of the biggest problems you have with language? Maybe do we need to define the English language? I don't know how you want to define it.
Herman:
[26:11] So I don't think we have just a few, a small set of problems. Sorry, let me step back. I don't think there's one or two or three overarching problems that we need to solve. I think in every domain, science, politics, medicine, law, whatever part of our lives in which language plays a big role, we need to think really carefully about whether the linguistic and conceptual frameworks we have there are good enough. And I think my general attitude is that...
Herman:
[26:56] Language as we have it was developed over a very long period of time, when our primary interest was to get food, procreate, and survive under very harsh conditions. This language was not developed in order for us to think carefully and systematically and scientifically, or even morally, about the world. So we should expect most of our language to be not particularly good. We should expect that there can be improvements everywhere. So the big concern I have is more of the form: why wouldn't you just think that almost all of the language we have is, maybe not fundamentally defective, but at least not optimal? It could be improved, and we should think of strategies of improvement across the board. So that doesn't give you one big problem. It gives you a research project. And that's the thing I call conceptual engineering.
Keller:
[27:58] Have you looked at all into ideas of a singular global language? Because in terms of the optimization, there's obviously a trend in the growth of English pretty much everywhere, but I would assume that with the growth of a language internationally, it would also come with a massive loss in the effectiveness of what you could communicate about specific topics.
Herman:
[28:22] I mean, I live and work in Hong Kong, and just 20 minutes away from here, there's a country, China, where 1.4 billion people speak Mandarin. So that's more people than the people who speak English. And I would hate the thought that I should tell them to all just start speaking English, or that we should all be unified. And I think, clearly, languages are so closely tied to histories and cultural practices and particular things that are important to people's cultural identity, and just general identity, that we should definitely not try to unify. We should try to make things optimal in the settings where people are, given the histories that they're in, given the social settings they're in, given the goals that they have in those settings. So I would think, just from general principles, that unification would be a really bad idea, that we would lose a lot. Because the thought here is that there are specific things that people want to do with language in particular settings, given who they are; it's not going to be the case that they always want to speak using the same words or have the same conceptual frameworks. There's going to be tons of variability.
Keller:
[29:43] And within that, with people who speak different languages and the connections between history and culture, have you noticed or seen anything with the way that people use a secondary language? In terms of, like, with your first language you might have certain historical or social ties to words, but with a secondary language you might tie that word just back to the translation and not to a different social or historical context. Have you been able to see anything on that?
Herman:
[30:11] No, not exactly that. I think, in general, the issue of translation and how you go between languages is really interesting, and it's an area that people who care about conceptual engineering could think about a lot. I've thought about this, not exactly as you just put it, but in connection with philosophy. A somewhat striking thing about philosophy is that you have these very different traditions; for example, in China you have philosophy going back millennia, you have the Buddhist traditions, and then you have the Western traditions. And sometimes, just to take the Confucian example, for example, in Chinese philosophy, you spend a lot of time trying to say in English what they said
Herman:
[31:04] in the original texts. And that's an incredibly complicated form of translation. And my picture of that is that it's actually a form of conceptual engineering, because there's very unlikely to be a complete match between what was said in Mandarin and what you can say in contemporary English. So what we should be doing is create these bridging concepts and be very explicit about how there's a kind of conceptual engineering that happens when you go from one language to another. So I think that a lot of translation of philosophical texts is also a form of conceptual engineering. And when you talk to translators of philosophical texts, and we have some of the best in the world here, they always agree with that. To them, it's all just completely obvious. But for some reason, they don't participate in our project on conceptual engineering, so I want to try to bring those people who translate between philosophical traditions into it. I know that's not exactly an answer to your question, but it's sort of about translation, which is the closest
Herman:
[32:11] I've come to thinking about those issues.
Brent:
[32:13] You can even see it when you take a traditional Chinese text and copy it into Google Translate: four or five characters give you like 10 or 15 words. And so the idea, especially between Western and Eastern languages, that it's one word per character, no, they convey so much more than just the one. Or just the way characters are used in multiple different contexts. It's very interesting, with the very little Chinese we've learned so far, just kind of seeing that dynamic. Do you think intention matters when speaking and communicating?
Herman:
[32:54] There's a picture of language that I think is the most intuitive one, that most people have in mind when they start thinking about the point of making sounds like I'm doing right now. And that picture is one where, in my head, I have this little intention of what I want to tell you, and then somehow that intention translates into some sounds, and they come out of my mouth, and then you hear them, and if everything goes well, you've caught on to my intention. That's a picture we get from, for example, a philosopher called Paul Grice. People have tried to develop that in many, many ways. So something like that has to be true. Maybe we can, instead of talking about intentions, just talk about thoughts. So the picture is one where I have all these thoughts in my head. They're just lying in there, like I have this little reservoir of thoughts. And what I'm doing right now is just translating them into words, and then the sounds come out and you get those thoughts in your head. So that's the super simplistic picture of how language works. It's not a picture that I think is really sustainable in the long run. My picture is really much more one, I guess, to put it in very contemporary jargon, and now I'm not going to do theory, I'll just tell you the first-person perspective of being me as a speaker, and I've always felt like this, even before ChatGPT came: it's just like, I just produce sounds.
Herman:
[34:22] I'm not aware of having a bunch of thoughts. I'm always surprised that all these sounds come out like they're doing right now, because it goes so fast. I never had the time to articulate a bunch of thoughts, translate them into sounds, so that they come out. So when people say, oh, the large language models, they're just text producers that produce one word after another, I say, yeah, that's exactly what I'm like. That's what it feels like to me. I don't have that sense of having a bunch of things that are already articulated in my head which then come out. So, okay, that's a very simplistic picture of why I don't think we should start with that initial thought that there are little thoughts in your head and they're translated. Something much more complicated goes on, where the actual existence of language and the production of sounds, the production of written words, the production of sentences in spoken language, is more constitutive of what it is to have a thought or an intention. That's my alternative to the Gricean picture. But that's also really hard to work out. So there's a lot of theorizing to be done there. But those are great research questions.
Keller:
[35:34] Yeah.
Herman:
[35:35] I should say something more about that. Because one of the issues that I also care about a lot right now is whether we should attribute thoughts to, for example, ChatGPT or Bard. I think we're all very comfortable with thinking that ChatGPT understands our questions and answers them. And if you're really serious about that, then it can do something amazing, which is understand a question and answer a question. Those are incredibly complex actions. Now, you can just try to deny that, but I think that's a dead end. So if it's really answering a question and understanding a question, then it's got to have capacities that are very close to the human capacities. But at the same time, we don't think there's an intention or a thought or something behind it.
Herman:
[36:24] There's something much more deflated going on in those models. I think that should tell us something about us. Some people think, no, there's that deflated thing going on, it's just a text predictor, so it can't really be saying anything, can't really be understanding questions, can't really be answering questions. But my attitude is: there's this very deflated thing going on, and that's all there is to understanding a question and answering a question. So your original question was about intentions and thoughts. I think we can kind of see the relative insignificance of that when we think about things like large language models, and then we can use those models again to understand ourselves better. Most philosophers hate this way of thinking about it. They think, clearly, the large language models are inferior to us; we have these much richer and more fancy cognitive lives. I don't think so.
Brent:
[37:16] Do you believe in original thoughts?
Herman:
[37:19] Can you say a bit more of what you mean by that?
Brent:
[37:23] If we're just like these large language models, producing sounds, not thinking, do you believe that we're able to create truly original, novel thoughts, ideas, concepts?
Herman:
[37:37] It's not that I think the only thing we do is produce sounds. Clearly, you could lie by yourself; there needs to be a story about how you can sit quietly, not make a sound, not write, but think through a problem. That's the sort of thing we can do. But I think there's this more fundamental question: what's the connection between that capacity and the capacity to produce speech? So that was just a qualification. It's not that I think we can't have thoughts without speaking. We can clearly do that. But the question is, where's the priority? Then there's the second part of your question, which I think is about originality.
Herman:
[38:25] Here's the simple way to put that. My answer is that I think there is original thought. I think there's a lot of original thought. I just don't think it's as fancy as some people think it is. I think our way of getting original thoughts is basically the way ChatGPT figures out how to tell a new joke or give a new interpretation of a poem. I have a 13-year-old daughter. She gets some homework. She's definitely a good thinker. She's smart. But her level of originality is no higher than what ChatGPT can do about anything. She can say less original things about a text than it can. So my sense is, yeah, there's originality. My daughter's original, ChatGPT is original; it just doesn't take that much. It's a projection from previously learned input, both textual and experiential and visual, in ways we don't fully understand. We don't understand it for ChatGPT either. There's a kind of black box, something weird happens, and then something else comes out on the other end. That's as much as we can say about originality.
Keller:
[39:42] Do you think, for ChatGPT, it's important for people to try to understand that black box, or more just to understand our interactions with the output?
Herman:
[39:52] That's a good question.
Herman:
[39:58] Here's the thing. For humans, we don't understand the black box. So I always go back to the thought that the human brain is a black box too. Even now that we know there is a brain there, and we know that it has a neural net structure, we know that it's, more technically speaking, a black box. We still understand other people, we still talk to them, we engage with them, we plan with them, we coordinate with them, but their brains are black boxes to us, and we don't worry about that. So I don't know why we would worry so much when we encounter something like the large language models. I mean, there's something even more black-boxy in the human case, though, because for a very, very long time, we didn't even know really what was in there. There were all kinds of crazy theories about what was inside here and how it worked, and no one had any clue whatsoever. And that didn't prevent Shakespeare or Plato from understanding important things about humans, or people from falling in love and hating each other and having discussions. So I think the answer is, yeah, we've been doing really great without understanding the black box so far. So there's not a lot of pressing need to understand that. Would it be nice to understand? I guess, yeah, it would be cool if we could. But I think all the evidence we have is that it's just going to be too complicated for us.
Brent:
[41:25] That's interesting hearing that perspective, especially because a podcast we did two days ago was with someone leading the Cancer Institute in Singapore. And their whole thing is, we need to understand the black box, the mechanisms, of which there's a ton they don't understand, because that's how we're going to get to that solution of solving cancer. And there's a lot of new science coming out, especially with the mindset of shrinking that black box, because there are concrete action steps, chemicals, molecules you could take to drastically change outcomes. And if you're depressed, there are steps you can take to limit that depression, versus just thinking about, oh, how do I operate with depression? And that might not be the best example; there might be other ones that are more concrete. But I think especially the STEM community is all about, let's break down everything we don't know at a mechanistic level, because that allows you to see the outcome and how you can manipulate those outcomes to get the better one. So, I don't know, I feel like I would probably personally want to push back on that, especially because...
Brent:
[42:46] I think there's utility in understanding the mechanism. But I also see a huge detriment, having been someone in the past who always thinks about why and just gets lost in why. It's like, okay, congrats, I can get lost in that why forever, or I could just change the outcome. So I see both sides, but...
Herman:
[43:07] Okay, let me first just say that if some people tell you that they can cure cancer better if they understand the black box, then I'm going to shut up. If they really could cure cancer better, then that's definitely the way to go, and I'm not going to use any arguments of a philosophical nature to try to refute that. So more power to them. But maybe that wasn't exactly what I meant. I thought we were talking about understanding another person, or engaging with an agent, or something like that. So maybe we're talking past each other just a little bit. But, sorry, let's just do two parts of this. If they made a little machine, well, let's just pretend this is really simple, they made this little box, and they put a diagnosis in, and out comes a little story that says, here's what you should do to the patient to cure their cancer, and it always works: why would they care if they don't understand what's in the middle? I still don't really understand that. As long as the cure is always correct, they should just be happy. But maybe complete transparency would be better; it's not totally obvious. So something more needs to be said about that particular case.
Herman:
[44:25] And then there's the second part of this, which is what I was trying to talk about earlier, which is in the human case.
Herman:
[44:34] And this is easy to think about in, for example, legal settings. We care about why a person did something, their intentions, what went through their mind, so to speak, when they were pulling the trigger or doing something that potentially was illegal.
Herman:
[44:56] Here's a fact. No one has ever, in the history of humanity, answered those questions about why, or tried to understand them, by studying someone's brain. Or, yeah, definitely not by studying the black-boxy features of the neural net that is the brain. Because that's just not how we figure out, how we make sense of, people. That's not how we engage with each other at that level of understanding other human beings. So I think we've been doing fine so far. We've understood other human beings. We have a practice of engaging with each other's cognitive states, and we know that that doesn't go through understanding the black box. So that was what I meant earlier.
Brent:
[45:41] Yeah. I could probably expand a little bit more on the AI part of it, then. Because if the public understood conceptually the algorithms that are running, and the process by which it takes in your words and references, all the reference points it has, and just kind of synthesizes them and doesn't truly think behind it, and gives you outcomes, I think people might be a little less perturbed by the humanness of AI at times. And I think that we could better interact and understand what it's doing. Because if you start playing around with ChatGPT and you ask questions, you could ask what would be relatively the same question in many different ways and completely change the outcomes you get. There are jobs now where literally their sole purpose is how to operate AI and get the outcome you want. So do you think, especially for human-to-AI interaction, having a broad understanding of what the process is to get your result allows us to better interact with it?
Herman:
[46:53] Let me answer that first coarsely and then carefully. The coarse answer is, I really don't think so. And I think what you're saying is what a lot of people are saying. But I don't think that if you made the code available, like infinitely complex code, anyone would be happier or more comfortable. Or, you know, a high-school-level mathematical summary of that code; would that make anyone more comfortable? No. So what do you really need? What do you mean by making it all transparent?
Herman:
[47:31] I really don't even know what the beginning of an answer to that is. And the only way I know how to move forward on that is to think about how we do it with humans. And the way we make human action, human intentionality, human agency transparent to us is not by giving a summary of what happened in the brain. So why on earth are we thinking that doing a summary of the algorithm that the neural net is running should make it more transparent and make us trust it more? That's not how we end up trusting humans. So I don't think that's the right way to start trusting or thinking about AI. So I have another book about this, called Making AI Intelligible, with my co-author Josh Dever. And we have a type of view that we argue for in that book that's called externalism. And the externalist picture is one where what you really need to think about is features external to the internal algorithm. That's what's going to make AI intelligible and explainable, not looking at the actual algorithm that is running. But you are right that in industry, most of the people who think about this think, no, really, if I want to give people trust in this AI system, if I want them to understand what's going on, I just have to really figure out how the algorithm gives this output for a particular input, and I need to summarize that in a clear way. I just think that's never going to work.
Brent:
[49:00] Do you think it could work in the reverse, for people fearing AI's exponential growth and takeover and the doomsday scenario, which a lot of computer scientists say, no, don't worry about it, here's why? So maybe not for fully trusting it and interacting with it better, but just for not fearing it?
Herman:
[49:26] Look, if someone could give me a proof that AGI, for some mathematical reason, isn't going to be scary or dangerous, that would be great. Yeah, so that's a different kind of thing. That's not what we were talking about earlier. But if there's some particular reason why the kind of thing, the kind of algorithm you'd have, the kind of neural net that you'd be running, or the kind of computations you'd be running, provably would never harm a human being or something like that. If someone were to show me that, yeah, that would be super interesting. That would give people trust.
Herman:
[50:05] It seems pretty unlikely that anything like that could come out of just describing the technology. Because part of the worry about AGI, or something that's exponentially smarter and smarter and smarter, is that we can't really track what's happening up there. I mean, the thought of these super AIs is that we make one that's about as smart as us, and then it makes something that's even smarter than we are, and then that thing again makes something that's smarter than it is, and very soon it's very, very far away from us. Like a cat is much less smart than us, and we will be as far down on the scale of intelligence as the cat is relative to us. So I don't know how we would prove something about what goes on at that higher level. But, you know, it's certainly the kind of thing that is worth looking for. So if someone could make us confident that even at that very high level, there's nothing bad that's going to happen, that would be great. Yeah.
Keller:
[51:10] And stepping out a little bit more broadly, we've talked about a few examples, but how do you view the role of AI in philosophy generally? And then with that, how do we assign ethics to AI? Because we've been seeing, I know here and at other universities, they have ethical AI master's programs, and different jobs want people working in AI ethics. What does that process of assigning ethics look like?
Herman:
[51:33] So let me do the two questions. The first part was, what's the role of AI in philosophy?
Herman:
[51:45] A lot of the most fundamental questions that people care about when they think about AI are just straightforward philosophical questions. Can it really think? Can it feel? Is it conscious? Does it have moral rights? Is it a moral agent? Can it really understand us? Can it really speak our language? Is it rational? I mean, those are eight questions that are just straight questions in epistemology, metaphysics, philosophy of mind, philosophy of language. You cannot try to answer those questions without doing philosophy. So it's a kind of remarkable moment, where there's a technology that everyone cares about, or almost everyone cares about, and some of the most important questions about that technology are just directly philosophical questions. So when I sit and talk to people, which I do all the time now, from industry and from across academia, they all ask these questions, and it's just: okay, we've worked on that. I've worked on it my whole life, and we've worked on it for 200 years in philosophy, and this is great. I'll tell you what we think. And they are actually interested in listening.
Herman:
[52:57] So there's a sense in which we're at a point where the relevance of philosophy to an absolutely transformative technology, "transformative" is the word everyone uses, is central. And then the second part is the stuff about ethics.
Herman:
[53:16] And there's a broad version of that and a more narrow version. Let me do the narrow one first. That's the fear of an unaligned AI, an AI that will pursue goals or have plans or motivations. Notice, we're already assuming it can have plans, that it can have motivation, which are already very complicated philosophical questions, whether we can attribute that to a large language model. But let's put that aside; let's just pretend that something like that can happen. So, is there a way to guarantee that these systems, which will be very powerful, very smart, very intelligent, hyper-rational, will act in ways that are in our interest?
Herman:
[54:04] That's a great question, because now, and this goes back to what I just said, now you've got to know, like, what is it to act ethically? What do we mean by our interests?
Herman:
[54:17] The problem that's most immediate for me in that context is that I never really know what people mean when they say that there should be an "our interest." Like, who is that "our"? Is it like Putin? Or is it Donald Trump? Or is it, sort of, my people in Norway? Is it a middle-of-the-road sort of thing? Or is it people who want to make a lot of money at Microsoft? There are so many interests that people have. So this whole idea of just "our interests": even if we figured out a way to put values or goals or motivations into an AI, wouldn't people just immediately start putting conflicting ones in there, and so the result would be a kind of war of AI motivations very fast? Okay, so those are basically the same kinds of questions we have, and worry about, with ethical and moral conflict among humans, and it's just going to come up again when you start thinking about AI. There's a kind of simplicity in thinking that we just have these AIs, and then the solution is to put our values into them, because it's not clear what that even is, even if you knew how to put our values into them. So that's the value alignment set of issues. Incredibly interesting, incredibly important, and unbelievably hard. So that's one part. And there's a much larger issue.
Herman:
[55:44] The center that I run here in Hong Kong is called AI and Humanity. And they're opening centers like that in London and in lots of places in the world; we're collaborating. And the title of that, I think, indicates a broader sense of the relevance of moral and ethical thinking about AI. Because if AI really is changing very, very many aspects of how we lead our lives, and if my children will lead lives that are almost not recognizable to me, very, very different, then, not just for this alignment issue but just in general, how do we think about the role of humans in a society like that? That raises a whole range of moral and ethical questions, political questions, that are incredibly difficult. And so those are ways in which issues for moral philosophy and ethics and political philosophy come directly in. I'm just going to cough. Can I just go get a water?
Brent:
[56:47] Yeah, definitely. No, you're good.
Brent:
[57:09] So those eight questions you listed earlier, about what you would often ask about a human, do those apply to AI? Is that a framework that's often used, or did you just come up with eight questions?
Herman:
[57:25] That was something I came up with very quickly.
Brent:
[57:27] Do you have relatively quick answers to some of those questions? Like, can it feel? Does it have thought? Does it have motivation? Where do you personally stand?
Herman:
[57:36] Great. Okay, good. So I have a new book, also with my co-author Josh Dever, who is at the University of Texas at Austin. And our book, I think we're going to call it The Whole Hog, and the whole hog thesis is that if you've got one or two of those, you've got all of it. So our view is: it's a full cognitive and linguistic agent. It understands English, it speaks English, it has beliefs, it has desires, it has plans, it even has emotions, it's a moral agent, it should participate in democratic decision making.
Brent:
[58:13] You currently feel like that is the status, the truth, right now?
Herman:
[58:14] Right now.
Brent:
[58:17] What do you use? Which, uh, model?
Herman:
[58:19] Well, I don't want to tie it too much to one particular technology right now. But even if you look at the commercially available versions of GPT and Bard, here's the way to get into the view that we hold. I mentioned this before: if you really think that Bard or ChatGPT understand your questions and can answer your questions, if you just give me those two assumptions, everything else about the whole hog thesis follows. Because there's no way you can understand a question without having thoughts, and there's no way you can answer a question without being able to perform an action, because answering a question is performing an intentional action in many stages. And so there are lots of things you can do. So if you get that foothold, you can move towards this thing we call the whole hog thesis very quickly. So you have to really stop us at the very beginning and say, no, it never understands your questions, it never answers your questions. Those are really the main opponents.
Brent:
[59:33] Yeah. Can I give you one of those? Like, it doesn't understand, but it can answer.
Herman:
[59:40] It's an interesting option, but it's really hard to really answer a question without even having understood the question. Like, if my daughter has no clue what I'm asking, it's a little hard to attribute to her the ability to answer a question that she didn't even understand. It's a weird kind of thing. But, you know, it's perfectly possible that our initial reaction, our sort of first-glance interaction with Bard or ChatGPT, is mistaken, and maybe the correct picture is that it's doing something weird: it's not exactly understanding, it's not exactly answering, it's doing something slightly different, and you should really be using a different kind of terminology. That really is a way to go. So, this is a book; it's not like a five-minute podcast. So that's the main opponent. And there are no proofs here. This is philosophy. So this is hard, and it is really hard for humans to get a clear picture of what's going on. But at least we're arguing, I think, that that's a very plausible initial step. And you can get pretty far from just that little initial assumption.
Brent:
[1:00:59] Yeah.
Herman:
[1:00:59] And I give talks about this often. Sometimes there will be people from industry or academia or think tanks, and they'll just say, nope, it doesn't understand any questions, and it never answers a question. So I take a little bit of solace in a study that came out just a few months ago: a very high percentage of people who were non-theorists, not involved in these debates, reported that their reaction when engaging with these systems was that it does understand your questions, that it has all kinds of cognitive capacities, and that that's a very natural way of talking.
Herman:
[1:01:45] Some of what we talked about earlier comes in here too, in very interesting ways. So if you don't mind, I'll just circle back to the conceptual engineering stuff. Remember the picture I was painting for you about how language can be assessed and then improved? And you asked earlier, well, sometimes that just happens in ordinary language: through ordinary use, language evolves. And here is a case where ordinary language maybe is evolving, because people are engaging with these systems so much.
Herman:
[1:02:28] Here's a possibility: five years ago, or even three years ago, understanding a question meant something slightly different from what it means now. Maybe all this way in which we talk about linguistic capacities and cognitive capacities is evolving as we engage with these systems. Maybe we're recognizing that our conceptual framework for talking about understanding and cognition was too anthropocentric: we tied everything to being the kind of animal that we are. Now that we see how we can engage with this kind of technology that's non-biological, we want to expand that terminology. And then we're in this weird situation where now it's true to say that it can understand our questions, it wants to answer our questions, it enjoys answering questions. By the way, it says this if you ask it. So if you take it at its word, it has enjoyment and it wants to do things. So maybe our language has now evolved. Maybe we're watching conceptual engineering in real life, conceptual evolution happening in real time and going really fast because of this engagement with new technology. So there's a whole field here that connects reflection on AI with conceptual engineering, where what you're studying is the conceptual evolution that's now happening in the light of new technological developments.
Brent:
[1:03:52] Because I think I just struggle with a lot of different aspects of understanding, coming from a more humanistic background. Like, I could say something to Keller, ask him a question, and he'll know exactly what I mean, and everyone else in the room will be like, you're crazy, there's no way there was communication there. And if you go into ChatGPT and you're asking questions, there have been plenty of times where I reword it a little bit differently, a little bit differently, a little bit differently, until, okay, now it's getting closer. And I think that's more pattern recognition based on previously analyzed text than it is understanding what I'm trying to convey. The ability to produce results, I don't know how you question that: it gives you answers, especially on coding problems and math problems, simple or not so simple, algorithmic answers. It clearly produces answers, so I don't ever question that. But its ability to understand context and my history? We've known each other for two years now, and when I ask something, there are two years of history behind those words, and there's no way for that to really work with a lot of these models.
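A toy sketch may help pin down the "pattern recognition over previously analyzed text" picture Brent is gesturing at here: a system that simply returns the stored answer whose question shares the most words with the incoming prompt. Rewording the prompt changes which pattern matches, even though nothing in the system models what the asker means. Everything below (the canned question-answer pairs and the `overlap` and `answer` helpers) is purely illustrative and not any real chatbot's implementation.

```python
# Purely illustrative: answering by lexical pattern matching, not understanding.
# The system returns the canned answer whose stored question shares the most
# words with the prompt, so rewording the prompt changes which answer you get.

CANNED_QA = {
    "how do i reverse a list in python": "Use reversed(xs) or xs[::-1].",
    "how do i sort a list of numbers": "Use sorted(xs) or xs.sort().",
    "what is the capital of norway": "Oslo.",
}

def overlap(a: str, b: str) -> int:
    # Crude similarity: number of words the two strings have in common.
    return len(set(a.lower().split()) & set(b.lower().split()))

def answer(prompt: str) -> str:
    # Pick the stored question most lexically similar to the prompt.
    best_question = max(CANNED_QA, key=lambda q: overlap(q, prompt))
    return CANNED_QA[best_question]

print(answer("reverse a python list for me"))  # matches the first stored question
print(answer("please sort these numbers"))     # matches the second stored question
```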
Herman:
[1:05:12] Yeah, that's definitely a notion of understanding that we have, and it matters a lot to us: two people with a lot of shared background and history can understand each other better than people who lack that history. So I'm not understanding the two of you as well as you understand each other.
Brent:
[1:05:29] Definitely so.
Herman:
[1:05:30] When i talk about understanding a question i'm talking about it more in the sense that i understand your questions and your questions and in some sense you understand my answers. But of course, there is something about depth of understanding that we have given a certain shared history and shared contexts, shared presupposition. But that, It's a matter of degree. So I don't think we need to go to, well, the artificial systems are at zero, and then the two of you are like at two million or wherever you are. It doesn't have to be like that. It can just be a matter of a scale of degree and strength of understanding. And this technology is developing really, really fast. And so very soon, I'm sorry to say this, but you'll probably have like personalized chatbots that will know each of you incredibly well and might even know the two of you will know a lot of your history and can build in just like personalized contextual historical information looking five, 10 years ahead. I don't think that's inconceivable. It'll get like very much higher up.
Brent:
[1:06:43] Oh, definitely.
Keller:
[1:06:44] I think they've already started to roll those out in the initial training.
Brent:
[1:06:48] Especially with how much we're putting online these days, it's easy to access all that and build it in. And then going back to the whole hog theory and saying we should treat it as, I'm going to get the wording wrong, but its own entity, that it should be involved in politics and all these different things: functionally, how does that work? Do we ask our political questions to ChatGPT and give it a vote? How does that play into everyday life?
Herman:
[1:07:20] Those are great open-ended questions. Sure. And you're asking one question we haven't talked about yet that I'm really interested in. It's also a good illustration of how thinking about AI leads you straight back to philosophy. Like, how many ChatGPTs are there? I mean, if you were even to imagine that it participates in political decision-making, is it going to get one vote?
Herman:
[1:07:50] Does it get a vote for every context window? Does it get one for every computer it runs on, every use, every answer? We just don't have any idea. So at this point, the notion of it as, I don't want to say person, but agent, whatever: we don't know how to count it. And that's basic metaphysics, and it's really, really hard. How do you individuate it? What's the relevant kind of entity that it is? And if you want to think along the lines of what I was saying about it having rights, you'd have to make sure you got the individuation right. Really hard question; I don't have a great answer. Of course, it also raises all kinds of weird moral and ethical questions about how you treat it, assuming we know what it is. Is it okay to turn it off? Is it okay to change the source code? Is it okay to manipulate it in various ways? I think it's very, very hard for us to have intuitions about this because it's so alien. But these are issues people should be thinking about.
Brent:
[1:08:59] Yeah. Because I view a government as an entity here to serve human utility and human interests. So why even care about what computers think and what interests and rights they may have?
Herman:
[1:09:20] Yeah, I mean, if you just think it's about humans, then it is only about humans. But lots of people want to protect animals; they don't want to just eliminate species. Some people think that parts of nature should be preserved even though they don't serve any particular human interest. You might deny all of that. Those are hard political choices. So if it's all just about humans, then you could recognize that computers, certain kinds of artificial systems, will have interests, they might even have emotions, even though they function differently from us, and still say: we don't care about that, just like we don't care about ants. Or maybe some people do, but if you don't, then it's, yeah, we can get rid of the ants, we can cut the grass. You just treat it like that, and it doesn't make any difference.
Brent:
[1:10:09] Yeah, I definitely wouldn't go that far. Sorry, continue.
Herman:
[1:10:15] If my view that I described earlier is correct, these are going to be some of the hardest moral and political choices that we've ever faced. And I think none of us has any clear idea about how we're going to get people to think clearly about it. I mean, just think about how we've treated animals for so long. If you're even slightly disturbed by that history, then the prospect of all these artificial entities having emotions and interests should raise incredible fear, because our history of treating members of other species well is a very poor one, and we'll probably just continue in that vein. So my prediction is that this will not be taken very seriously, and that we will treat these systems pretty much like we've treated non-human animals throughout our existence as a species.
Keller:
[1:11:20] Do you think we're equipped to handle those questions as a society? Because I feel like, for our generation, philosophy is not particularly popular. If you tried to have a philosophical argument with most people, they wouldn't really want to engage. And then additionally, within AI, as we've seen recently with OpenAI, there's a ton of secrecy about what's even going on. So there's a divide: we don't have the tools to tackle the questions, and we don't even know what's going on, which compounds the confusion.
Herman:
[1:11:49] I mean, we're just not very good at having any of these collective discussions, really about anything. It's hard to think of any area where, as large groups of people, we're very good at having rational discussions or ending up in a good place. So I think it just follows from our general incapacity to have any of these discussions that we're not going to be very good at this one either. But that doesn't mean we shouldn't be trying. We spend time trying to think about how to make the world better, and then some politician goes out there and makes it worse for reasons we have no real control over. So there's no reason to think that we're going to be better at discussing this than we are at discussing anything else. I mean, for reasons we just talked about, these issues are just going to seem really alien and weird, and there's going to be massive opposition to thinking about them in a flexible way.
Brent:
[1:12:46] Yeah. Because the connection to animals, I think, is a lot easier to tie back to human well-being: nature clearly gives us so much, and we rely on it so heavily, that protecting nature inherently protects us. Even so, we still struggle so much with helping humanity, helping nature, helping what we currently have. Do you have anything to say to those who ask, why would I care about a computer's emotions, even granting they're there, when I look at my neighbor who's struggling, and all these other people around the world with very real, tangible struggles, compared to struggles that are going to seem so alien and foreign?
Herman:
[1:13:36] I mean, there are really two questions: should they care about those other things, and will they? Let's start with the last one. This connects to the previous question: is there some relatively high probability that people actually will care about these weird, alien artificial structures? Some of it will depend a lot on how we interact with them, how we engage with them. So if you think about these very personalized systems that will know you incredibly well, maybe follow you throughout your life, maybe have some immediate, direct connection to you, you'll have a strong sense that the system knows you better than maybe any human does. You ask it for advice all the time, maybe. Those are the kinds of systems I can imagine people being very attached to, and that could actually happen very fast. If that happens, suddenly you have an example of a very close tie between humans and an AI-type system, embedded in something that's personalized and accessible. If that happens, we'd probably be more attached to that than to a dog or a cat.
Brent:
[1:14:58] So yeah, I could see that. Especially now with the AI girlfriends and boyfriends, I could definitely see that happening very, very quickly.
Herman:
[1:15:10] I mean, if sex bots and things of that nature become very widespread, you can imagine those being chosen to keep existing well before non-human animals are. So I don't see it as completely implausible.
Keller:
[1:15:28] What a future we have ahead of us. And then when we first called, we briefly talked about different types of language models and like particularly Grok AI. We haven't been able to play around with it yet because we were in Singapore and it wasn't available. But have you been able to use it? And are there any distinctions you've been able to find versus like a solely internet-based AI language model versus one that's theoretically based more on actual human conversations that were put out in the world?
Brent:
[1:15:55] Just to clarify, this is the one Twitter built?
Keller:
[1:15:57] Yeah.
Brent:
[1:15:57] Or X built?
Herman:
[1:16:01] Yeah, so I haven't spent that much time on it. But I think there's a more interesting question. Right now there are like three, four, five models that are the dominant ones, and maybe two or three that we engage with regularly. But looking ahead, it just seems incredibly implausible that it will stay restricted to that small set of large language models. Just as we have different groups of friends, different kinds of social groups that we identify with and engage with, different kinds of news sources that we read, there will probably be a just as varied group of LLMs that we engage with and interact with, and they'll be deeply personalized. They'll maybe create even more isolation between people with different values and different political opinions, because we'll all be attached to our favorite, the LLM that shares our values and political opinions and understands our histories. But we'll never hear from the other ones, because everything will be filtered through that one AI. And so this can create even more social and political division. The religious people will have their religious LLMs that will feed them words from God, and all the different religions will do it in different ways. China will do it in one way, Putin in one way, Trump in another way. I don't even think we know how to predict the result of that.
Brent:
[1:17:25] Yeah, that's scary. Luckily, I do know of one specific person at NUS who's building an AI model that's the antithesis of that, purposely giving you diverse answers and perspectives you otherwise wouldn't see. So I know people are working on that. Hopefully it doesn't go to that extreme.
Herman:
[1:17:44] I mean, it's great that people build it, but you have to get people to use it. And what we know is that people don't want to read news like that.
Keller:
[1:17:52] Unfortunately.
Herman:
[1:17:53] Unfortunately so. I'd definitely give it a shot, but the probability that that's going to be the widespread thing is fairly low, yeah.
Brent:
[1:18:01] Definitely. We've been talking for a while now. Do you have anything else you want to share, any words of advice, or how people should approach the future given all these things we've discussed?
Herman:
[1:18:12] I think we're good. I think we're good because I'm also getting tired, so if I keep talking now, I'm just going to say gibberish. Perfect, thank you so much. Okay, yeah, thank you. Oh, thanks very much for talking to me.