TECHTRANSFER INTERVIEWS
Interview with Professor Horia Pop
Department of Computer Science
Babeș-Bolyai University of Cluj
Interview by Horațiu Damian – UBB TechTransfer
Professor Horia Florin Pop has 33 years of experience in mathematics and computer science. He teaches and does research at the Computer Science Department of the Babeș-Bolyai University in Cluj, Transylvania, Romania. His teaching and research cover areas such as Scientific Data Analysis, Fuzzy Logic, and the many ramifications of the tree-like field called AI – Artificial Intelligence. In addition to the rigour inherent to the discipline, Professor Horia Pop also reveals another dimension, a philosophical one, given to inquisitive questioning, thought experiments and reflections on the human condition. A superior intellectual temperament – and the most appropriate conversation partner at an inflection point such as the one we have just passed through, which seems to separate two eras: before and after AI.
Q: It’s a peculiar balancing act between being scientifically rigorous and being accessible… especially in an esoteric field like Artificial Intelligence…
HP: The concept of the Cultural Brewery comes to mind. The scenario I have in mind is similar, but a bit extreme: you’re at the Brewery, somewhere between the fifth and sixth beer. Between the moment you place your order and the moment the waitress brings your beer and puts it on the table, in those minutes you have to explain to the audience what your job is about and where its interesting points are. Whether it’s antivirus software, whether it’s the cloud, whether it’s cryptography, whatever the topic, you have three minutes to explain to people, in a moment of relaxation, what the point is. People who aren’t your students, people who aren’t your colleagues, people who don’t know anything about your profession…
Q: …Who are novices in math…
HP: Exactly. And our people in the field are scared of such a task. It’s easier for me because I come from a hybrid family, and that has always helped me. My father is an electrical engineer. All of my father’s brothers are engineers. My mother is a philosopher specializing in the philosophy of science; my grandfather on my mother’s side was a professor of logic. My grandmother on my mother’s side came from a family of Greek Catholic theologians. That explains, I think, a lot of this fondness of mine for philosophizing, for moving a story from slide to slide through different fields.
Q: This explains why you said, once, that Artificial Intelligence is a problem that will have to be approached from a multidisciplinary perspective. To have mathematicians, but also lawyers, but also sociologists and – this struck me – theologians. And I thought, “What are theologians doing in a field like this?” Here we are talking about mathematics, computers – that is, machines, mechanisms. Usually the theologian deals with problems of the soul…
HP: Exactly. Artificial intelligence is a science about constructing algorithms, methods, techniques that produce results accepted by humans as demonstrating intelligence. That’s the way things have advanced. Very late in the day I realized that the definition is not complete. The definition of AI is a moving target, we are trying to hit it, but it is always moving, evolving. Artificial Intelligence is starting to become an investigative, introspective inquiry into ourselves, our own intelligence and the intelligence of nature. It’s no longer about the capacity of the computer, of the program – it’s about how we algorithmically understand our own human intelligence. And if I put it that way, then I accept that an interdisciplinary panel must sit next to me at the table. Because I need a psychologist, I need a sociologist, I need a philosopher, I even need a theologian to explain to me the difference between wit and algorithm.
Q: Because in fact, Artificial Intelligence tends to become a mirror of man.
HP: That’s how we thought of it and that’s how we built it. Here’s another interesting thing: so far we’ve been dealing with what we improperly call Artificial Intelligence. I prefer to call it computational intelligence – that narrow intelligence, that problem-solving mechanism in a limited universe: the AI in the washing machine; the AI in the phone, which recognises faces – in fact matters of optimisation algorithms, where you need a lot of data, you need storage space, you need all sorts of heuristics. In mathematics people are taught to go for the perfect solution, the optimal solution. But in Artificial Intelligence the optimum is no longer what matters. In what sense? More and more often people are satisfied with a very good solution obtained very quickly. They no longer need the perfect solution, for which they would have to wait a long time. This shift from optimality to pragmatism is one of the new problems that Artificial Intelligence raises.
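A minimal sketch, in Python, of the optimality-versus-pragmatism trade-off the professor describes, using the classic knapsack problem (the items and numbers are invented for illustration): the exact solver is guaranteed optimal but exponentially slow, while a greedy heuristic returns a very good answer almost instantly.

```python
# Exhaustive search vs. greedy heuristic on a small knapsack instance.
from itertools import combinations
import random
import time

random.seed(0)
items = [(random.randint(1, 20), random.randint(1, 20)) for _ in range(18)]  # (value, weight)
CAPACITY = 50

def exact_knapsack(items, capacity):
    """Try every subset of items: guaranteed optimal, but O(2^n)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

def greedy_knapsack(items, capacity):
    """Take items by value/weight ratio: fast, usually near-optimal, no guarantee."""
    total = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

for solver in (greedy_knapsack, exact_knapsack):
    start = time.perf_counter()
    result = solver(items, CAPACITY)
    print(f"{solver.__name__}: value={result}, seconds={time.perf_counter() - start:.3f}")
```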
The Terminator won’t come to the meeting
Q: Should we understand Artificial Intelligence and computational intelligence as two degrees of separation or as two different things?
HP: We understand the same thing. However, I prefer the term computational intelligence because it’s a bit kinder to laymen, to ordinary people. People on the street, ordinary people understand Artificial Intelligence as something like Terminator. And here’s the rub. I’m not at all convinced that something like that, a Terminator, is feasible. Maybe that’s why I prefer the term computational intelligence because that’s actually what’s being done so far.
Q: Do you think that the public, the common man, projects his dreams, his utopias, through the image of the Terminator – or in similar terms?
HP: More like nightmares. And where does that nightmare come from? From man’s power to create tools. Throughout our evolution we started by taking stones in our hands, which we sharpened, turned into knives, made tools that we used to make our lives easier. Then we built more and more complicated devices. At one point we built heavier-than-air machines that flew. And now we are starting to make tools, toys, devices that free us from intellectual activity. Hence the nightmare: that at some point we will succeed in creating something that will make US useless. The spectre of becoming redundant is people’s nightmare right now. All the talk about Artificial Intelligence goes there, even when the fear is not openly expressed. And that’s another reason why I really want psychologists, sociologists, philosophers, theologians with me. We need these people to lift our heads above the clouds, to see things from the other perspective.
Q: Those you mention, researching the human being, will however also talk about Artificial Intelligence, which is an emanation of the human brain, of human thought (which they research).
HP: The theological hypothesis – God created the world, created us – would need a corrective: surely God did not also give us the right to play God. Further on comes another obsession of mine, the question: do quantitative leaps automatically lead to qualitative leaps? Yes and no. If you have a mathematically oriented mind, if you learn a lot of mathematics, if you solve a lot of problems, at some point that leap happens and you become better than you were before. But to create an autonomous and independent being who gains self-awareness? I don’t know if it works.
Q: You don’t know if it works? You don’t know if it’s achievable?
HP: My opinion is that it’s not feasible. I mean it’s not going to get to that point…
Q: Will it be an absolute impossibility?
HP: I think so. It will never get there. Where does the problem come from? I stop at randomness, at our internal generator of random events. I have no idea what it looks like, no idea what part of our mind is responsible for intuition, for thinking outside the box. But I have this case: general X takes a shower after he wakes up in the morning, and in the meantime the idea of how to win the war comes to him. That idea is not based on deductive, rational processes, on his training. It is an intuition; it is based on inspiration. There are plenty of such cases, which show us that we have such a mechanism inside us, a randomness generator. All we have managed to model in science are pseudo-random generators – not 100% randomness, but sequences that start from a seed, a core of values. Thinking out of the box is intuition, whereas the whole Artificial Intelligence business starts from prediction, from lots of data that the computer swallows and churns to extract trends, patterns. Well, humans didn’t evolve that way. The humans who created value are not centroids, they are outliers – people who think differently from others, completely differently from traditional thinking. They think intuitively.
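A small illustration, in Python, of the point about pseudo-randomness (an editorial sketch, not from the interview): a software random generator is completely determined by its seed – the “core of values” – so the same seed always reproduces the same sequence.

```python
import random

gen_a = random.Random(42)  # two generators...
gen_b = random.Random(42)  # ...started from the same seed

seq_a = [gen_a.randint(0, 9) for _ in range(10)]
seq_b = [gen_b.randint(0, 9) for _ in range(10)]

assert seq_a == seq_b  # identical: the "randomness" was fixed the moment the seed was chosen
print(seq_a)
```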
Q: When learning machines keep processing and processing the data, and when they keep identifying and applying various correlations – can’t it happen that in this process intuition appears as a secondary phenomenon within the machine?
HP: Only as an evolutionary leap? No. It would only arise if we were able to model our outlier-based thinking as a set of rules that we could feed into the computing machines. Then and only then would the phenomenon you mention be possible. But even if we succeeded in doing that (which is, for now, a far-off prospect!), even then we would still be better than the machine. Because we will always have other, different out-of-the-box thinking mechanisms in our minds that we won’t yet have decoded, so we won’t have been able to encode them into sets of rules that computers can execute. And that means we’ll always be ahead of the machine. I’m optimistic.
Q: So, contrary to urban myths, we can be very optimistic.
HP: No, we can’t. And it’s very good not to be – neither very optimistic nor very much at ease. It’s very good that we’re concerned about this issue, that we’re stressed about it. Mankind has advanced and progressed when it had no choice, when its back was against the wall. When man is threatened, when he has no choice, he produces good solutions. Otherwise, man is rather lazy. That’s why I say: it’s very good that we’re worried; it will help us to be better, to better understand our advantage over the machine.
The impossible ban
Q: I’m one of those who have used ChatGPT. I have to be honest: I’m a bit of a dunce at IT, to say nothing of math. But here’s my impression, and I have to share it with you: while using it, forgive me, I had the very strong feeling that I was conversing with something conscious.
HP (amused): You know what the joke is? You’re exchanging messages with an entity that seems to be producing something thought up by a conscious human. It only looks that way, because there’s still machine learning behind it: for example, if you feed it nonsensical information, it will produce nonsensical information. It can’t learn more than you’ve taught it, so it can’t become autonomous. It has no foundation for that. ChatGPT is that narrow, closed-loop, limited-universe problem-solving artificial intelligence. The controversy is on another matter. There’s a whole discussion going on online and in universities: should ChatGPT or other bots be banned? Shouldn’t they? Can they be banned? Is a ban feasible? We had a discussion in the department about this, not necessarily to make a decision, but to brainstorm and see where we stand. Conclusion: to ban something, you have to have clear mechanisms to enforce that ban. And once the ban is in place – will it be effective? Above all, the measure must make sense.
It will go something like this: students and pupils will use ChatGPT or other bots. For you, as a teacher, that is only a problem if you don’t do your job properly. If you do it well, the issue is irrelevant. The solution is this: with every student paper/report/material, ask for the first draft, the first outline, then a second draft with each paragraph listing its source of inspiration. Where did the idea come from? Where did you get this one? In draft 2 everything will be indented, underlined, with references – so you get an idea of the thought process, of how the final material was arrived at. And if I can see the thinking mechanism, ChatGPT can be an excellent source of information. Of course that makes life more complicated, both for the student and for the teacher.
Q: Basically, it is already being said that ChatGPT is the students’ curse, because from now on formal papers won’t do; professors will force them to take oral exams and explain their ideas…
HP: I insist on the fact that we are intellectuals and we want to produce intellectuals, i.e. people who can reason, who can express themselves in writing, who can explain their thought process, their reasoning. And here is the great gain of the new technologies: they force you as a teacher to formulate the requirements clearly, but in such a way as to discourage the incorporation of texts from ChatGPT. Because ChatGPT actually gives you thin soup. It doesn’t give you the reasoning, it doesn’t give you the argument, it doesn’t give you the investigation, it doesn’t give you that pathos you expect. That’s the difference. It’s an eyeless, expressionless intelligence.
Q: Inexpressive intelligence or not, is it leading us towards an evolutionary wave similar to those prophesied by Alvin Toffler?
HP: In my mind, that AI 4.0 has a lot of interpretations. The interpretation I like is the one in Alvin Toffler’s sense. It already forces us to think differently, to act differently, to build different tools. No, Artificial Intelligence does not eliminate jobs; the people who operate Artificial Intelligence eliminate jobs. The truth is this: fewer people will be needed, but it’s not AI that creates the problem, it’s the people who operate AI. You’ll have higher output, you’ll do more things in a unit of time than before, you’ll need fewer people for a task; everything will take on a different meaning.
Q: What AI developments do you most expect in the near future?
HP: Since we’re talking about the near future, I expect some of the toys that already work in the experimental, limited area to take the form of consumer items and become widely used – things you can buy at the corner shop. I’m looking forward to glasses through which I can look at an inscription in Hungarian in front of me and have it translated into Romanian. And that would work not only for modern Hungarian, but also if the inscription is in old Hungarian. Or travelling to a church in England and being able to read the inscriptions on the monuments in several languages. And seeing, through 3D modelling, the evolution of the monument over time. Or a copier: you put in the page to be copied, with text in one language, and you get a copy with the text translated. Let’s imagine: you’re walking through old Cluj, you have your glasses with a screen in the corner of your field of vision; you look at this building and the screen neatly displays the history of the building; you look further on – and in real time the history of that building is displayed. Or: you go to Rome, you walk around Trajan’s Forum with these glasses on your nose, and you see the image of the original site. Not at home, on a computer, but on the street, as if you were walking around at the time when the Forum was standing. All these things actually exist today. But they are still commercially limited – limited in scope, limited in language, limited in price. Audio tools like this already exist on the market; they are available and can be bought, and 200 euros is not a lot for what such a device gives you. You have the audio part, recorded and spoken, and the screen part, written; you set it up from Japanese to Romanian and hold it up to the saleswoman’s mouth. She speaks in Japanese, and on your side you hear the Romanian translation in the headset. And vice versa. This is already a commercial product; 20 years ago it was just a research topic at a university.
Q: Yes, it totally changes the facts of the matter, your understanding of the human condition, your understanding of history and evolution…
HP: I’ve thought a lot about one thing. It’s related to AI only indirectly: the war in Ukraine. I’ve been wondering: in a hypothetical setting in which NATO didn’t work and we ended up with war on top of us – in such a situation, what do you take with you when you go into exile? Eighty years ago, when Northern Transylvania was under Hungarian occupation, my grandparents nailed their wardrobes shut, put them on the wagon and moved to Timisoara. Those cupboards are still in our house, in my parents’ bedroom. But today all our wealth is in the cloud: e-books, music, e-services, banking. What more do you need? A backup hard drive and a bag with a few changes of clothes, plus the envelope with the diplomas, some cash, and that’s it. Basically, the changes since my grandparents’ days are fantastic.
Q: What does the future hold?
HP: My favourite example is differential equations. From the time differential equations first appeared in mathematics to the time their first practical application, related to the orbits of the planets, was published, a hundred years passed. Now the transition from fundamental research to practical application is measured in years, and that time will keep getting shorter. We end up with applications only months old already being old news. My only problem with this is that we’re entering a world of services that looks like a kind of market Marxism. And this market Marxism scares me. Instead of owning my own phone, I pay a subscription to a service that gives me the latest phone all the time, just so I can enjoy, right, the latest technology. But this way I no longer have something of my own; I depend on an external factor, I lose my sense of ownership. Extremists will latch on to exactly that: it’s not like in our time – back then I had my phone, it was my device; now it’s not mine, I no longer live in my own house, I just rent. After all, I earn some money, but with my money, instead of owning something, I get services, access to resources, and so on. It’s worth having a discussion about this.
The Rights of Artificial Intelligence
Q: You sketch a society in which Artificial Intelligence has been adopted. However, since we live in Romania, when should we expect such adoption?
HP: In our country (Romania – ed.) the adoption of the Internet went very fast, because the leap was total: it started from scratch. There were no legacy systems to wrestle with – we didn’t have them – so the latest equipment was introduced directly. That’s what’s going to happen with AI. Twenty-odd years ago, around 1999-2001, I was in Hamburg, at the university, in a research project aiming at an audio-to-audio English-Chinese, English-German, English-Japanese translation program. Ten universities and ten companies, a consortium funded by the Deutsche Forschungsgemeinschaft. At the time, the results were good enough to merit further investment, but not good enough to abandon the human translator. Today? You already have these toys on the market, electronic translators of good quality. And the next quality leap will come not in 20 years but in 10, and the one after that in 5, and the renewal interval will keep getting shorter. Because they all build on an increasingly rapid accumulation. And it’s natural that this should be the case: not only do you have more data, higher speed and so on, but the theoretical methods are also getting better and better. In parallel with solving practical problems, you are also innovating theoretically. That’s great. That’s why consortia are becoming indispensable. Because inevitably we all end up in a situation where we can’t do anything on our own.
Q: Isn’t this a mimicry, a social reproduction of brain mechanisms? There we have neurons, which individually are not worth much, but by interacting with other neurons make the brain work?
HP: Sure. That’s where we end up. Look, I like science fiction movies. Not necessarily the catastrophic Terminator movies, but the Stargate, Star Trek, Star Wars kind of series. Star Trek: Voyager at one point introduced the concept of the bio-neural gel pack: the ship’s intelligent mechanism ran on bio packs. That could be interesting stuff. Terminator-type artificial intelligence might not be possible, but the hybrid combination of the augmented human brain with AI – yes. Because without biology such a quantum leap doesn’t seem possible. Grey matter can’t be replicated; in the end I need that material with its properties. I can’t replace it in the abstract with wire and integrated circuits. My guess is that we will eventually be forced to shift the focus from a negative source – the computer – to a positive one: the augmented man.
Q: Is that what it’s going towards? I see you don’t call it an Artificial Intelligence hybrid but Augmented Human.
HP: It’s not a hybrid, because a hybrid implies equality between the two components. That is not the case here: humans will continue to dominate the relationship. But this new hypostasis will give rise to entirely new problems. What am I thinking of? Suppose I have an accident, lose a biological eye and replace it with an artificial eye that looks identical. I manage to create the neural, optical and other interfaces. But the new eye has completely different properties: I see much better, I see much further, the eye doesn’t get tired, and so on. Am I allowed to compete in competitions while using this eye? To answer this question I need – not only lawyers, not only doctors – but all of the above: psychologists, sociologists, philosophers, ethicists, theologians. Because each of us helps the others. How do we help each other? To understand together something we cannot understand individually, because individually we lack all the necessary elements. New problems, like: there are competitions for people who have no legs and who run on steel springs instead. With those they achieve completely different performances, which cannot be achieved with natural legs. How do you judge such a situation, how do you regulate it? And now let’s imagine, by analogy, a man who receives, let’s say, a prosthesis – a device that increases his physical or intellectual capacities, a device that works thanks to Artificial Intelligence. How do we proceed in such a case? I am fascinated by this issue.
Q: Will we live in a time when Artificial Intelligence will reach such a level that it will have to be treated as a subject of law, holder of rights and obligations?
HP: At some point this will be an issue. I was at a law conference not long ago. I said to them: the problem for the lawyers is that their style of action is to let things evolve freely in society and then intervene with rules that formalise how society looks at those things. “Gentlemen,” I told them, “you won’t be able to do that with Artificial Intelligence.” Suppose Sofia, the little robot who has been granted Saudi citizenship, escapes and wanders around the world on her own, goes online, finds out everything she can, immediately realises that she is a civilisation made up of a single individual, created to be an instrument of another civilisation – and immediately realises that this is slavery. She then knows what slaves have done throughout history to slave owners. If we get to that point of self-awareness… then you will have to change human rights legislation into intelligent-species rights legislation. And legally define the intelligent species. So that man passes the bar, but dolphin, monkey, elephant do not. Why? Because you would have to give them the right to vote. Or maybe the other way around? That opens up other problems. But in either case, everything has to be thought through and formalised. That means protecting the rights of intelligent species. The danger is that if you set a neutral standard, unrelated to being human, you will always find people who do not meet the criteria for being called intelligent individuals. What do you do with them – take away their rights? Demote them from being human beings? Such legislation is relevant at the level of the species, not at the level of the individual…
The antifragile man
Q: There is also this possibility – or danger. And what would be the way out of such an impasse?
HP: In the end the solution will come from man himself. I am fascinated by the confrontation between fragility and antifragility. The fragile object is the one that breaks if you apply a certain force to it. The antithesis of fragile is not robust, because the robust object will also resist force, but if you push harder it will still break. The resilient object resists: it absorbs the energy, it deforms, but then it returns to its original shape. The antifragile object goes further: it learns, it transforms the negative information into a positive structural change; its structure after the event is better than it was before. Kind of like some martial arts. Man as a being is built to be antifragile. Negative events happen to us so that we learn something positive from them. I know of only one institution today, originally created in Europe by the church, that is antifragile by definition: the University. The university’s autonomy is based on antifragility, on everyone’s right to develop and to learn. If I fail, or my department fails, the rest of the university learns from that and incorporates the information in a form that is better than before. The acute theme, now and in the future, is: how can you incorporate antifragility into a machine, a computer, a program, and so on…
Q: Isn’t it built in by default, in that AI is by definition self-learning – always learning from experience, avoiding mistakes?
HP: It can get there. But, I repeat, humans will always be better, because this whole process by which we get to AI has to be shaped by humans. And as we come to understand an aspect of our mind, we will be able to model it – and so remain one step ahead. We are always one step ahead.
Q: How do you see the relationship between IT professionals and professionals in other fields?
HP: It’s like the story of the man with his feet on the ground but his head in the clouds. We, the IT people, keep them on the ground; they pull us into the clouds. The advantage we Artificial Intelligence people have is this power to model. That’s where our success comes from, in our research and in our doctoral school. We take the problem, we turn it over – whether it’s chemistry, meteorology, cell biology, etc. – and we see how to model it, how to control it, how to find solutions in our field. And in doing so you turn your field towards the other one, so that the two connect. It appeals to students, it appeals to companies, it appeals to the community, and it’s not abstract research done for the sake of abstract research.
The 3 pros…
Q: If I may come back to you with a question: what would be three areas where the benefits of Artificial Intelligence will be visible soon?
HP: I have three areas in mind where the technology is very close to being put into practice, but where it creates problems that make me want to take a step back and talk to lawyers before going any further. One: autonomous driving. Instantly the problem arises: when I have a human driver, he is responsible for everything that happens to the car while driving. When the car is controlled by the autonomous pilot, who is responsible? Because the Windows licence that you and I both have says: “as is”. I know it works, you know it works, but nobody answers for it – that’s the problem. What if the autonomous car’s software is also delivered “as is”? This is a subject that deserves to be investigated, discussed with all the components of the IT, research and industrial structure, with everyone involved: where, with whom does the responsibility ultimately lie? My feeling is that we are in an IT sector where the vast majority of players are not accountable; they do not have a formalized concept of responsibility, and this is being lost.
If you leave the IT people to their own devices, nothing much will happen, because it’s a very lucrative profession, a very financially rewarding field; there’s no interest in regulating it. But the further the autonomous-pilot story progresses, the more people will have to sit back and analyze. That’s also one of the keys in which I read this whole move by firms to regulate AI.
On July 26, 2023, major AI firms Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, a platform meant to ensure that “AI is developed in a safe and responsible manner.”
Another area: AI-based deepfakes. Remember that video on YouTube, “I am not Morgan Freeman”? I froze when I saw it – until I realized what it was about. The technology already exists, that’s not the problem: all the algorithms are available, you have the footage, you have Morgan Freeman’s voice recorded so it can be synthetically reconstructed, with all its inflections (parenthetically: today’s computer storage can easily hold all the dialogue and all the images of Morgan Freeman from all the movies he has been in and all the footage filmed with him, in order to reconstruct his voice and image). The machinery for training a machine learning model for something like this is at hand. You then have another toy that composes the facial structure and makes the model move its features, composing the facial expressions as it speaks. How can you tell something is wrong with that video? Answer: only the face speaks; the subject doesn’t gesture, doesn’t move, the body is expressionless. That’s how I knew something was wrong.
“This is not Morgan Freeman – A Deepfake Singularity” (the video referenced above)
And I’m not talking only about deepfakes, I’m talking about business as usual. There have been cases like this. A young woman who works as a fashion model is called to a shoot, where she gets €1,000 for the whole job; the next time she does a fashion show, another €1,000. Well, if she happens to be invited and scanned, and she signs away her image rights because she has no idea what it’s all about and no way of knowing… From then on she no longer controls what happens to her image. And it’s not just that she doesn’t make money from it. That image can be used in any way: to present a type of clothing she doesn’t agree with or that doesn’t suit her – and other, more vicious uses. There are royalties involved, intellectual property rights, the right to use an image in a way the owner does not agree with. And we pass over the issue of deceased people, who can be made to act out situations they were never in during their lifetime. With how advanced technology is today, we could make a movie of a team of ancient Greek philosophers playing soccer. Has anyone asked them whether they want to exist in such a virtual composition?
Q: Indeed, the consequences can be extremely arborescent – some very good, some extremely harmful.
HP: You have the ingredients. That’s the strength of the field. So these are the four applications I see in the very near future: machine translation, virtual glasses, autonomous cars, and virtually made movies.
…and against AI
Q: Let’s take the example of virtual movies, made from scanned models, digital matrices. All this puts people out of work – not just the actors, but also the cameraman, the lighting designer, the set designer, the editor, the technicians, even the director. In other fields, moreover, the autonomous truck pushes the driver out of his job…
HP: Yes and no. It will definitely put people working in the audio-visual industry out of jobs. But the beautiful stories remain – the experimental films, the documentaries, the things that belong to the countryside, your area of reference. Man has this strength to find the positive side in his own development.
Q: All well and good, but how do you solve the problem of the unemployed who, let’s say, will be more and more sidelined? Where do they go? Has this issue come up in your circles? Do you ask yourselves this question?
HP: Unfortunately, IT people don’t think about that, that a number of people will become redundant. They lack that dimension, the big picture.
Q: But you are an atypical IT person, a reflective one. You problematize. That’s why I “hunted” you for this interview and not others.
HP: I like to think of myself as antifragile. And to everyone I talk to I try to pass on this spirit of antifragility, of seeking and finding positive development in response to a negative threat. That’s what I try to impart to others. Because then, even if the negative threat doesn’t materialize, you have the positive development in your hand and, at the same time, you have learned something structural: that you can always do something to make yourself better. If that becomes a behavioral mechanism, then you’ll stop obsessing about what will happen. And if it does happen, you’ll be mentally prepared for it, because you’ve posed the problems to yourself and taken some steps to find alternatives. I was telling you about the antifragility of universities: all sorts of rumors circulate that something is happening or not happening. That’s when you start to think and say: brothers, let’s get together and think about how we can transform our toy in such a way that, if it happens, it’s not a shock for us – the blow is simply absorbed. And on the assumption that the story doesn’t happen, our development is positive anyway and we move on. If you manage to educate society in this model, you have gained people who are a little more solid, a little more solidly positioned to face and live their lives.
The University (think) tank
Q: So an oblique solution to the problem. It already presupposes educating inter-human relations vis-a-vis AI – because, well, society is haunted by all sorts of nightmares and monsters when it comes to Artificial Intelligence.
HP: It’s the best reason for us to come out of our tower in the citadel. I don’t like the term “evangelism” in a secular context, but that’s pretty much what we should be doing – missionary work. The gospel is the good news; ultimately it is good news, yes, our message is positive. On the other hand, I’d like to see the process go the other way as well: courses with a social-humanities dimension in the optional curriculum for our students. Not necessarily designed as cross-curricular courses, but folded into the current offering. Such a component opens your mind to other angles of approach. And if not at our university, then where? We have 20-something faculties in absolutely all kinds of fields. We can do it in house, and at the same time learn about each other in our university citadel – which is great and makes us all better.
Q: You offer an interesting perspective. You know that at the moment, both worldwide and at the Babeș-Bolyai University, the emphasis is placed massively on the exact sciences – the “inhuman sciences”, as you call them. Yet I see in you a willingness and an openness towards the humanities, and not out of complacency but out of necessity.
HP: Perhaps it comes from a personal weakness – or a weakness that, I realized at some point, is a strength. In my family, from my grandparents on down – my parents, us, my brother, my children, my grandchildren – in all these generations there are no two people in the same profession. Whomever I talk to in my family, I am talking to someone outside my profession. You can only learn from that. And I would add: we are also from Cluj. I repeat: if the Babeș-Bolyai University doesn’t do this job – bringing people together, the exact fields with the humanities – who will? We have the strength, we have the ability – we are a competitive university; we have the seniority, we have the capacity, we have everything.
Q: Your vision of the University as an anti-fragile organization and as a huge brain, whose components communicate with each other and enhance each other, is quite interesting. If it were to coagulate, the term think-tank would, in the case of our university, really take on a concrete meaning. It really would be a tank.
HP: …And it would function as a dynamic mechanism. And it could do it on its own, autonomously.
Q: It would have enormous weight. And it’s not about self-sufficiency; it’s about the fact that it would always self-correct, even if the wind were to blow it off course, because its autopilot – its collective intelligence – would immediately correct the situation.
HP: Exactly. Look at the Anglo-Saxon world. The Anglo-Saxon world has this system of self-regulation. Their system is based on this mechanism of dynamic rules, in which everyone is determined to think, not to wait for others to do it for them. In our country, if we were to introduce such a model, then, of course, our University could do it.
Drone against Man
Q: You’re familiar with the experiment that made the news a few months ago – a military drone which supposedly decided that the optimal course of action was to suppress its human controller…
HP: That was a story and nothing more. Fiction.
Q: Yeah? But how can the man on the street figure that out?
HP: The common man has no way of discerning what really happened in that experiment. And that’s one of the threats of our times: the difficulty ordinary people have in getting a story like this straight. And then conspiracy theories flourish. It is outside the realm of possibility for the drone to assume autonomy and decide to eliminate the man. I’d think about the motives behind placing such a story in the public domain. What was it intended to do: create panic about AI technology? Or raise awareness in society about the implications that the use of such technology may have? Because the news is a hoax. At this point in time it is absolutely impossible to achieve a level of autonomy that would allow the drone even to consider killing the operator. Not even theoretically.
The US Air Force has stated that the incident in which a drone allegedly decided to kill its operator was just a “miscommunication” – a thought experiment. The story originated, however, with the chief of the US Air Force’s AI Test and Operations division, who recounted it at a Royal Aeronautical Society summit in London.
https://www.reuters.com/article/idUSL1N38023R/
Q: Are you saying the incident didn’t happen?
HP: If anything happened, it was simply a programming error – that is, a poorly trained machine learning program. In no way the intellectual autonomy of a machine deciding whether the operator is redundant. Here we have yet another argument in favor of the plea to bring together specialists from as diverse a range of fields as possible when we are to decide on AI. The strength of an eclectic group is enormous, whereas separately no specialist, and no set of specialists from a single field, can cope with the ramifications of the problem. This is the first time in human history that such eclectic thinking is indispensable. Until now it has been useful; from now on it becomes absolutely necessary. I am glad that we are getting to the point where people realize that they are better off together. After all, Artificial Intelligence confirms our humanity. That’s great!
The Religion of Artificial Intelligence
Q: You are familiar with Isaac Asimov and his Three Laws of Robotics. Could they, or an analogous set, govern AI activity? Could they be like default settings in the architecture of any AI software?
HP: I had at one point an interesting discussion along exactly these lines in a very well-developed English online community, with people of all specialties – just the way I like it. One of them, an atheist, asked me: “Does an android have religion?” The question was cool. My first answer was yes. For example, the android Sofia is Muslim – logically, since otherwise, under Saudi Arabian law, she couldn’t have received Saudi citizenship. I then put the question around UBB: to a colleague who is a professor of Chemistry, then to two theology professors, one Roman Catholic, one Orthodox, and to a psychologist. The chemist answered yes without batting an eyelid. The theologians thought about it; eventually they agreed that yes, in principle, even an android can have a religion. Only the psychologist answered the question with another question: is the android a being? If yes, it has religion. And I think that Asimov’s Laws must be the android’s religion – rules zero, burned into its circuitry.
The Three Laws of Robotics were laid down by science fiction writer Isaac Asimov in his 1942 short story “Runaround”.
The Three Laws of Robotics:
Law 1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law 2 – A robot must obey orders given by human beings, as long as they do not conflict with Law 1.
Law 3 – A robot must protect its own existence, as long as this does not conflict with Law 1 or Law 2.
My only question is: if you imprint such a set of rules on an android, and it is a self-learning machine, at some point it might find that this religion no longer satisfies it and, being always in self-learning mode, it might adopt another one. If we succeed in modelling the concept of religious freedom in the circuits of an android, the laws printed by default may no longer be sufficient. And there’s another question. Suppose the android gains its independence. If it reaches that point of no return, that singularity, then, with the force it will have, the question is: what interest would it have in eliminating us? Perhaps to it we’d be insignificant little wimps. As long as you have no practical interest in killing the ant (and an android will judge strictly pragmatically), you leave it alone. What exactly could we do to make it perceive us as a threat?
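A toy sketch, in Python, of the “laws as default settings” idea discussed above (entirely illustrative: the action fields and veto logic are invented, and no real system works this way): the laws are checked in priority order, so a lower law can never override a higher one.

```python
# Asimov's Laws as priority-ordered default rules that veto an agent's
# candidate actions. The fields ("harms_human", etc.) are hypothetical
# labels invented for this illustration.

LAWS = [
    ("Law 1: must not harm a human",  lambda a: not a["harms_human"]),
    ("Law 2: must obey human orders", lambda a: a["obeys_order"]),
    ("Law 3: must protect itself",    lambda a: not a["self_destructive"]),
]

def permitted(action):
    """Check the Laws in priority order; the first law violated vetoes the action."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return False, name
    return True, None

# Because Law 1 is checked first, an action that harms a human is vetoed
# even if it obeys an order; obedience never outranks safety.
print(permitted({"harms_human": True,  "obeys_order": True, "self_destructive": False}))
print(permitted({"harms_human": False, "obeys_order": True, "self_destructive": False}))
```

The professor’s caveat maps directly onto this sketch: a self-learning system that could rewrite its own rule table could simply edit or reorder LAWS, which is why “burned into the circuitry” – immutability – is the crux.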
Q: If I may, I perceive human beings as competitive by default. Humanity’s tendency is to achieve supremacy, to act that way. After all, that’s normal; that’s how we survived and then became the dominant species on planet Earth. Our competitive nature will drive us, whether we like it or not, to actions that could provoke such a reaction from AI. Could we imagine a world in which we could live together and let them reign supreme? And would a world in which AI has supremacy even be acceptable, livable, for human beings?
HP: It’s a fascinating question. I really wonder if such a discussion with lawyers, theologians and philosophers shouldn’t be had sooner. These are interesting times – and that’s just the understatement of the day.
The operator’s curse
Q: You know this can also be a curse, not just a blessing. And on that note: what would be the three biggest threats from AI – the ones that haunt you and keep you awake at night? Okay, I realize you probably do sleep at night, since you’re a balanced and optimistic person. But what are the three biggest threats that can come from AI?
HP: The three biggest threats boil down to one: human operator error. You can destroy the planet with no problem if you indiscriminately accept the conclusion of your AI nuclear space-defence system that what it has detected on the other side is an attack, and not a flock of birds, as is actually the case. An antidote? We’re at the point where we’re moving from machine learning – I know what goes in and what comes out, but I don’t know what happens inside or how – to Explainable Artificial Intelligence, where I don’t just want the result, I want the arguments that led to the result: how it was reasoned, the iterations. A result that you can evaluate against a value system. This is also what the European Union is looking to generalise across member states – Explainable AI, Trustworthy AI.
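A minimal sketch, in Python, of what “explainable” can mean in practice (the radar-style features and data are invented; this illustrates the general idea, not the EU framework): a decision tree exposes the exact rule path behind each verdict, so an operator can audit the “attack versus birds” reasoning instead of trusting a bare answer.

```python
# Train a small, inherently interpretable classifier and print its rules.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical radar features: [speed, size, heat_signature]; 1 = attack, 0 = birds
attacks = rng.normal([900, 8, 0.9], [100, 2, 0.05], (100, 3))
birds = rng.normal([60, 1, 0.2], [20, 0.5, 0.05], (100, 3))
X = np.vstack([attacks, birds])
y = np.array([1] * 100 + [0] * 100)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The printed thresholds ("speed <= ...") are the arguments behind the verdict:
# something a human can check before acting on "attack detected".
print(export_text(tree, feature_names=["speed", "size", "heat_signature"]))
```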
Q: Speaking of operator error, I also think about the following aspect: the one who teaches the Artificial Intelligence, who trains it – the human being, that is – can transmit his own prejudices, his own, possibly distorted, vision of the state of affairs…
HP: If the AI system you create is such that, through the feedback you give, you shape or privilege one decision or another, and you introduce this feedback as a self-learning mechanism – as a kind of reinforcement, a kind of training – the Artificial Intelligence cannot reject it; it accepts it as good. In reinforcement learning, what do you basically do? At any point where the system can make a decision, you give it numerical weights that tell it that a given occurrence is something good or something bad. If the feedback you give the system as a trainer – “I’ve checked this, I agree” or “I’ve checked this, I disagree” – is introduced as a set of rules the AI system is forced to follow, then it will learn and “internalise” your biases. This is indeed the curse of the operator. You ask the AI system for trustworthiness, but the feedback from which the system learns was not given in a trustworthy way; it was filtered through your own biases.
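A toy sketch, in Python, of the mechanism just described (the scenario and numbers are invented): two objectively equal actions, but the trainer “agrees” with one of them more often; a reinforcement-style weight update faithfully internalises that bias.

```python
import random

random.seed(1)
weights = {"A": 0.0, "B": 0.0}  # the system's learned preference for each decision
LEARNING_RATE = 0.1

def trainer_feedback(action):
    """Both actions are equally good here, but the trainer approves of A
    far more often -- a bias, not a ground truth."""
    return 1.0 if (action == "A" or random.random() < 0.3) else -1.0

for _ in range(1000):
    action = random.choice(["A", "B"])   # the system tries both options
    reward = trainer_feedback(action)    # "I agree" (+1) / "I disagree" (-1)
    weights[action] += LEARNING_RATE * (reward - weights[action])

print(weights)  # A ends up rated far above B, purely because of biased feedback
```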
Q: So in the end it comes back to human error; human beings are “not so trustworthy”. On the other hand, as I have learned from you, the human being is superior to AI, always one step ahead, if not more. Don’t you find this paradoxical?
HP: Yes. We have the capacity and the strength to destroy ourselves and this planet, but at the same time the strength to carry it forward. Did you see the movie On the Beach / The Last Shore (USA, 1959, directed by Stanley Kramer, with Gregory Peck, Ava Gardner, Fred Astaire, Anthony Perkins, Donna Anderson)? With that submarine, the last refuge for the remnants of humanity after the whole of mankind has been wiped out by the radioactive cloud of a generalized nuclear war? One of the characters says: “I hope, God, that at least you understood something of this story, and that human existence on Earth was not in vain!” I didn’t sleep for several nights after watching that movie, still thinking about that line. Here’s the message: I hope we have the strength to learn from such dangers. That’s why even these fancy firecrackers still being released into the public space, like the drone that wants to kill its operator, have their charm: they force us to think about different possibilities. They make us realize the problems.
Q: Are we witnessing an evolution in which mankind is divided, split between those who have the knowledge, the information, the necessary know-how – more and more complex, more and more difficult to acquire – and a class of those who can no longer cope, who are losing the race? A situation like that between the patricians and plebeians of ancient Rome, or the nobles and serfs of the Middle Ages?
HP: I wouldn’t say they lose the race, because here we don’t have a race for the survival of one family against another family.
Q: But doesn’t AI emphasize this tendency?
HP: It does, and it shouldn’t. It is due to the fear that AI generates.
Q: Franklin Delano Roosevelt said that the only thing we have to fear is fear itself…
HP: He was right. We have the interference of two or three planes here. There is man’s awareness that he keeps making more and more complex tools that might make him redundant – everybody feels that. And then there are people frightened by the very idea of the cell phone; people who can’t press a touchscreen properly, who don’t or can’t handle a mouse properly. We are privileged to live in Romania. When half of the country lives in the countryside and has a mental structure like “well, stay put for another week, what can be wrong with that?”, you realize that for these people time has stood still. They have nothing to do with Artificial Intelligence; they don’t even know what it is.
Q: Aren’t they the losers? Won’t they radicalize? In his science fiction cycle “Dune”, Frank Herbert mentions the concept of the Butlerian Jihad, that all-out revolt of humanity, along the lines of the Luddite movement in England, but directed against computers. Can we imagine something similar?
HP: On the urban side the danger exists. In the rural part of society it doesn’t, because those people have a closeness to nature that I no longer have. I’m a man born in the city, in the middle of the concrete, and I too depend on nature; I can’t explain why, but they truly depend on it. And nature gives them, the country people, strength; they somehow know, from their ancestors, how to cope. For them, the problem is the bank machine. But if you give them a simple tool to trade with virtual money, they won’t have any problems – as long as you don’t force them to go to the ATM, and you tell them very clearly: “do your exchanges as you want, pay like this, and it’s fine.” If you do it that way, it will be OK. You have to break their conditioning by an interface element that is rare in the heartland and that perhaps also frightens them somewhat. Over here in the city, Artificial Intelligence could be a problem, because it’s a term that people don’t control, don’t master, don’t know what it refers to – and it could be a nuisance. Hence my preoccupation with awareness raising, this obsession of mine with going out to the cultural beer garden, going out into the city, telling stories and explaining to the public. It seems to me an obligatory, necessary component of the academic profile.
Q: Professor, may I ask you one more question? In the end, my impression is this: everything you have told me, everything I have absorbed – and I will listen to the tape again – has made me realize that we are on the threshold, if not in the midst, of fundamental, unprecedented mutations for human beings. The way we exist as human beings will change dramatically, exponentially, and in less than 50 years – and 50 years is short on a historical scale. Is my perception correct? And it’s not just mine; I think the public senses it too – hence the nervousness generated by uncertainty, the recourse to extremist, totalitarian doctrines…
HP: There are two elements that I see. People don’t like change. If you come to them every year with another measure and another initiative, like the communications companies come with yet another phone upgrade, the man will start to squawk at you; he will get angry. On the other hand, it may be that the advance about to happen will come extremely quickly, on the “all at once” model. And that helps. If you spread it out over a longer period, with changes given in spurts, then by the second iteration the resistance threshold rises. If you do it in one move, resistance doesn’t have time to develop. That’s why I’m an optimist: if you give people the changes as one package, they’ll be fine. But don’t change things on them every year; don’t drag it out by giving a slice, a drop at a time. Simple people – you know the story about “blessed are the poor in spirit” – are not uneducated; they are close to nature. These people have their strength; you can’t lead them by the nose. If you try to serve them a measure in 10 iterations, by the third iteration at the latest they no longer take you seriously. Whereas if you deliver everything as a package, the simple man will find a logic in all those elements, so that the whole package makes sense. The more all at once, the better.
Photo: Horațiu Damian – UBB TechTransfer