On Tuesday, February 18th, 2025, Yale students gathered in the ballroom of the Anderson Mansion to engage in a conversation about Jewish Mysticism and AI.
The panel was led by Rabbi Simon Jacobson and Samuel Loncar, PhD, and was moderated by Kira Berman, YC '25.
Rabbi Simon Jacobson is the author of Toward a Meaningful Life (William Morrow, 2002), which has sold more than 500,000 copies, and is a world-renowned Jewish intellectual and mystic. Through his lectures and writings, he is a mentor to thousands.
In 1979, Jacobson began directing a team of scholars who memorized and transcribed entire talks that the Lubavitcher Rebbe OBM gave during the Sabbath and holidays (when writing and tape recording are not permitted under Jewish Law). This team published more than 1,000 of the Rebbe's talks.
Jacobson heads The Meaningful Life Center, described as a "spiritual Starbucks" by The New York Times.
Samuel Loncar earned his Ph.D. at Yale University. He is a philosopher, scholar, and consultant who works at the intersection of philosophy, religion, science, technology, and art.
His book Philosophy as Science and Religion is forthcoming with Columbia University Press.
Dr. Loncar is the Editor-in-Chief of the Marginalia Review of Books, where he directs the Institute for the Meanings of Science. He is the co-founder of The Writing College and the creator of Becoming Human, a project making his work as a philosopher and scholar publicly accessible.
Gutnick Academy is honored to release this film on Lag Ba'omer 2026/5786, the day on which the plague that killed Rabbi Akiva's 24,000 disciples came to an end and on which the mourning observances of the Counting of the Omer cease.
This day also marks the anniversary of the death of Rabbi Shimon bar Yochai, author of the Zohar, the magnum opus of Kabbalah, the traditional source of Jewish Mysticism.
What Do Yale Students Believe?
-
Kira Berman: Hi everyone. As a reminder, I'm Kira Berman, and I'm your host tonight. The program tonight is that Rabbi Jacobson will speak for 15 minutes, Mr. Loncar will speak for 15 minutes, then we will have a discussion for 20 minutes, and then we'll have questions for 30 minutes. But before we begin, I'd like to pose a question to you. And since there are so many people here, we're actually- Use the mic.
Kira Berman: Oh, yes. Since there are so many people tonight, which is really a pleasure, we will have around eight or so people answer the question. Make sure you say your name, your Yale affiliation, your major, and where you're from. And our question has to do with tonight's theme: what is something you believe in, even if it can't be proven or is unmeasurable? So, as always, raise your hand when you have your answer. We'll wait.
Eli Tadmor: Can everybody hear you?
Kira Berman: Can everyone hear me? Yes.
Eli Tadmor: Yes. We can hear you.
Kira Berman: Yeah. When you have the answer, raise your hand. Even if you're not.
Rabbi Shmully Hecht: Do we have volunteers to answer that question or am I going to have to just start picking people.
Kira Berman: Volunteers. Okay, Eli, why don't you start?
Eli Tadmor: Oh, I didn't mean to. I thought you were just volunteers first.
Kira Berman: Okay.
Eli Tadmor: So raise your hand if you want to volunteer. Otherwise, we'll just ask you to volunteer. So, Leland, you're going to volunteer. Oh. So Zach's gonna.
Kira Berman: Trevor.
Eli Tadmor: Trevor's volunteering over here. Volunteer.
Valentina Simon: Mitchell's a volunteer.
Eli Tadmor: Okay.
Kira Berman: We have two. We have three.
Eli Tadmor: I'm happy to choose.
Kira Berman: Okay, let's just let's just start with you. It'll inspire more people to volunteer. Yeah, yeah.
Eli Tadmor: Okay. You gotta hold the mic.
Mitchell Dubin: Okay. Hi, everybody. My name is Mitchell Dubin. I'm a senior in the college. It's good to see you all. You guys are in for a big treat. I've had the chance to chat with the Rabbi a little bit, so I'll get out of the way quickly. Something I believe in: many people in this room, at various junctures, have asked me, do you believe in God? And I always respond by diverting from that question to the following, which is something that I believe that simply could not be proven in any way: that I am a direct blood descendant of Abraham. I believe it. Maybe it can be proven, maybe not, but I hold it with complete conviction, in the same way that I imagine very religious people have a conviction about God's existence. I believe, with full conviction, that I am a blood descendant of Abraham. Okay. People tell me I also look like Putin.
Eli Tadmor: So maybe. Maybe.
Mitchell Dubin: Okay. Eli.
Eli Tadmor: No, I think there was a. Please just get up.
Kira Berman: Say your name, your affiliation.
Eli Tadmor: Okay. Hi, I'm Eli Tadmor. I'm a former PhD student: currently a PhD, unemployed. And I studied ancient Mesopotamia, which is where Abraham was from, so I have a soft spot for Abraham. He's the- genealogy. No, I don't personally believe I'm a descendant of Abraham, nor in the historicity of Abraham, per se. But, you know, who cares what I think about that question? God laughs in the end.
Rabbi Shmully Hecht: Do you think Abraham actually existed?
Eli Tadmor: I don't think so. But I wouldn't be opposed to it. I'm agnostic on Abraham's existence. And I wanted to say that, with AI as well, this comes to mind: I heard this AI podcast, and there were these voices, and my brain, when it hears a voice, thinks there's a person behind it. And in this case, I'm like, no, there is nothing there. Because already with all other humans, it's an act of faith to believe that they also have a consciousness just like I do. It is, of course, an act of faith, a very basic one. If you're a sociopath, you don't make it, but it's still a kind of act of faith.
Trevor MacKay: Good evening everyone. I'm Trevor MacKay. I'm a senior in the college studying history from Vermont. I used to be an atheist. And then I began looking up at the night sky. And I saw the beauty of the world around me. This is genuinely true. There was no logical argument, no rational argument that convinced me of the existence of the Almighty other than the world around me. And I think that by extension of that belief, I've also, you know, believed that there is genuine good and that that good should be uplifted and that there is genuine evil, and that that evil should be stamped into the ground.
Shay Shimonov: Good evening. I'm Shay Shimonov. I'm a PhD student in applied math. I also-
Rabbi Shmully Hecht: Let it be quiet.
Shay Shimonov: I've worked with AI for a long time, in the Army and now in school. Something that I believe in is twofold. First, I think that we are all part of a bigger one; from the religious perspective, we each have a piece of God inside us, or we are all connected in some way. And the second thing that I believe in is manifesting and karma, kind of a combination of the two: that you can summon things into your life, and also that good deeds go a long way. That's what my belief is.
Kisshan Sankar: Evening, everyone. My name is Kisshan Sankar. I'm a third-year joint-degree MBA and Master's in Public Policy. Something that I believe in would be the power of brotherhood. And not necessarily in a literal brother sense, but in a broader sense: the ability of people to want to serve for the simple reason of just the people around them. I was in the military before this, and I think that whether it's small tasks or small deeds or things that are significantly larger, what people will do just for the people around them, for their brothers, is pretty powerful.
Zach Reich: Thank you. Hi, guys. My name is Zach Reich. I am a junior at Yale. I study graphic design. And I actually thought of an answer really quickly to this question. In my work as a graphic designer, I work with typography a lot. And something that I learned last week was that there's actually very little math or geometry that goes into what makes type look good versus bad. Like, the spacing between letters is basically arbitrary, and the rule is basically: adjust it until it looks good. So that got me thinking. Part of the reason why I was originally really interested in studying graphic design was that it felt pretty structured, like there were rules to follow, but type is relatively ambiguous when it comes to that. So ever since then, I've kind of asked myself, do I believe that typography can be good or bad? And I think my answer is yeah, there is good and bad type. But the rationale to me has always been, either I like it or I don't. So that's kind of a more esoteric response than some people were giving. But it's definitely something I believe in, even though there's very little concrete mathematical proof you can give about what makes it effective.
Speaker12: Favorite fonts?
Zach Reich: Ooh. I actually use only one font in all of my work, which is a font called Helvetica.
Speaker12: Yeah.
Zach Reich: Wow. I'm surprised people know what that is.
Kira Berman: Where did you- One over here and then one over here.
Eleanor Schoenbrun: Hi, everybody. My name is Eleanor Schoenbrun, and I'm a senior in the college studying global affairs and political science. And one thing that doesn't make any sense, rhyme or reason whatsoever, is that I'm exceptionally superstitious. So, like, step on a crack, break your mother's back. One of my big ones, which my roommates use against me all the time, is salt on the table: you have to throw it over your shoulder, otherwise the devil's going to get you. And open umbrellas inside. That is a major no-no.
Victor Agbafe: Good evening everyone. My name is Victor Agbafe. I'm from Wilmington, North Carolina, and I'm a second year at the law school. And I genuinely believe that vision that's rooted in some type of philosophy or meaning in something greater, has the ultimate power to shape the future.
Rabbi Shmully Hecht: I'd follow you into battle. You have a very...
Kira Berman: Is there anyone else who wants to go? Can we take one more? Yeah.
Alex Bavalsky: Hi, everyone. I'm Alex Bavalsky. I'm a senior at Yale College studying global affairs. I'm from New York City, and I believe, although I cannot prove this, that Timothy Dwight College is the best college in Yale College. And, you know, a lot of people say the courtyard is very small, the food doesn't taste good, it's quote-unquote far away, which it literally isn't: it's the closest to Shabtai, which is obviously the most important value a college could have. But also, I think this is genuinely connected to a global issue, which is that there's a very small country in the world which is often demeaned, which is often ridiculed by the international community, and which is actually a great and beautiful country. And I think we all know what I'm alluding to. And that is a kindred spirit of TD: Israel.
Valentina Simon: Hi all. I'm Valentina Simon. I'm a senior in Timothy Dwight College. I'm studying statistics and data science and I'm from the DC area. Something that I believe in is, I suppose, the magic of the human mind. I'm taking a computational psycholinguistics class right now, and we're learning about how in order for AI to learn the biases that humans learn, it takes them billions of times more data. And it's just really incredible to me how a child learns language and learns how to interact with their environment from such a limited amount of data and then eventually becomes this like fully thinking, living, creative, thoughtful human being that is able to exist in an ecosystem with other human beings. And that's really magical to me. So thank you.
Mitchell Dubin: So what's the thing you believe?
Valentina Simon: Oh, I believe in just, like, this human brain being a mystery that's incredible to me. And it's crazy. I don't know. I feel like this is something that, over the course of this semester, I've grown in my appreciation for, if that makes sense. If you had asked me two months ago what is one of the wonders of the world, I wouldn't have necessarily said the human brain the way I do now.
AI and Jewish Mysticism | Rabbi Simon Jacobson and Samuel Loncar PhD
-
Kira Berman: I appreciate that answer; I think we don't speak about that enough. And with that, we'll do our intros now, and then before our speakers start, Toby will say a few words. Rabbi Simon Jacobson is the author of the best-selling book Toward a Meaningful Life, which has sold over 400,000 copies to date and has been translated into Hebrew, French, Spanish, Georgian, and many other languages we don't have time to mention. Rabbi Jacobson heads the Meaningful Life Center, called a "spiritual Starbucks" by The New York Times, which bridges the secular and the spiritual through a wide variety of live and online programming. For over 14 years, Rabbi Jacobson was editor-in-chief of Vaad Hanachos Hatmimim, where he was responsible for publishing the talks of the late Lubavitcher Rebbe, one of the most influential Jewish leaders of the 20th century. Beginning in 1979, Rabbi Jacobson headed a team of scholars that memorized and transcribed entire talks that the Rebbe gave, which is very impressive because those talks were given during Shabbat, so they had to be memorized. In this work he collaborated very closely with the Rebbe himself. He also headed the research team for Sefer Halikutim, an encyclopedic collection of Chassidic thought comprising 26 volumes, completed over five years.
Kira Berman: Rabbi Jacobson is one of the greatest scholars and most sought-after speakers in the Jewish world today. He has lectured to diverse audiences on psycho-spiritual issues and applying Jewish thought to contemporary life. He has been interviewed on over 300 radio and TV shows, including CBS, CNN, Fox, and NPR. He has also been the chairman and publisher of the Algemeiner Journal, which, according to CNBC, is the fastest-growing Jewish newspaper in America. Now, Samuel Loncar, who earned his PhD in philosophy and religion from Yale, is a distinguished scholar, writer, and educator whose work bridges science, philosophy, and spirituality. He is the editor of the Marginalia Review of Books, founder of the Institute for the Meanings of Science, and creator of the Becoming Human project. His research on Christianity and anti-Semitism reflects his deep engagement with Jewish thought and history. A sought-after speaker and consultant, his work has been featured globally, including at the Max Planck Institute and in Mosaic Magazine. His book, Becoming Human: Philosophy as Science and Religion from Plato to Posthumanism, is forthcoming from Columbia University Press. We're very excited to have our speakers here.
Toby Hecht: I think I can stand in the center here so everyone can see me. Good evening. I'm Toby Hecht, one of the directors here. So I wrote some notes down; I didn't want to miss anything. Tonight is very special, because I've been waiting for some time to introduce Rabbi Simon Jacobson to my dear friend, Samuel Loncar. Rabbi Jacobson is not only a relative of mine, but he taught Tanya to my 12th-grade class in Brooklyn many years ago, and I remember thinking how brilliant and humble he was at the same time. I can picture him now at the desk in the classroom on Crown Street. I have known Samuel for more than a decade, and he is one of the most brilliant people I've ever met, not just in terms of knowledge, though I don't even know how he stores it all, but in terms of his steadfast curiosity for more. Nothing is beyond him, and he doesn't settle. He is a pursuer of truth. He also introduced me to one of my closest friends and confidants, Alexandra Borowski, who is not here with us tonight, though she is here for sure in spirit, and who encouraged me and is responsible for helping me write a book about my beloved grandmother. So thank you both for coming up to New Haven, away from your hectic schedules, to be with us here tonight. The invitation for this event read: The Soul, AI, and the Meaning of Life. Conceptually, soul and AI seem disconnected from each other at best, if not in total conflict, whereas the meaning of life eclipses both altogether, because as a lifelong search at the core of human identity it fundamentally outweighs either of them, and thus resolves the differences at the heart of that perceived conflict. Rabbi Jacobson and Samuel have dedicated their lives to filling those gaps, attempting successfully, each in his own way, to unite all three through determining what is a meaningful life. Much of today's literature suggests that we are deficient in finding purpose, our individual and communal raison d'être. Thanks, Eli, as always. Reason.
Reason for being.
Rabbi Shmully Hecht: The reason we keep Eli around.
Toby Hecht: Yes, exactly. Always. Eli, you cannot leave New Haven, no matter how hard you try. It's true that in looking to science, technology, and philosophy, we have found many answers to the problem of human longevity. But the method of living well remains unsolved, and perhaps we are asking the wrong questions. Maybe it's fear of the truth. Being human assumes language like autonomy, freedom, independence, a sense of self, our vision of dignity. Can it also mean submitting to something greater than the self? Not only the recognition, but the knowledge of the unity of the divine, that which supersedes the freedom we cherish, our intellect, our degrees, our successes, our ten-year plan, our mortality. So then the question is not really about identity: are you a believer or not, what's real, what isn't, what matters and what doesn't, but rather how we go about identifying the answers to these questions. The method of that pursuit is the purpose of life. I know that after tonight, we may leave with more questions, even the dreaded need for introspection. Yet I guarantee the motivation will be less about fear or doubt, and more about desire and drive: to know more and to learn more about the human constitution, our experience, and fulfillment. And with that.
Speaker18: Thank you.
Simon Jacobson: Okay. Good evening everyone.
Betty Kubovy-Weiss: Good evening.
Simon Jacobson: Thank you, Shmully, Toby, for those beautiful words. It's an honor to be a co-panelist with Samuel. And thank you for hosting, Kira. And to all of you. I think I'd like to begin with a story that captures, in a way, my life, which in many ways is part of the discussion this evening. So, as you can see, you have your eyes open. I look like a Jew, right? I have a beard. I have a yarmulke, and I'm a proud Jew. Okay, so let's just establish that right here. And I've never been prouder. However, due to my life experiences and journeys, I have met many people who find me a little strange. Different, archaic. Many people have their stereotypes about Jews in general, religion, God, and many things that they think I represent, and most of them are negative stereotypes, I have to say. So here is one among many, many stories. After my book Toward a Meaningful Life was published, the publisher, William Morrow, sent me on a book tour. I don't know if they do that anymore, but back then, we're talking about 1995-96. Was anyone here even born then? I feel like the oldest guy in the room. Anyway, they sent me on a book tour. It began with 20 cities; it ended up being 90 cities. And you speak at a bookstore, Barnes and Noble; in those days there was also Borders. And throughout the day, the publicist set you up with interviews.
Simon Jacobson: There was no internet yet, at least not for the public. So it was print, radio, and TV interviews. This was in Cleveland. I was booked on the Cleveland morning show, one of those Good Morning America type shows, where they feature new authors in a five- or six-minute interview on a couch. I come into the studio at the scheduled time, like 7:30 in the morning, and I'm welcomed by the producer. She in turn introduces me to the anchor who's going to interview me. As she walks out and sees me, literally her jaw dropped. She turned pale, and I didn't know what was bothering her. I remember touching my face, maybe, I don't know; I hadn't eaten breakfast yet. So anyway, I said, is everything all right? She says, not exactly. So I said, why? She says, I love your book, Toward a Meaningful Life. It's on my night table. I read it almost every night. It's a life-changing book, but I never expected that the author would look like you. Now, I wasn't insulted, just for the record. I said, what did you expect? Because remember, my picture's not in the book, so nobody knows what the author looks like. I said, what did you expect? She said, I expected a six-foot-two, sexy, skinny-looking guy, clean-shaven. I said, I thought I looked like some of that, no? Maybe two out of the three, you know. I was- yeah, I was balding already.
Rabbi Shmully Hecht: Definitely sexy. I'm not sure about six foot two.
Simon Jacobson: Okay, there you go. Anyway, she says, I'm sorry if I insulted you. I said, no, not at all. You're being honest, and I like that. And I said, so why don't we talk about that? Perfect. Ask me that. She said, I can't ask that on television, you know. I said, phrase it in a more, I guess, you know, civil way. So we sit down on the couch and the interview begins, and she says the following. She says, I have a new author here, author of a new book, Toward a Meaningful Life, Rabbi Simon Jacobson, and it's a beautiful, great book. I recommend it. It covers the entire spectrum of life. And I want to ask you, Rabbi, being that it's such a relevant book to me, and I'm sure to so many other readers, wouldn't it be more acceptable if you looked like one of us? That was the way she put it: if you looked like one of us. This is all live TV, by the way. It's not rehearsed. There's no editing. Whatever you say is on the record. So I said to her, I don't think you'd be able to get away with that if I were a Saudi Arabian prince dressed in one of those garments, or some other cultural garment. I think you can only say that because I'm Jewish. They would never allow that. And I said it with a smile. I didn't mean to offend her. I just wanted to make my point.
Simon Jacobson: So I said, but on the contrary, I don't understand. Why are we judging each other by our looks? I wouldn't ask you to change your garments or change your look. You know, we're souls trying to connect with each other. Souls transcend garments. Labels are for clothing, not for people. And maybe this is a good opportunity for us to see if we can connect about things we believe in, about ourselves, about our deeper meaning in life, instead of getting caught up with these outer stereotypical images and looks. Anyway, it ended up being a great conversation, and she was really appreciative. And it was because she started with this provocative question. So this is the world I straddle: two worlds. I grew up in Crown Heights, USA. You've heard of this place? Yes. It's in a little town called New York, a little south from here. Right. And my parents were both Russian-born. They came to the United States after World War II. I'm the oldest of five children, and I grew up in a very intense Jewish community, the Chabad Lubavitch community. Went to yeshiva from seven in the morning till 9:00 at night. The whole thing. But thank God I grew up also in a home that was non-dogmatic. You know that word, dogmatic? Yeah. Okay. I was wondering if any of you were going to say that you absolutely believe, without any proof, in dogma, but nobody said that.
Simon Jacobson: Okay. So I grew up in a non-dogmatic home. My father was a journalist; my mother, both of them, very well read. And in my home, we were never silenced. Ideas were welcome. Skepticism was welcome, which really worked well for me, because I'm naturally skeptical. And I was able to go on my journey while getting a very intense Jewish education, also to find what I believe in, you know, not something that I'm just conforming to, or fitting into someone else's expectations. I remember when my son came home from school many years ago, and he said to me, I'm really excited about something I saw. He was really excited. He was seven years old. So I said, what are you so excited about? He says, I heard today in school that you were born an original. Don't become a copy. Okay? Yeah. And I remember it to this day, and the years passed. Today he's a father of his own children. He lives in Pittsburgh. And I asked him recently, do you remember that story? He said no; he didn't even remember. That's how it is. One day you'll be parents; you'll understand. And I said to him, so what happened with you? Are you a copy or an original? Now, no pun intended, he happens to be a copywriter. He's a creative director. He says to me, I'm an original copy.
Simon Jacobson: Whatever that means. Another cryptic line. So I was able to grow up in a home, as I said, with strong standards, high values, very intense, but also to try to find my own originality, my own voice, my own song, which is not easy to do in our world. Because every community, whether you grew up in a home of believers or nonbelievers, atheists, agnostics, moderate atheists or radical atheists and so on, everyone is shaped by our parents, whether you like it or not. Maybe you're trying to rebel against it. Maybe you've embraced it. Maybe you don't even know how to deal with it. But in our impressionable years, we're affected by our society. That's how it is: by parents, by education, by community, and today by the media. I don't need to list it all; you all know we're inundated. Everyone likes to think out of the box, let's be a little different, but not too different. You know, you don't want to be weird. And those pressures sometimes lead to what Oliver Wendell Holmes puts so powerfully in his tragic poem called The Voiceless, where he says: alas for those that die with their song still inside them. You hear that? Alas for those that die with their song still inside them. We all have a unique song inside of us.
Simon Jacobson: But often we're forced to sing someone else's song, and we never even discover who we are. And we don't have the courage to do so. So to me, the journey that I grew up with is really one, as I said, of trying to integrate these two worlds. On one hand, a very modern world with all its advancements, but very often a world that does not allow your soul to be itself, to discover who you are, whatever it is: the pressures, the demands, all the psychological forces that impact us. So in a way, standing here this evening, I feel it's a great opportunity to talk about this, because I know it's a universal challenge we all have. And especially as a teacher and a mentor, I've always dealt with this, because I've always dealt with the secular world. And I'll just share one more anecdote that really captures it. What's my time frame here? Okay. I remember years ago I started giving a class. It was in New York City, and the core group happened to be people from the arts and entertainment industry: musicians, songwriters, poets, you know, some Jews, some non-Jews. They were all spiritual, but hardly traditional. So it was a very interesting confluence of two different worlds. I would say my spirituality came from Jewish mysticism mostly, though of course I've read other schools and so forth.
Simon Jacobson: But there in that group, most would probably say that their spirituality came from Zen Buddhism or a thing called LSD, which is not an acronym, for those that are interested, for Let's Start Davening. Okay. So I realized that even before I opened my mouth and shared an idea, as I said before, my look was not neutral. And I know in communication, even before you speak, people are already judging you, whether they like it or not. They fit you into some box or some stereotype. So I didn't know what people were thinking. I was thinking, here I am sitting there with a beard and a yarmulke. Maybe, like the story in Cleveland that I shared, I may remind someone of an angry grandparent that schlepped into the synagogue on Yom Kippur against his will. I may be reminding him of an irrelevant Hebrew school teacher. Or, frankly, maybe war memories. So I tried an experiment. This is a real experiment, in real time. I can't call it scientific, but it worked. The experiment was: I decided I'm not going to use any language, any words, that have any religious connotations. No Jewish connotations. Nothing based on the Bible, nothing that could be misinterpreted. I decided I'll create my own language, and that's what I did. Instead of God, I used words like higher reality, the essence of it all. If it was a particularly new-age group, I used words like indeterministic layers of undefined energy, or something like that.
Simon Jacobson: And instead of Torah or Bible, I used the word blueprint. Instead of mitzvot and good deeds, I used the word connections. And instead of redemption, or what some would call Messiah and so on, I used the word destination. And here I was, waxing eloquent and pontificating about touching the essence of all of reality, reaching places by traveling through this blueprint, making these connections, and finally creating a seamless fusion between the indeterministic and the deterministic, between the outer and the inner, between form and function, between soul and body, into one beautiful, seamless harmony that transcends diversity and brings everything together in one synchronicity of wholeness, while at the same time respecting the dignity of each individual. That was the tone. And I went on like that for hours. It was great. They were listening, and weeks went by. Finally a guy came over to me. I don't know if any of you remember or know of a group called Jay and the Americans; they were a rock group in the 60s. They had some big hits. Anyway, he was part of that group. His name was Kenny Vance, and he comes over to me before the class and says, you know, everyone loves this class. This is, like, unbelievable stuff. I'm just wondering: are you talking about God? I said, yes, but don't spoil it.
Simon Jacobson: But don't spoil it for the others. Anyway, so I let the cat out of the bag already. But my point was, this experiment worked a lot better than I expected, because I did not use any words that anyone could misinterpret. They were words that everyone felt were neutral, even scientific, you can say. And I realized, you know, we talk so much about communication, connecting through words. Don't be silent. Nobody knows what you're thinking. She doesn't know what you feel. He doesn't know what you mean. You know all that. But words can also be loaded. I can use a word that's innocuous to me, but it'll cause you to go crazy, because your mother went ballistic every time she heard that word, or whatever. So we have to be very careful when we use words, and I find my biggest challenge, frankly, is not to communicate ideas but to uncommunicate: to try to strip things of these myths and stereotypes and distortions. So finally, in these opening remarks, I would like to say, very much as Toby put it so well: indeed, when we talk about AI, the soul, the meaning of life, before everyone gets so amazed about what AI is capable of doing, you have to know your definitions. I would just put this right there on the table, and I'm sure we'll talk about it throughout the evening. People ask me about the two big visions of the future: will there be an apocalyptic one, where machines will take over, like in The Matrix, or will it be a utopian vision of the future, with AI and all its developments? It's a big question, and they're debating it, but it also comes down to what exactly is intelligence, and what exactly is a human being, and what exactly is a machine.
Simon Jacobson: So let's put it very bluntly. If you think you're a robot and a machine, I can assure you that the machines that will be built will be better than you. That's not even a question. They once thought that the top chess masters would never be beaten by a machine, and that's not the case any longer. But if you're something more than a machine, then we have a fighting chance. So maybe I look at AI really as a challenge to what the human being is. Who are we? It's going to force us to define ourselves. If we're just efficient machines, able to function well and survive, we will build better ones, more powerful ones, immortal ones even. On the other hand, this is a real battle, I would say, for the soul: over what a human being is and what a soul is. There we go again, another word I wouldn't have used in that class, because every one of us gives it a different meaning. But I'd like to talk about it more as this evening goes on. So those are my opening remarks.
Simon Jacobson: And the bottom line I'd like to offer is that I have no doubt, as a real optimist, that the soul will prevail; that the machines, no matter how good they are, will force us and propel us to new heights that we've never seen before. We will be forced to become exactly that, and if not, we will become obsolete. But I'm confident the first will happen, because that's what human beings are like. And I see it already happening: after the agricultural, the industrial, and the information revolutions, we're on the verge of entering a new revolution, a spiritual revolution, where we'll realize that all the material in this world and all the successes in the world are just a big tool chest. And the question is, how are you going to use that tool chest? We're going to be compelled to look at our values: what we love, what we are ready to fight for, and what we truly believe in. Not just in an ambiguous way, but in a way that we're ready to really give our lives for, and I mean that in a positive way. So I feel we're really living through history right now. This is a historical moment, and you're all blessed, we're all blessed, to be here, because how we behave will define the next generation and the future. Thank you so much.
Samuel Loncar: Thank you very much, Shmully and Toby, for the wonderful opportunity and for those beautiful words. Toby, I really appreciate it, and I'll pass them on to Alex. And thank you so much to Rabbi Jacobson. I've been watching Rabbi Jacobson's YouTube channel now for years. As a professional philosopher of religion and science, when you discover someone of Rabbi Jacobson's quality for free, you can imagine the kind of joy it is. I'm amazed that mainstream media hasn't realized the revolution that's already taken over the world, which is that the world's best teachers are basically free. The idea was that that could never happen, because how could they make money? And it turns out the teaching of the Torah's mitzvot is true: if you give generously, as every tradition teaches, somehow the universe has a way of always bringing it back to you. The questions we face today are questions that require first-order thinking, questions about the meaning of life. The great filmmaker Andrei Tarkovsky said: by spirituality I mean, first of all, the question of the meaning of your life. So I'm going to use the term spirituality to mean the question of the meaning of your life. And I want to explain what the Becoming Human project is about, and bring a report from the frontiers of science as I see them. I'm a philosopher of science. So you could say, well, you're not a real scientist. That's what a lot of scientists think about everyone who doesn't have a PhD in the natural sciences.
Samuel Loncar: That's not what any fair-minded scholar thinks. We all recognize we're doing different kinds of work. In German, French, and other languages, "the sciences" covers everything done in a university. But I have a view of philosophy which is very different from academic philosophy. Because of the work I did at Yale, I made a major discovery, and it's this: what we call religion and science are modern inventions, and the historical context in which to understand them is philosophy as a way of life. This is the essence of my work. It is a historically incontestable fact. My book establishes it. All the scholars who are familiar with the argument know the facts are true, but facts being true does not give us a vision of their meaning. What does it mean to say science and religion are modern inventions that, to be understood, have to be placed in their historical context? Well, it turns out that the atheism I think is most popular in modernity is an atheism against history. People act as if the past can be dismissed because there's discomfort in it, and we see this as one of the fundamental crises on campus and around the world: the impossibility of negotiating with the truth of our past. The truth of the human past is horrific. It's beautiful, and it is absolutely horrific. And if you choose to be judicious and honest, you will have to report on the horrors as well as the blessings. Why does that matter? Well, you could say there's nothing scientific about that.
Samuel Loncar: Have any of you heard of the alignment problem in AI? The alignment problem, I would argue, is unsolvable, and I get paid a great deal of money by clients to explain why. AI ethics, which is a very important cutting-edge field, can go nowhere unless it can answer fundamental spiritual questions about what humans are. Why? Because the problem with AI alignment is that alignment is impossible when the people doing the aligning are not aligned. The data sets that create the horrendous bias we have recently discovered in AI have, of course, caused many, many people of color, for example, to be arrested based simply on biased data. It's not the algorithms. As many of the scientists realize, it's much more disturbing: the algorithms are great; the data is the problem. What is the data? It's what we have said and done. The data of human action is what creates the alignment problem, because if you feed all of human history into an honest system that could eventually process it, you will get an absolute set of contradictions. And so all of the problems in alignment, particularly the problems around so-called superalignment related to the issue of so-called superintelligence, are actually forms of philosophical anthropology that are existentially urgent in tech, policy, global governance, and computer science. So it turns out core scientific and policy problems today are actually part of the history of philosophy. And if you want to approach those problems not simply in their current form, but to place them in a context where you have the hope of resolving them meaningfully, we have to see a much bigger picture of the human species than what we have in our society.
Samuel Loncar: So the vision of philosophy I have is a revival of the ancient worldview that I believe Rabbi Simon Jacobson embodies. How is it that someone has these abilities? I don't know if you've seen Rabbi Jacobson's video on memory, but you heard in Kira's introduction (and thank you very much, Kira, for your hosting) that Rabbi Jacobson has this capacity: during Shabbat with Rabbi Menachem Mendel Schneerson, he could remember an entire talk lasting hours and transcribe it, I understand, verbatim. Think about that. That's a form of what we would call superintelligence. It's extremely relevant to people doing cognitive psychology to understand how that is possible. And we're not nearly curious enough, though I appreciate your curiosity, about the wonders of the human brain. The human brain is absolutely wondrous and inexplicable, and the patterns between it and neural nets, and the capacity to then systematize meaning, are related to our most ancient traditions. What's the one tradition that still treats its script as alphanumeric? Kabbalah. How does AI work in the most fundamental sense? We've been able to take all of the written output of human beings and convert it into numbers, and specifically, in the case of LLMs, vectors.
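[Editor's note: the conversion of text into vectors mentioned above can be sketched in miniature. This is purely an illustrative toy, not any real model: the vocabulary, vector dimensions, and vector values below are invented for the example, standing in for the embedding table an LLM learns during training.]

```python
# Toy illustration of how text becomes vectors, as in an LLM's embedding
# layer. All values here are made up for demonstration purposes.
import math

# A tiny invented vocabulary mapping each token to an integer ID.
vocab = {"the": 0, "soul": 1, "machine": 2, "meaning": 3}

# A tiny embedding table: one 3-dimensional vector per vocabulary entry.
# (Real models learn these values; here they are hand-picked so that
# "soul" and "meaning" are deliberately similar.)
embeddings = [
    [0.1, 0.3, 0.5],   # "the"
    [0.9, 0.8, 0.1],   # "soul"
    [0.2, 0.1, 0.9],   # "machine"
    [0.8, 0.7, 0.2],   # "meaning"
]

def embed(text):
    """Tokenize by whitespace and look up each token's vector."""
    return [embeddings[vocab[tok]] for tok in text.lower().split()]

def cosine(u, v):
    """Cosine similarity: how 'close' two vectors point in space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vecs = embed("soul meaning machine")
# "soul" and "meaning" were given similar vectors, so their similarity
# exceeds that of "soul" and "machine".
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

Once words live in a vector space like this, "meaning" becomes something arithmetic can operate on, which is the point the speaker is gesturing at: the model's sense of relatedness is entirely a product of the numbers we feed in.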
Samuel Loncar: So what we're doing is taking everything we've said and making a new level of meaning out of it. What AI is confronting us with is ourselves. And the alignment problem, the biggest problem, is the problem around which Nick Bostrom got millions of dollars and great prestige at Oxford to build a center: oh, maybe we'll create a superintelligent machine that will kill us all, so let's do that. It's a very strange argument, but this is also the argument of Sam Altman and others at OpenAI. There's a great danger around superintelligence, so let's have an arms race to see who can monetize and privatize it first. So this is clearly not a totally serious position. If you really said, we're going to discover something greater than nuclear weapons and unambiguously destructive, why would we support anyone doing that with billions of dollars? But we are doing that. So we are a contradictory species, and the hope that we can use computers to iron out our contradictions is actually belied by the beauty of science: in pursuit of truth, we discover realities that destroy our ego's idea of ourselves. If your idea of your society and culture is that you're basically okay, and then you put all of your data into an AI, guess what you're going to find out? You're not basically okay. You're systematically oppressing certain groups. You're systematically misrepresenting people. Essentially, we see through the alignment problem that the history of humanity is a kind of elaborate story of gaslighting people who aren't in power.
Samuel Loncar: And these are very serious problems. They require all the smartest people working at the level of the computer science and the algorithms, but they also require people like Rabbi Jacobson and others from traditions who are deeply informed about the fundamental issue: what is a human being? This is what gave birth to the revolution of philosophy. About 2,600 years ago, a person named Pythagoras, whom you've probably all heard of, invented not only what we think of as the foundations of the Western mathematical tradition but also the Western diatonic musical tradition. He was also a very strong believer in the wrongness of animal sacrifice. He believed in what scholars call metempsychosis, a fancy word for reincarnation. That's right. So behind our mathematical and spiritual traditions is a single tradition of people who always thought that science, the highest aspiration of the mind to know, exists within a framework of realizing what it truly means to be human. If you separate the pursuit of knowledge from the pursuit of meaning, what you get is knowledge that serves the agendas of whoever has discovered it. So I ask you a simple question, something I posed to a recent client. We know OpenAI will never achieve ethical superintelligence. Why? Because no one trusts Sam Altman with a nuclear bomb. And I'm not criticizing Sam Altman. I'm just saying that every original member of his board has left, all for similar reasons, and all of them very judicious.
Samuel Loncar: And I'm not criticizing him. He's an amazing entrepreneur. But would you trust Sam Altman with power greater than a nuclear weapon, given what you know about him? It's a very simple question. I wouldn't trust myself with that. So we're not being serious right now as a species. On the one hand, we're putting more money and energy into AI than into any tech revolution in history, even thinking about reviving small nuclear power. If you're investors, you probably know that some people in the nuclear industry are hoping for a revival of nuclear because of how much energy AI currently demands. What if we put the same amount of energy into asking, what does it mean to be human? If we want to be able to use the super-development of our abilities, maybe the first thing we need to do is ask where these abilities came from. How is it that a Jewish rabbi, a man who, as the woman said, looks like you, reaches the entire world with a wisdom that doesn't have any coding, doesn't need any name, doesn't need any specific vocabulary? Why? Because it's true. That's what philosophy is about. I don't care what you call me: philosopher, spiritual teacher, philosopher of religion, philosopher of science. They all matter for context. Context for clients, context for publishing, context for dinner engagements. At bottom, the question that unites us all is: what does it mean that we're human? And I would submit to you that's not an obvious question, or we wouldn't be so divided in our visions of how to build good human societies and good human lives.
Samuel Loncar: And I am a great believer in the potential of what's happening in AI. I also think it will be catastrophic, as it is already becoming, because we're becoming illiterate. I don't know if you've noticed, but literacy rates are plummeting. This is not just in the West, but it is in the West. Places like Germany, famous for their education, have serious educational crises. The US, everyone knows, has not been able to educate. Covid really damaged the education of many young people, but this is a kind of culmination of a long trajectory of decline in the capacity to decode text. Well, there's a tradition that, in spite of all of its incredible vicissitudes and the persecution against it, is made up of people who, without any formal degrees, have gone on doing what they've always done, exactly like the ancient philosophical schools: gathered around a shared way of life, in reverence and submission to a profound idea, that the universe has a creation and origin, and therefore a purpose and a destiny. And within that worldview and their shared practices, which no one imposed upon them, they have produced this book, if you don't mind, Ellen. They've given us wisdom, not just from Rabbi Jacobson but, as he says, the wisdom of Rabbi Menachem Mendel Schneerson.
Samuel Loncar: So, as I often told students when I taught at Yale, it doesn't matter what you think about religion; religion has defined the horizon of your identity. Just as it doesn't matter whether you're a scientist; science has defined the horizon of our identities. And guess what? There is a view of philosophy that is scientific. In one of my most recent YouTube lectures, I predicted the next ten years of science, and I made three predictions. You can see them. One of them is already coming true, if you know the work of Michael Levin in biology. So I will tell you one of the things I predicted that is coming true. We're living through the greatest revolution in biology. I'm currently running a project related to it, and next month I'm going to publish an interview I hosted with Sir Paul Nurse, the director of the Francis Crick Institute, together with Philip Ball. The revolution in biology has brought back, you know what concept? Purpose. Purpose is now a central concept in biology, and it was absolutely eliminated from modern science in the 17th century. It is an essential concept in modern biology: not intelligent design, but purpose at the level of organisms. Michael Levin and a co-author, Chris Fields, I think, just published a new paper in which they made an argument, using cognitive science, that all systems that exist and maintain their organization in space and time, not just living systems, essentially require features that we currently think of as belonging to life.
Samuel Loncar: Levin is the director of the Allen Discovery Center at Tufts University. And so he asks, basically, where are the aliens? He literally just said this on X: all around us. In other words, within the next ten years, we're going to see the borders between biology and physics break down. And I predicted this because my view of philosophy is: if it's right, put your money where your mouth is, make predictions, build projects, organize scientists, and help do something. We can all do this. And if you're wrong, correct it and get better. So the view of science we were taught, that science is just these itty-bitty things, that maybe you can be very technical but the big things don't matter, is breaking down. All of the leading, most elite scientists are talking to each other across different fields. Cognitive science as a field has already begun the bridging between computer science, the philosophy of mind, and many other fields of technical programming and mathematics that led to so much in AI. Now all of mathematics is coming into computer science: geometric models, models that use the latest work, literally, in topology are now being used to build systems. So science has gotten to a point where all of the scientific disciplines are converging. Hence the issue: how do we make sense of this? What is the meaning of science in this world? And we used to think, well, we can't say anything about that scientifically.
Samuel Loncar: That's not true. If by "scientifically" you mean, can you make an informed, judicious assessment about the meaning of science, the answer is yes. People will disagree. That informed, judicious assessment, I think, looks something like this. We are living through the most exciting period in human history, if we decide to match our advance in knowledge with an equal advance in existential passion: the recognition that your own life is infinitely more valuable than anything you can know about the material world, as beautiful as that is. If you can treat the mystery of your own existence as infinitely worth your time, infinitely worth your time to discover what it means to be you, what it means to have the history you have, how you relate your freedom to this past; if we put as much passion and money into that, while at the same time thinking as a community, bringing all of our expertise together, about how to use these great tools to serve us and not destroy us, then I think we will enter into a utopia. But that will be a battle, because there are already people who call your body a flesh bag, who say it's going to be gone by 2050, in the singularity, and who are happy that you won't have a biological body anymore.
Samuel Loncar: That's called transhumanism, and I think it's the dark side. That's a real vision, and it's not an obscure one. As many of you know, Sergey Brin, Larry Page, and many other very famous billionaires have a version of the transhumanist ethos that says: let's get rid of our bodies, let's get rid of our biology, let's turn ourselves into machines. That's not science; it's spirituality. It's not science, it's spirituality, but it's a spiritual interpretation of the science. And my point is, that's unavoidable. Every scientist has a spiritual interpretation of the meaning of their own life and career. Their colleagues will often disagree. The history of science will often disagree. But we can't live without meaning, and we can't live without purpose, and we act like we can ask AI what our purpose is, as if we aren't the ones making it, as if what it's confronting us with is not ourselves, undeniable and unedited. AI stops us from gaslighting ourselves about what we really are. We are something infinitely grander than what we have thought we are, and infinitely more terrible. And I believe the question of human purpose is a project. We don't start out knowing who we are; we have to become who we are. And that's what the Becoming Human project is all about: unifying the resources of science, spirituality, and history to address that question. And I couldn't be more privileged to share that conversation with you and Rabbi Jacobson. Thank you.
Simon Jacobson: So I'd like to share a few comments on what Samuel just said. I couldn't agree more. I just wanted to add that history, in particular, is vital for understanding how we got here. We can't understand the future if we don't know the past. I've done a lot of research and study on a big question that I'm sure many of you have heard about: when did science and religion diverge? Because there was a time when the doctors of this world were soul doctors; spirituality and the material world were seen as one. The split came approximately at what we identify as the time of the Enlightenment, right after the Renaissance. And let's cut straight to the chase. The real culprit is the distortion of religion and its abuse by its authorities and leaders, until finally people said enough is enough. And it was the so-called open-minded thinkers and scientists who began to challenge the authority of the church. The Galileo story is maybe the best one: he was forced to recant his scientific discoveries because they contradicted church doctrine. And ever since, relations between the two became, I would say, worse than tenuous; there is tremendous tension between two different worlds. Science was meant to replace religion. If you read Voltaire and some of the other French Enlightenment thinkers, they basically say that within 200 years, and they were writing in the 18th century, so by the 20th century, there would be no more religion except for the masses. I don't know the word for it in French. Do we have a French expert?
Eli Tadmor: No, not the word for it.
Speaker21: Eli, I am so glad-
Simon Jacobson: For the, for the...
Rabbi Shmully Hecht: Plebeians.
Betty Kubovy-Weiss: I'm so glad you're speaking French.
Simon Jacobson: Anyway, a derogatory term for the peasants. But that's not what happened. Not only is religion still a vibrant force.
Speaker12: Many peasants.
Betty Kubovy-Weiss: I'm French.
Simon Jacobson: No, it's a word. I think it's almost like "the garbage"; there's a word like that for garbage. But maybe you don't know that term in France. You know.
Rabbi Shmully Hecht: You gotta read more Voltaire.
Simon Jacobson: Okay, anyway, let's not get caught up in semantics here. The point is that that's not what has happened. And I believe a big part of the conflicts going on right now is this continued inability to find peace between faith and science. And it's very clear, especially to those who are more sophisticated, who realize: yes, it's true that there were those who abused religion, and it's true that that should be rejected. But there are other parts of faith that are the driving forces of the most noble drives in the human being. So I think AI is actually going to be a market correction in this regard. That's one comment I wanted to make, because I think for all of us, if you really think about it, your own personal conflicts, whether about faith, or values, or anything you believe in, are very relevant to this period in time. And as I said, as AI develops, it's going to force us to address that issue. Another point I wanted to make, and I'd love to hear your thoughts on this: yes, I totally agree with the concept of meaning. I mean, I did write Toward a Meaningful Life, so obviously meaning is significant to me, but I think it goes even deeper than that. How many of you are in business school? Yeah. Okay, a small minority. Good. Someone has to fund the AI.
Speaker19: There you go.
Simon Jacobson: Yeah, we got the Rabbi.
Rabbi Shmully Hecht: You raise your hands again, we can get names now.
Simon Jacobson: Anyway, you don't need Business 101 to know the following statement: there's no business, no institution, no organization, nothing, that can exist without a mission statement. Right? That's indisputable. A mission statement is a purpose; as Samuel just said, purpose is even coming back into biology. You have to have a mission. If someone came to you for an investment of $100 and you asked them, what's your mission statement, and they told you, well, we're still working on it, or it's going to take several months for me to explain it to you, or we have several options, or some platitude, you wouldn't give them a penny. Mission is the focus. And even companies with mission statements usually don't succeed, let alone without one. So here's a personal question that I ask each of you: what is your personal mission statement in life? Now most people will answer, I've tested this, to be happy. That's what most people say. Or to make money, and if I have money, I can do whatever I want. Or to bring up a healthy family. You know, maybe throw in a few values. Those are all beautiful answers, but do they qualify as a mission statement? A mission statement has to be short, unique, measurable. So if Google or any of the big monsters came out and said their mission statement is to make money, or to have happy employees or happy customers, that's not enough.
Simon Jacobson: It has to be unique. I think Google's mission statement is to organize all the information in the world and make it readily accessible. This is not an ad for Google, by the way, just one that came to mind. I think Microsoft's was a computer in every home and in every office; I don't know if that's still its mission. The point is: what is your personal mission statement? And I wouldn't be surprised if you answered, I don't know, I never even thought of it. I'll tell you why. Why would you have? Do they teach it in school? Do they teach it at home? Usually at some point in life we start asking ourselves, why am I here? But by then you're already on a 90-mile-an-hour roller coaster trying to make ends meet. You don't even have the time, the luxury, to think about it too much. So I would say the single most important commodity, especially in our time, is going to be not fossil fuels, not even information; it's going to be purpose and meaning. From a young age, teach your children: why are you here? Whatever that answer may be, it has to be a compelling answer, because everything else is determined by it. Because if your mission is arbitrary or ambiguous, then everything you commit to is also going to be arbitrary.
Simon Jacobson: You know, if you don't really matter, how much can your decisions really matter? If you're one speck of sand on a big beach, one person among 8 billion people, who really cares? If someone asked you, would the world be different if you were never born, what would your answer to that question be? Most people giggle or whatever, and then you think about it. I say, remember, before you were born, nobody knows you're coming, so it's not like you're missing from the scene if you don't show up. So it's a cosmic question, a big cosmic question. Of course you matter to your family and to the few people who love you and so on. But on a cosmic level, would the world be any different if you had never existed? And if your answer is, I'm not sure, you tell me: what are the consequences of that? These, I believe, are the biggest questions, the most relevant in our time, and they're going to shape everything else we do. We think machines are going to answer that question for us. No way. They will have even more flawed answers than we do, because they're only feeding off of us. So, some food for thought. I'd love to hear what you think.
Mitchell Dubin: What's your mission statement?
Simon Jacobson: How did I know that was coming? Because every time I talk about it, they say, okay, what's yours? You know.
Speaker28: Mitchell was the only one to ask it.
Simon Jacobson: Okay, you can rest assured that I have one. Yeah, yeah, but are we going to go around the table and ask everyone too? Okay, my mission, which I discovered around 17 years old. So I'd like to say that I'm 17 years old with 51 years of experience. Okay. My mission is to use my skills of communication and writing, and my ability to somewhat understand human psychology, to help people find their mission. Or, to put it differently, to unclutter the means from the ends, to help you discover your voice instead of the voice of others. That's my mission, and I stick to it to this day.
AI and Jewish Mysticism Q&A
-
Leland: Thank you both for being here. I have an informative question, just something I'm curious about, and then a more polemical one to raise. The informative one is: I'm curious what you think was unique about the Greek situation in Pythagoras's time (I know this is a light, easy question) that led him to be as creative as he was, and subsequently Plato and a few others. And then the more polemical question: at least from my own knowledge, it's extraordinary how far the Greeks went. They almost had an industrial revolution; they were very close to it. But it seems that scholars today agree that they just didn't care about that sort of thing. There was a big lack of interest in the practical, or the profit motive in some sense, for the Greeks, in a way that maybe the more brutal Romans didn't share, though the Romans didn't come up with their own inventions, or if they did, it was for practical reasons. So I wonder about this age that we live in, which is, as you both allude to, totally profit-obsessed. I imagine most people here at the table will go into jobs or seek careers mainly for profit or to live a comfortable life, and that's all a very good thing; the goal for most people is to be comfortable, to have a nice family, to do all these things. But is our obsessive drive toward profit and comfort somehow standing in the way of the sort of ingenuity that we might have found in the Greek era, such that that sort of spirit is no longer available to us today? Yeah, that's the question.
Samuel Loncar: Thank you. That's a- Can I ask your name again, sir?
Leland: Oh. It's Leland.
Samuel Loncar: Leland. Thank you. It's a very, for me, beautiful question, because I love Pythagoras. Let me start with the last part of it. I'm not at all against the profit motive. You know, if anyone wants to donate $100,000 to the science project, you'll get your name on it and you can come to the castle.
Rabbi Shmully Hecht: I'm sorry. Would you mind standing and speak up.
Samuel Loncar: So the question about Pythagoras is related to a very important question that's connected to all the things we've been debating for 10 or 15 years in our culture around DEI, a debate that in many ways happened in the academy when a famous book by Bernal came out. There was a view called "the Greek miracle." This was the basic historiographical view from the Enlightenment until the mid-20th century: that the Greeks were unique, and they were unique because they were the predecessors of, essentially, the white colonial empires who were doing the scholarship. And that's not being reductive; that is how this form of thinking came about. It came out of Oxford and Cambridge in the English context, and in the German context it came, of course, out of the German universities, long before Germany became a unified empire in 1871. So that is the normal view: that there's something unique about the Greeks. You're raising an extremely important and active scholarly question that has a lot of significance. And this is the answer. I did an audio course on Greek philosophy teaching this, and this is what I bring out in much more detail in my book: Pythagoras said he learned it from the Egyptians, and not only from the Egyptians, from whom we know the Greeks got a lot of their mathematics. We know the Babylonians could prove what we call the Pythagorean theorem at least around 1600 BC, maybe even a thousand years earlier. So the best we know about the history of Greek mathematics is that what the Greeks invented was the proof system.
Samuel Loncar: So this is very important, right? The Babylonians, for example, had a base-60 system. It's kind of amazing. That's why we still have a lot of base-60 number systems in our culture, because of the Babylonian role in astrology and astronomy and mathematics. But the Greeks invented a method. And this is what the philosophers did, the people whose spiritual inheritor I consider myself to be, because it means I can inherit everything, the science and all of the religious wisdom. And I think that's the unique element. So I don't want to take anything away from Pythagoras. He was, by any measure, what we would consider a world-historical genius. But I think the key is that they saw what we would consider science and math as a way of life that had intrinsic value. And that's connected to the second part of your question, which is: does that mean that they were impractical? There's a great scholar who was a fellow at the Institute for Advanced Study in Princeton for his work in the history of Greek mathematics. He holds a distinguished chair in philosophy and Russian studies at Notre Dame: Vittorio Hösle. He is himself a genius, and his work on the history of mathematics is extremely distinguished. I did an interview with him; you can read it. It's called The Two Scientific Revolutions. His answer to the question is essentially that the Greeks gave us a proof system, and they were far more advanced than we even recognized. There's compelling evidence that the Greek mathematicians knew about what we would call non-Euclidean geometry, which in the Western tradition we thought was only invented in the 19th century.
Samuel Loncar: Of course, very crucially in the case of Riemann, because of its application by Einstein in general relativity. But in point of fact, the best work in Greek mathematics suggests that they even understood non-Euclidean geometrical systems. So why didn't they come up with a scientific revolution? I can't answer that even in ten minutes, never mind one more minute. But the short answer is, I think you're on to something very deep, which is that they did have values of practicality. Thales, one of the great founding philosophers, was mocked for being impractical, and to prove to people that his knowledge of math and the stars was relevant, he predicted an eclipse and made a fortune off of it. He also made a fortune off of being able to predict weather patterns. So philosophers have sometimes tried to prove their points and play the market, at least occasionally with great success. But it's true: why, then, did we get the scientific revolution? Well, I'm happy to say that the co-director of the Meanings of Science project, Peter Harrison, is the world's leading scholar on this. He just wrote a book called Some New World, published by Cambridge University Press, and the shortest answer from about 50 years of work in the history of science is Christianity, and more broadly Judaism. In the 16th and 17th centuries, this was the view of people like Isaac Newton, as I was talking to you about, Kira. Newton and many others founded the Royal Society, the longest-running, most prestigious scientific body in the world, which I think is currently debating what to do about Elon Musk.
Samuel Loncar: But the Royal Society was founded by people who were absolutely Christian, and they had very novel ideas. One of the ideas was that Adam and Eve were perfect, that they knew everything and could know everything. And the literal view that animated the founding of modern science in the Royal Society and the Royal Institution was that through science, which they called natural philosophy (the term science as we know it wasn't invented till the end of the 19th century), we could recover human abilities that sin had destroyed. So the reason we have the modern idea of science is that Francis Bacon famously said the purpose of science is not to know things, it's to change things. He said it was to ameliorate the human condition, and that is not a Greek idea. It is a Christian, or Judeo-Christian, idea that the purpose of knowledge is not knowledge for its own sake, but to improve the world. And we live in a world that is still divided by this inheritance. The weakness of pure mathematics and fundamental research funding in our culture has to do with the fact that we've abandoned the Greek founding vision that these things are intrinsically worthwhile. The power of our technology, and the stupendous amount of progress we've made, has everything to do with the ethical culture of the scientific revolution, which saw the purpose of science as the improvement of the human condition, and this was a religious idea.
Betty Kubovy-Weiss: Was that the end of the question? Okay, great. Hi. Thank you so much for coming. This was an incredible talk. I'm a philosophy major, so this really appealed to me in particular. And I had this experience a couple of years ago where I was sitting with my brother, and I was talking about whether or not I believe in God. And he was sort of asking me these questions about, well, if you saw a miracle in front of you, would you believe in God? And I said, you know, I just think I would assume there was some other explanation that had nothing to do with God. And he turned his computer around, and he had been typing everything I was saying into ChatGPT and asked ChatGPT to ask questions to continue the conversation along. So I basically had this like fall from God with ChatGPT in this dialogue that I thought my brother was my interlocutor, as it were, but it wasn't. And so this got me thinking kind of about the spiritual capacities of AI. And you guys obviously have spent a lot of time thinking about the meaning of life and how AI is either bringing us towards that, reflecting sort of our own meaning back onto us. And I'm not saying, you know, that my one experience in my dorm room with my little brother has anything, you know, has to make significant impacts on how we think about it. But I was just wondering your thoughts on kind of the spiritual potential for AI. And it kind of not only being this force that is super mechanistic and super perhaps negative and drawing us away from our spirituality. But if there is perhaps a way to harness it for, I guess maybe in my case it wasn't so positive of a spiritual situation, but in general spirituality.
Simon Jacobson: Okay. Thank you. Thank you. Interesting that you bring that up because this may sound like a plug, but literally my organization just launched an experimental site called SimonJacobson.AI. I kid you not. You can check it out.
Betty Kubovy-Weiss: You can pay me after the dinner. You can pay me after the dinner for dropping the question.
Simon Jacobson: It's free. No monetization involved. It's not at this stage, at least. And it's essentially, you can have a virtual conversation with me or a chat.
Betty Kubovy-Weiss: Oh, because it's trained on your.
Simon Jacobson: Yeah, it's trained because I have a lot of content online, and I've been having chats with myself with it, and I think it's better than me. Better than I am. And it doesn't get tired. To me, it's really to address exactly the question you're asking, which is how to use these tools. And I see them as tools. Remember, I grew up in a world where I used a typewriter. And if you know what a typewriter is- Really, why is that a joke? I mean, nobody uses a typewriter anymore. So I used a typewriter, and from a typewriter we went to a word processor, then a fax machine, then a broadcast fax. Then, I don't know if you know about a BBS, a bulletin board system; it was like electronic documents, the precursor to websites. Anyway, to me it's all about tools, more and more sophisticated. So the question that I posed to myself and to my team- and we happen to have an AI engineer, a top guy from Silicon Valley, who called us because he loved our content and said, why don't I train one of the AI agents?- was this: how can we bring a spiritual message to as many people as possible, which AI can do at almost no cost, and in multiple languages?
Simon Jacobson: So right now we're experimenting exactly with that. To me, there's no question the whole purpose of it all is to bring the soul to the world. You know, I know the monetization business; most companies are using it for efficiency and trying to cut costs. But in my case, I'm using it completely for altruistic purposes, which is how these tools can help disseminate. We're experimenting now with five languages: Spanish, French, Arabic, Hebrew, and Russian. We're going to expand soon. Interestingly, if I can tell you some results, especially from the Arab Muslim world: hundreds of thousands of people are watching my messages in Arabic right now, and it's completely AI, which means they sync my lips, my voice. We, of course, sent it to some friends in the Arab world to make sure it's accurate and so on, but it's amazing. I laugh every time I listen to myself speak Arabic. But this is what it is. So I don't know how others are using it for this purpose, but I can tell you from the front line that we're using it exactly for that, to test the waters.
Simon Jacobson: I can tell you some people can't stand it, because they're afraid of anything that's AI. One guy told me- I thought he'd be impressed- he says, next thing you're going to tell me my dead grandmother's going to be talking to me, you know? So he's afraid of anything like that. But there are enough people who are early adopters, all these different types of audiences, and we're going to see how it works. It's all experimental at this point. So I really believe it can literally create a spiritual revolution, because you can repurpose messages at a pace you could never manage manually, as well as answer questions at a scale that's just humanly not possible. I asked it some questions that I've asked myself, and it answered as well as I would have answered. So it's training well. Of course it's not perfect. And the languages are especially exciting, because you speak to people in their own language. That was never possible, you know, without interpretation. Here, it has that capacity. So I see it as a tremendous tool in that direction, and the best is yet to come. What do you think?
Samuel Loncar: I think it's amazing. It's like something I aspired to, this sort of use. I'm one of the people who was much more afraid, I'll say, coming at these things as a writer and an editor a few years ago. When ChatGPT came out, I had studied it intellectually, but I thought, I need to start using this, and I was wary about putting in my own data. But I think your example is a beautiful example of where it's intrinsically spiritual. I think when we don't see spirituality, it's because we're being scientifically inadequate. So think about what happened: your brother, a human being, wanted to talk with you, and he wanted to hear what you said. And he used that without telling you. But there's an incredible human depth and connection behind the use of the technology. And he got things from you, including the reaction at the end when you realized it- he got things from you that are profoundly human. So this spiritual dimension was built right in. And that's exactly the issue with ethical superintelligence. There are complicated issues that we can't discuss in 15 minutes. But what I said generally about alignment- that's the fundamental issue- is also the issue in science, why we're rediscovering purpose at a very new level because of complexity, particularly the complexity of the cell. People in biology will know this: in molecular and systems biology in particular, thanks to the integration of mathematics and combinatorial structures, we can decode the incredible complexity of a lot of biological systems, and it's clear that they have agency.
Samuel Loncar: So the issue of purpose is related to actually acting effectively in the world, which, as many of you know, is where the dream of AI is headed: agentic or agency-based AI. It's always one step around the corner. The best experts say it is soon; I can't comment, they're the experts. But I think the concern about AI and agency is the problem you're raising becoming very explicit, which is: will we acknowledge that we are the ultimate agents, or will we lie to ourselves? It's much better, if you're pitching for billions of dollars, to lie and say, I, Sam Altman, am not the agent who is ultimately humanly responsible for everything my company does- but that is what you're responsible for if you run a company. And so the issue with science, I think, is: are we going to become rich enough and comfortable enough as people to realize science is this incredible process that philosophy, in the deepest sense, has given us to pursue the truth? But to be completely scientific and honest, we have to be honest about our limits, about what we don't know, and about why we are doing things. If you knew a person was doing research solely and exclusively for their career, you would trust them a little less than if it was Einstein doing it as a patent clerk because he couldn't get a job- and after publishing the most famous papers in 20th-century physics, it still took him nine years to get a job.
Samuel Loncar: So we need that intrinsic love- Einstein said it must be something like, in his words, religious passion. I think the beautiful thing about AI is that, like all science and technology, the people doing this are often literally what they think they are: the smartest, most ambitious people with the best values and ideals. The issue is that means they're philosophers. So I think we need to acknowledge the philosophical dimension, or the spiritual dimension in Tarkovsky's sense, of the purpose: why are we using these things? That determines what they mean. We have to be honest that you can use AI to build weapon systems, like DARPA is doing, and these are serious issues for us as citizens- I would just say, please think about this and vote. We need a very different political culture, in which you tell your congressman, I don't want people developing AI systems to destroy China, because these are things we're not currently voting about, but they're going to wreck the world if we don't realize we have civic agency. So I think the spiritual dimension of AI is always latent. It's: who's the human using it? Who are the humans building it? For what purpose did they build it, and for what purpose are people using it? And that's a scientific question. If I ask a scientist, what is the purpose of your research- like Rabbi Jacobson said- and they can't give a good answer, they're not getting a grant.
Mitchell Dubin: Why don't we want AI war fighting tools?
Samuel Loncar: Well, that's a big question. We can talk about it.
Mitchell Dubin: Why not? I mean, you're saying like, as if we would think that we should.
Samuel Loncar: Well, first of all, we already have some, right? But if you're talking about agentic AI, there's no going back from a mistake. So there's a very simple ethical principle: if a mistake or harm is irreversible, you need to meet an extraordinary burden before taking the action that could cause that irreversible harm. As soon as you start developing weapon systems that are trained to kill people, all of the error problems in AI would mean you kill the wrong people.
Mitchell Dubin: Right?
Samuel Loncar: Right.
Mitchell Dubin: Like humans do.
Samuel Loncar: But humans are held responsible for it. And we currently don't even have a legal framework to hold people responsible for what happens in a car crash. Right now that's being litigated, and some of the responsibility is going to the consumer or the person who made the car, and some of it is going to the actual vehicle. But these things are currently not even debated in that tort sense in civil law. So do you want the government of your democracy deciding to solve a global ethical issue without our permission and saying, let's create systems that can kill people? I would just say, as citizens, we want to have a very deep and well-informed conversation before it turns out our government does that. I'm not saying, to a military person, that I don't want us to use all the best tools, but that logic is what led to nuclear escalation.
Simon Jacobson: I want to.
Kira Berman: Yeah, actually, Mitchell was next. So you have a different question on that.
Mitchell Dubin: I do have a different question, but if people are more interested in this topic.
Simon Jacobson: I wanted to just add one point on this topic. You know, I would submit, if I may, one more acronym: EI, emotional intelligence. I think if we don't take that into account, we're really cutting out maybe the single biggest force that makes decisions in our lives. I know everybody likes to think that they're intellectuals and the mind controls it all, but I would challenge you and say that your biggest decisions in life are emotional ones- maybe informed by intelligence, maybe not. So basically, if we cut psychology out of this philosophical discussion- and by psychology I mean the human aspect, which in many ways is what we've been saying in different words- we will not really get anywhere. And I think that's a big problem, because there's a big disparity between our academic credentials and intellectual capacity on the one hand, and our emotional maturity on the other. I don't even know how big that disparity is, but it's definitely not close. You know, ask most people- they're brilliant, their minds. But ask them how mature they are emotionally. How fast do they get insulted? What about their egos? How many things bias them? All the prejudices, in addition to all the traumas of our childhood, and you name it. I mean, trillions of dollars are going into one thing more than AI, and that's therapy and medicine and self-medicating and all the other addictions that are somehow trying to numb our existential pain. And that wasn't even brought up tonight. You take that out of the equation, and you're taking out a major part of being human- yet we think we're just these superhumans, we're building these machines, and it's all going to be great. It doesn't work like that. They used to say- do they still say?- garbage in, garbage out. That's what they said in technology. You put garbage in, you're going to get more garbage out, at a faster pace.
So I just wanted to throw in that monkey wrench. Okay. Thank you.
Kira Berman: We're going to take a few more questions. So if anyone has- Shay. Yes.
Shay Shimonov: Thank you both for very interesting talks. I will pose a question to each of you- different questions. So, Simon, I was caught by your story about teaching religion without talking about religion. I also saw some of your videos on YouTube, and it seems to me that you talk to a very universal crowd. With that experience of talking about religion without insinuating religion, does it suggest, in your eyes, any universality across religions in general? Basically, if the same kind of principles are now a big success in the Arab world, does that insinuate anything about the Islamic and Jewish religions and the connection between them? And Samuel, I loved your talk. I agree with a lot of things; I also disagree with a lot of things. I now do computational biology, I come from AI, and I was also in the Army for ten years. I have a lot of questions, but the question I will focus on right now is the alignment problem.
Shay Shimonov: It seems to me you talked about alignment as if it's about how AI can be politically correct or not discriminate between people, which is a big problem, I agree. I think that people inherently discriminate and create these boxes, because our mind is not capable of capturing reality without these boxes, right? And maybe the convergence of science, as you said- I'm sure it is connected to technology and the fact that watching you on YouTube is so easy right now. And knowledge. So basically, what I think is that alignment is about not losing control of AI. This technology has the capacity to not think like a human brain, and although the human brain is very interesting and important to research, we might have a better architecture than the human brain for getting insights and knowledge. So as I see it, the alignment problem is more about how we avoid losing control of the development of AI than about making it not discriminate between Mexicans and Chinese people.
Kira Berman: So before you start, can you also just actually define the problem?
Simon Jacobson: I mean, this is my question. Sure, sure. Please. Okay, so I'll answer the first half that you directed to me, unless you want me to answer his question first. I have a mic here, so. Well, I personally hate the word religion- let me just make that clear. I never understood what it meant. To me, it's a label. It's like when someone asks, are you religious? I remember once a New York Times journalist asking me, so how do I define you- religious, Orthodox, ultra-Orthodox, rabbi? I said, how do we define you? Are you a religious Orthodox journalist? He says, I'm just a journalist. I said, I'm just a human being. So to me- okay, you're a human being, and a religious human being. What does that mean, that you're more human or less human, more refined or less refined? And we all know religion doesn't necessarily mean refined, you know? So I found it to be a word that's really meaningless unless you define what you mean. I would vote, if I were able, to get rid of all the denominations. I remember once a woman asking me my opinion on Conservative Judaism- and Conservative here means not versus liberal, but versus Reform, Orthodox, Reconstructionist. And I said, I don't accept it. She says, I know, all you Orthodox are the same. I said, but I don't accept Orthodox Judaism either, or ultra-Orthodox, because these are all labels and denominations that are man-made. Let me ask you this: are you a Reform soul, an Orthodox soul, or a Conservative soul? What about Moses- was Moses Orthodox? What about God? Is God even Jewish? Is God circumcised? I mean, you know. So basically what we're doing is imposing man-made labels on ideas that are esoteric. A soul doesn't have a color, doesn't have a shape. You've got to really reframe and revisit the words we use when we're imposing material terms on spiritual concepts. This is a major problem.
Rabbi Shmully Hecht: So do you believe there are Jews and non-Jews? So that's a label.
Simon Jacobson: Okay, we need to define what that means. Yeah, exactly. Because let me put it this way: there's one God, who's neither Jewish nor not Jewish, who created all human beings in the divine image. That's what the Bible says. So is there a difference between a Jewish divine image and a non-Jewish divine image? I would ask you, Rabbi. So, yes, I could talk about it. But.
Rabbi Shmully Hecht: All I'm saying is that you do believe in a world where there are labels.
Simon Jacobson: Well, I would say.
Rabbi Shmully Hecht: Are you going to choose which ones to strip and which ones to keep?
Simon Jacobson: No, I would say not labels, but diversity- just like there's diversity. So what if there are people with brown eyes and blue eyes? That is a real distinction. But when-
Rabbi Shmully Hecht: Male and female. Do you believe in male/female or is that a label?
Simon Jacobson: No, I think male/female is a real distinction. I think most people would feel that way; some people may have issues with that, and that's their issue. But I mean to say- what, are we going to eliminate all male/female definitions? I don't see that as a label, by the way, just like I don't think someone saying they're five feet tall or six feet tall is a label. I was talking about man-made labels that humans impose on things like the soul or human beings. But if someone were to say to me that the non-Jew is inferior to the Jew, I would say: in whose eyes? In God's eyes? In your eyes? So yes, I know these distinctions can become the source of all racism and discrimination, and I'm trying to distinguish between the two things. My response to you is this: my understanding of Judaism is that there's a universal truth that comes from God for all human beings on this earth, 8 billion of them. And that's how I try to convey the message. That doesn't mean there aren't specific messages for a Jew, just like there are specific messages- When I speak, for example, to the Muslim Arab world, I speak about Abraham. I say, what would our grandfather Abraham, our common ancestor, say to us all? So I speak to them like my cousins.
Simon Jacobson: You know, it's a little disarming, and some of them don't like it. I get hate mail, too, trust me. But the point is, I think it's time for us to transcend the human imposition of all these words on people. That, to me, would be the real emancipation. A big part of my own internal battle, and of the work that I do, is to try to get rid of the human definitions and get to something more like what I mentioned before: a cross-pollination of the dignity of every human being, no matter who you are, no matter what background, no matter what choices you make, for that matter. I may totally disagree with you, but I will never invalidate you as a person. I can disagree with you- I disagree with some of my siblings, or maybe all of them, but I never cease to love them. And this is a big problem today; people don't know how to- everything is personalized. If you don't agree with me, it means something's wrong with you; you have to be wrong for me to be right. That's a major distortion that comes, frankly, from immaturity, and definitely not from open intellectual inquiry. So that's part of it- there's a lot more to say on this, obviously. Yeah.
Samuel Loncar: Thank you, I appreciate it. I'd love to talk more and hear more about the disagreement- of course, you are the expert, and you're of course right. Let me just connect this, starting broad before going technical. My understanding is, of course, the same as yours, but this is how I see these things connected, particularly to the issue of superalignment, which is the first issue, as you said- and correct me if any of this misrepresents your understanding of the state of the art. Please. The first issue is the very basic one: will the AI we're developing do only what we want it to do, and, for example, not develop means we didn't think of to achieve its end- these gaming behaviors that can have catastrophic consequences, like in a war context or a medical context? So I do understand that's, you could say, the most nuts-and-bolts issue of alignment: at an engineering level, before we release systems into a certain context of use, or into the wild in a commercial or application sense, we need to make sure that they are consistent in meeting the mandate that we, as the engineers and designers, have put into them.
Samuel Loncar: Correct. That's like the level at which you're talking about control. And so but building from that, it's a very short step to the issue around so-called super alignment because, you know, I don't know if there's a moore's law, but AI, I know there's debates. I'd be interested to know what you think about the debate about, are we reaching a scaling wall? You know, I know many of the big companies say no. Marcus and others famously have been arguing, yes, maybe we need different systems. The issue is, let's assume we don't have a scaling problem. Let's assume that something like Moore's Law is going to apply to the development of AI systems, even if they're not LLMs in 20 years. Well, then we're going to have AI systems in ten years that are by definition, many orders of magnitude more capable, which would mean that the kinds of things that they could do would be vastly more unpredictable than we are currently capable of understanding at this time. So the alignment problem to me masks intellectually a much deeper problem, which is the control over an engineering system that you're designing to be agentic. Raises the question whether you can rationally say, you know what any current finite human biological actor would do if you give them a goal? And the answer is we don't know that.
Samuel Loncar: So in other words, I think there's a conflict with classical computing ideals, which say computing systems are deterministic, and therefore, if you have a proper engineering understanding of the system, you can literally build it, and at any point in the system you can stop it and figure out exactly what input led to that output. Right? That's the classical idea we had of computers. But my understanding- again, as a non-technician listening to experts like yourself- is that that's exactly what's different about AI: there's a degree of cognitive-like features- spontaneity, relative creativity, and therefore definitional unpredictability. Unpredictability is creativity viewed from the standpoint of the person who finds it inconvenient. If you want people to be creative, you want them to do things you can't predict. So I think the alignment problem masks very hard problems in engineering, which I'm sure you know better than I do. I want an AI system to do what? AGI? The definition of that, even though it's incredibly vague, is exactly this: multitasking comparable to a human being. Well, we don't even know how to define what humans are doing when we switch contexts. We have no neurological or scientific description of what's happening when I context-switch.
Samuel Loncar: It's so complicated. It's very easy for me to say I can look at the wine glass and bring my hand to it, or I can look at my hand and bring it toward the wine glass. But to cognitively capture that- we can't do that yet. That's one of the reasons robotics is so much slower: we're finding out that mimicking what the body does is enormously difficult. So far it's easier to do the LLMs than to have robotics that can do something like this reliably and not apply enough force to accidentally crush a person's hand. So the actual issue is that we can't predict our own biological organism accurately, yet we presume that we'll scale systems smarter than us that we could control. This is a very serious scientific, ethical, and I think engineering problem at its very root: the idea that alignment just means control. But what do you want to control? If you want to control something smarter than you- by definition, you cannot control an agent that is more intelligent than you are. And therefore the superalignment problem, I think, bleeds down into all of the alignment issues in work toward the goal of AGI, if that makes sense.
Trevor MacKay: Yeah, it is. It is 9:30, everyone, so this is our soft close. If you have to go, please take 60 seconds to leave. We are going to be serving dessert and continuing the conversation till about 10 p.m.
Rabbi Shmully Hecht: That's okay.
Trevor MacKay: That's okay with our speakers. But before you all leave, please join me in thanking Kira for hosting our speakers.
Rabbi Shmully Hecht: So, a 60-second break if you want to leave the table.
Speaker22: And Shmully, should we pass around desserts?
Samuel Loncar: I would love to hear your, you know.
Lizzy: Hello, I'm Lizzy. Thank you all so much for your remarks. One thing I think about with AI is that I can't divorce it from all the harms it causes, whether that's enslaved labor mining materials for the technology, the ecological devastation, or the connection to the loneliness epidemic. I'm just curious what y'all think: are we risking too much, whether that means harming our souls or at least breaking the bonds of human connectivity, by trying to confront ourselves with this technology?
Simon Jacobson: Can you rephrase it a bit?
Lizzy: Yeah. With all the harms associated with this technology, we've talked about its ability to confront us with ourselves, and also its ability to have this sort of spiritual nature. But at the same time, are we risking too much, harming ourselves and harming human connectivity, by still using this technology?
Simon Jacobson: Yeah. Well, look, first of all, the cat's out of the bag, so I don't think you can impede the progress. We can pose these questions, but I don't even know if you can legislate the technology. Ultimately it's going to come down to people's responsibility. I think there are going to be major debates; maybe the biggest one was mentioned earlier. At the end of the day, money talks, and the materialistic urge to become wealthy is going to be maybe the biggest driving force. And we know that has nothing to do with ethics. So the question is: who's going to keep the checks and balances? But I don't see technology being able to be stopped. What are we going to do, stop the whole thing? There are too many opportunities, too many different people working on it. And I don't know if we want anyone controlling it and saying, no, we'll only allow that type of progress. So I think it's going to be a major existential human question that's going to change the course of history. I think it's going to be people like ourselves who are going to take a stand, because the fact is, right now, the ones that are most powerful are the ones that have the money and are controlling the levers, so to speak.
Simon Jacobson: But unless the human race itself in some way rises to the occasion, and I don't really have any idea how that can be. My role in this, I feel, as a teacher and as a person who has a platform, is to just say it all, because I believe that knowledge is what sheds light on all these questions. Remember, a lot of the, let's call it the more dangerous forces at work love to thrive in darkness and ignorance. So I think we need to shed light on all these big ethical questions and make sure they're up front, in front of everyone's eyes, glaring. That, to me, is the real way of keeping a check and balance on all of this. In some ways, nuclear deterrence was the only way to keep people in check. It's not perfect, but I don't really see another way to approach it, because stopping it all is not going to work, and just letting it go, ignoring these bigger human issues, to me is worse than dangerous. What do you think?
Samuel Loncar: Yeah, I agree broadly. I would just add a layer, building on what Rabbi Jacobson just said. I sympathize with the concerns. In fact, as a humanities-side person who came into the world of working with scientists from a background in philosophy, and therefore the humanities in the American sense, these are the concerns that a lot of people who study this culturally or historically have. But the unfortunate reality is that all the negative things you described were already true of what's in your iPhone before there was AI. So the issue is: is it very disturbing that we haven't corrected all the incredible ethical problems around the technology we already have, and now we're building more powerful technology? I think it is disturbing. I agree with Rabbi Jacobson. What do we do about that? Well, one, we have to educate ourselves enough to raise the questions you're asking and think about what our policy responses to them are. Sherry Turkle, I don't know if you guys know who she is, is a brilliant anthropologist of technology at MIT, involved, I think, with the Media Lab. She was just with Jaron Lanier in a New Yorker interview. Did you guys see the New Yorker conversation? It was like three months ago. And Turkle was very adamant that she did not think young people should have access to these chat systems.
Samuel Loncar: So I think we need to have real, urgent cultural and ethical debates about things like that. And what that's going to require in the long term, and this is stuff I'm interested in doing, I'm interested in partnering with people doing this, is that we need to build something like, you know, not another UN, because that arguably didn't work out so well, but at least localized versions of communities that will actually debate and find policy responses to these things. So in other words, it can't be: are we going to use AI or not? I agree. China is already in many ways much more advanced than we are. And I'm not anti-Chinese; I don't like the propagandistic way the State Department demonizes Chinese culture just because we disagree with the government's state policies. But take Huawei; many of you probably know Huawei. They are really, in a way, one of the most impressive technology companies ever created, I think, and they have much of the 5G network in the world, around 30% of it. The US has been pushing back. So the reality is there's already going to be AI developed by an explicitly communist surveillance state. So I agree that at a geopolitical level, the State Department and the Defense Department have to just accept that that's happening and deal with it.
Samuel Loncar: And I'm concerned about what that means, as I indicated. But at a civil level, what we can do in a democracy is say that the question you're asking is really important. We should work on building projects, nonprofits, even businesses to find things like what Jaron Lanier calls MIDs in his Harvard Business Review paper, which essentially could work to organize these things and bring them to a policy level. Because I do think we should say: yes, we're pro-AI, but we don't want it in schools, for example. Or: yes, we're pro-AI, but we don't want the horrible things that we already know are happening with people's images. I think we need serious legislation around data privacy. And so I think the next frontier of ethics is self-ownership. We do not have a legal regime that acknowledges that your bodily data belongs to you, because no one imagined that concept in the 17th century. But currently, all of our physiological data is extracted and sold by what Shoshana Zuboff calls the surveillance-capitalist system. So I think we need to use AI, but we also need to radically reform, I think, the vision we have around technology. If we don't do that, the worries you have will be valid, while they unfortunately also won't stop anyone in the arms race.
Kira Berman: I have a question. Oh, should I use that? I guess my question is: what can we do? You speak theoretically about, you know, thinking about the human experience and not just figuring out AI before we actually know the questions ourselves. But what can we do? Should we go into AI so that we can make it better? What can be done about it?
Samuel Loncar: I'll go first, then hand it over. Can I just ask: how many of you use AI regularly? Like, I use Perplexity all the time now, especially since Deep Research just came out. Yeah, so a lot of you. So what do you guys use it for? Like-
Kira Berman: I mean, Google has, like... it comes up with an AI thing, right?
Samuel Loncar: Right. Exactly.
Kira Berman: Yeah.
Samuel Loncar: So I think one thing we can do is decide whether or not we like the tech ecosystem we're in and vote with our money. The first thing I would say is that the tech ecosystem we're in is disruptive. Gmail was already not private, just so you know, if you haven't thought about it; any email that has AI is completely insecure. This to me is a major problem. I do consulting in sensitive areas, and this is a sensitive area if you think about it: superintelligence is of grave concern to militaries. And that aspect of this, to me, is a disaster. I do not like that Apple integrated AI. Why? Because we know they're selling our stuff to OpenAI; that's why they did it. Apple has a contract with OpenAI, and now everything on your phone is getting fed to them, even though that's illegal. But we know that it's happening. So, personally, if there's a billionaire who wants to fund this: we need a tech ecosystem that's focused on data ownership and data privacy, one that explicitly doesn't build surveillance into AI systems. I would like a phone that's just a phone. I would like an email account that is not being read by essentially a quasi-cognitive agent that's giving detailed, marketing-level, PhD-level, CIA-psyops-level reports on my content, supposedly anonymized, to Google to feed their AI. Why are they doing this? Because they need our data. So the first thing we need to recognize is that the current expansion of AI into all of our platforms is a complete breach of trust in all of us as users.
Rabbi Shmully Hecht: Yeah, but why is the AI aspect of it bothering you? Your data is being read and monitored and stored by the big tech companies even without AI. Is it the AI element? Once you own the phone and you buy something on Amazon, they know what you like to buy.
Samuel Loncar: That's a fair point; it's true that marketers know. But even if a defense company wanted to go through my emails ten years ago, they literally would have had to have an analyst read the emails.
Rabbi Shmully Hecht: The data theft is there.
No, but they didn't. They didn't have (inaudible).
Samuel Loncar: Right. But now you could literally just take a person's email and say, well, tell me what they're talking about, or tell me what's most important to them. So I think it's a much more serious breach.
Rabbi Shmully Hecht: You're going to use the phone. Have you seen that phone booth? Have you seen the phone booth?
Samuel Loncar: No.
Rabbi Shmully Hecht: I think- You haven't seen the phone booth?
Samuel Loncar: I don't think so.
Rabbi Shmully Hecht: We have a phone booth in the-.
Samuel Loncar: Okay, I'll use that. Yes, I will.
Rabbi Shmully Hecht: And you're going to have. I think you're going to be limited to that phone from here on forward.
Samuel Loncar: I hear that. So that's the debate: are we going to go on with that? I think we can vote as consumers to build an alternative technical ecosystem, because I do think there's a lot of money in companies that privilege data ownership and data privacy.
Mitchell Dubin: What kind of phone do you have?
Samuel Loncar: I have an Apple, but it's an old one.
Mitchell Dubin: But why not? Why not have a flip phone if you want to vote?
Samuel Loncar: Well, I do want that. I'm just saying.
Mitchell Dubin: No, but you could have it. You could have it. I don't understand.
Simon Jacobson: Let me. I would like to add something.
Samuel Loncar: Yes, please. Sorry, it's not as much of an answer.
Simon Jacobson: Here's my suggestion of what we can do: own your soul. Figure out why you're here. Discover your mission in life and discover your soul. That's the single biggest thing you can do to immunize yourself; the best defense is offense. You're not going to be able to play defense against these big machines, but you can control yourself. If I could leave one takeaway message from this evening, it's that you have a soul. It's the single most important commodity, more than all these tools, more than all the information in the world. So own it. Get to know your soul. And you do that through what I like to call a spiritual SPA. SPA is an acronym for study, prayer, and action. Every day: study, something about your soul, something spiritual, a book, an idea, a concept, a video. Prayer: something emotional, a poem, a meditation that touches you. And finally, action: good deeds, behavioral. Basically, it's cognitive, emotional, and behavioral conditioning. Take control of your life, because most of us are not in control of our lives, even though we think we are, and these machines will take over if you don't take control of your life. It's as simple as that: your soul and your mission. Find your mission statement and you'll see how many problems you'll solve in your life. That's my takeaway.
Kira Berman: And with that, we're going to end for the night.
Rabbi Shmully Hecht: Oh, okay. We're done.
Kira Berman: I think that, yeah. And then you can ask questions privately.