How Big Tech's AI Impersonation and Algorithms Undermine Democracy and Trust

Guy Rolnik discusses how Big Tech's business model, including AI-generated impersonation and algorithmic curation, erodes trust in institutions and weakens democracy. He highlights the lack of accountability, surveillance capitalism, and the need for regulation to address the epistemic crisis and protect democratic processes.

English Transcript:

[HOST] I'm here with Guy Rolnik, who teaches at the University of Chicago Booth School of Business but is extremely well known in Israel, where he founded a financial and economic newspaper that does great media work. He's here to discuss the information ecosystem: media and social media, the issues that come up around them, misinformation, et cetera. He's going to be part of a panel here tomorrow on media in the current information ecosystem. Welcome, Guy.

[GUY ROLNIK] Great being here. [HOST] We are just leaving a session with Renée DiResta, who wrote a book called Invisible Rulers. It's all about the dynamics and curation of speech online, whether we know what is true online, and the extent to which various curation mechanisms affect democracy, among many other issues. One of the things I wanted you to talk about is the experience you had with AI content impersonating you on YouTube. Can you tell us about that? [GUY ROLNIK] Yeah. Well, the specific incident is not that important.

Okay, it just serves as a way to understand that the business model of companies like Meta and the rest of the social media companies is incompatible with democracy. It was April of 2025, and I started hearing from former students of mine, acquaintances, readers, and people I know that they were seeing ads on Facebook and Instagram saying that I had opened a new group where I invite people to join me and give them stock tips. Okay? I don't know the exact numbers, but I'd guess 80% of people understood that this was a scam. 10% said, "That's interesting. Looks weird." And 10% said, "That's it."

The guy decided that teaching at a university and being a journalist is not enough; he's now into selling stocks and tips on the internet. Well, that turned out to be part of a big operation by fraudsters all over the world who were impersonating well-known economic and business figures: celebrities, famous journalists, famous economics professors. And they thought my brand identity was good enough to start attracting some people. At first I thought it was funny, because it is, and I assumed people would know this was a scam.

Okay. And I thought this would go away very fast, because you see all kinds of crazy stuff online. Then within, I don't know, 24 to 48 hours, hundreds of people were reaching out to me with all kinds of questions. Some said, "This is a scam," some asked, "What's happening to you? Why are you doing that?" And some said, "I want to join you." [HOST] Yeah. [GUY ROLNIK] So I understood, okay, this is a big problem. I reached out to Meta's law firm and complained, and then I went through a real ordeal, a horrible process with those people, because basically they

put so many hurdles in your way, in how you complain and what they ask of you, that they automatically shift the burden of the incident onto you. [HOST] Right. [GUY ROLNIK] Now, we have to understand: here is a company that is enabling fraud, deceiving its own users, its own clients, and of course harming me, in what turned out later to be a significant way. And the minute you provide them with information about it, their first reaction is to shift the blame and the work to you, the individual.

Now, the real victims in many ways are the hundreds of thousands of users; okay, I'm also a victim. Anyway, they automatically shift the burden and the blame to the victim. Then they tell you what you should do: how to report and when to report. They log you out of Instagram and Facebook. And then you don't know whether it worked or not; you still have to wait for other people to do it.

Well, they didn't take it down. So I reached out to them again and again. And then a lot of grotesque, Kafkaesque things happened, because at some point they tell you, and this is all in writing, "You don't expect us to be able to monitor everything that is happening on our platforms." This is Meta telling me, in effect, "You don't expect us to make sure that we are not deceiving our users," and this is in text, okay? And at some point, when I kept getting more and more of these and reached out to them again, they stopped answering.

The answer was, "We don't work over the weekend." Now, we need to give some context. We are talking about a company with a valuation of somewhere between $1.5 and $2 trillion. It's the fourth or fifth largest company in the world, and it has been one of the most profitable companies in the history of corporations. We're talking about a company that generates anywhere between $50 and $100 billion of free cash flow every year, a company that invests tens of billions of dollars every year in AI.

So one of the most powerful companies in the world is actually telling you, "We can't get rid of those people who are impersonating you and harming you and harming our users." This went on for weeks, and I was not able to stop it. Now, think about the legal environment that protects Meta. The lawyers who send me these letters, negotiating with me, answering me, remind you at the very end of each letter that they are actually not representing Meta. The reason they write that, even while they are communicating with me, is to tell me that I cannot sue them. So the people I would actually have to sue, if I sue,

are of course in California. And the reason they want to make sure it's in California is that there it's protected: there is immunity under what we call Section 230 of the 1996 Communications Decency Act. So here we have a company whose business model actually gives it incentives to defraud its users. And this is a case where the defrauding and the scamming is aimed directly at their own users, and at me, someone who, by the way, hasn't used Facebook for years.

In the last 10 years, I have probably posted on Facebook five times or so. And this is before we even talk about all the negative externalities to society. So from this short story, you can understand how incompatible Meta is with anything democratic. [HOST] How is this about democracy? In the sense that the law protects them? [GUY ROLNIK] No, in the sense that the same culture, the same incentives,

are at work in the way the algorithm operates, okay? Because this company is a monopoly, because it has legal protection, because you cannot sue them, because they are totally opaque, because there is zero accountability on any front, because there is no corporate governance, because there is no regulation, and more, whatever harm they do to our discourse, to people's trust in institutions, we cannot do anything about. [HOST] Right. So we're powerless. You're saying that society has ceded control over important functions.

[GUY ROLNIK] Over the most important ones. [HOST] Yeah. [GUY ROLNIK] We need to understand that we are at a point in time where we are probably in the worst epistemic crisis we have had in hundreds of years. And when I say epistemic crisis, it sounds very abstract: what is an epistemic crisis? An epistemic crisis is basically a total breakdown in the trust in, and legitimacy of, our knowledge institutions. People today take knowledge institutions for granted, and they forget that the ascent of knowledge institutions over the last 100, 200 years is what enabled everything we have here today.

Science, universities, state capacity, journalism: the ascent of those institutions, their efficacy, and the trust in them is what enabled everything that today we take for granted. Liberal democracy, the protections of the state, courts, holding the powerful to account, everything. Underneath this entire infrastructure of our democracy, and especially the kind of democracy we have had since World War II, there is an information infrastructure. And we have five companies that are destroying this information infrastructure. That is where we are today.

It gets corrupted, it gets manipulated, and beyond that it is controlled by five individuals who have totally different interests. They are now becoming very close to governments all over the world, and governments are getting very close to them. I think this is the number one issue that anyone in academia, in politics, anywhere, should be concerned about.

[ANAT ADMATI] I didn't introduce myself: I'm Anat Admati, director of the Corporations and Society Initiative, which is also sponsoring this Power to Truth conversation we're recording. As a guest of the initiative we had Nicole Perlroth, who writes about cybersecurity, and at a recent event with her she expressed significant concern about what AI will do to cybersecurity, particularly to infrastructure. And sure enough, we just learned that Anthropic decided not to release certain technology because they were seeing that it found a lot of vulnerabilities in a lot of infrastructure. And so they-

[GUY ROLNIK] But even in this example that you gave, Anat, there is something weird, because- [ANAT ADMATI] Anthropic is deciding- [GUY ROLNIK] Do we want Anthropic, you know, to decide what is dangerous to society and what is not? [ANAT ADMATI] What I wanted to tell you is that Nicole Perlroth, who has sounded the alarm, she has a book called This Is How They Tell Me the World Ends, from 2021, was saying, to your point, toward the end of her session, that there are three things that are most concerning for society. From third to first: cybersecurity was number three.

Climate change was number two, and disinformation was number one. In other words, the problem you're talking about: the problem of not knowing what is true, of not sharing basic facts and knowledge, of being unable to trust anything, is the biggest problem. [GUY ROLNIK] Yeah. When we discuss the harms of Big Tech and social media, and now of course LLMs, we automatically talk about misinformation, disinformation, and malinformation. We need to understand that the problems with those machines go much further than misinformation and disinformation.

Social media doesn't just distort what we know; it rewires the way we think. And since everyone has been using social media, and now LLMs, for the past 10 or 20 years, we have to understand that those five companies, with the way they designed their algorithms, have rewired all our brains and rewired our whole society. Now, there are many, many ways this happens; we won't go into a lot of detail, and there is a huge body of literature on this, but you can actually cause a lot of harm with real information. [ANAT ADMATI] Right.

[GUY ROLNIK] You don't need misinformation, you don't need disinformation. Real information can become harmful just because it's optimized for emotion, just because it has no friction, because it's instantaneous, and we see it everywhere. So the real problem is much deeper than misinformation. It's a system that trains us to prefer moral outrage over truth. And as I said, the algorithms do not optimize for truth; they optimize for engagement and outrage. We have become a society that is full of rage. [ANAT ADMATI] Yeah. The age of rage- [GUY ROLNIK] It is everywhere. People my age and your age are used to the idea that there is verifiable truth, and more importantly,

that there are methods for the inquiry of truth. [ANAT ADMATI] Yeah. [GUY ROLNIK] There are methods. [ANAT ADMATI] And basing policy on truth, [GUY ROLNIK] and basing life on truth, on knowledge institutions. It's debatable, truth changes over time as science develops, and there is a discourse, but we know how to debate it in a transparent way. We now have a generation that grew up on those machines and does not understand that there is such a thing as truth. They don't trust anything.

[ANAT ADMATI] Well, I mean, not just that. [GUY ROLNIK] We are creating an electorate that thinks the truth is just a narrative, that there is no such thing as truth. And they say, "Okay, there is-" [ANAT ADMATI] And that, by the way, is the dream of, you know, dictators. [GUY ROLNIK] Dictators and totalitarians, yeah. [ANAT ADMATI] That is what Arendt said about totalitarianism: the key is that people don't believe anything.

[GUY ROLNIK] Yeah. [ANAT ADMATI] That is where people who just want control thrive, because then there is nothing on the other side. [GUY ROLNIK] Yeah. So just yesterday, I think it was in The New York Times, there is new research that actually shows that Google's AI search results systematically contain hallucinations and outright inaccurate information. The data, I have it here, is that one out of 10 answers- [ANAT ADMATI] Yeah, 10%. [GUY ROLNIK] 10% of what you get. Now, we have to understand that most users, when they go on Google and search for something and this is the first thing that comes up, they don't doubt it.

[ANAT ADMATI] They don't doubt it. [GUY ROLNIK] Okay, they don't doubt it. And here we have a company, a four-trillion-dollar company, Google, in effect saying, "10% of our answers are going to be a hallucination, or just made-up stuff, or stuff that is based on Facebook posts." So we have the largest companies in the world knowingly undermining knowledge itself. What is knowledge? And the thing about AI answers, especially when you get them from chatbots and from Google search, is that they are fast. And people conflate fast with authoritative.

Mm-hmm. People think that this is the truth. Now, the problem is not only that they are not making enough effort to make sure there are no hallucinations, lies, and made-up stuff from social media. The problem is that there are only a handful of companies, and tomorrow, when those companies decide where to train their models, are we going to train on The New York Times, or on some fringe website that says science does not exist? The decisions of the people there, a few people, totally opaque,

about where we're going to train, on which data we're going to train our models, will decide what kind of information billions of people get. Okay? Now, the idea that we can have four or five companies that control social media, instant messaging, and LLMs, unregulated, and still have a democracy, is absurd. It's incompatible. You know, there is a famous quote from Justice Louis Brandeis, about 120 years back: "We can have democracy, or we can have great concentration of wealth.

We will not have both." The same thing can be said about the state of the affairs today. About democracy. We can have unregulated, concentrated big tech companies, or we can have democracy. We will not have both. Mm-hmm. [ANAT ADMATI] Well, a very grim note, but just speak for a couple of minutes before we end about ideas that might help. [GUY ROLNIK] Yeah. So first of all, and this has to do with some of the discussions that we had here in the conference and in other settings here in Stanford in the last two days.

I think one thing we should be very wary of and cognizant about is that many people have an interest, when we talk about the problems caused by technology, in thinking that technology is also the answer. We have these horrible negative externalities to society caused by technology, so: let's bring more technology, we need more AI, we need more tools, we need more smart things. No. If we look at the history of technology and the history of democracy, we know that you do not solve those problems with more technology.

You solve those problems with laws, regulation, and rules. If you look at the entire discourse around what we just discussed in the last 30 minutes, much of it always drifts into more and more granular discussion about the technology itself. To give you an example, take one of the judges in the Google case; the government brought a suit to break up Google because of its monopoly and the harms it causes. In his first ruling, in 2024, he says, "Yes, Google obtained its monopoly through illegal conduct." And then in 2025, he says, in effect, "But I'm not going to impose any meaningful remedies."

Why? Because technology is changing; now we have AI. Okay? So there is this belief that, yes, we have a lot of problems, we see democracy in decline all over the world, but soon we will have more technology and technology will save it, because now AI will come. No. The only way to solve these problems, if you look at the history of the last 500 years, of the last 100 years, is with laws and regulation. Think about it.

We have anywhere between four and five billion people on those platforms today. Those platforms collect a nearly infinite amount of information about everything we do. They know everything, okay? They know what you searched for, what you stopped at, what you shared, what you looked at, everything. So they have built, and are still building, vast psychological and political profiles of every user in the world, of every person who has a smartphone. So we have lost our sovereignty.

We have lost our self-governance. Why? Because any of those tech companies knows exactly our particular tastes in politics, whether we are swing voters, what matters to us, and how we can be manipulated. They have all this information; we don't have this information. And we wouldn't even want the government to have this kind of information. [ANAT ADMATI] What should we do? [GUY ROLNIK] Okay. So I think we should think about rules in a few buckets. The first one is the simplest one: liability.

Okay. Someone with so much power should not be totally exempt from liability; this is Section 230. [ANAT ADMATI] Liable to whom? [GUY ROLNIK] To whom? Exactly. [ANAT ADMATI] Like, to individuals or to the companies? No, no, I know, but who would be liable, what would they be liable for, and in what form? [GUY ROLNIK] For any harm that happens on their platform. If I am a business- [ANAT ADMATI] But you have to be able to show that you were harmed.

[GUY ROLNIK] Yes. You know, you can sue any company for the harm it does. Which companies are exempted from liability for harm? [ANAT ADMATI] The internet companies. [GUY ROLNIK] The biggest tech companies. So number one, we have to deal with liability. Number two, we have to deal with the amount of information they can collect on us: the surveillance, what they collect, how they use it to target us, and how long and how deep the information they collect on us goes.

Number three: in banking, as we know, the first set of regulations is Know Your Customer. [ANAT ADMATI] Yeah. [GUY ROLNIK] You cannot walk into a bank tomorrow with a sack of $5 million and tell them, "Please deposit it and then send it to this place or that place in the Middle East." Of course not. We need to have Know Your Customer on every tech platform. We cannot give special rights to bots. [ANAT ADMATI] Okay.

[GUY ROLNIK] Yeah. So that's the third bucket. The fourth one is that those companies operate globally, and they have immense influence on all countries around the world. Citizens in every country should be able to sue those companies and hold them to account locally, under their own laws. [ANAT ADMATI] So they have to incorporate in that jurisdiction. [GUY ROLNIK] Yeah, they have to incorporate, and this goes back to your research.

They have to have local executives and directors that you can sue, so you can hold them to account locally. They cannot tell you, "Ah, go to California, sue me in California. And actually, I don't work here. I'm here, I'm running everything, but I'm not a director and I don't work here; it's Dublin, it's California, it's Russia." So they need to be accountable. And the last thing, of course, that we have to do, and a lot of countries around the world are already leading this effort while the US is behind,

is to make sure that our kids are not there. [ANAT ADMATI] Oh, the kids, yeah. [GUY ROLNIK] Okay. We don't allow kids, until the age of 18 or 21, to do a lot of things. Yet the one thing we do allow is to have Meta and TikTok and X manipulate our kids at such a young age. And we know that internal data at those companies show the huge harm to children. I think that in the future, when we look back at this period of 10 or 20 years and realize we let our kids on those platforms, people will say, "What were we thinking?"

This is crazy. How come we had to fight tooth and nail with the managements of those companies to stop it? How come we knew they wanted to addict us and we did nothing? Think about those companies; again, it has to do with their internal culture, how corrupt it is. They know exactly what they are doing to children, and they don't care, because this is an important part of the business model. [ANAT ADMATI] Well, I think more people are becoming aware of these issues, and there are some lawsuits using product liability and other methods, as opposed to going after specific speech. So there are many, many issues.

And we're going to discuss more of this in a panel that will, I believe, be posted as well, with Renée DiResta and with Alexandra Geese, who is from the European Parliament and very active on these issues. I think you and she see eye to eye on a number of these issues. Thank you very much, Guy. [GUY ROLNIK] Thank you for having me.
