How Humanity Can Avoid an AI Takeover

Gideon Lichfield: Like writing simple marketing copy, for example.

Daron Acemoglu: Like marketing and advertising, or news summaries like BuzzFeed used to do. I don’t see anything wrong with that. I’m not against automation. I think it’s good if we automate certain things, but at the same time we have to create as many new things for humans to do productively, to contribute, and to expand their creativity, as we are automating. And that latter part is not being done. And that’s my beef with the direction in which large language models are going right now.

Gideon Lichfield: What would it look like to do that? Here’s something I can see: a lot of people are using image generators like Dall-E and Midjourney to create art much more quickly. Some people are saying, “This can augment my work as an artist.” And others are saying, “No, that will actually take away work from many illustrators or stock photographers.” So how do you use it in such a way that it is augmentative rather than just diluting people’s work?

Daron Acemoglu: The parts that I have emphasized, like information curation and information filtering, I think those can really lead to many new functions and many new tasks for knowledge workers, for white-collar workers. But the problem there is that the current architecture of LLMs is not very good for that. What do LLMs do? I think they have so far been partly optimized for impressing humans. The meteoric rise of ChatGPT is on the basis of giving answers that humans find intriguing, surprising, impressive. But what that also brings is that it’s not sufficiently nuanced. So if, as a journalist or as an academic, I go to GPT-4 or GPT-3 and try to understand where different types of information are coming from, or how reliable different types of information are, it does not give good answers. In fact, it gives very misleading answers.

Gideon Lichfield: Right, it hallucinates often, yes.

Daron Acemoglu: It hallucinates, it makes things up, or it refuses to recognize when two answers are contradictory, or when two answers are saying the same thing but are being presented as independent pieces of information. So there is a lot of complexity to human cognition, which has evolved over hundreds of thousands of years, that we can try to augment using these new technologies, but this sort of excessive authoritativeness of large language models is not going to help.

Gideon Lichfield: Right now, we have the film and TV writers of Hollywood on strike, and one of the demands is that the movie studios take steps to ensure that AI doesn’t replace them. So what should the studios be doing? 

Daron Acemoglu: So the fundamental issue, which is again central not just to large language models but to the entire AI industry, is who controls data. I think the very valid argument coming from the Writers Guild is: these machines are taking our creative data and they’re going to repackage it. Why is that fair? Actually, think of the large language models. If you look at the correct and relevant answers they give, a lot of it comes from two sources: books that have been digitized, and Wikipedia. But none of that was done for the purpose of enriching OpenAI, Microsoft, or Google. People wrote books for different purposes, to communicate with their colleagues or with the broader public. People devoted their effort and time to Wikipedia for a collective project. None of them agreed that their knowledge was going to be taken over by OpenAI. So the Writers Guild is trying to articulate, I think, a deeper problem. In the age of AI we have to be much more cognizant of whose data we are using and in what way we are using it. I think that requires both regulation and compensation.
