When did you know this was going to be a success?

Posted: Wed May 28, 2025 4:32 am
by MasudIbne756
Sometime after GPT-2, around 2019.

What’s the most complex part of dealing with the hallucination problem?
There are a lot of technical challenges, but one of the non-obvious things is that a lot of the value from these systems is heavily related to the fact that they do hallucinate. If you just want to look something up in a database, we already have good stuff for that. But the fact that these AI systems can come up with new ideas, can be creative, that’s a lot of the power. You want them to be creative when you want and factual when you want, but if you do the naive thing and say ‘never say anything that you’re not 100% sure about,’ you can get a model to do that, but it won’t have the magic that people like so much.

What’s the scariest thing you’ve seen in the lab?
Nothing super scary yet. We know it will come. We won’t be surprised when it does. But with the current models, nothing that scary.

You said recently that large language models were a reflection back on human intelligence. What were you trying to say?
Intelligence is an emergent property of matter to a degree we don’t contemplate enough. It’s something about the ability to recognize patterns in data, the ability to hallucinate, to create and come up with novel ideas and have a feedback loop to test those. We can look at every neuron in GPT-4, every connection.

We can predict with confidence that the GPT paradigm is going to get more capable, but exactly how is a little bit hard to say. For example, why a new capability emerges at this scale and not that one — we don’t yet understand that. If we assume this [current] GPT paradigm is the only breakthrough that’s going to happen, we’re going to be unprepared for very major new things that do happen.