
The inside story of ChatGPT’s astonishing potential


We started OpenAI seven years ago because we felt like something really interesting was happening in AI and we wanted to help steer it in a positive direction. It’s just really amazing to see how far this whole field has come since then. And it’s really gratifying to hear from people like Raymond who are using the technology we are building, and others, for so many wonderful things. We hear from people who are excited, we hear from people who are concerned, we hear from people who feel both those emotions at once. And honestly, that’s how we feel. Above all, it feels like we’re entering an historic period right now where we as a world are going to define a technology that will be so important for our society going forward. And I believe that we can manage this for good.

So today, I want to show you the current state of that technology and some of the underlying design principles that we hold dear.

So the first thing I’m going to show you is what it’s like to build a tool for an AI rather than building it for a human. So we have a new DALL-E model, which generates images, and we are exposing it as an app for ChatGPT to use on your behalf. And you can do things like ask, you know, suggest a nice post-TED meal and draw a picture of it.

(Laughter)

Now you get all of the, sort of, ideation and creative back-and-forth and taking care of the details for you that you get out of ChatGPT. And here we go, it’s not just the idea for the meal, but a very, very detailed spread. So let’s see what we’re going to get. But ChatGPT doesn’t just generate images in this case — sorry, it doesn’t generate text, it also generates an image. And that is something that really expands the power of what it can do on your behalf in terms of carrying out your intent. And I’ll point out, this is all a live demo. This is all generated by the AI as we speak. So I actually don’t even know what we’re going to see. This looks wonderful.

(Applause)

I’m getting hungry just looking at it.

Now we’ve extended ChatGPT with other tools too, for example, memory. You can say “save this for later.” And the interesting thing about these tools is they’re very inspectable. So you get this little pop up here that says “use the DALL-E app.” And by the way, this is coming to you, all ChatGPT users, over the upcoming months. And you can look under the hood and see that what it actually did was write a prompt just like a human could. And so you sort of have this ability to inspect how the machine is using these tools, which allows us to provide feedback to them.

Now it’s saved for later, and let me show you what it’s like to use that information and to integrate with other applications too. You can say, “Now make a shopping list for the tasty thing I was suggesting earlier.” And make it a little tricky for the AI. “And tweet it out for all the TED viewers out there.”

(Laughter)

So if you do make this wonderful, wonderful meal, I definitely want to know how it tastes.

But you can see that ChatGPT is selecting all these different tools without me having to tell it explicitly which ones to use in any situation. And this, I think, shows a new way of thinking about the user interface. Like, we are so used to thinking of, well, we have these apps, we click between them, we copy/paste between them, and usually it’s a great experience within an app as long as you kind of know the menus and know all the options. Yes, I would like you to. Yes, please. Good to be polite.

(Laughter)

And by having this unified language interface on top of tools, the AI is able to sort of take away all those details from you. So you don’t have to be the one who spells out every single sort of little piece of what’s supposed to happen.

And as I said, this is a live demo, so sometimes the unexpected will happen to us. But let’s take a look at the Instacart shopping list while we’re at it. And you can see we sent a list of ingredients to Instacart. Here’s everything you need. And the thing that’s really interesting is that the traditional UI is still valuable, right? If you look at this, you still can click through it and sort of modify the actual quantities. And that’s something that I think shows that they’re not going away, traditional UIs. It’s just we have a new, augmented way to build them. And now we have a tweet that’s been drafted for our review, which is also a very important thing. We can click “run,” and there we are, we’re the manager, we’re able to inspect, we’re able to change the work of the AI if we want to. And so after this talk, you will be able to access this yourself. And there we go. Cool. Thank you, everyone.

(Applause)

So we’ll cut back to the slides. Now, the important thing about how we build this, it’s not just about building these tools. It’s about teaching the AI how to use them. Like, what do we even want it to do when we ask these very high-level questions? And for this, we use an old idea. If you go back to Alan Turing’s 1950 paper on the Turing test, he says, you’ll never program an answer to this. Instead, you can learn it. You could build a machine, like a human child, and then teach it through feedback. Have a human teacher who provides rewards and punishments as it tries things out and does things that are either good or bad.

And this is exactly how we train ChatGPT. It’s a two-step process. First, we produce what Turing would have called a child machine through an unsupervised learning process. We just show it the whole world, the whole internet and say, “Predict what comes next in text you’ve never seen before.” And this process imbues it with all sorts of wonderful skills. For example, if you’re shown a math problem, the only way to actually complete that math problem, to say what comes next, that green nine up there, is to actually solve the math problem.
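To make that first step concrete: “predict what comes next” is standardly implemented as next-token cross-entropy. The sketch below is a minimal stand-in, with a toy model in place of a real transformer and random tokens in place of internet text; it is not OpenAI’s actual training stack.

```python
# Minimal sketch of step one: next-token prediction.
# TinyLM is a toy stand-in for a transformer; random tokens stand in
# for internet text. Sizes are arbitrary illustration values.
import torch
import torch.nn.functional as F

vocab_size, d_model = 50_000, 512

class TinyLM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)
        self.proj = torch.nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                   # tokens: (batch, seq)
        return self.proj(self.embed(tokens))     # logits: (batch, seq, vocab)

model = TinyLM()
tokens = torch.randint(0, vocab_size, (2, 128))  # pretend corpus batch

logits = model(tokens[:, :-1])                   # predict token t+1 from tokens up to t
loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                       tokens[:, 1:].reshape(-1))
loss.backward()  # making this loss small on a math problem requires solving it
```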

But we actually have to do a second step, too, which is to teach the AI what to do with those skills. And for this, we provide feedback. We have the AI try out multiple things, give us multiple suggestions, and then a human rates them, says “This one’s better than that one.” And this reinforces not just the specific thing that the AI said, but very importantly, the whole process that the AI used to produce that answer. And this allows it to generalize. It allows it to teach, to sort of infer your intent and apply it in scenarios that it hasn’t seen before, that it hasn’t received feedback on.
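The second step can also be sketched: the published approach to learning from human comparisons trains a reward model with a pairwise ranking loss. A minimal sketch, using random vectors as stand-ins for the model’s representations of two candidate answers:

```python
# Sketch of step two: a human says "this one's better than that one,"
# and a reward model learns to agree via a pairwise ranking loss.
# The tiny MLP stands in for "LM backbone + scalar reward head."
import torch
import torch.nn.functional as F

reward_model = torch.nn.Sequential(
    torch.nn.Linear(768, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1))

better = torch.randn(4, 768)  # features of the answers the human preferred
worse = torch.randn(4, 768)   # features of the answers the human rejected

# Maximize P(better > worse) = sigmoid(r_better - r_worse).
margin = reward_model(better) - reward_model(worse)
loss = -F.logsigmoid(margin).mean()
loss.backward()
# The trained reward model then scores fresh answers, reinforcing the
# whole process that produced them, not just one specific reply.
```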

Now, sometimes the things we have to teach the AI are not what you’d expect. For example, when we first showed GPT-4 to Khan Academy, they said, “Wow, this is so great, we’re going to be able to teach students wonderful things. Only one problem, it doesn’t double-check students’ math. If there’s some bad math in there, it will happily pretend that one plus one equals three and run with it.” So we had to collect some feedback data. Sal Khan himself was very kind and offered 20 hours of his own time to provide feedback to the machine alongside our team. And over the course of a couple of months we were able to teach the AI that, “Hey, you really should push back on humans in this specific kind of scenario.” And we’ve actually made lots and lots of improvements to the models this way. And when you push that thumbs down in ChatGPT, that actually is kind of like sending up a bat signal to our team to say, “Here’s an area of weakness where you should gather feedback.” And so when you do that, that’s one way that we really listen to our users and make sure we’re building something that’s more useful for everyone.

Now, providing high-quality feedback is a hard thing. If you think about asking a kid to clean their room, if all you’re doing is inspecting the floor, you don’t know if you’re just teaching them to stuff all the toys in the closet. This is a nice DALL-E-generated image, by the way. And the same sort of reasoning applies to AI. As we move to harder tasks, we will have to scale our ability to provide high-quality feedback. But for this, the AI itself is happy to help. It’s happy to help us provide even better feedback and to scale our ability to supervise the machine as time goes on. And let me show you what I mean.

For example, you can ask GPT-4 a question like this, of how much time passed between these two foundational blogs on unsupervised learning and learning from human feedback. And the model says two months passed. But is it true? Like, these models are not 100-percent reliable, although they’re getting better every time we provide some feedback. But we can actually use the AI to fact-check. And it can actually check its own work. You can say, fact-check this for me.

Now, in this case, I’ve actually given the AI a new tool. This one is a browsing tool where the model can issue search queries and click into web pages. And it actually writes out its whole chain of thought as it does it. It says, I’m just going to search for this and it actually does the search. It then finds the publication date and the search results. It then is issuing another search query. It’s going to click into the blog post. And all of this you could do, but it’s a very tedious task. It’s not a thing that humans really want to do. It’s much more fun to be in the driver’s seat, to be in this manager’s position where you can, if you want, triple-check the work. And out come citations so you can actually go and very easily verify any piece of this whole chain of reasoning. And it actually turns out that two months was wrong. Two months and one week, that was correct.

(Applause)
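Once the browsing tool has surfaced the publication dates, the fact-check itself reduces to simple date arithmetic. A minimal sketch, where the two dates are assumed values chosen to reproduce the demo’s “two months and one week,” not figures quoted in the talk:

```python
# Date arithmetic behind the fact-check. Both dates below are
# assumptions for illustration, not values stated in the talk.
from datetime import date

unsupervised_post = date(2017, 4, 6)   # assumed publication date
feedback_post = date(2017, 6, 13)      # assumed publication date

delta = feedback_post - unsupervised_post
print(delta.days)  # 68 days: two months (61 days) plus one week
```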

And we’ll cut back to the slide. And so the thing that’s so interesting to me about this whole process is that it’s this many-step collaboration between a human and an AI. Because a human, using this fact-checking tool is doing it in order to produce data for another AI to become more useful to a human. And I think this really shows the shape of something that we should expect to be much more common in the future, where we have humans and machines kind of very carefully and delicately designed in how they fit into a problem and how we want to solve that problem. We make sure that the humans are providing the management, the oversight, the feedback, and the machines are operating in a way that’s inspectable and trustworthy. And together we’re able to actually create even more trustworthy machines. And I think that over time, if we get this process right, we will be able to solve impossible problems.

And to give you a sense of just how impossible I’m talking, I think we’re going to be able to rethink almost every aspect of how we interact with computers. For example, think about spreadsheets. They’ve been around in some form since, we’ll say, 40 years ago with VisiCalc. I don’t think they’ve really changed that much in that time. And here is a specific spreadsheet of all the AI papers on the arXiv for the past 30 years. There’s about 167,000 of them. And you can see the data right here. But let me show you the ChatGPT take on how to analyze a data set like this.

So we can give ChatGPT access to yet another tool, this one a Python interpreter, so it’s able to run code, just like a data scientist would. And you can just literally upload a file and ask questions about it. And very helpfully, you know, it knows the name of the file and it’s like, “Oh, this is CSV,” comma-separated value file, “I’ll parse it for you.” The only information here is the name of the file, the column names like you saw and then the actual data. And from that it’s able to infer what these columns actually mean. Like, that semantic information wasn’t in there. It has to sort of, put together its world knowledge of knowing that, “Oh yeah, arXiv is a site that people submit papers to, and therefore that’s what these things are and that these are integer values and so therefore it’s a number of authors in the paper,” like all of that, that’s work for a human to do, and the AI is happy to help with it.

Now I don’t even know what I want to ask. So fortunately, you can ask the machine, “Can you make some exploratory graphs?” And once again, this is a super high-level instruction with lots of intent behind it. But I don’t even know what I want. And the AI kind of has to infer what I might be interested in. And so it comes up with some good ideas, I think. So a histogram of the number of authors per paper, a time series of papers per year, a word cloud of the paper titles. All of that, I think, will be pretty interesting to see. And the great thing is, it can actually do it. Here we go, a nice bell curve. You see that three is kind of the most common. It’s going to then make this nice plot of the papers per year. Something crazy is happening in 2023, though. Looks like we were on an exponential and it dropped off the cliff. What could be going on there? By the way, all this is Python code, you can inspect. And then we’ll see this nice word cloud. So you can see all these wonderful things that appear in these titles.
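For readers who want to reproduce this kind of exploration by hand, here is a rough sketch of the first two plots. The file name and column names are guesses at the demo’s CSV schema, not the actual file, and the word cloud is omitted since it needs a third-party package:

```python
# Rough reconstruction of the exploratory plots from the demo.
# "arxiv_papers.csv" and its columns are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("arxiv_papers.csv")  # assumed columns: year, num_authors, title

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram of authors per paper (the "bell curve" peaking around three).
df["num_authors"].value_counts().sort_index().plot.bar(ax=ax1)
ax1.set(title="Authors per paper", xlabel="authors", ylabel="papers")

# Papers per year; 2023 drops off because the data stops mid-year.
df.groupby("year").size().plot(ax=ax2)
ax2.set(title="Papers per year", xlabel="year", ylabel="papers")

plt.tight_layout()
plt.show()
```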

But I’m pretty unhappy about this 2023 thing. It makes this year look really bad. Of course, the problem is that the year is not over. So I’m going to push back on the machine. [Waitttt that’s not fair!!! 2023 isn’t over. What percentage of papers in 2022 were even posted by April 13?] So April 13 was the cut-off date I believe. Can you use that to make a fair projection? So we’ll see, this is the kind of ambitious one.

(Laughter)

So you know, again, I feel like there was more I wanted out of the machine here. I really wanted it to notice this thing, maybe it’s a little bit of an overreach for it to have sort of, inferred magically that this is what I wanted. But I inject my intent, I provide this additional piece of, you know, guidance. And under the hood, the AI is just writing code again, so if you want to inspect what it’s doing, it’s very possible. And now, it does the correct projection.

(Applause)

If you noticed, it even updates the title. I didn’t ask for that, but it knows what I want.
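The “fair projection” the AI writes under the hood amounts to rescaling the partial 2023 count by the fraction of a typical year’s papers that had appeared by April 13. A sketch, continuing the same hypothetical CSV schema and additionally assuming a full submission-date column:

```python
# Sketch of the projection logic; file and column names remain
# hypothetical placeholders for the demo's actual CSV.
import pandas as pd

df = pd.read_csv("arxiv_papers.csv")      # assumed to include a "date" column
df["date"] = pd.to_datetime(df["date"])

per_year = df.groupby(df["date"].dt.year).size()

# What fraction of 2022's papers had been posted by April 13?
papers_2022 = df[df["date"].dt.year == 2022]
frac_by_cutoff = (papers_2022["date"] <= "2022-04-13").mean()

# Scale the partial 2023 count up by that fraction.
projected_2023 = per_year[2023] / frac_by_cutoff
print(f"Projected 2023 total: {projected_2023:.0f}")
```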

Now we’ll cut back to the slide again. This shows a parable of how I think we … A vision of how we may end up using this technology in the future. A person brought his very sick dog to the vet, and the veterinarian made a bad call to say, “Let’s just wait and see.” And the dog would not be here today had he listened. In the meanwhile, he provided the blood test, like, the full medical records, to GPT-4, which said, “I am not a vet, you need to talk to a professional, here are some hypotheses.” He brought that information to a second vet who used it to save the dog’s life. Now, these systems, they’re not perfect. You cannot overly rely on them. But this story, I think, shows that a human with a medical professional and with ChatGPT as a brainstorming partner was able to achieve an outcome that would not have happened otherwise. I think this is something we should all reflect on, think about as we consider how to integrate these systems into our world.

And one thing I believe really deeply, is that getting AI right is going to require participation from everyone. And that’s for deciding how we want it to slot in, that’s for setting the rules of the road, for what an AI will and won’t do. And if there’s one thing to take away from this talk, it’s that this technology just looks different. Just different from anything people had anticipated. And so we all have to become literate. And that’s, honestly, one of the reasons we released ChatGPT.

Together, I believe that we can achieve the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity.

Thank you.

(Applause)

(Applause ends)

Chris Anderson: Greg. Wow. I mean … I suspect that within every mind out here there’s a feeling of reeling. Like, I suspect that a very large number of people viewing this, you look at that and you think, “Oh my goodness, pretty much every single thing about the way I work, I need to rethink.” Like, there’s just new possibilities there. Am I right? Who thinks that they’re having to rethink the way that we do things? Yeah, I mean, it’s amazing, but it’s also scary. So let’s talk, Greg, let’s talk.

I mean, I guess my first question actually is just how the hell have you done this?

(Laughter)

OpenAI has a few hundred employees. Google has thousands of employees working on artificial intelligence. Why is it you who’s come up with this technology that shocked the world?

Greg Brockman: I mean, the truth is, we’re all building on shoulders of giants, right, there’s no question. If you look at the compute progress, the algorithmic progress, the data progress, all of those are really industry-wide. But I think within OpenAI, we made a lot of very deliberate choices from the early days. And the first one was just to confront reality as it lays. And that we just thought really hard about like: What is it going to take to make progress here? We tried a lot of things that didn’t work, so you only see the things that did. And I think that the most important thing has been to get teams of people who are very different from each other to work together harmoniously.

CA: Can we have the water, by the way, just brought here? I think we’re going to need it, it’s a dry-mouth topic. But isn’t there something also just about the fact that you saw something in these language models that meant that if you continue to invest in them and grow them, that something at some point might emerge?

GB: Yes. And I think that, I mean, honestly, I think the story there is pretty illustrative, right? I think that high level, deep learning, like, we always knew that was what we wanted to be, was a deep learning lab, and exactly how to do it? I think that in the early days, we didn’t know. We tried a lot of things, and one person was working on training a model to predict the next character in Amazon reviews, and he got a result where — this is a syntactic process, and you’d expect, you know, the model will predict where the commas go, where the nouns and verbs are. But he actually got a state-of-the-art sentiment analysis classifier out of it. This model could tell you if a review was positive or negative. I mean, today we are just like, come on, anyone can do that. But this was the first time that you saw this emergence, this sort of semantics that emerged from this underlying syntactic process. And there we knew, you’ve got to scale this thing, you’ve got to see where it goes.

CA: So I think this helps explain the riddle that baffles everyone looking at this, because these things are described as prediction machines. And yet, what we’re seeing out of them feels … it just feels impossible that that could come from a prediction machine. Just the stuff you showed us just now. And the key idea of emergence is that when you get more of a thing, suddenly different things emerge. It happens all the time, ant colonies, single ants running around, when you bring enough of them together, you get these ant colonies that show completely emergent, different behavior. Or a city where a few houses together, it’s just houses together. But as you grow the number of houses, things emerge, like suburbs and cultural centers and traffic jams. Give me one moment for you when you saw just something pop that just blew your mind that you just did not see coming.

GB: Yeah, well, so you can try this in ChatGPT, if you add 40-digit numbers —

CA: 40-digit?

GB: 40-digit numbers, the model will do it, which means it’s really learned an internal circuit for how to do it. And the really interesting thing is actually, if you have it add like a 40-digit number plus a 35-digit number, it’ll often get it wrong. And so you can see that it’s really learning the process, but it hasn’t fully generalized, right? It’s like you can’t memorize the 40-digit addition table, that’s more atoms than there are in the universe. So it had to have learned something general, but that it hasn’t really fully yet learned that, oh, I can sort of generalize this to adding arbitrary numbers of arbitrary lengths.
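A quick back-of-the-envelope check on that memorization claim: a lookup table over all pairs of 40-digit numbers is on the order of the number of atoms in the observable universe (commonly estimated at around 10^80), so the model cannot simply have stored the answers.

```python
# Rough check of the "more atoms than in the universe" point: a lookup
# table of all 40-digit + 40-digit sums needs one entry per pair.
table_entries = (9 * 10**39) ** 2         # pairs of 40-digit numbers
atoms_in_universe = 10**80                # common order-of-magnitude estimate
print(table_entries / atoms_in_universe)  # ~0.81: same order as all atoms
```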

CA: So what’s happened here is that you’ve allowed it to scale up and look at an incredible number of pieces of text. And it is learning things that you didn’t know that it was going to be capable of learning.

GB: Well, yeah, and it’s more nuanced, too. So one science that we’re starting to really get good at is predicting some of these emergent capabilities. And to do that actually, one of the things I think is very undersung in this field is sort of engineering quality. Like, we had to rebuild our entire stack. When you think about building a rocket, every tolerance has to be incredibly tiny. Same is true in machine learning. You have to get every single piece of the stack engineered properly, and then you can start doing these predictions. There are all these incredibly smooth scaling curves. They tell you something deeply fundamental about intelligence. If you look at our GPT-4 blog post, you can see all of these curves in there. And now we’re starting to be able to predict. So we were able to predict, for example, the performance on coding problems. We basically look at some models that are 10,000 times or 1,000 times smaller. And so there’s something about this that is actually smooth scaling, even though it’s still early days.
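The prediction methodology Brockman gestures at can be sketched as fitting a smooth power law to small-model results and extrapolating to the large run. All numbers below are synthetic placeholders, not GPT-4 data:

```python
# Sketch of extrapolating from models 1,000-10,000x smaller by
# fitting a power law in log-log space. Data here is invented.
import numpy as np

compute = np.array([1e0, 1e1, 1e2, 1e3])  # relative training compute
loss = np.array([3.2, 2.6, 2.1, 1.7])     # synthetic eval losses

# Power law loss = a * compute^b is a straight line in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)

target = 1e4                               # 10,000x the smallest model
predicted = np.exp(log_a) * target ** b
print(f"Predicted loss at {target:.0e}x compute: {predicted:.2f}")
```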

CA: So here is, one of the big fears then, that arises from this. If it’s fundamental to what’s happening here, that as you scale up, things emerge that you can maybe predict with some level of confidence, but then it’s capable of surprising you. Why isn’t there just a huge risk of something truly terrible emerging?

GB: Well, I think all of these are questions of degree and scale and timing. And I think one thing people miss, too, is sort of the integration with the world is also this incredibly emergent, sort of, very powerful thing too. And so that’s one of the reasons that we think it’s so important to deploy incrementally. And so I think that what we kind of see right now, if you look at this talk, a lot of what I focus on is providing really high-quality feedback. Today, the tasks that we do, you can inspect them, right? It’s very easy to look at that math problem and be like, no, no, no, machine, seven was the correct answer. But even summarizing a book, like, that’s a hard thing to supervise. Like, how do you know if this book summary is any good? You have to read the whole book. No one wants to do that.

(Laughter) And so I think that the important thing will be that we take this step by step. And that we say, OK, as we move on to book summaries, we have to supervise this task properly. We have to build up a track record with these machines that they’re able to actually carry out our intent. And I think we’re going to have to produce even better, more efficient, more reliable ways of scaling this, sort of like making the machine be aligned with you.

CA: So we’re going to hear later in this session, there are critics who say that, you know, there’s no real understanding inside, the system is going to always — we’re never going to know that it’s not generating errors, that it doesn’t have common sense and so forth. Is it your belief, Greg, that it is true at any one moment, but that the expansion of the scale and the human feedback that you talked about is basically going to take it on that journey of actually getting to things like truth and wisdom and so forth, with a high degree of confidence. Can you be sure of that?

GB: Yeah, well, I think that the OpenAI, I mean, the short answer is yes, I believe that is where we’re headed. And I think that the OpenAI approach here has always been just like, let reality hit you in the face, right? It’s like this field is the field of broken promises, of all these experts saying X is going to happen, Y is how it works. People have been saying neural nets aren’t going to work for 70 years. They haven’t been right yet. They might be right maybe 70 years plus one or something like that is what you need. But I think that our approach has always been, you’ve got to push to the limits of this technology to really see it in action, because that tells you then, oh, here’s how we can move on to a new paradigm. And we just haven’t exhausted the fruit here.

CA: I mean, it’s quite a controversial stance you’ve taken, that the right way to do this is to put it out there in public and then harness all this, you know, instead of just your team giving feedback, the world is now giving feedback. But … If, you know, bad things are going to emerge, it is out there. So, you know, the original story that I heard on OpenAI when you were founded as a nonprofit, well you were there as the great sort of check on the big companies doing their unknown, possibly evil thing with AI. And you were going to build models that sort of, you know, somehow held them accountable and was capable of slowing the field down, if need be. Or at least that’s kind of what I heard. And yet, what’s happened, arguably, is the opposite. That your release of GPT, especially ChatGPT, sent such shockwaves through the tech world that now Google and Meta and so forth are all scrambling to catch up. And some of their criticisms have been, you are forcing us to put this out here without proper guardrails or we die. You know, how do you, like, make the case that what you have done is responsible here and not reckless.

GB: Yeah, we think about these questions all the time. Like, seriously all the time. And I don’t think we’re always going to get it right. But one thing I think has been incredibly important, from the very beginning, when we were thinking about how to build artificial general intelligence, actually have it benefit all of humanity, like, how are you supposed to do that, right? And that default plan of being, well, you build in secret, you get this super powerful thing, and then you figure out the safety of it and then you push “go,” and you hope you got it right. I don’t know how to execute that plan. Maybe someone else does. But for me, that was always terrifying, it didn’t feel right. And so I think that this alternative approach is the only other path that I see, which is that you do let reality hit you in the face. And I think you do give people time to give input. You do have, before these machines are perfect, before they are super powerful, that you actually have the ability to see them in action. And we’ve seen it from GPT-3, right? GPT-3, we really were afraid that the number one thing people were going to do with it was generate misinformation, try to tip elections. Instead, the number one thing was generating Viagra spam.

(Laughter)

CA: So Viagra spam is bad, but there are things that are much worse. Here’s a thought experiment for you. Suppose you’re sitting in a room, there’s a box on the table. You believe that in that box is something that, there’s a very strong chance it’s something absolutely glorious that’s going to give beautiful gifts to your family and to everyone. But there’s actually also a one percent thing in the small print there that says: “Pandora.” And there’s a chance that this actually could unleash unimaginable evils on the world. Do you open that box?

GB: Well, so, absolutely not. I think you don’t do it that way. And honestly, like, I’ll tell you a story that I haven’t actually told before, which is that shortly after we started OpenAI, I remember I was in Puerto Rico for an AI conference. I’m sitting in the hotel room just looking out over this wonderful water, all these people having a good time. And you think about it for a moment, if you could choose for basically that Pandora’s box to be five years away or 500 years away, which would you pick, right? On the one hand you’re like, well, maybe for you personally, it’s better to have it be five years away. But if it gets to be 500 years away and people get more time to get it right, which do you pick? And you know, I just really felt it in the moment. I was like, of course you do the 500 years. My brother was in the military at the time and like, he puts his life on the line in a much more real way than any of us typing things in computers and developing this technology at the time. And so, yeah, I’m really sold on the you’ve got to approach this right. But I don’t think that’s quite playing the field as it truly lies. Like, if you look at the whole history of computing, I really mean it when I say that this is an industry-wide or even just almost like a human-development- of-technology-wide shift. And the more that you sort of, don’t put together the pieces that are there, right, we’re still making faster computers, we’re still improving the algorithms, all of these things, they are happening. And if you don’t put them together, you get an overhang, which means that if someone does, or the moment that someone does manage to connect to the circuit, then you suddenly have this very powerful thing, no one’s had any time to adjust, who knows what kind of safety precautions you get. And so I think that the one thing I take away is like, even when you think about development of other sorts of technologies, think about nuclear weapons, people talk about being like a zero to one, sort of, change in what humans could do. But I actually think that if you look at capability, it’s been quite smooth over time. And so the history, I think, of every technology we’ve developed has been, you’ve got to do it incrementally and you’ve got to figure out how to manage it for each moment that you’re increasing it.

CA: So what I’m hearing is that you … the model you want us to have is that we have birthed this extraordinary child that may have superpowers that take humanity to a whole new place. It is our collective responsibility to provide the guardrails for this child to collectively teach it to be wise and not to tear us all down. Is that basically the model?

GB: I think it’s true. And I think it’s also important to say this may shift, right? We’ve got to take each step as we encounter it. And I think it’s incredibly important today that we all do get literate in this technology, figure out how to provide the feedback, decide what we want from it. And my hope is that that will continue to be the best path, but it’s so good we’re honestly having this debate because we wouldn’t otherwise if it weren’t out there.

CA: Greg Brockman, thank you so much for coming to TED and blowing our minds.

(Applause)
