AI Education Programming

Khan Academy Chief Says GPT-4 is Ready To Be a Tutor (axios.com) 58

For all the high-profile examples of ChatGPT getting facts and even basic math wrong, Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. From a report: "This technology is very powerful," Khan told Axios in a recent interview. "It's getting better." Khan Academy was among the early users of GPT-4 that OpenAI touted when it released the updated engine. This week, two more school districts (Newark, N.J. and Hobart, Indiana) are joining the pilot of Khanmigo, the AI-assisted tutor. With the two new districts, a total of 425 teachers and students are testing Khanmigo.

The chatbot works much like a real-life or online tutor, looking at students' work and helping them when they get stuck. In a math problem, for example, Khanmigo can detect not just whether a student got an answer right or wrong, but also where they may have gone astray in their reasoning. ChatGPT and its brethren have been highly controversial -- especially in education, where some schools are banning the use of the technology. Concerns range from the engines' propensity to be confidently wrong (or "hallucinate") to worries about students using the systems to write their papers. Khan said he understands these fears, but also notes that many of those criticizing the technology are also using it themselves and even letting their kids make use of it. And, for all its flaws, he says today's AI offers the opportunity for more kids -- in both rich and developing countries -- to get personalized learning. "The time you need tutoring is right when you are doing the work, often when you are in class," Khan said.

  • by Eunomion ( 8640039 ) on Monday April 10, 2023 @11:10AM (#63438486)
    sometime around 20 years ago. So try to imagine what kind of shark-jumping Fonzie bullshit this looks like to me.
  • Not so sure.. (Score:5, Insightful)

    by Junta ( 36770 ) on Monday April 10, 2023 @11:15AM (#63438494)

    On the one hand, it is a use case that is a prime target for Generative AI: going over well-known information. It doesn't require creative interpretation, just interesting 'search' of the knowledge and the ability to synthesize that knowledge into a digestible text that is narrowly focused on the prompt.

    However, when it goes wrong, it goes wrong so casually and competently that you wouldn't know unless, well, you already know better than it, which a student generally wouldn't.

    Further, if the human makes an incorrect leap of understanding and pursues something in a confidently incorrect way, the GPT will readily reinforce that by going along with whatever the human says. If you ask a question to which the correct answer is "there isn't a way to do it", it will tend to fabricate a credible-sounding answer that, if it were true, would answer the question in a satisfactory way. I have seen it make up libraries that don't exist, with fabricated function calls that sound like they would be right, if they existed. It doesn't understand information, but assembles it in interesting ways that can be very misleading.

    The current generation is pretty impressive, but its tendency to be equally confident in real answers as in totally fake answers makes it a challenge if you don't know enough to second-guess it.

    • I agree with both the strengths and weaknesses you highlight. But remember the baseline is not perfection, it's... whatever we're doing now. 1 on 1 tutoring with an expert is the gold standard, but almost nobody gets that.

      But also, the progress is happening fast. For example, on a benchmark, Internal factual eval by category [openai.com], GPT-3 scored about 60% vs. 80% for GPT-4, which means they slashed the error rate in half. That is real progress.
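      The "slashed in half" framing follows from the accuracy numbers themselves; a quick sanity check on the arithmetic (the 60%/80% figures are from the linked OpenAI eval, the rest is just the complement):

      ```python
      # Accuracy on the factual eval, per the figures quoted above.
      gpt3_accuracy = 0.60
      gpt4_accuracy = 0.80

      # Error rate is the complement of accuracy.
      gpt3_error = 1 - gpt3_accuracy  # 0.40
      gpt4_error = 1 - gpt4_accuracy  # 0.20

      # Going from a 40% error rate to a 20% error rate halves the errors,
      # even though the accuracy "only" moved 20 points.
      print(round(gpt4_error / gpt3_error, 2))  # 0.5
      ```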

      • by Junta ( 36770 )

        One challenge with "benchmarking" is that the benchmark becomes a development target. So it's hard to say how meaningful it is outside the context of the things specifically benchmarked. One would expect it to do very well on the things the team specifically saw it get wrong and intervened to correct.

        But it's less about the error rate, and more about how the errors look. When a human tutor gets in over their heads, they are likely to recognize it and acknowledge "I don't know". The AI tends to fabricate

        • Yeah, in the past, when a large language model was almost purely a hands-off optimization of billions of weights using a fairly simple algorithm and objective function, I would have said that "teaching for the test" (optimizing the model for a specific benchmark) was easier to avoid. But as OpenAI has become less open, and GPT4 incorporates more algorithms and data sources, it seems to be more of a possibility.

          And how will we know whether the students taught by AI are learning better or worse than those w

    • It's important for students to learn how to recognize incorrect info and verify/corroborate; even teachers teach wrong info at times.
      Generative AI is a tool students will be using in their jobs (we already have jobs requiring its use, just like we used to require web-searching abilities).

      Knowing your AI tutor and teachers are fallible is invaluable. Learning how to recognize that, work with it, and rank knowledgeable sources is invaluable. We can't limit students' abilities in this regard without raising

      • by Junta ( 36770 )

        It may be useful to refine a body of relevant text into a specific incarnation subject to proper skepticism, but the *feeling* I get is that people are *way* overconfident about it. Hence we need to contextualize the role. Calling it a 'tutor' clearly suggests a substitute for a human that would do just as well.

        While it is true human instructors can be 'confidently incorrect' in the same manner AI can be, it's more often for the instructors to recognize the limits of their knowledge and make it more clear whe

    • I agree, but I am also concerned that the use of such tools will mean less research and innovation in the long run.

      If we start using tools like ChatGPT to tutor people, then there will be fewer teachers teaching and creating new ways to teach; why would they, when it is cheaper and easier to have your 24/7 tutor available to you?

      I see this type of thing happening now with simple skills that we are already losing, like my 70-year-old aunt being able to subtract 10% from a price in her head while the shop assistant couldn't

    • by narcc ( 412956 )

      This might be the kind of application people want, but it's certainly not an application to which this kind of program is well-suited. Not only will programs like this confidently assert nonsense, they will vehemently defend that nonsense with more nonsense!

      It's not a problem with the "current generation". This kind of failure is exactly what you should expect given how these kinds of programs work. This isn't going to go away with incremental improvements. Something fundamental needs to change.

      Current generation is pretty impressive

      That th

  • There is a downside to personalized learning: the lack of socialization that kids get by learning together. Does anybody remember Isaac Asimov's The Fun They Had [wikipedia.org]?

    • Well, TBF, nowhere in TFA does it say they intend to replace teachers and classrooms. Instead imagine a world where, besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.

      • by laktech ( 998064 )
        You're probably right, but it's still fun to extrapolate. Suppose this is true only if you're rich. Everyone else gets the equivalent of a bot in a 1000:1 student-teacher ratio environment.
      • Well, TBF, nowhere in TFA does it say they intend to replace teachers and classrooms. Instead imagine a world where, besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.

        In a world of tight budgets, "besides" quickly becomes "instead of".

      • Instead imagine a world where besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.

        This frankly would be an amazing thing. Especially for the poorer students, but really everyone could benefit from being able to ask one-off questions when you get stuck or don't understand something. My parents could've afforded a tutor for me, but it's not like I had someone on call 24/7, so often I'd just make my best guess and move on. Sometimes years later I'd realize I have gaps in my knowledge or understanding :)

      • Isn't the Khan Academy already focused on individual learning without teachers, classrooms and fellow students? Looks like this could be a good match.
  • by atomicalgebra ( 4566883 ) on Monday April 10, 2023 @11:55AM (#63438600)
    See the 2 Sigma Problem [wikipedia.org]: "The average tutored student was above 98% of the students in the control class," which makes an expert tutor the strongest form of learning. Can a computer achieve the same level as a tutor? No, it can't. But it can get us partially the way there.
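    The "2 sigma" name comes from the tutored students averaging about two standard deviations above the control-group mean; under a normal distribution, that lands right around the 98th percentile the quote mentions. A minimal check using only the Python standard library:

    ```python
    from statistics import NormalDist

    # Bloom's finding: tutored students averaged ~2 standard deviations
    # above the control-group mean. Under a standard normal distribution,
    # a score 2 sigma above the mean beats about 97.7% of the population,
    # which is where the "above 98%" figure comes from (after rounding).
    percentile = NormalDist().cdf(2.0)
    print(f"{percentile:.3f}")  # 0.977
    ```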
    • by narcc ( 412956 )

      If you don't mind the student learning "facts" like 1000 is greater than 1062 [twitter.com] or whatever this insanity is [twitter.com]

      A tutor does quite a bit more than just answer random questions. Students can do that on their own, thanks to the many free resources available online. A tutor is going to be able to identify any issues with the student's learning, even those the student might not understand they have, and how best to remedy, for them specifically, any deficiencies.

      • Yeah. Which is why I said "Can a computer achieve the same level as a tutor? No, it can't. But it can get us partially the way there." Maybe read my statement before typing.
  • This is the Young Lady's Illustrated Primer from The Diamond Age by Neal Stephenson: a tutor that can be there 24/7, one-on-one with the child, and help accelerate development. If it can pull it off -- which it likely can, given the constrained and simple subject matter it has to cover -- it would be very useful. It's simple enough to restrict it to only learning materials and not have it go off on other subjects.

    • This is A Young Lady's Illustrated Primer from Diamond Age by Neal Stephenson.

      Yeah, but it's also only the cheap version the mouse army got.

    • by gweihir ( 88907 )

      Not until they get the hallucinations under control. If they ever manage that at all.

  • by zeeky boogy doog ( 8381659 ) on Monday April 10, 2023 @04:39PM (#63439476)
    The problem is, if you asked it to show you how a^2+b^2=c^2 (in the context of the Pythagorean theorem) it would do so and properly show you all sorts of cited data, etc... and if you asked it to show you that a^2+b^-1=c^3 it would show you all sorts of remarkably convincing "reasoning" and "cite" all sorts of things that don't freaking exist to "prove" this.

    Multiple area experts have done this sort of test -- it has absolutely no bullshit filter. And it's unfortunately very good at bullshitting/hallucinating these sorts of fake answers. They themselves have said, "if I wasn't an expert, I could probably have been convinced by it."
    • The problem is, if you asked it to show you how a^2+b^2=c^2 (in the context of the Pythagorean theorem) it would do so and properly show you all sorts of cited data, etc... and if you asked it to show you that a^2+b^-1=c^3 it would show you all sorts of remarkably convincing "reasoning" and "cite" all sorts of things that don't freaking exist to "prove" this. Multiple area experts have done this sort of test -- it has absolutely no bullshit filter. And it's unfortunately very good at bullshitting/hallucinating these sorts of fake answers. They themselves have said, "if I wasn't an expert, I could probably have been convinced by it."

      Input: Prove a^2+b^-1=c^3 when a, b, c are sides of a right triangle
      I'm sorry, but the statement "a^2+b^-1=c^3" cannot be proven to be true in general for any right triangle with sides a, b, and c. This is because the relationship between the sides of a right triangle is governed by the Pythagorean theorem, which states that a^2 + b^2 = c^2, not a^2 + b^-1 = c^3.

      It is possible that a^2 + b^-1 = c^3 could be true for a specific right triangle, but without additional information or constraints, we cannot
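      FWIW, the two relations in the parent post are easy to check numerically on a concrete right triangle; a throwaway sketch (the 3-4-5 triangle choice is mine):

      ```python
      # Sides of a classic 3-4-5 right triangle (c is the hypotenuse).
      a, b, c = 3.0, 4.0, 5.0

      # The Pythagorean theorem holds: 9 + 16 == 25
      print(a**2 + b**2 == c**2)     # True

      # The made-up relation does not: 9 + 0.25 = 9.25, nowhere near 125
      print(a**2 + b**(-1) == c**3)  # False
      ```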

  • its teachers are any better than a computer algorithm.

  • You can understand teachers' fear of the trend toward artificial intelligence, but you have to understand that it is inevitable that students will come to the chatbot. I touched on this topic for myself when I wrote my essay about why I want to enroll in the faculty of artificial intelligence. I took my work with https://eduzaurus.com/free-essay-samples/why-i-want-to-study-artificial-intelligence-and-cyber-security/ [eduzaurus.com] as a basis and used it to reveal my own reasons. As for me, all this sphere only helps wi
