Khan Academy Chief Says GPT-4 is Ready To Be a Tutor (axios.com) 58
For all the high-profile examples of ChatGPT getting facts and even basic math wrong, Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. From a report: "This technology is very powerful," Khan told Axios in a recent interview. "It's getting better."
Khan Academy was among the early users of GPT-4 that OpenAI touted when it released the updated engine. This week, two more school districts (Newark, N.J. and Hobart, Indiana) are joining the pilot of Khanmigo, the AI-assisted tutor. With the two new districts, a total of 425 teachers and students are testing Khanmigo.
The chatbot works much like a real-life or online tutor, looking at students' work and helping them when they get stuck. In a math problem, for example, Khanmigo can detect not just whether a student got an answer right or wrong, but also where they may have gone astray in their reasoning. ChatGPT and its brethren have been highly controversial -- especially in education, where some schools are banning the use of the technology. Concerns range from the engines' propensity to be confidently wrong (or "hallucinate") to worries about students using the systems to write their papers. Khan said he understands these fears, but also notes that many of those criticizing the technology are also using it themselves and even letting their kids make use of it. And, for all its flaws, he says today's AI offers the opportunity for more kids -- in both rich and developing countries -- to get personalized learning. "The time you need tutoring is right when you are doing the work, often when you are in class," Khan said.
Re:If there's anyone on earth I don't trust (Score:5, Informative)
And even if it worked why would I pay this guy instead of just going to the source? Why pay a middle man?
I thought Khan Academy was primarily online videos and interactive software, not 1 on 1 tutoring. If I am right about this, it makes sense that Khan Academy would get behind Chat GPT because tools like this fit into the company's business model. They don't want to be hiring a lot of human tutors, they want scalable software to do the heavy lifting.
Re: If there's anyone on earth I don't trust (Score:2)
You are partially right. The reason it makes sense for them to use GPT is more obvious than what you think, though. Microsoft has been the primary benefactor for Khan.
Re: (Score:3, Interesting)
But it is useful in offloading some of the teaching for students who can tolerate it. For the most part these are the students that don't need to be aggressively taught in the first place. And any interactive tool, no matter how unreliab
Re:If there's anyone on earth I don't trust (Score:5, Interesting)
Their free videos have helped my kids with certain concepts and explain them pretty well. It is a tool for learning, not 'cramming'.
Re: Yeah NO! (Score:2)
Re: Yeah NO! (Score:4, Informative)
Maybe it's time to abandon public schools; too many school shootings. Let ChatGPT tutor them, homeschool style.
Abandoning schools isn't going to protect kids from being shot and killed by any meaningful measure. 6125 children were killed by guns in 2022, but only 31 (0.5%) of them were killed in schools. And 2022 was the deadliest year for kids on school grounds in 50 years.
Our kids need less guns in our communities overall, not just when in school.
Re: (Score:1)
Uh. You know those numbers are cooked right?
They're including 18 and 19 year olds.
Which ALSO jacks up the numbers in poor (see GANGLAND) communities.
And yes, if schools continue the "Gun Free Zone" crap, turning these facilities into safe spaces for murderers, and don't allow for appropriate defensive measures (locking safety door systems, on-premises security, access control plans, etc), YES, abandoning Public Schools is the right thing to do.
We guard city facilities, we guard banks, hell, we guard PARKIN
Re: (Score:2)
Re: (Score:1)
Keep telling yourself this.
The sickness is in the society.
These guns aren't picking themselves up and forcing the owners to blow people away.
If it were, you'd be banning knives, cars, baseball bats, etc.
But hey, want to "fix" the rape problem? Cut your diick off already!
Re: (Score:2)
Re: (Score:1)
Too many people?
There are FOUR HUNDRED MILLION FIREARMS in the United States.
If the problem was guns or law abiding citizens, you'd have the bonafides in the numbers right there.
You don't.
And let's look at the demographics of a bunch of these perps.
Democrats.
Hyper-authoritarian social activists.
Hyper-authoritarian social extremists
Criminal backgrounds in many cases.
Alphabet People
Many of whom are confused about who and what they are.
In free-fire zones *cough*sorry "Gun Free Zones".
Not saying this is ALL of
Re: (Score:2)
Re: (Score:1)
Oh this bullshit again.
Go buy a roll of tin foil already.
It doesn't distract from the fact that your opinions about gun control are naive and unworkable.
Re: (Score:2)
Re: (Score:1)
It doesn't have to be "original".
It just needs to be TRUE.
So, one of the guns, LEGALLY OWNED by a LAW ABIDING CITIZEN will SOMEHOW be pointed at me or someone I love.
How, pray tell, is this supposed to happen?
I'm law abiding as well.
The only people who're going to point guns at me ARE CRIMINALS.
You know, the people who, regardless of the law, are going to have firearms ANYHOW.
And if I
Re: (Score:1)
When it's difficult to have a rational conversation with someone who's irrationally afraid of the idea of guns, I like to use this thought experiment.
If Thanos were to snap away all existing guns but not the existing manufacturers of guns and ammo, what laws would you establish at that moment? If the person is clueless to the point of not knowing just how many laws there already are concerning guns, I point them at the Vice documentary titled "Armed and reasonable" which is about Canadian gun laws.
Re: (Score:2)
"Our kids need less guns in our communities "
Unless the guns are in the hands of unstable people it's not much of a problem.
I wonder if positive vetting might help. The last two mass shooters bought the guns right before they went on a suicide shooting spree.
Would it help if they needed vetting by other gun owners? If the ones vetting the new gun owner had their own guns at risk, they should be thoughtful about what they are doing.
Or maybe not. Just an idea.
Re: Yeah NO! (Score:2)
Re: (Score:1)
Not if you school them correctly.
And if it turns them into "helpless" adults? So what?
At least they SURVIVE to BECOME adults.
And if someone's so motivated that they pull their kid out of Leftist Indoctrination Academy, chances are they're ALSO going to be schooled in being able to defend themselves.
Re: (Score:1)
Basically, Public School is pretty much a darwinian cleaning tool.
You're made, actively, dumber by the experience, and you have the idiots trying to "protect" students, by turning their facilities into free-fire zones for crazies.
But no, turning the schooling over to AI is simply going to relocate where you're being made stupid.
The internet started failing my Turing Tests (Score:3)
Not so sure.. (Score:5, Insightful)
On the one hand, it is a use case that is a prime target for Generative AI: going over well-known information. It doesn't require creative interpretation, just interesting 'search' of the knowledge and the ability to synthesize that knowledge into a digestible text that is narrowly focused on the prompt.
However, when it goes wrong, it goes wrong so casually and competently that you wouldn't know unless you already know better than it does, which a student generally wouldn't.
Further, if the human makes an incorrect leap of understanding and pursues something in a confidently incorrect way, the GPT will readily reinforce that by going along with whatever the human says. If you ask a question to which the correct answer is "there isn't a way to do it", it will tend to fabricate a credible-sounding answer that, if it were true, would answer the question in a satisfactory way. I have seen it make up libraries that don't exist, with fabricated function calls that sound like they would be right, if they existed. It doesn't understand information, but assembles it in interesting ways that can be very misleading.
The current generation is pretty impressive, but its tendency to be just as confident about totally fake answers as about real ones makes it a challenge if you don't know enough to second-guess it.
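One cheap habit that helps with the made-up-library problem: before trusting a suggested import or function, check that it actually exists on your system. A rough sketch in plain Python (the suggestion_exists helper and the example names are just illustrative, not part of any tutoring product):

import importlib
import importlib.util
from typing import Optional

def suggestion_exists(module_name: str, attr_name: Optional[str] = None) -> bool:
    """Return True only if the named module (and optional attribute) really exists locally."""
    # find_spec returns None when the module cannot be found, without importing it.
    if importlib.util.find_spec(module_name) is None:
        return False
    if attr_name is None:
        return True
    # Import the module and confirm the suggested function/class is actually there.
    module = importlib.import_module(module_name)
    return hasattr(module, attr_name)

# Example: a chatbot claims statistics.quantile exists; the real name is quantiles.
print(suggestion_exists("statistics", "quantile"))   # False -- likely a hallucination
print(suggestion_exists("statistics", "quantiles"))  # True

It won't catch wrong arguments or wrong semantics, but it filters out the "sounds plausible, doesn't exist" class of answers before they waste your time.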
Re: (Score:2)
But the progress is also happening fast. For example, on the benchmark Internal factual eval by category [openai.com], GPT-3 scored about 60% vs. 80% for GPT-4, which means the error rate was roughly cut in half (from about 40% to about 20%). That is real progress.
Re: (Score:2)
One challenge with "benchmarking" is that the benchmark becomes a development target. So it's hard to say how meaningful it is outside the context of the things specifically benchmarked. One would expect it to do very well on the things the team specifically saw get wrong and intervened to correct.
But it's less about the error rate, and more about how the errors look. When a human tutor gets in over their heads, they are likely to recognize it and acknowledge "I don't know". The AI tends to fabricate
Re: (Score:3)
And how will we know whether the students taught by AI are learning better or worse than those w
Re: (Score:2)
It's important for students to learn how to recognize incorrect info and verify/corroborate it; even teachers teach wrong info at times.
Generative AI are tools students will be using in their jobs (we already have jobs requiring its use just like we used to require web-searching abilities).
Knowing your AI tutor and teachers are fallible is invaluable. Learning how to recognize such and work with it and rank knowledgeable sources is invaluable. We can't limit students' abilities in this regard without raising
Re: (Score:3)
It may be useful to refine a body of relevant text into a specific incarnation subject to proper skepticism, but the *feeling* I get is that people are *way* overconfident about it. Hence we need to contextualize the role. Calling it a 'tutor' clearly suggests a substitute for a human that would do just as well.
While it is true human instructors can be 'confidently incorrect' in the same manner AI can be, instructors are more likely to recognize the limits of their knowledge and make it more clear whe
Re: (Score:2)
I agree, but I am also concerned that the use of such tools will mean less research and innovation in the long run.
If we start using tools like ChatGPT to tutor people, then there will be fewer teachers teaching and creating new ways to teach; why would they, when it is cheaper and easier to have a 24/7 tutor available to you?
I see this type of thing happening now with simple skills that we are already losing, like my 70-year-old aunt being able to deduct 10% from a price in her head while the shop assistant couldn't
Re: (Score:2)
This might be the kind of application people want, but it's certainly not an application to which this kind of program is well-suited. Not only will programs like this confidently assert nonsense, they will vehemently defend that nonsense with more nonsense!
It's not a problem with the "current generation". This kind of failure is exactly what you should expect given how these kinds of programs work. This isn't going to go away with incremental improvements. Something fundamental needs to change.
Current generation is pretty impressive
That th
personalized learning (Score:3)
There is a downside to personalized learning: the lack of socialization that kids get by learning together. Does anybody remember Isaac Asimov's The Fun They Had [wikipedia.org]?
Re: (Score:3)
Well, TBF, nowhere in TFA does it say they intend to replace teachers and classrooms. Instead imagine a world where, besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.
Re: (Score:1)
Re: (Score:3)
Well, TBF, nowhere in TFA does it say they intend to replace teachers and classrooms. Instead imagine a world where, besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.
In a world of tight budgets, "besides" quickly becomes "instead of".
Re: (Score:2)
Instead imagine a world where besides human teachers and classmates, every child has exclusive access to their own electronic tutor which helps them learn in their own particular style in their own time.
This frankly would be an amazing thing. Especially for the poorer students, but really everyone could benefit from being able to ask one-off questions when they get stuck or don't understand something. My parents could've afforded a tutor for me, but it's not like I had someone on call 24/7, so often I'd just make my best guess and move on. Sometimes years later I'd realize I have gaps in my knowledge or understanding :)
Re: (Score:2)
Two Sigma Problem (Score:3)
Re: (Score:2)
If you don't mind the student learning "facts" like 1000 is greater than 1062 [twitter.com] or whatever this insanity is [twitter.com]
A tutor does quite a bit more than just answer random questions. Students can do that on their own, thanks to the many free resources available online. A tutor is going to be able to identify any issues with the student's learning, even those the student might not realize they have, and how best to remedy, for them specifically, any deficiencies.
Re: (Score:2)
Re: (Score:2)
I'm saying that it can't even "get us partially the way there". More than that, I'm suggesting that such a thing would be harmful. Maybe read my statement before typing.
Re: (Score:2)
Re: (Score:2)
What a stupid thing to say. No surprise, given the difficulty you have reading. You must be someone's alt.
Re: (Score:2)
Re: (Score:2)
You're the one in error here, not me. If you don't want people correcting you, try being less stupid.
Now THIS is what ChatGPT is actually useful for. (Score:3, Insightful)
This is A Young Lady's Illustrated Primer from Diamond Age by Neal Stephenson. A tutor which can be there 24/7, one-on-one with the child, and help accelerate the learning process. If it can pull it off - which it likely can, given the constrained and simple subject matter it has to cover - it would be very useful. It's simple enough to restrict it to only learning materials and not have it go off on other subjects; see the sketch below.
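To be clear, I have no idea how Khanmigo actually does this internally; the kind of restriction I mean looks roughly like this sketch (Python; call_llm is a stub standing in for whatever chat-model API the tutor is built on, and the prompt and word list are made up for the example):

# Rough sketch of a topic guardrail for a tutoring bot. Not any real product's design.

SYSTEM_PROMPT = (
    "You are a math tutor for middle-school students. Only discuss the current lesson. "
    "If the student asks about anything else, steer them back to the exercise. "
    "Never just give the final answer; ask guiding questions instead."
)

ALLOWED_WORDS = ("fraction", "decimal", "ratio", "percent", "equation")

def call_llm(system_prompt: str, user_message: str) -> str:
    # Stub: a real deployment would send both strings to its model and return the reply.
    return "Let's look at your first step. What did you do with the denominator?"

def on_topic(message: str) -> bool:
    # Crude pre-filter: only pass messages that mention the lesson's vocabulary.
    text = message.lower()
    return any(word in text for word in ALLOWED_WORDS)

def tutor_reply(student_message: str) -> str:
    if not on_topic(student_message):
        return "Let's stay on today's lesson -- where are you stuck in the exercise?"
    return call_llm(SYSTEM_PROMPT, student_message)

print(tutor_reply("I don't get how to add these fractions"))
print(tutor_reply("Tell me about video games"))

A keyword filter plus a firm system prompt is obviously not bulletproof, but it shows why scoping the bot to "only learning materials" is the easy part compared to keeping its answers correct.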
Re: (Score:2)
This is A Young Lady's Illustrated Primer from Diamond Age by Neal Stephenson.
Yeah, but it's also only the cheap version the mouse army got.
Re: (Score:2)
Not until they get the hallucinations under control. If they ever manage that at all.
Re: (Score:2)
Re: (Score:2)
It's a bullshit engine... (Score:3)
The problem is, if you asked it to show you how a^2+b^2=c^2 (in the context of the Pythagorean theorem) it would do so and properly show you all sorts of cited data, etc... and if you asked it to show you that a^2+b^-1=c^3 it would show you all sorts of remarkably convincing "reasoning" and "cite" all sorts of things that don't freaking exist to "prove" this. Multiple area experts have done this sort of test - it has absolutely no bullshit filter. And it's unfortunately very good at bullshitting/hallucinating these sorts of fake answers. They themselves have said, "if I wasn't an expert I could probably have been convinced by it."
Re: (Score:2)
The problem is, if you asked it to show you how a^2+b^2=c^2 (in the context of the Pythagorean theorem) it would do so and properly show you all sorts of cited data, etc... and if you asked it to show you that a^2+b^-1=c^3 it would show you all sorts of remarkably convincing "reasoning" and "cite" all sorts of things that don't freaking exist to "prove" this. Multiple area experts have done this sort of test - it has absolutely no bullshit filter. And it's unfortunately very good at bullshitting/hallucinating these sorts of fake answers. They themselves have said, "if I wasn't an expert I could probably have been convinced by it."
Input: Prove a^2+b^-1=c^3 when a, b, c are sides of a right triangle
I'm sorry, but the statement "a^2+b^-1=c^3" cannot be proven to be true in general for any right triangle with sides a, b, and c. This is because the relationship between the sides of a right triangle is governed by the Pythagorean theorem, which states that a^2 + b^2 = c^2, not a^2 + b^-1 = c^3.
It is possible that a^2 + b^-1 = c^3 could be true for a specific right triangle, but without additional information or constraints, we cannot
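You don't even need the model's reply to see the claim is bogus; plugging the most familiar right triangle into both formulas does it (plain Python, just restating the arithmetic):

# Check the fake identity a^2 + b^-1 = c^3 against the 3-4-5 right triangle.
a, b, c = 3, 4, 5

print(a**2 + b**2 == c**2)   # True: 9 + 16 = 25, the real Pythagorean theorem
print(a**2 + b**-1 == c**3)  # False: 9 + 0.25 = 9.25, nowhere near 125

So at least in this exchange it refused the bait instead of "proving" the nonsense.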
Khan Academy apparently doesn't think (Score:2)
its teachers are any better than a computer algorithm.
My opinion (Score:1)