

Exhausted Man Defeats AI Model In World Coding Championship
An anonymous reader quotes a report from Ars Technica: A Polish programmer running on fumes recently accomplished what may soon become impossible: beating an advanced AI model from OpenAI in a head-to-head coding competition. The 10-hour marathon left him "completely exhausted." On Wednesday, programmer Przemysław Dębiak (known as "Psyho"), a former OpenAI employee, narrowly defeated the custom AI model in the AtCoder World Tour Finals 2025 Heuristic contest in Tokyo. AtCoder, a Japanese platform that hosts competitive programming contests and maintains global rankings, held what may be the first contest where an AI model competed directly against top human programmers in a major onsite world championship. During the event, the maker of ChatGPT participated as a sponsor and entered an AI model in a special exhibition match titled "Humans vs AI." Despite the tireless nature of silicon, the company walked away with second place.
The competition required contestants to solve a single complex optimization problem over 600 minutes. The contest echoes the American folk tale of John Henry, the steel-driving man who raced against a steam-powered drilling machine in the 1870s. Like Henry's legendary battle against industrial automation, Debiak's victory represents a human expert pushing themselves to their physical limits to prove that human skill still matters in an age of advancing AI. Both stories feature exhausting endurance contests -- Henry drove steel spikes for hours until his heart gave out, while Debiak coded for 10 hours on minimal sleep. The parallel extends to the bittersweet nature of both victories: Henry won his race but died from the effort, symbolizing the inevitable march of automation, while Debiak's acknowledgment that humanity prevailed "for now" suggests he recognizes this may be a temporary triumph against increasingly capable machines. While Debiak won 500,000 yen and survived his ordeal better than the legendary steel driver, the AtCoder World Tour Finals pushes humans and AI models to their limits through complex optimization challenges that have no perfect solution -- only incrementally better ones. "Humanity has prevailed (for now!)," wrote Debiak on X, noting he had little sleep while competing in several competitions across three days. "I'm completely exhausted. ... I'm barely alive."
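For readers unfamiliar with the format: heuristic-contest entries are usually search loops that keep nudging an imperfect answer toward a better score until the clock runs out. Below is a minimal sketch of that shape in C, using a made-up toy objective and a random swap move; it is not the actual AtCoder problem or anyone's real solution.

    /* Hypothetical sketch: the general shape of a heuristic-contest solver
       (simulated annealing over random swap moves), not the 2025 problem. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 200  /* size of the toy instance */

    /* Toy objective to minimize: sum of squared differences between
       neighboring elements of a permutation of 0..N-1. */
    static double score(const int *p) {
        double s = 0.0;
        for (int i = 1; i < N; ++i) {
            double d = (double)(p[i] - p[i - 1]);
            s += d * d;
        }
        return s;
    }

    static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    int main(void) {
        int cur[N];
        srand(42);
        for (int i = 0; i < N; ++i) cur[i] = i;
        for (int i = N - 1; i > 0; --i)           /* shuffle the starting permutation */
            swap_ints(&cur[i], &cur[rand() % (i + 1)]);
        double cur_score = score(cur);

        const double time_limit = 1.0;            /* seconds of search */
        const double t_start = 1000.0, t_end = 0.1;
        clock_t begin = clock();
        for (;;) {
            double elapsed = (double)(clock() - begin) / CLOCKS_PER_SEC;
            if (elapsed > time_limit) break;
            /* Temperature decays geometrically from t_start to t_end. */
            double temp = t_start * pow(t_end / t_start, elapsed / time_limit);

            /* Local move: swap two random positions, then keep or undo it. */
            int i = rand() % N, j = rand() % N;
            swap_ints(&cur[i], &cur[j]);
            double delta = score(cur) - cur_score;
            if (delta <= 0.0 || (double)rand() / RAND_MAX < exp(-delta / temp))
                cur_score += delta;               /* accept, sometimes a worse move */
            else
                swap_ints(&cur[i], &cur[j]);      /* reject: undo the swap */
        }
        printf("final score: %.0f\n", cur_score);
        return 0;
    }

Real entries differ mainly in the scoring function, the local moves, and how the temperature schedule and time budget are tuned over the full 600 minutes.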
Sleep (Score:1)
> while Debiak coded for 10 hours on minimal sleep
Typical highly paid person who needs to sleep after 10hrs of work
Re: (Score:2)
[...] noting he had little sleep while competing in several competitions across three days.
It depends on the challenge (Score:3)
Re: (Score:3)
From the summary: "The competition required contestants to solve a single complex optimization problem over 600 minutes."
And there you have it! It's a sufficiently specific problem, based on a knowledge base well established in the kinds of sources that can be used as AI training material. That's why it did so well.
Once you start putting the AI into real world situations, like the ones described in the parent post, it performs much worse. I literally just spent time fiddling with permissi
Re: (Score:2)
The problem they offered was not a common problem, but it was definitely mappable to some real world problems.
It demonstrates
Re: (Score:2)
Exactly. In the 90s we still used to try to optimize C code by using register variables and complex function structure that happened to suit the way the processor worked.
Then we stopped doing that because we realized the new compilers could optimize it a heck of a lot better.
Now we typically don't even write programs that generate machine code any more but feed everything into a VM that generates code on the fly.
I don't remember having to do any serious optimization for years, and it was mostly stuff like b
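For illustration, here is the kind of hand-hinted '90s-style C the poster is describing, next to the plain version a modern optimizing compiler handles on its own. This is hypothetical code, not from any real project.

    /* Hypothetical example only: old "help the compiler" habits vs. the
       plain version. Modern compilers treat the register keyword as a
       no-op hint and usually do the same allocation themselves. */
    #include <stddef.h>

    /* 1990s style: register hints and manual pointer walking. */
    long sum_hinted(const int *a, size_t n) {
        register long total = 0;
        register const int *p = a;
        register const int *end = a + n;
        while (p != end)
            total += *p++;
        return total;
    }

    /* Plain version: with optimization enabled (e.g. -O2), a current
       compiler typically emits code at least as good as the hinted one. */
    long sum_plain(const int *a, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; ++i)
            total += a[i];
        return total;
    }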
Re: It depends on the challenge (Score:2)
Reminds me of old John Henry (Score:3)
John Henry told his captain,
'A man ain't nothin' but a man,
But before I let your steam drill beat me down,
I'd die with a hammer in my hand, Lord, Lord,
I'd die with a hammer in my hand.'
Re: (Score:2)
Re: (Score:2)
Competitive coding is not... (Score:2)
...the same as solving real and complex problems, especially if they are not precisely defined
Re: (Score:2)
Re: (Score:2)
Humans always beat computers.. until we don't (Score:2)
If computing has taught us anything, it's that they tend to improve in capability
Re: (Score:2)
It's not really going to help if "AI" just makes mistakes faster than before. This is a technology that doesn't look like it's improving significantly.
Re: (Score:2)
What is your basis for thinking LLMs don't improve? Genuinely, where does that come from? All the charts, benchmarks, and records I see go up every time there is a new one.
Re: Humans always beat computers.. until we don't (Score:2)
Shit like this probably: https://garymarcus.substack.co... [substack.com]
Re: Humans always beat computers.. until we don't (Score:2)
I just took the time to read most of his 2022 post, and it didn't hold up very well. He illustrates why GPT-3 is a dead end, but I just tried the exact example on GPT-4o and it passed with flying colors.
The premise is that scaling won't work and that the world is overly focused on the LLM approach.
He claims tagging images won't work well enough for radiology. Fine, but companies like Surona Medical seem to still be around years later.
He claims that image tagging won't work, but an iPhone does an amazing job
publicity stunt (Score:2)
John Henry died (Score:2)
Today's John Henry (Score:2)
600 minutes WTF (Score:2)
Who the hell comes up with numbers like 600 minutes? Why not 36 000 seconds then?
I can compute without AI that it's 10 hours.
Meaningless stunt is meaningless (Score:2)
Real-world software creation follows different rules.
How good was each solution? (Score:2)
The next step would be for someone (or some AI?) to evaluate the solutions on metrics:
1) Performance
2) Maintainability
Without sleep? (Score:2)
Debiak coded for 10 hours on minimal sleep
Is that guy a cat who needs to nap every 2 hours?
FWIW, I once participated in a coding contest at my university in the early 90's that lasted 72 hours (the first prize was a full scholarship, which I didn't get :)) I ran on coffee and speed for the full 72 hours, then collapsed on a couch and slept until someone woke me up to come get my third prize (a Solaris license).
10 hours non-stop coding sounds like a normal day at the office trying to wrap up a project.