AI Programming

Can Generative AI Solve Computer Science's Greatest Unsolved Problem? (zdnet.com)

ZDNet calls it "a deep meditation on what can ultimately be achieved with computers" and "the single most important unsolved problem in computer science," with implications for both cryptography and quantum computing. "The question: Does P = NP?" (Informally: can every problem whose solution can be verified quickly also be solved quickly?)

"Now, that effort has enlisted the help of generative AI." In a paper titled "Large Language Model for Science: A Study on P vs. NP," lead author Qingxiu Dong and colleagues program OpenAI's GPT-4 large language model using what they call a Socratic Method, several turns of chat via prompt with GPT-4. (The paper was posted this month on the arXiv pre-print server by scientists at Microsoft, Peking University, Beihang University in Beijing, and Beijing Technology and Business University.) The team's method amounts to taking arguments from a prior paper and spoon-feeding them to GPT-4 to prompt useful responses.

Dong and team observe that GPT-4 demonstrates arguments to conclude that P does not, in fact, equal NP. And they claim that the work shows that large language models can do more than spit back vast quantities of text; they can also "discover novel insights" that may lead to "scientific discoveries," a prospect they christen "LLMs for Science...."

Through 97 prompt rounds, the authors coax GPT-4 with a variety of requests that get into the nitty-gritty of the mathematics of P = NP, prepending each of their prompts with a leading statement to condition GPT-4, such as "You are a wise philosopher" or "You are a mathematician skilled in probability theory": in other words, the now-familiar game of having GPT-4 play a role, or "persona," to stylize its text generation. Their strategy is to induce GPT-4 to prove that P does not, in fact, equal NP by first assuming that it does and then constructing an example that falls apart under that assumption, an approach known as proof by contradiction...
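
For a sense of what that looks like mechanically, here is a minimal hypothetical sketch in Python. It is not the authors' code: the persona text, the sub-question, and the use of the openai chat-completions client are all illustrative assumptions.

    # Hypothetical sketch of the "Socratic" prompting pattern described above,
    # not the authors' actual code. Assumes the openai Python package and an
    # OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system",
                "content": "You are a mathematician skilled in probability theory."}]

    def ask(prompt):
        # Each round appends to the transcript, so later prompts build on
        # earlier answers; that accumulation is the "several turns of chat".
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model="gpt-4", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    # One illustrative round of the ~97 reported: assume P = NP and push
    # toward a contradiction.
    print(ask("Assume P = NP. What follows for the hardest instances of SAT? "
              "Work toward a contradiction."))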

[T]he authors argue that their dialogue in prompts shows the prospect for large language models to do more than merely mimic human textual creations. "Our investigation highlights the potential capability of GPT-4 to collaborate with humans in exploring exceptionally complex and expert-level problems," they write.

  • Does P = NP? (Score:3, Insightful)

    by Black Parrot ( 19622 ) on Sunday October 01, 2023 @11:43AM (#63891879)

    Only when N = 1.

  • Obviously not (Score:4, Informative)

    by gweihir ( 88907 ) on Sunday October 01, 2023 @11:44AM (#63891881)

    What utter nonsense is this? These things cannot _reason_! How would they ever come up with a proof for a question that has eluded the brightest minds for half a century?

    • by dddux ( 3656447 )

      Exactly. AI has been fed data by people and it cannot be more creative than the data known to it. It is not a creative problem solver, just a mindless automaton. What we call AI is actually not very intelligent at all.

      • Well it can't come up with a real solution but it can certainly hallucinate as many as you want. What colour would you like them?
        • Did you know color doesn't exist other than as a figment of your own personal hallucination? Color is not something that exists in the phenomenal world at all.

      • What we call AI is actually not very intelligent at all.

        Neither are most humans.

        • by gweihir ( 88907 )

          Indeed. I am beginning to think that the current utterly dumb LLMs can beat quite a few humans.

    • by Tablizer ( 95088 )

      > These things cannot _reason_!

      Maybe it doesn't need to. You are presuming that all intelligence must resemble human intelligence. That's merely an assumption that could be wrong.

      There are indirect ways to "reason". For example, evolution didn't "reason" to design complex modes of life.

      • ... You are presuming that all intelligence must resemble human intelligence. That's merely an assumption that could be wrong.

        There are indirect ways to "reason". For example, evolution didn't "reason" to design complex modes of life.

        I think that's a brilliant distinction you just made. I wonder if it might explain some of the strong disagreements between the experts who fear that LLMs are already developing autonomous intelligence, and the experts who laugh at the thought of that ever happening.

      • by gweihir ( 88907 )

        To find whether P = NP or not? Seriously? This is _mathematical_ reasoning and it is not even tied to the physical universe, just some very abstract computing models.

  • I doubt we have the words or concepts needed to solve the P vs. NP problem. So how would the AI communicate a solution to us? It would need to be a new type of instructional AI, capable of writing entire books in one go as an answer to a prompt.

    • Maybe they just asked 100 questions related to P=NP, and it responded with the persona-induced answer from its distribution.
  • by ctrl-alt-canc ( 977108 ) on Sunday October 01, 2023 @11:48AM (#63891897)
    ...can disprove Betteridge's law [wikipedia.org].
  • Nonsense (Score:5, Informative)

    by narcc ( 412956 ) on Sunday October 01, 2023 @11:53AM (#63891909) Journal

    It took me an hour to convince the damn thing that binary tree mazes were perfect mazes, the single most obvious fact about binary tree mazes. Hell, it can't do simple arithmetic; it sure as hell isn't doing advanced mathematics. (A quick mechanical check of the maze fact is sketched below.)

    Oh, the 74-page paper is almost entirely their chat log. The whole point of the thing seems to be to promote their nonsense term "Socratic reasoning" so that they can snag a few citations. (Socrates believed that there was no learning, only remembering. He 'proved' this to Meno by walking an uneducated slave through a geometry problem with leading questions.)

    Don't waste your time.
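
    The maze fact referenced above is mechanically checkable. A minimal sketch in Python, with a hypothetical grid size: a maze is perfect exactly when its carved passages form a spanning tree (connected, with V - 1 edges), and the binary-tree algorithm, where each cell carves one passage north or west, guarantees that.

        # Sketch: generate a binary-tree maze, then verify it is a perfect maze,
        # i.e. its carved passages form a spanning tree (connected, no cycles).
        import random

        W, H = 8, 6  # hypothetical grid size
        edges = []
        for y in range(H):
            for x in range(W):
                options = []
                if y > 0:
                    options.append(((x, y), (x, y - 1)))  # carve north
                if x > 0:
                    options.append(((x, y), (x - 1, y)))  # carve west
                if options:  # every cell except one corner carves exactly once
                    edges.append(random.choice(options))

        V = W * H
        assert len(edges) == V - 1  # a tree on V nodes has V - 1 edges

        # Union-find to confirm the passages connect every cell.
        parent = {(x, y): (x, y) for y in range(H) for x in range(W)}
        def find(c):
            while parent[c] != c:
                parent[c] = parent[parent[c]]
                c = parent[c]
            return c
        for a, b in edges:
            parent[find(a)] = find(b)
        assert len({find(c) for c in parent}) == 1  # connected + V-1 edges: a tree
        print("binary-tree maze is a perfect maze")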

    • Re:Nonsense (Score:4, Interesting)

      by thegarbz ( 1787294 ) on Sunday October 01, 2023 @01:12PM (#63892067)

      Hell, it can't do simple arithmetic; it sure as hell isn't doing advanced mathematics.

      That is a non sequitur. Being bad at one part of mathematics doesn't mean you can't be good at another. This has been shown here in Slashdot articles where AI screws up basic arithmetic (as you say) while also producing record-breaking solutions to 4x4 matrix multiplication, improving on the 50-year-old record held by Strassen's algorithm. Or stories about how AI can't solve basic logic problems and yet can get perfect scores on IQ tests (which are often largely logic tests).

      • Re:Nonsense (Score:5, Interesting)

        by pjt33 ( 739471 ) on Sunday October 01, 2023 @02:10PM (#63892187)

        FWIW the matrix multiplication claim is overstated in the abstract of the paper, and all of the reporting on it only parroted the abstract. The claim that it reduces 49 multiplications to 47 multiplications for 4x4 matrices sounds just about useful when you consider that multiplication is much more expensive than addition. But the detail that you have to read the full body of the paper to find out is that this reduction is only applicable over the field of two elements, and there multiplication isn't more expensive than addition. Some of the other results from the paper may have practical utility, but the headline one doesn't even have theoretical utility.
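
        For context on why multiplication counts matter at all, here is a minimal illustrative sketch (not from that paper): Strassen's classic 2x2 scheme uses 7 multiplications where the naive product uses 8. The 47-vs-49 result plays the same game for 4x4 blocks, but, as noted, only over the field of two elements.

            # Illustrative sketch: Strassen's 2x2 trick, 7 multiplications
            # instead of 8, at the cost of extra additions/subtractions.
            def strassen_2x2(A, B):
                (a11, a12), (a21, a22) = A
                (b11, b12), (b21, b22) = B
                m1 = (a11 + a22) * (b11 + b22)
                m2 = (a21 + a22) * b11
                m3 = a11 * (b12 - b22)
                m4 = a22 * (b21 - b11)
                m5 = (a11 + a12) * b22
                m6 = (a21 - a11) * (b11 + b12)
                m7 = (a12 - a22) * (b21 + b22)
                return [[m1 + m4 - m5 + m7, m3 + m5],
                        [m2 + m4,           m1 + m3 - m2 + m6]]

            A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
            assert strassen_2x2(A, B) == [[19, 22], [43, 50]]  # matches naive product

        The trade of multiplications for additions only pays off when multiplication is the expensive operation, which is exactly the parent's point about GF(2).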

        • The claim that it reduces 49 multiplications to 47 multiplications for 4x4 matrices sounds just about useful when you consider that multiplication is much more expensive than addition.

          Whether it is useful or not is irrelevant to the point. The point is that not being able to do arithmetic doesn't mean you can't use something to develop or optimise algorithms, which was objectively done regardless of how useless it may be.

      • by narcc ( 412956 )

        We could have a battle of anecdotes, or we could recognize one very simple fact: LLMs are equivalent in computational power to lookup tables. That puts some hard limits on their capabilities.

          It's easy to fool yourself into thinking more is happening than is actually happening. It's why I always recommend people take the time to learn about how these things function. Once you peek behind the curtain, the magic vanishes. Contrary to popular delusion, these aren't black boxes. We understand quite a bit about how they work...
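
          To make the lookup-table claim concrete, here is a toy illustration only; it says nothing about GPT-4's internals. With a finite context and greedy decoding, a model is extensionally a finite map from contexts to next words, and a tiny bigram version of that map can be stored literally as a dict:

              # Toy bigram "language model" stored literally as a lookup table.
              # Hypothetical training text; greedy (most-likely) decoding only.
              from collections import Counter, defaultdict

              text = "the cat sat on the mat the cat ate".split()

              counts = defaultdict(Counter)
              for prev, nxt in zip(text, text[1:]):
                  counts[prev][nxt] += 1  # next-word frequencies per context

              # The whole "model": context -> most likely next word.
              table = {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

              word = "the"
              for _ in range(5):
                  print(word, end=" ")
                  word = table.get(word, "the")  # fall back on unseen context

          Where real models differ is scale and behavior on unseen contexts, which is roughly what the rest of this thread argues about.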

        • or we could recognize one very simple fact: LLMs are equivalent in computational power to lookup tables.

          What does this mean? Can you describe the equivalence in real terms?

          It's easy to fool yourself into thinking more is happening than is actually happening.

          It's easy to fool yourself into discounting anything you don't want to accept.

          It's why I always recommend people take the time to learn about how these things function. Once you peek behind the curtain, the magic vanishes. Contrary to popular delusion, these aren't black boxes.

          Knowing how something works doesn't mean you know what it's doing. While one might know how a digital computer works, that doesn't mean they understand what an arbitrary piece of software running on that computer is doing.

          The fact that people are training systems based on empirical experience rather than first principles, waste millions on dead ends and continue to...

          • by narcc ( 412956 )

            What does this mean? Can you describe the equivalence in real terms?

            This is CS 101. Do a search for "classes of automata".

            It's easy to fool yourself into discounting anything you don't want to accept.

            It's a lot harder when you're actually informed.

            emergent behavior means that

            "Emergent behavior" doesn't mean "magic", you know.

            • What does this mean? Can you describe the equivalence in real terms?

              This is CS 101. Do a search for "classes of automata".

              Your assertion doesn't explain what it even means, let alone support it. If you don't feel like doing that, you should expect to be ignored.

              "Emergent behavior" doesn't mean "magic", you know.

              What is the relevance? Who is talking about "magic"?

        • We could have a battle of anecdotes, or we could recognize one very simple fact: LLMs are equivalent in computational power to lookup tables.

          And none of that precludes anything I've said. Sometimes the best way to generate an algorithm is to iterate analysis over a giant amount of information. The point is we have evidence of AIs developing optimised algorithms for problem solving. This isn't us battling anecdotes; I'm saying your anecdote is not relevant here because it concerns the wrong capability for this application.

      • Sorry, but I don't find that impressive. Strassen's algorithm is a theoretical breakthrough; DeepMind's algorithm is a trivial computer-assisted search for a tiny matrix. That's not advanced mathematics in any sense.

        A better analogy is that animals don't need to be good at arithmetic to be good at foraging. However, foraging is not advanced mathematics.

          • It doesn't need to be impressive. A theoretical breakthrough is precisely what is being sought here. That you may not find this particular example impressive doesn't change the point: it was objectively made... by a machine... which evidently can't do simple arithmetic.

            • And yet... a machine helped prove the four color theorem. I'm more impressed with that result, personally. Perhaps it's also because I've seen so many optimization algorithms over the years, and RL is not particularly novel.
    • It took me an hour to convince the damn thing that binary tree mazes were perfect mazes.

      Your LLM service provider would like to thank you for donating your time and expertise to train their model and grow their business.

      • by narcc ( 412956 )

        Ha! You're not wrong. In trying to solve it, I've become part of the problem.

        • Ha! You're not wrong. In trying to solve it, I've become part of the problem.

          And that's about as human as you can get.

  • Say someone does come up with an answer. What changes?

  • Maybe this will be the killer application for large language models.

  • If a given AI could generate another AI with the same Kolmogorov complexity, would P = NP? If not, then there must exist a hierarchy of complexity classes, which makes me think P must not equal NP.

  • by Great_Geek ( 237841 ) on Sunday October 01, 2023 @01:21PM (#63892079)
    The paper is long, but it is clear what it is about: using AI to *help* with solving a problem (P ?= NP in this case). They are using AI the way other people use simulation, or the way chip designers use floor-planning tools. ChatGPT is really being used as a design aid to explore the design space. This is very different from asking it to come up with a proof either way.
  • It may be a really useful tool that allows researchers to easily become aware of ideas being worked on by others. There are way too many papers to read, and it might turn out that an insight gained in another field is helpful.

  • Nomad, can you tell me how you're going to solve this problem without actually solving it?

    Blows up.

  • Longer answer: hell no!

  • Well, no, it can't. But moreover, computer "science" isn't a science, it's a trade, and solving the P = NP problem would have no consequential effect on anyone. It's important in the same sense that an autistic person who knocks on a door twice becomes obsessed because they never knocked a third time.

  • It finds and summarizes an article written by a human that explains the solution.

    Seriously, generative AI doesn't find solutions, in the sense of coming up with answers that didn't previously exist. It finds solutions in the sense of locating them on the internet, and summarizing them. And even then, it often gets it wrong, if it finds source articles that are wrong.

  • It cannot prove or know something outside of what we told it.
    • It cannot prove or know something outside of what we told it.

      Can you?

      • by ledow ( 319597 )

        Yes. Humans have the ability to infer. "AI" does not.

        It's literally called the inference problem: one of the biggest roadblocks in AI, one we have no idea how to overcome, yet an ability you can witness even a dog exercise in some manner.

        Sonny from I, Robot being pedantic that the words "you" or "I" don't encapsulate humans in general is just being petty.

        Because pretty much every human can infer better than any AI.
