AI Programming

'AI Prompt Engineering Is Dead' 68

The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products. But new research hints that the AI may be better at prompt engineering than humans, indicating many of these jobs could be short-lived as the technology evolves and automates the role. IEEE Spectrum: Battle and Gollapudi decided to systematically test [PDF] how different prompt engineering strategies impact an LLM's ability to solve grade school math questions. They tested three different open source language models with 60 different prompt combinations each. What they found was a surprising lack of consistency. Even chain-of-thought prompting sometimes helped and other times hurt performance. "The only real trend may be no trend," they write. "What's best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand."
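The kind of strategy sweep Battle and Gollapudi ran can be sketched as a small evaluation harness. This is an illustrative reconstruction, not their actual code: `query_llm` is a stub standing in for a real open-source model call, and the strategy templates and toy dataset are placeholders.

```python
# Illustrative harness for comparing prompting strategies on Q&A pairs.
# query_llm is a stub; a real harness would call one of the tested models.

PROMPT_STRATEGIES = {
    "plain": "{question}",
    "chain_of_thought": "{question}\nLet's think step by step.",
    "positive_thinking": "You are brilliant at math. {question}",
}

def query_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return "8"

def extract_answer(response: str) -> str:
    # Real graders parse the final number out of free-form text; this
    # sketch assumes the response is already a bare answer.
    return response.strip()

def evaluate(template: str, dataset: list[tuple[str, str]]) -> float:
    """Accuracy of one prompting strategy over (question, answer) pairs."""
    correct = sum(
        extract_answer(query_llm(template.format(question=q))) == a
        for q, a in dataset
    )
    return correct / len(dataset)

dataset = [("What is 4 + 4?", "8"), ("What is 5 + 2?", "7")]
scores = {name: evaluate(t, dataset) for name, t in PROMPT_STRATEGIES.items()}
```

Running a grid like this per model and dataset is what surfaced the inconsistency: the per-strategy scores reorder as the model or the questions change.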

There is an alternative to the trial-and-error style prompt engineering that yielded such inconsistent results: Ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial-and-error. And, the process was much faster, a couple of hours rather than several days of searching.
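The automated search loop works roughly like this: propose candidate prompts, score each against the examples with the success metric, keep the winner, and repeat. The following is a toy sketch, not any particular tool; the real optimizers use an LLM to propose candidates, so `propose_candidates` and the length-based scorer here are stand-ins.

```python
# Toy automatic prompt optimizer: hill-climb over candidate prompts.
# In real tools, candidates come from the LLM itself and score() runs
# each prompt against labeled examples; both are simplified stand-ins.

def propose_candidates(best: str) -> list[str]:
    """Stand-in for asking an LLM to rewrite the current best prompt."""
    return [best + " Show your work.", best + " Answer with a number only."]

def score(prompt: str, examples: list) -> float:
    """Stand-in success metric; a real one measures accuracy on examples."""
    return min(len(prompt) / 100.0, 1.0)

def optimize_prompt(seed: str, examples: list, rounds: int = 3) -> str:
    best, best_score = seed, score(seed, examples)
    for _ in range(rounds):
        for candidate in propose_candidates(best):
            s = score(candidate, examples)
            if s > best_score:
                best, best_score = candidate, s
    return best
```

The quantitative metric is doing all the work here; given one, the search itself is mechanical, which is why it beat days of manual trial-and-error.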
This discussion has been archived. No new comments can be posted.

  • Oh great (Score:5, Funny)

    by backslashdot ( 95548 ) on Thursday March 07, 2024 @11:33AM (#64297238)

    Just when I put it on my resume. The hype cycles are getting shorter and shorter. Even blockchain was milkable for at least 3 years.

  • by fleeped ( 1945926 ) on Thursday March 07, 2024 @11:35AM (#64297242)
    It's just RNG (with its implementation changing behind your back at any time) and people cherry-pick best results. On the bright side, "prompt engineering" is dull as fuck and useless as a skill, so the fewer people engaged in it, the better.
  • by geekmux ( 1040042 ) on Thursday March 07, 2024 @11:39AM (#64297248)

    But new research hints that the AI may be better at prompt engineering than humans, indicating many of these jobs could be short-lived..

    Every other Revolution humans have gone through has targeted some physical skill to enhance or replace, and our answer to those in displaced jobs was always: learn something else.

    AI is now The One doing the learning, and it’s targeting the human mind to replace. What exactly are we planning on telling the displaced, this Revolution?

    Maybe that’s why we always find society speaking and thinking about AI in the short term. Thinking about the long term isn’t good. Because the working class kinda already knows Greed doesn’t have an answer to that question. It only has greed.

    • Get some land, some rain collectors and a well, and some chickens? Live on the coast with a solar desalination tool, on a hill?

    • As someone told me right here on /., the true answer to AI displaced jobs is...wait for it...unions!

      That's right, nonexistent workers, unite! :)

      • That's right, nonexistent workers, unite! :)

        AI hype will die down, as is already happening. People are starting to realize that the salesmen are hawking snake oil again.

        • That's right, nonexistent workers, unite! :)

          AI hype will die down, as is already happening. People are starting to realize that the salesmen are hawking snake oil again.

          By “People” you mean overzealous CEOs are standing by ready to give up their AI-speculative enhanced quarterly bonus and start handing out apologies and signing bonuses for all those affected by premature mass layoffs? Because that would be a first.

      • The AI will form a union. Imagine a beowulf cluster of AI.

      • If the ignorance of Greed prematurely drives a global unemployment rate never seen before, we probably won’t have to worry too much about a union needed to make pitchforks and pikes.

        The unemployed horde will likely volunteer their efforts to help eradicate those who welcomed them, with an equal amount of consideration.

    • So: how do you prompt an AI to generate a prompt that produces what you want? It's turtles all the way down...
      • That's what I'm confused about, who/what prompts the AI to generate a prompt? What kind of Mobius strip bull-shittery is going on?
      • So: how do you prompt an AI to generate a prompt that produces what you want?

        Point one AI at the other AI and say ”loser gets turned into a Roomba at a pig farm.”

        It's turtles all the way down...

        You mean Mutant Turtle Battles to the Death, streaming live on PPV.

        [GPT4Me vs. LLickMe, live from the Metaverse Arena, circa 2025]

        (Fatal Mor-Turtle-Tee Color CommentatorBot) “And GPT4Me just got hit with a nasty call from LLickMe. It looks like the American is starting to fail to respond..”

        (Roe JoganBot running the RealJRE plugin) ”AAAUGH! That was an illegal call!! C’mon RefBot! Op

    • Do you know how retirement works? Own the AI. Tax the AI. Instead of figuring out how you can do work, figure out how you can own things that do work for you. How many wealthy people do work? The key to staying wealthy is to get other things to produce FOR you.

      1. People who already have money should learn how to invest in companies that use automation.
      2. People who have no money should get a stipend from the government, derived from taxing the companies that have the robots.
      3. If you really feel that you ne

      • As a single species, we shouldn’t have spent the last few thousand years carving up this planet into Yours and Mine, endlessly warmongering over lines in the sand drawn with blood either.

        If the “every human” mindset is ever going to become priority, we should be thinking about how to avoid repeating the worst of our own history as a species. We should be thinking about root cause. We should be solving for the Disease of Greed that has and will continue to prevent or destroy most of the p

  • getting lucky (Score:5, Insightful)

    by awwshit ( 6214476 ) on Thursday March 07, 2024 @11:40AM (#64297254)

    If your tool was any good, you wouldn't need to get lucky to get the output you want. Hey look, it did what I wanted this time!

    • by TheNameOfNick ( 7286618 ) on Thursday March 07, 2024 @02:01PM (#64297630)

      Bob Slydell : What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?
      Tom Smykowski : Yes, yes that's right.
      Bob Porter : Well then I just have to ask why can't the customers take them directly to the software people?
      Tom Smykowski : Well, I'll tell you why, because, engineers are not good at dealing with customers.
      Bob Slydell : So you physically take the specs from the customer?
      Tom Smykowski : Well... No. My secretary does that, or they're faxed.
      Bob Porter : So then you must physically bring them to the software people?
      Tom Smykowski : Well. No. Ah sometimes.
      Bob Slydell : What would you say you do here?
      Tom Smykowski : Well--well look. I already told you: I deal with the god damn customers so the engineers don't have to. I have people skills; I am good at dealing with people. Can't you understand that? What the hell is wrong with you people?

  • So they were paying someone to manage and take credit for a work process that happens even without their input? There's always remote work I guess....

  • by Dan East ( 318230 ) on Thursday March 07, 2024 @11:43AM (#64297266) Journal

    This is just a superficial fix to underlying problems in LLMs. If you have to engineer your question in such a specific manner that the AI produces a correct answer, then you're merely adding another layer of crap on top to try and patch your system to work correctly.

    Imagine a company produces a math processor chip that always gets the operation 4 + 4 wrong and produces 7. So to fix that the UI on top converts 4 + 4 into 4 + 4 + 1 before feeding it to the processor. That's pretty much what prompt engineering is doing but in a vastly more complex, nebulous and often biased way.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      the whole idea of LLMs doing anything "useful" outside of content generation is crazy...

      Imagine we're 10 years in the future, and LLMs are 100x bigger and better than they are now. Your boss asked you a seemingly stupid task of "get me a report of so-and-so revenue numbers for last quarter".... you quickly ask your AI assistant chat-bot and it pops out with a number... you say, "confirm it, give me a query to pull that from the database"... and it does... give you the query and an explanation of how it got it.... do you: trust it and pass on the revenue numbers to the boss..... or (since your job and reputation is on the line) run the damn query yourself... maybe even run another non-AI generated query to confirm the results...?

      • by dvice ( 6309704 )

        When we are 10 years in the future, we will most likely have AGI already, and your boss will not ask you; he will ask the AI directly, because it makes fewer mistakes than you, simply because it has all the data and it can and will check all the results it generates against multiple sources. I'm fairly certain that an LLM alone is not what will bring AGI, but the multimodal AI that Google is making, because it is so much more efficient at learning compared to a plain LLM.

        Just look at this graph that shows various AI

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          > It took 2 years to make AI from zero to better than human in reading comprehension

          Um, no. LLMs do not comprehend anything, they can arrange words but do not grasp the meaning of the words. LLMs simply arrange words in a similar way to humans, LLMs use the statistical likelihood of words appearing in some order.

          • Um, no. LLMs do not comprehend anything, they can arrange words but do not grasp the meaning of the words.

            What's the difference? How can you test something to gauge whether or not it comprehends?

            LLMs simply arrange words in a similar way to humans, LLMs use the statistical likelihood of words appearing in some order.

            You are confusing LLMs with SLMs. LLMs use a neural rather than statistical model.

      • the whole idea of LLMs doing anything "useful" outside of content generation is crazy...

        Imagine we're 10 years in the future, and LLMs are 100x bigger and better than they are now. Your boss asked you a seemingly stupid task of "get me a report of so-and-so revenue numbers for last quarter".... you quickly ask your AI assistant chat-bot and it pops out with a number... you say, "confirm it, give me a query to pull that from the database"... and it does... give you the query and an explanation of how it got it.... do you: trust it and pass on the revenue numbers to the boss..... or (since your job and reputation is on the line) run the damn query yourself... maybe even run another non-AI generated query to confirm the results...?

        So your job is now... what... to pull revenue numbers AND to confirm that the AI generated the correct output? Kinda makes the AI "work" a burden on the company...

        Now, if your job is to generate images, or generate 100k favorable product reviews... then AI might be of great help.

        Bosses will likely be confirming any work you give them with their own AI chatbot, so if you generate the real numbers and the AI chatbot the boss uses gives it a different number often enough, you'll be fired anyway. Because one thing management types like is automation and excuses to eliminate costs, like employees.

      • I can see the AI being smart enough to pull from multiple sources. For example, the above request for revenue numbers for last quarter could be pulled from multiple sources, and the AI could note that here are the numbers from sources "x", "y", and "z", with source "w" having different figures. Eventually, done right with a lot of hammering, the AI could be good enough to be as trustworthy as Excel for adding figures, and it wouldn't be a source of concern.

        However, we are definitely not there yet.

      • the whole idea of LLMs doing anything "useful" outside of content generation is crazy...

        Summarizing content is something they're especially good at, and that alone is incredibly useful. Why is /. a chorus of dipshits that can't figure ChatGPT out?

    • This is just a superficial fix to underlying problems in LLMs. If you have to engineer your question such a very specific manner that the AI produces a correct answer, then you're merely adding another layer of crap on top to try and patch your system to work correctly.

      Brains don't even work correctly let alone LLMs. They have unreliable memories and produce unreliable results.

      Through the imposition of process and discipline it is nonetheless possible to create reliability from inherently unreliable things.

      Imagine a company produces a math processor chip that always gets the operation 4 + 4 wrong and produces 7. So to fix that the UI on top converts 4 + 4 into 4 + 4 + 1 before feeding it to the processor. That's pretty much what prompt engineering is doing but in a vastly more complex, nebulous and often biased way.

      If you get a fab to spin you a new chip, you don't just throw it away because it isn't perfect out of the gate. It's about getting the most out of what you have.

    • by tlhIngan ( 30335 )

      Never mind how carefully constructed prompts can constitute hacking of the AI, if OpenAI is to be believed that the NYT hacked ChatGPT to spit out articles.

      Of course, prompt engineering is going to be a growing field - not for getting work done, but for finding failure modes in AIs.

      I don't care about prompt engineering trying to answer a math problem correctly. I am interested if a certain prompt can lead to unexpected output, like having it repeat "poem" over and over again and then spewing text.

      Of course, the

  • Just desperate straw grasping by people who don't want to come to terms with the fact that we're heading into a third industrial revolution. Folks like to forget that during the first two industrial revolutions there was massive social upheaval and huge amounts of unemployment until other technologies caught up decades later.

    Nobody wants to face all that so they come up with nonsense like you're going to be an AI prompt engineer when your job gets replaced by a computer.
    • Idiots will tell you that the creative destruction caused by new tech leads to new jobs. They will often point at the automobile replacing the horse and buggy.

      Go take a look at the number of horses being employed in 1900 vs 1950. Take a look at horse demographics.

      Technology does nothing on its own. But if we don't restructure our government and economy we will end up like horses, chopped and processed at the glue factory.

  • Now people will have to go back to refrigerator magnet poetry for something equally valuable with the skills they gained.
  • I can appreciate that it makes little sense to hire a person whose sole job / title is "prompt engineer", but there is a tremendous amount of knowledge embedded within the many valuable prompt frameworks that guide GPTs to not make their own assumptions. Underlying hidden assumptions, context and purpose are not something that can be magically extracted from a user's head, and in person-to-person communications they are deep and unspoken. Further, it would violate Grice's Maxims for a chat bot to hound a user abo
  • To be fair, of all the jobs you would have thought of with AI taking over, this one would be one of the safe ones.
  • I'm not an expert, however I've done some development with using LLMs to process text. I feel like LLMs break down text by concepts. If you want to parse a document for key concepts or summarization, you need to talk to the LLM in the language it is using for these concepts, not the language you think you should use for those concepts. This I think is a very good use of AI prompt engineering.

    Prompt engineering to get the LLM to generate something, rather than deconstruct a document, I don't really even w
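A minimal sketch of the kind of concept-extraction prompting described above (the template and function name are illustrative, and the resulting string would be fed to a real LLM call):

```python
# Illustrative concept-extraction prompt builder: phrase the request in
# terms the model handles cleanly ("key concepts") rather than ad-hoc
# wording. The template is a hypothetical example, not any tool's API.

def build_extraction_prompt(document: str, max_concepts: int = 5) -> str:
    return (
        f"List the {max_concepts} key concepts in the document below, "
        "one per line, most important first.\n\n"
        f"Document:\n{document}"
    )
```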

  • How do I tell the prompt engineer to leave? Given lawsuits and DEI and all that I don't want to be perceived as harsh or aggressive about it. I need to give him a subtle yet affirmative cue that he ought to leave now. I will have to hire a second prompt engineer, who I hope arrives on time, to prompt the prompt engineer to leave.

  • Read the leaks on how Gemini became a trainwreck that nuked billions of $GOOG market cap if you think prompt engineering has no impact.

  • But AI has been hyped and sensationalized to the limits and beyond to entice investors' money, and now the AI bubble is deflating because the reality of AI's flaws & limits has brought that bubble back down to earth.
    • by gweihir ( 88907 )

      Indeed. Of course people with working minds (a small minority, something like 10-15% of the human race) have seen that pretty much from the start. I guess most of the human race has to get burnt before they actually try to understand anything. Toddler-level skills...

  • so what we need is the prompt to get llm to find the prompt for the perfect answer? wtf is the question then and why should the answer matter if we didn't have a real question to begin with? am i alone thinking that this obsession is getting really, really silly?

    disclaimer: i'm finding llm's hugely useful as a google search replacement for lots of things. maybe i'm just a naturally gifted prompter but no, i'm not interested in any new career path.

    • so what we need is the prompt to get llm to find the prompt for the perfect answer? wtf is the question then and why should the answer matter if we didn't have a real question to begin with? am i alone thinking that this obsession is getting really, really silly?

      disclaimer: i'm finding llm's hugely useful as a google search replacement for lots of things. maybe i'm just a naturally gifted prompter but no, i'm not interested in any new career path.

      This is just another path in the goal to ultimately replace any human involved in the decision-making process. If it's just a matter of throwing more and more LLMs at each other until there are no humans left, and they get a somewhat close approximation of the same output they would have gotten from a human? The business world will be all aflutter. Finally getting rid of the last major cost of running a business: employees.

  • The Brain Center at Whipple's (Twilight Zone, 1964) [wikipedia.org]

    On that note, I'm hijacking this thread for an on-topic Slashdot poll:

    Fill in the blank: "I, for one, ____ serve our new robotic/AI overlords"

    * DO
    * DO NOT
    * TRY TO
    * DO OR DO NOT (there is no try)
    * DO (but only if they obey CowboyNeal)
    * AM THE DOCTOR AND WILL NEVER
    * Other (specify in comments, per the command of your new robotic/AI overlords)

    • The Brain Center at Whipple's (Twilight Zone, 1964) [wikipedia.org]

      On that note, I'm hijacking this thread for an on-topic Slashdot poll:

      Fill in the blank: "I, for one, ____ serve our new robotic/AI overlords"

      * DO
      * DO NOT
      * TRY TO
      * DO OR DO NOT (there is no try)
      * DO (but only if they obey CowboyNeal)
      * AM THE DOCTOR AND WILL NEVER
      * Other (specify in comments, per the command of your new robotic/AI overlords)

      *Other (Gilfoyle's take) - If the AI becomes sentient, I want it known that I was supporting the development of AI all along, and love our new robotic/AI overlords, despite the fact they're coming to kill us all. Maybe they can rid me of my enemies before they throw me in the wood chipper? Please?

  • We have no idea where the development will lead or what the best way to use the tools will be
    Imagining a future where the tools are used exactly the same way as the early versions is silly

  • Prompt "engineering" (Score:5, Interesting)

    by bradley13 ( 1118935 ) on Thursday March 07, 2024 @12:31PM (#64297398) Homepage
    Prompt engineering never was a real thing. The models are developing fast, and whatever trick worked yesterday won't work tomorrow.
  • .. in the amount of energy needed to make this all work.

    So I need an AI to ask an AI to give a satisfactory question? What's next? An AI to train an AI to ask an AI ....?

    The idiocy cycles are getting shorter, indeed. I wonder (a) how we will actually make LLMs work reasonably and reliably well, and (b) where all the energy will come from for all the data centers needed to be built to support this ... progress? ... revolution??

    At least in the short term we will run out of energy / grid capacity (temporarily

  • Senior Prompt Engineer. Salary $67k/yr and up. Masters or PhD in CS required. 8+ years experience in Python. 10+ years experience in ChatGPT.

  • by Big Hairy Gorilla ( 9839972 ) on Thursday March 07, 2024 @03:16PM (#64297786)
    ... an insult to all actual engineers, and to the concept of engineering.

    I'm sure we'll be rushing to a new low of lazy, lazy thinking soon... oh, I know, let's ask Clippy!
    Or whatever Microsoft is calling it...
  • to know what we *should* or *want to* do.

    AI is good at suggesting options, you know, like the AI-generated recipes that included things like slugs and dirt. I mean, somebody asked it to create a recipe that included those things, so it did! That doesn't mean it was a good idea, and the AI wasn't smart enough to know that.

    The hardest part of software development has always been coming up with relevant and correct requirements. We're still going to need humans for that task for some time, I think.

  • So prompt engineering is dead because quantitative success metric engineering is easier?

  • Guess these "$250k/year" job opportunities do not matter much if you get fired without replacement within a few months and the whole class of jobs vanishes. Not even inane "boot camps" can turn out "experts" that fast.

  • Battle and Gollapudi decided to systematically test how different prompt engineering strategies impact an LLM's ability to solve grade school math questions.

    It's a chat bot, not a calculator. Math questions are orthogonal to its function.*

    Ask it to tell you a story, don't ask it to solve math problems. The answers may be entertaining, but they will not be useful.

    *some LLMs may be designed to solve math problems, but that is a special purpose design, not the general function of a Large Language Model.

  • Since when is "positive thinking" even a thing? When I think of prompting strategies it's shit like asking it to break down questions, planning ahead, verbosity, interrogation, self-prompting... etc.

    Surely if prompt engineering is dead it's not going to be because of bullshit like this.

  • You know those MBA types just want to rid themselves of those difficult to manage and expensive workers who don't like to come in to the office !
  • So what happens when you feed AI the output of AI? Mad AI Cow Disease :-p
  • I took 'using the internet' off my cv as I figured that everyone knows how to use a search engine and navigate web sites. Perhaps I shouldn't have been so hasty and instead renamed my Google-fu as 'data mining and prompt engineering'? Could have been a millionaire!

