Programming AI

Thanks to AI, the Hottest New Programming Language is... English (analyticsindiamag.com) 114

"Generative AI is transforming software development by enabling natural language prompts to generate code, reducing the need for traditional programming skills," argues Analytics India magazine. Traditionally, coding was the bastion of the select few who had mastered mighty languages like C++, Python, or Java. The idea of programming seemed exclusively reserved for those fluent in syntax and logic. However, that narrative is now being challenged by natural-language coding in AI tools like GitHub Copilot. Andrej Karpathy, senior director of AI at Tesla, predicted this trend last year.... English is emerging as the universal coding language.

NVIDIA CEO Jensen Huang believes that English is becoming a new programming language thanks to AI advancements. Speaking at the World Government Summit, Huang explained, "It is our job to create computing technology such that nobody has to program and that the programming language is human"... He calls this a "miracle of AI," emphasising how it closes the technology divide and empowers people from all fields to become effective technologists without traditional coding skills... "In the future, you will tell the computer what you want, and it will do it," Huang commented. Large language models (LLMs) like OpenAI's GPT-4 and its successors have made this possible...

Microsoft CEO Satya Nadella has been equally vocal about the potential of English for coding. Microsoft's GitHub Copilot, an AI code assistant, enables developers to describe their needs in natural language and receive functional code in response. Nadella describes this as part of a broader mission to "empower every person and every organisation on the planet to achieve more".... In a discussion last year, Stability AI CEO Emad Mostaque claimed, "41% of codes on GitHub are AI-generated"...

In 2024, the ability to program is no longer reserved for a few. It's a skill anyone can wield, thanks to the power of natural language processing and AI.

"No longer is the power to create software restricted to those who can decipher programming languages," the article concludes. "Anyone with a problem to solve and a clear enough articulation of that problem can now write software."

The article also includes this consoling quote from Nvidia's Huang in March: "There is an artistry to prompt engineering. It's how you fine-tune the instructions to get exactly what you want."


Comments Filter:
  • by gweihir ( 88907 ) on Sunday December 08, 2024 @05:01PM (#64999877)

    Does nobody write real code that does exactly what you want it to do anymore?

    Ah, I see they interviewed some of the usual asshole CEOs. So this "story" is actually without meaning.

    • I take it that you haven't yet written up a few prompts to tell an AI what code you want it to write. Which would take it maybe 15 seconds. You might be surprised.

      • Before I am finished telling an AI what code I want it to write, I am already finished writing the code myself.

        Unless I'm writing a bash script (then I use vim), I use an IDE. Autocompletion in an IDE writes the code I want, and does not give me odd variable names that I have to refactor to get readable code.

        But if you have a good hint for a "coding AI" I might try it out :P

      • by Anne Thwacks ( 531696 ) on Monday December 09, 2024 @03:42AM (#65000465)
        As someone who was into programming in the 1960's, I can tell you that, in the early days of computing, everyone thought that "Natural Language Programming" would enable "ordinary secretaries" and even their bosses to query databases.

        Unfortunately, ambiguity was a required feature of natural languages: it had to be possible to wriggle out of your blunders, or the squeamish would be offended by the tiniest mistake.

        "Talking" to computers was different. Computers must not be allowed to wriggle out of their blunders, as no one would buy them if they could. Preferably there should be no wriggle room for blunders in the first place.

        It took relatively few monster screw-ups for key people to realise that, if the data was to have an impact on real life, something must be done - That something was SQL!

        • It's pretty interesting to wonder what the code would look like. As far as I'm aware, the LLM just uses the language it's told to. I.e., if I asked for a calculator app I'd be given an answer in a high-level language. If future coding is done in English, what does the code look like? I'm assuming there is a readable form, not just machine code? I pity the poor testers.
        • by gweihir ( 88907 ) on Monday December 09, 2024 @09:46AM (#65000905)

          Ah, yes. That mess caused by absolutely frigging idiots. The same morons believed that you do not need to know how to code to solve complex problems in COBOL. This stupidity is incidentally still around, in the form of "no code" "coding" environments. Does not work, never worked, cannot work. Insight into the computing mechanism and the algorithms used is not optional.

      • by Junta ( 36770 ) on Monday December 09, 2024 @08:34AM (#65000743)

        I feel like there's a bunch of people gaslighting me.

        After many iterations of the post-GPT-3.5 cycle (folks saying "it can code what I want", me trying it, it failing to impress, and then people telling me "well of *course* that old model didn't work well, but this *new* model works well"), I keep getting the same result.

        If it is a prompt that would pop up a ready to go stackoverflow answer in an internet search, or a library that exactly does what I was looking for, then the LLM code generation can do a passable job... sometimes. However in those cases, I could have just done an internet search and been in better shape anyway.

        Problems beyond the obvious failing to work or not being able to do much:

        It loves to invent convenient-sounding attributes/functions in existing libraries that would be awfully convenient, except they don't actually exist. I've started fielding quite a few questions from people asking why their attempt to call a non-existent function fails, but their LLM said that was how to do it.

        There were security implications in one attempt, and if the code it suggested had worked as described (it had invented a non-existent attribute, as above), then it would have had a glaring security vulnerability.

        When it would suggest use of a library, it had to be double checked. It has the "stack overflow" problem of tending to suggest "the hot way to do it in 2013", without realizing that by 2020 that library was defunct and is no longer compatible with the use case in 2024.

        Related to that, when it bakes in logic that you could have gotten from a library, *particularly* when dealing with third party internet served APIs, then you have a problem where no one is maintaining your client "library", and changes by that third party will catch you by surprise at the worst time.

        It's like dealing with an offshore junior developer that scammed their way in without any actual training but worse, because you can't really teach it to do better.

        • by gweihir ( 88907 ) on Monday December 09, 2024 @09:53AM (#65000919)

          Yep, that is a nice selection of some of the problems with letting AI "write" "code".

          But these people are not gaslighting you. They are really too stupid to see how bad the quality of AI "generated" code actually is, and they deeply, deeply want to believe it can code well. Why? Clearly they are incapable of coding well by themselves, so they now want this one magic tool that will make them and their code not suck. Of course that is stupid in the second degree, because if AI could actually code well, these people would simply lose their jobs.

        • Maybe you just weren't articulating what you want properly.

          • by Junta ( 36770 ) on Monday December 09, 2024 @10:38AM (#65001025)

            "And don't make up function calls that don't exist"

            "Don't create glaring security holes"

            "Double check that the cited libraries are actually still usable"

            "Please commit your suggestions to a library on github and maintain it properly so I don't have to rewrite my code"

            I don't think LLMs can follow those instructions, and to the extent they could, why would those not be the default behaviors?

            Also, at some point, trying to get an LLM codegen to do what I want is more trouble than just doing it.

            Now I could see potential for LLM handily augmenting code completion, as it becomes more seamlessly integrated. Current code completion is pretty good, with the LSP implementations out there to help, but there are occasions where even a braindead person would know where a segment of typing is going beyond what current code completion can really help with, and that's an area I could believe in LLM providing value.

            • by gweihir ( 88907 )

              Also, at some point, trying to get an LLM codegen to do what I want is more trouble than just doing it.

              And you know what? Doing so makes you better at it and you may even learn some things on the side! One of the severe problems I see with AI code generators is that they make it far harder for people to learn coding. At the same time, they cannot actually replace that skill.

            • You can easily tell the coding assistant to generate a test case that exercises the code. I find it is particularly useful when working on code related to data science.

              "Write a class that takes a data frame as input, swizzles it in the following way, provides methods that expose these aspects of the data, and be able to generate these kinds of plots that show the various distributions of the data."

              Obviously it isn't going to check your code into github and respond to pull requests, that's your job as the su

              • Write a class that takes a data frame as input, swizzles it in the following way, provides methods that expose these aspects of the data, and be able to generate these kinds of plots that show the various distributions of the data.

                Any AI assistant is going to fail spectacularly with a prompt like this. And worse, if you rely on it, there's fuck all you'll be able to do when it inevitably doesn't work right.

                Let's start at the beginning: A class takes a dataframe as an input. An input to what? One of its fields? One of its setters? One of its constructors?

                Oh wait, you probably mean a Python style "class" instantiation, which only has one constructor. Which is fine, I guess, until you run into one of the many libraries (e.g. sq

                • >> A class takes a dataframe as an input. An input to what? One of its fields?

                  I didn't include the actual prompt here, but pandas dataframes have a standard format and can be accepted as-is by many utilities. You can easily dump a database into a dataframe with a few lines of code for example, and pass that right along to various analysis engines. I've used coding assistants to generate complete classes that do this kind of thing successfully on several occasions and they are in my toolbox.
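The "dump a database into a dataframe with a few lines of code" hop looks roughly like this; a minimal sketch, assuming pandas is installed (the table name, columns, and data here are all made up for illustration):

```python
import sqlite3

import pandas as pd  # assumption: pandas is available

# Build a throwaway in-memory database; schema and rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10.0), ("south", 20.0), ("north", 5.0)])

# One call pulls the whole query result into a DataFrame...
df = pd.read_sql_query("SELECT * FROM sales", conn)

# ...which can be handed straight along to analysis utilities.
totals = df.groupby("region")["amount"].sum()
print(totals.to_dict())
```

The DataFrame arrives in pandas' standard columnar format, so downstream groupby/plot/export code needs no knowledge of the original database.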

                  >> Le

                  • I didn't include the actual prompt here, but pandas dataframes have a standard format and can be accepted as-is by many utilities. You can easily dump a database into a dataframe with a few lines of code for example, and pass that right along to various analysis engines. I've used coding assistants to generate complete classes that do this kind of thing successfully on several occasions and they are in my toolbox.

                    You're not doing software development in this case, you're scripting. In other words, you're stitching together existing code somebody else wrote without having any understanding of how it works, and then you're relying on the AI to do very basic flow control. Pandas in particular is pretty inefficient, by the way. At its core it's not even Python, it's C. But I'm guessing efficiency isn't a goal, otherwise you'd opt for something like Polars. Which means you're also unlikely to be doing anything particular

                    • >> you're stitching together existing code somebody else wrote without having any understanding of how it works

                      You are blabbering nonsense. The generated code is in plain sight and I fully understand it. Any developer is going to be looking at code other people wrote on a regular basis and debugging or enhancing it. But as I said, don't use this tech. Who cares?

                      Meanwhile "92% of U.S.-based developers are already using AI coding tools both in and outside of work."
                      https://github.blog/news-insig... [github.blog]

      • by gweihir ( 88907 )

        I have had a whole class of students with relevant experience do it, repeatedly.

        Your "take" is bereft of insight and so are you. Pathetic as well. Also, you might want to look up what a "fallacy" is. Here is a hint: It just makes you sound dumb.

      • I take it that you haven't yet written up a few prompts to tell an AI what code you want it to write. Which would take it maybe 15 seconds. You might be surprised.

        I would not be surprised if it could handle some data structures class quiz questions. Write a few lines of code based on some well-documented algorithms: "Given an array of foo objects, perform an insertion sort using the member foo.z."

        Then again, did it just copy a solution from Stack Overflow, or did it correctly understand the instructions and "reason" out a solution itself?
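For reference, the quiz-style prompt above maps to only a few lines; a sketch (Foo is a stand-in class, since the prompt leaves it abstract):

```python
class Foo:
    """Stand-in for the 'foo' object from the prompt; only .z matters here."""
    def __init__(self, z):
        self.z = z

def insertion_sort_by_z(foos):
    """In-place insertion sort of Foo objects, ordered by the member foo.z."""
    for i in range(1, len(foos)):
        current = foos[i]
        j = i - 1
        # Shift elements with a larger .z one slot to the right,
        # then drop the current element into the gap.
        while j >= 0 and foos[j].z > current.z:
            foos[j + 1] = foos[j]
            j -= 1
        foos[j + 1] = current
    return foos

print([f.z for f in insertion_sort_by_z([Foo(3), Foo(1), Foo(2)])])
```

Whether an LLM reproduces this from memorized training examples or "reasons" it out is, of course, exactly the question being raised.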

        • >> Given an array of foo objects, Perform an insertion sort using the member foo.z.

          Nothing wrong with that, you saved yourself some time. In my experience the existing code assistants can write a very useful class for you if you can accurately describe what you want. It will be well documented and there will be a test case.

          • by drnb ( 2434720 )

            >> Given an array of foo objects, Perform an insertion sort using the member foo.z.

            Nothing wrong with that, you saved yourself some time. In my experience the existing code assistants can write a very useful class for you if you can accurately describe what you want. It will be well documented and there will be a test case.

            Many of us have certainly been familiar with code assistants, with respect to GUI editors, for decades. Design your views, place buttons, checkboxes, etc., and the assistant generates all the code to implement the GUI. Compile, run, and you can navigate around. It does no work, but there are empty functions for all those buttons, checkboxes, etc. waiting for you to add functionality.

            Now we are getting tools to help with the simpler parts of that functionality.

  • Problem domain (Score:4, Insightful)

    by PPH ( 736903 ) on Sunday December 08, 2024 @05:05PM (#64999885)

    Generative AI is about 30 years behind the curve. We were writing automated functional tests 30 years ago .... in English. And we didn't need a stinkin' warehouse full of NVIDIA chips to do it. A '386 or 68000 series processor was good enough.

  • by electroniceric ( 468976 ) on Sunday December 08, 2024 @05:18PM (#64999903)

    a clear enough articulation of that problem can now write software

    Most programming languages are now at a high enough level that programming with them is pretty much this same exercise. The key to software engineering is figuring out what the code ought to do in excruciating detail.

    So the fact that there is some sort of prompt language that emulates a programming language is like saying there's a new high-level language/low-code approach around... not really earth-shattering. There have been many low-code platforms around for decades. They mostly work fine for small-scale solutions that don't really require reorganizing how people interact with one another, and that don't raise complicated questions about how to represent what people are doing in code and data.

    Not only is the code generated by AI of unclear reliability and maintainability (a typical problem with automatically generated code), there is essentially no concept of problem definition or design in it whatsoever.

    This is typical later-stage-tech-hype-cycle kind of stuff. Color me unimpressed.

    • Problem with GPTs is that we don't really know what we're telling them to do.
    • Ok Chat-GPT, code me a relational database management system with full transaction support, indexing, constraint management, stored procedures, triggers, user security management compatible with windows active directory, and full support for ANSI SQL.

      • Wouldn't that be something.

        My fantasy use is telling the AI to write a nwn:ee module with my described story. Ideally, it would ignore copyright as well and I could tell it to take the Wheel of Time or LoTRs and create a module out of it from the perspective of each character. Now that would be incredible. Or do that but make it a game that I could use one of those 3d headsets for. Really feel like you were in that world described on the pages.

        I want to be able to tell AI to take my favorite book and make i

      • Ok Chat-GPT, code me a relational database management system with full transaction support, indexing, constraint management, stored procedures, triggers, user security management compatible with windows active directory, and full support for ANSI SQL.

        Currently, with a more modest goal, the following iterative approach works:
        Iteration 1: "Write a python program that can play wordle - i.e. play the guessing player".
        Iteration 2: "Now modify the code to allow me to pass on a guess".
        etc.

        This is similar to the Agile mode of development where the programmer plays a BA role, shows the product owner working code, and then receives feedback for more modification, iterating towards something which satisfies the owner.
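A sketch of what "iteration 1" might plausibly produce: a guessing player that filters a word list against wordle-style feedback (the word list is a toy, and the yellow-letter handling is deliberately naive about repeated letters):

```python
# Toy candidate list; a real run would load a full five-letter dictionary.
WORDS = ["crane", "slate", "grape", "plate", "crate"]

def feedback(guess, answer):
    """Wordle-style feedback string: 'g' green, 'y' yellow, '.' gray.
    Naive about duplicate letters, which is fine for a sketch."""
    result = []
    for i, ch in enumerate(guess):
        if answer[i] == ch:
            result.append("g")
        elif ch in answer:
            result.append("y")
        else:
            result.append(".")
    return "".join(result)

def next_guess(candidates, history):
    """Return the first candidate consistent with all (guess, feedback) pairs."""
    for word in candidates:
        if all(feedback(g, word) == fb for g, fb in history):
            return word
    return None
```

"Iteration 2" (letting the user pass on a guess) would then be a small loop around next_guess, which is exactly the incremental, product-owner-feedback style of refinement described above.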

        • That is not how agile programming works.
          Facepalm.

          Ok Chat-GPT, code me a relational database management system with full transaction support, indexing, constraint management, stored procedures, triggers, user security management compatible with windows active directory, and full support for ANSI SQL.
          No one would start coding on something like this.

          Iteration 1: "Write a python program that can play wordle - i.e. play the guessing player".
          Iteration 2: "Now modify the code to allow me to pass on a guess".

          No ne

        • by Junta ( 36770 )

          That is a fine example of two things:

          a) Something that I fully believe is within the reach of LLM codegen solutions.

          b) Something that has a ton of ready-to-go projects providing that already on github, and you don't need an LLM to generate it.

          That has been my experience, that in experiments it *can* handle little demonstrations like you describe, but anything it can wrangle is readily offered in the very first link of an internet search.

    • Everything old is new again. For 20+ years we've had CAM software and post-processors generating lovely toolpaths for CNC machining that will absolutely destroy your half-million dollar machining center if you just mindlessly plug that thing in and expect it to run. Generative AI is another, even sloppier, layer on top of that.
  • by OrangeTide ( 124937 ) on Sunday December 08, 2024 @05:22PM (#64999909) Homepage Journal

    Formalization is tough in any language, especially English. There are functional languages that offer some concise syntax for doing very common types of definitions that can be a real pain in the ass to correctly define in a natural language. The way AI lets us use the wrong tool for a job is similar to the revelation that a crescent wrench is also a hammer.

    • Speak for yourself, my crescent wrench is not a hammer.
      But it is a wrench, bug killer, 'whack-a-mole' device, 'just give it a tap' device, and a hand-held or thrown object for self defense.

      The latter usually against spiders.
  • Cool, but then what? (Score:5, Interesting)

    by Jeremi ( 14640 ) on Sunday December 08, 2024 @05:23PM (#64999911) Homepage

    Assume that I have successfully managed to get WhateverGPT to make me a nice program and shipped it to customers.

    Now someone files a bug report or a feature request... how do I go about debugging and/or extending this program?

    Hopefully the answer is not just "tweak your prompts and regenerate from scratch" (which seems just as likely to generate new bugs as to fix existing ones) or "read and understand the generated codebase so that you can manually fix/update the code yourself" (which would largely defeat the purpose of using AI-generated code, since then you're back to needing to be a Real Programmer).

    If I can assign the new Jira cases to the AI and have it read the reports and modify its existing codebase appropriately to address them.... that would be genuinely useful.

    • by StormReaver ( 59959 ) on Sunday December 08, 2024 @05:38PM (#64999933)

      If I can assign the new Jira cases to the AI and have it read the reports and modify its existing codebase appropriately to address them.... that would be genuinely useful.

      It would be, but the generative error generators can't do that (and never will) because they can't extract meaning. As a software developer, writing syntactically correct code is the least important part of my job. The most important parts are understanding requirements and the logic of why and wherefore.

      Writing the code is trivial. Knowing why the code needs to be written is far more important. And despite NVidia's sleazy profit motives*, no generative error generator can reason.

      * Being motivated by profits is not sleazy. Lying and deceiving to any degree necessary to make such profits, like NVidia's CEO does, is sleazy.

      • I am not sure most humans can extract meaning either; we're also generative error generators.
        • In my situation, the PHBs normally extract whatever meaning was there before the paperwork reaches me.
        • Those people are trained differently. They are instructed to ask, "do you want fries with that?" They are not tasked with writing software. And if they are, they quickly earn the title, "Unemployed."

    • You don't necessarily have to ship it to customers. Much programming happens internally. Like many businesses rely on kludges written in Excel. I know people without programming knowledge who managed to have ChatGPT make a GUI in python/tk, for some internal workflow. Something that previously they would hire a few hours of someone to do. The "someone" was making easy money writing simple python for noobs. Now, a ChatGPT subscription does this for a fraction of the cost.

      • by Jeremi ( 14640 )

        I know people without programming knowledge who managed to have ChatGPT make a GUI in python/tk, for some internal workflow.

        In that case, their "customers" are their co-workers, or even just themselves, but the problem otherwise remains the same. In particular, if their "customers" find the program useful, it's very likely they will want to make it better, and will ask them to make that happen. What will they do then?

  • by Gavino ( 560149 ) on Sunday December 08, 2024 @05:38PM (#64999931)
    "NVIDIA CEO Jensen Huang believes that English is becoming a new programming language thanks to AI advancements"
    It's as if creating more buzz around AI somehow benefits nVIDIA's share price. Who knew? /sarcasm. Take this with a big grain of salt. Microsoft too. Gotta keep the AI-fuelled share price bubble going by keeping AI constantly in the news.
  • Does it matter whether it is English or some other spoken language for this discussion? I can imagine that some languages would be easier to program with. But if the language doesn't matter, I would think it is likely that Mandarin will be the new programming language, given that it is the most common language in the world and China's influence is clearly growing.
    • Really? I rarely read technical documents in Chinese. And most of the Chinese translations of English I see are full of errors. I'm not worried about that. People have been predicting that Chinese would overtake English on the Internets for about 20 years. It's not happening. The world speaks English, not Chinese.
      • And most of the Chinese translations of English I see are full of errors.

        Such as this one [imgur.com]? (it's supposed to be jerked chicken)

      • AI has to translate whatever language you use into computer code, doesn't it? So one question is which language is easier for the programmer to use: their native Mandarin or their second language. In general, I think most people would be more comfortable using their native language. Another is whether a particular language gives better results.
    • by ukoda ( 537183 ) on Sunday December 08, 2024 @06:00PM (#64999963) Homepage
      The structure of some languages may be better than others, but the problem with Mandarin is Hanzi. There are over 100,000 different Chinese characters. Also, speaking tonal languages is a huge barrier to native speakers of non-tonal languages. If you look at Mandarin and Japanese, at first glance they appear similar, both having simpler grammar than English, being mostly gender-less, and using logographs. However, for an English speaker, Japanese is way easier to learn, with no tones and with hiragana and katakana to fall back on. And don't get me started on languages that have genders for non-living objects!

      To be fair, English is a bastard language that must cause a lot of learners to think "WTF were they thinking" at some point.
      • The problem with Mandarin (and a few other languages) is they're still writing in pictographs.
        • That is not a problem. That is a benefit.
          Every pictogram has its own Unicode code point ... you do not even need to parse anything to know what it is.
          If you are scared about Hanzi/Kanji watch the videos linked there: https://www.chineasy.com/ [chineasy.com]
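Concretely, each character really is a single code point, so no segmentation or parsing is needed to identify one (the four characters here are the pictograph examples mentioned above):

```python
# Each Hanzi occupies exactly one Unicode code point: one code point,
# one character, no parsing required.
for ch in "山川日月":  # mountain, river, sun, moon
    print(ch, f"U+{ord(ch):04X}")

# Four characters means four code points.
assert len("山川日月") == 4
```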

      • There are over 100,000 different Chinese character

        In terms of clear communication with AI I would think that could be an advantage. But I suspect most people are going to use their native language to communicate with AI if that is an option. It's not clear to me whether that will be an option with AI trained to specific languages.

      • English is the lowest common denominator, highly illogical but extremely efficient, with characteristics, characters, words, and sounds familiar to and incorporated from so many other languages. There is almost no concept that English cannot convey. I don't see much parallel with Chinese and I don't expect the machines would prefer the language or the character set. Also, what was the ai trained on, because it must understand idioms, nuance, humor, and other cultural, regional, and linguis
      • Only 4,000 Hanzi/Kanji (actually fewer) are in use.
        There are three types of Hanzi:
        Pictogram/Pictograph: even if it is a stretch, it is a picture of what it means. Examples: one, two, three, mountain, river, sun, moon, human, mouth.
        Ideogram: often two pictures combined, giving an idea of something that is not necessarily obvious. Example: a human in a mouth (well, the "mouth" in this case is just a fence): a prisoner. Or a king inside of a much bigger mouth/fence: kingdom/empire.
        Logographs: "logos", Greek for word.

        • by ukoda ( 537183 )
          I'm studying Japanese and find Kanji both difficult and fascinating. Some, like river, make sense but others are like male, which is apparently a rice paddy above strong. How does that work? I gather that when graduating school in Japan you are expected to know about 6,000 Kanji.

          Well my reply did get distracted from the original issue of using human language for programming and if different human languages would factor into that idea. I have spent several years trying to learn Russian, Mandarin and no
      • by vbdasc ( 146051 )

        The German language has genders for non-living objects. Yet, for a native English speaker, learning German is orders of magnitude easier than either Mandarin or Japanese.

        Hell, as someone whose native language is not English, I've noticed that even English has genders for non-living objects... for example, ships.

    • Esperanto or Japanese would be easier.
      I am not super sure, but I think the same holds for Korean/Hangul.
      I would think it is likely that Mandarin will be the new programming language given that is the most common language in the world
      For an AI, perhaps. As most programming languages themselves are English-based, I do not think too much about that.
      Keep in mind: Mandarin is only the "government language", in daily life, Chinese speak and use their local language. For random "Chinese" from all over China, using Mandarin as

  • Words matter (Score:5, Insightful)

    by ukoda ( 537183 ) on Sunday December 08, 2024 @05:45PM (#64999947) Homepage
    The problem is with the innocent statement "a problem to solve and a clear enough articulation ". Clearly articulating any non-trivial problem is a non-trivial exercise.
  • Code is a liability, not an asset. I've seen these tools remove error control and significant failsafe logic from my code. If they write bugs, how will they know to fix them? How does anyone know that this technology hasn't been compromised to silently work against its users? Just be glad that we got to live through the good decades, because the shitocalypse is here.
    • Re: (Score:3, Insightful)

      by commodore73 ( 967172 )
      Meant to add a corollary to the liability comment: knowledge of code is invaluable. These tools have no knowledge, they're just fast statistical models trained on large data sets. With some procedural logic wrapping that, just to prove that AI doesn't actually exist as people think.
  • by ewhac ( 5844 ) on Sunday December 08, 2024 @05:57PM (#64999955) Homepage Journal
    Those few words are doing a metric fsckton of lifting there.

    Anyone with any experience in programming will tell you that's where most of the effort lies -- articulating the problem. Even with constructed languages -- COBOL, FORTRAN, LISP, Smalltalk, C, Pascal, Modula-2, Java, C#, OCaml, Haskell, Rust -- all of which were designed to eliminate ambiguity and force the programmer to articulate operations very precisely, we still spend most of our time discovering where the holes in our mental models are. Those are the holes where bugs creep in, or where the user is unpleasantly surprised by unexpected behavior, which may be logical from the program's perspective, but isn't what the user expected/wanted -- because the problem wasn't articulated precisely enough.

    "Programming" in English will not help. It might get you to a crude tech demo, but that's about it. It's not a tool. It's a parlor trick.

    • LAWYERS. maybe they can do it. Just think of all the issues we have with laws, regulations, and policies all written and resolved with English...

      "You'll never find a programming language that frees you from the burden of clarifying your ideas."
      -Randall Munroe
      https://xkcd.com/568 [xkcd.com]

      • by ewhac ( 5844 )

        LAWYERS. maybe they can do it.

        (*derisive snort*)

        I thought of putting an aside in my original comment touching on that very idea -- that, even with centuries of practice, laws written in English still have uncovered edge cases and exploitable loopholes. Indeed, if all laws were articulated perfectly clearly, unambiguously, and covered all edge cases, we'd need far fewer lawyers.

        But I chose not to put that in, because I thought it was already blazingly obvious. (Indeed, one could argue that the current

        • Oh no, even among college students the obvious frequently gets missed by a few. What passes for "reporters" today is far worse, and readers who barely skim a tweet's worth of depth while distracted miss so much that they'd elect a con-man rapist failure, twice.

    • we still spend most of our time discovering where the holes in our mental models are.
      Exactly.

  • Prompting is a horrible way to make anything. IF they could make it good at generating TEMPLATES, that would be good. For example, instead of generating an image, generate some Photoshop layers, or a 3D file containing the objects and animation paths that can be modified. The biggest flaw with having AI do things other than generating programming code is the lack, or difficulty, of post-generation modifiability.

  • 248 years and you still can't use English correctly.
  • "Computer, make a holonovel in the style of Dixon Hill." A truly astonishing similarity to how this stuff works, 40 years ago. But... you don't rely on that interface for critical Enterprise functions, only for fun and games.
  • "... nobody has to program ...programming language is human ..."

    I've found the real product: any high-school drop-out can give a computer orders and the computer "will do it". Businessmen have been promising exactly that, and 'easy' answers, for the 40 years since computers were invented.

    Inventing and using block-structured, object-oriented, functional languages was wrong and a waste of man-power: we just need a computer that can take orders.

    Those languages exist because giving a computer orders is difficult, because treating everything as a number is not easy t

  • Future or now (Score:5, Insightful)

    by phantomfive ( 622387 ) on Sunday December 08, 2024 @07:06PM (#65000047) Journal

    NVIDIA CEO Jensen Huang believes that English is becoming a new programming language

    Microsoft CEO Satya Nadella has been equally vocal about the potential of English for coding

    Andrej Karpathy, senior director of AI at Tesla predicted this trend last year.... English is emerging as the universal coding language.

    These are all quotes about what AI could do in the future. They are not quotes about what AI can do now. The difference is important, because when AI marketers talk in BS mode, they frame it in the future (because then they are technically not wrong, because they don't have to talk about current capability).

    Meanwhile, a quote from Alan Perlis [yale.edu]:

    93. When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop. COBOL did the same thing.

    • by vbdasc ( 146051 )

      Why English, guys? Why not Mandarin, Mr. Jensen Huang? Why not Telugu, Mr. Satya Nadella? Why not Slovak, Mr. Andrej Karpathy?

  • by mukundajohnson ( 10427278 ) on Sunday December 08, 2024 @07:08PM (#65000051)

    Oh those precious non-programmers and their thoughts.

  • by madbrain ( 11432 ) on Sunday December 08, 2024 @09:32PM (#65000193) Homepage Journal

    Spent nearly a half hour last night trying to get ChatGPT to stop referencing Java classes in JavaScript code it generated. Completely futile attempt. It just doesn't understand what's being asked of it, period. Even for small stuff for which it usually does OK, like shell scripts and Python, you'd better use source control and check every single diff. It often brings back old problems or creates new ones, along with sometimes fixing the one you asked it to.

    • Well,
      perhaps the AI was WaySmarterThanYou?

      Java and JavaScript are interwoven. It is completely legal to refer to Java from JavaScript, and if you know how to, vice versa.

      You probably missed a simple check box: exclude Java. Or something similar.

  • Dick Pick's "Reality" OS and ENGLISH programming language. Surely some slashdot readers know a bit about computer history…

    https://en.m.wikipedia.org/wik... [wikipedia.org]

  • Its very simple (Score:4, Insightful)

    by ZipNada ( 10152669 ) on Sunday December 08, 2024 @11:41PM (#65000273)

    If you can't articulate in English what you want to do then you probably won't be able to write the code properly yourself.

    There are a number of AI code generators available these days that can be embedded in your everyday IDE. You can pull up a panel and tell it what code you are planning to write, and it will generate a reasonable facsimile in seconds. You can even just write a block of comments that explains your intentions and it will do it all as a code completion. Will it be perfect? Probably not, but you can then tweak it however you want, which is an awful lot faster than writing the whole thing yourself.
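    A hypothetical illustration of that comment-first workflow (the function name and body are invented for this sketch; they stand in for what an assistant might complete from the comment):

```python
# Prompt-as-comment: describe the intent, let the assistant fill in the body.
# "Return the top N words by frequency in a text, ignoring case."
from collections import Counter

def top_words(text, n=3):
    """Return the n most common words in text, case-insensitively."""
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]

# The generated draft still needs the "tweaking" step: punctuation is not
# stripped here, so "word" and "word." would count as different tokens.
print(top_words("the cat sat on the mat the end"))  # → ['the', 'cat', 'sat']
```

    Which is the point: the draft arrives in seconds, but the review pass is still yours.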

  • The article reminds me of when HyperCard was released in 1987

  • Strictly speaking they just could use AppleScript or a similar X-Talk language.

    However, it proved that "expressing something" in "plain English" still requires you to know odd quirks of the interpreter. And those quirks are often difficult to memorize. So you spend more time googling the "proper English" for AppleScript than you would need to write it in JavaScript (or Swift, or whatever) right away.

    I guess this "new AI" stuff is just a bunch of clever search engines. As probably 90% of any code anyone ever will

  • Perhaps to build on the idea of using English, some keywords could be defined to make it easier to obtain unambiguous results? Maybe different applications will in time demand slightly different subsets of English? You see where I'm going...? Anyway, the author mentions prompt engineering, and what are those prompts if not patterns? Does the average Joe know how to ask for something explicitly and, even worse, what if two Joes have similar but overlapping demands? I see LLMs as a useful addition to WordPress but the
  • Is asking a computer search to write code really coding? It's like OpenAI being really open... or non-profit.

    - Stop lying.
  • I support an SDK... The other day I had a customer make a case where they said they tried having AI generate code to accomplish a task but this particular class was "missing" a particular method.

    Dude, our class isn't missing the method; your AI code generator is the digital equivalent of a cargo cult.

    Of course I phrased it more .. tactfully - but I may have in fact quoted his "I tried using AI to generate the needed implementation but it didn't work" and responded "I am not at all surprised" and then explai

  • by GuB-42 ( 2483988 ) on Monday December 09, 2024 @11:13AM (#65001107)

    Asking someone to do something with precision using plain English is hard. It takes skill on both sides, and it usually involves followup questions and supervision. English is not very precise without being verbose and full of technical terms, which require a lot of expertise, more than mastering a programming language.

    What many people don't realize is that programming languages are simply orders of magnitude simpler than any natural language. A beginner can learn the basics of a programming language in a few days; for an experienced programmer, it can take less than an hour. It is one of the easiest parts of programming.

    Programming is hard, and programming languages make the task easier, not harder. That's because programming really means making a machine do exactly what we intend it to do, and machines are dumb, so programmers have to be extremely precise. With current-day AI, machines are still dumb, just less obviously so. They can do some simple tasks right without precise instructions, but they will screw up at the first slightly unusual thing, and do so without telling you. So if you want the machine to do a good job, you still need precision, and this is best achieved by someone skilled in that task (a programmer), using the right tools (including a programming language).

  • It irks me that people who do not understand what a job is think AI can do it. A developer probably spends about 10% of their time actually writing code. The rest of the time is spent figuring out what the users need/want, or modifying/debugging existing code.

    The joke is "AI will replace developers, as soon as users are able to tell it exactly what they want."

    Also, wasn't COBOL, the "English-like" language, supposed to make programmers obsolete?

    IMHO we should create an AI to replace VCs and CEOs - should be

  • Ok, I don't code as part of my living any more. But when I do need some small amount of code, and need help, an AI has been very helpful. HOWEVER, the provided code never works. I'm the thing that figures out WHY it doesn't work, although once I do, I have been able to get the AI to fix their code.

    This is not going to work for anything complicated, though. If you have to consider how you are going to trap 50 possible errors that might be thrown, I don't think an AI is going to save you. Nor if you have

  • They wanted to create a programming language that was very English-like. They succeeded neither in building a good programming language nor in making it English-like. Though AI might be better at interpreting English, the issues remain, such as the necessity to be explicit and to build a lexicon of terms that have specific meanings.

    "Make me a program that calculates debt-to-income ratio." Buried in that sentence is a ton of understanding of legal regulations, economics, and use case specifics. It's still going t
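    A minimal sketch of that DTI request makes the buried decisions visible; every default below (which debts count, gross vs. net income, a monthly basis) is an assumption the plain-English prompt never stated:

```python
def debt_to_income_ratio(monthly_debt_payments, gross_monthly_income):
    """DTI as used in, for example, US mortgage underwriting.

    Buried decisions: payments are MONTHLY minimums, income is GROSS
    (pre-tax), and utilities/groceries are excluded -- none of which
    "calculate debt-to-income ratio" actually says.
    """
    if gross_monthly_income <= 0:
        raise ValueError("income must be positive")
    return sum(monthly_debt_payments) / gross_monthly_income

# $1,500 mortgage + $400 car + $100 card against $5,000 gross income:
print(debt_to_income_ratio([1500, 400, 100], 5000))  # 0.4
```

    The arithmetic is trivial; the domain choices baked into those two parameters are the actual program.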

  • This is about attention-grabbing headlines so those shilling AI can justify outrageous prices. By convincing uneducated C-levels that AI actually does the work of 20% of your staff, management achieves ROI by buying something that costs just under the price of the 20% of the bodies that get fired/replaced. Since it's new and hype, the stock reacts positively, which in itself justifies the transition, since CEO pay is tied to stock price and there are fewer bodies around to "complain" or threaten to strike over wage
