AI Programming

Ask Slashdot: What Happens After Every Programmer is Using AI? (infoworld.com) 127

There have been several articles on how programmers can adapt to writing code with AI. But presumably AI companies will then gather more data from how real-world programmers use their tools.

So long-time Slashdot reader ThePub2000 has a question. "Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?" Let's posit a couple of things:

- First, your AI responses are good enough to use.
- Second, because they're good enough to use you no longer need to post publicly about programming questions.

Where does AI go after it's "perfected itself"?

Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that code comes at a price?

This discussion has been archived. No new comments can be posted.


  • by xonen ( 774419 ) on Sunday July 23, 2023 @07:41AM (#63708446) Journal

    #$!

    • by jd ( 1658 )

      Never! :)

      Seriously, I see Slashdot gaining support for UTF-32 because to hell with what everyone else uses.

  • by Opportunist ( 166417 ) on Sunday July 23, 2023 @07:53AM (#63708462)

    Remember when the internet was young and we all thought that within a few years, maybe decades, we'd all have access to all the information in the world and it would be awesome? Never again would it be possible to bullshit people into believing lies because they now can easily see just how they're being deceived. We thought that we'd all become accomplished philosophers, because we'd engage in meaningful discussions and the marketplace of ideas would sort out all the bad ones because people would latch onto those that mean progress and reject those they identify as superfluous.

    Why do you think this is different, I have to ask? You think AI is any better at spotting people trying to fill it with false information, bully it, troll it and play havoc to its learning model?

    • Remember when the internet was young and we all thought that within a few years, maybe decades, we'd all have access to all the information in the world and it would be awesome?

      Yes.

      Never again would it be possible to bullshit people into believing lies because they now can easily see just how they're being deceived. We thought that we'd all become accomplished philosophers, because we'd engage in meaningful discussions and the marketplace of ideas would sort out all the bad ones because people would latch onto those that mean progress and reject those they identify as superfluous.

      I never heard of this claim, but I will accept that some people believed it at the time. You are coming to a conclusion, which may or may not be accurate, from the original statement. We have POTENTIAL ACCESS to all sorts of information. If people are drawn to misinformation, then that is, by definition, a LACK of information. I agree that you need some sort of filter to determine the difference, but that is the purpose of this thought experiment. What happens when the AI is smart enough? I suspec

      • by Opportunist ( 166417 ) on Sunday July 23, 2023 @11:13AM (#63708836)

        Well, my time with the internet dates back to when it was mostly an academic's toybox. Back in the days before AOL and before the Eternal September. The average internet user had a considerably above average IQ.

        AOL should have been a warning. Our main failure is that we ignored it. We let the masses in. We have nobody to blame but ourselves.

        • Our main failure is that we ignored it. We let the masses in. We have nobody to blame but ourselves.

          I blame ourselves and thank ourselves every day. I'm beginning to wonder if you're in fact an AI yourself, "hallucinating" a rosy past with such a narrow point of view that you think it was better.

          The internet was ... more accurate back then, but not even remotely as useful. Letting the masses in undoubtedly changed the entire world for the better, regardless of what you think when you forget to take your meds.

          • The internet was ... more accurate back then, but not even remotely as useful. Letting the masses in undoubtably changed the entire world for the better, regardless of what you think when you forget to take your meds.

            What's better about the world today because of the Internet? I would say free information, but students are still dropping an easy G on textbooks. Perhaps social media has had positive effects on the mental health of the masses... oops... apparently just the opposite.

            I know being online is better than mindlessly watching the idiot box! It will promote thinking and raise the standard for discou... oh never mind.

            Free international calling? I guess that's mostly true.

            Gaming! Because playing with AIM bottin

            • This is the same sort of pearl clutching commentary that was made back in the 1950s when television was rapidly replacing radio in the home. It holds no more relevance now than it did back then.

              The Internet is a tool. As with any tool, how it's used is dependent entirely on the user. Yes, it does get used for things that are of no benefit to society and, in some cases, to the detriment of society. But it also gets used for the good of society as well. That you are unable to see the good side of it does not

              • This is the same sort of pearl clutching commentary that was made back in the 1950s when television was rapidly replacing radio in the home. It holds no more relevance now than it did back then.

                Best to stick to the facts rather than rely on unfalsifiable purely inductive arguments.

                The Internet is a tool. As with any tool, how it's used is dependent entirely on the user. Yes, it does get used for things that are of no benefit to society and, in some cases, to the detriment of society. But it also gets used for the good of society as well.

                I don't subscribe to the notion tools are inherently neutral. Capability enabled by the presence of a tool results in attractors that influence the world affecting the balance and distribution of power and the operation of society. This influence is measurable and to some extent predictable.

                That you are unable to see the good side of it does not indicate that the tool itself shouldn't exist.

                I posed an open question "What's better about the world today because of the Internet?" you could answer it instead of putting wor

    • Who thought that? I was online via telnet in 1991 and I don't remember anybody drawing your conclusions about the future. I don't recall anybody describing a factual utopia of unassailable truth. Ever.

      I do, however, remember lots of spooked people who didn't much care for the direction this would lead.

    • I'd expect large chunks of unmaintainable code, especially if the AI that created the code goes out of business or no maintenance is needed for several AI generations.

    • I'm not sure why programmers would be surprised at systems degrading over time, that's kind of their domain! Anything that goes mainstream is going to suffer from higher entropy, including actively negative usage that impacts other users.
  • Of course, having AI write your code might mean sharing your code/ideas with the company that provides the AI service.
  • by Anonymous Coward on Sunday July 23, 2023 @07:55AM (#63708470)

    I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.

    ChatGPT and its ilk are not AI; they are merely predictive text generators - more sophisticated, certainly, but not much different than the spellcheck/suggestions on your mobile phone. They can generate code, given specifications, but the quality of the code is generally abysmal. Even if the generated code compiles without error (and that's certainly a rare case), it is frequently full of logical errors that will generate incorrect results or just explode at runtime. ChatGPT is much like an outsourced developer - you'll spend most of your time reviewing and fixing the garbage code that was returned to you.

    Is it good enough to be generally useful? No, not currently. In five years, or ten years? Maybe. In the meantime only the very worst programmers have to worry about losing their jobs to AI.

    • by Entrope ( 68843 )

      Indeed. https://www.johndcook.com/blog... [johndcook.com] (along with its predecessor post on ChatGPT and Bard) shows how badly these systems fail on anything that requires logic. The cases on Slashdot where an LLM hallucinated court cases are other examples, as is the case where an LLM defamed a professor who shared a first and last (but not middle) name with a criminal. Code generation is not different in substance from those kinds of tasks.

      • > LLM hallucinated court cases

        the more I think it over, the more I doubt that it concocted court cases.

        Rather, I suspect that it was a complete failure to recognize, perhaps even a complete inability to do so, that the cases cited were real, unchangeable items which it had to rely upon.

        Instead, I think it simply took that as a kind of text, and blithely generated.

        And this takes us back to not being a type of intelligence, but rather predictive text.

        [you also have the problem that it takes a *complete* id

    • by FudRucker ( 866063 ) on Sunday July 23, 2023 @08:57AM (#63708562)
    yup, AI & ChatGPT have been a big pump & dump scheme to get investors and venture capitalists with deep pockets interested
      • by PDXNerd ( 654900 )
        AI is a marketing term to explain generative text and predictive text to non-technical people, much like Hacker was re-used and ab-used to describe someone who broke into computer systems in the 80s.

        I think you underestimate the power of generative text. It is NOT AI. When it's used as a tool to generate text, it's amazingly powerful. It can save a ton of time getting a framework and outline in place prior to you putting the meat on the bones.

        I also think the proper way to use this tool will be akin to
        • by kmoser ( 1469707 )
          Harping over the fact that LLMs are not technically AI is beside the point. Once we do have actual AI, you can be sure that the first generation will be just as crappy as today's LLMs (maybe even worse). Anything that is trained to think like a human, e.g. "true" AI, will always contain human-like biases. Heck, even if we manage to eliminate human-like biases from true AI (whatever that means), you will find other biases remain. There is no perfect tool.
        • by cstacy ( 534252 )

          Business is going to be very, very interested in a tool that can more accurately detect things generated by ML, and attorneys even more so.

          Detecting AI-generated text is in the same problem area as putting "guard rails" on ChatGPT to detect incorrect (ie. factually wrong) output.

          If it were possible for a program to look at the output of ChatGPT and do those things, the program would be better than ChatGPT itself. And there would be no need for them, because the AI could do it by itself in the first place.

          What I'm trying to say is: "No, sorry. Can't be done."

    • AI-based computer programming is a problem which needs AGI (artificial general intelligence). If you think programming is all about APIs and while loops, then you don't understand what programmers do. APIs are simply the tools programmers use.

      What programmers actually do is take an ill-conceived description of a problem and then transform that poor description into a series of logical steps needed to solve the actual problem. One of the harder parts of programming is figuring out exactly what problem you are

    • Have you actually used ChatGPT to generate code? I work in a large enterprise codebase covering a large portfolio of applications. While I wouldn't use it to produce unmonitored code for production, it's a game changer in terms of productivity. No more need to Google and review search results for details. I get tailored responses on the use of even obscure APIs. Is it perfect? Certainly not. I regularly get responses that are sub-optimal or even wrong. However, I can typically spot that and ask
    • Is it abysmal? Really? I've only used it maybe ten times, and I've always had to tweak the output to get exactly what I wanted. But the code structure was generally okay.

      ChatGPT is better at using Java generics than most Java programmers I know.

    • I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.

      ChatGPT and its ilk are not AI; they are merely predictive text generators - more sophisticated, certainly, but not much different than the spellcheck/suggestions on your mobile phone. They can generate code, given specifications, but the quality of the code is generally abysmal. Even if the generated code compiles without error (and that's certainly a rare case), it is frequently full of logical errors that will generate incorrect results or just explode at runtime. ChatGPT is much like an outsourced developer - you'll spend most of your time reviewing and fixing the garbage code that was returned to you.

      Is it good enough to be generally useful? No, not currently. In five years, or ten years? Maybe. In the meantime only the very worst programmers have to worry about losing their jobs to AI.

      I think there are a few ways in which it works well:

      - Integrated assistants like CoPilot give a significantly improved autocomplete.
      - Best results come from clearly describing the code/concept in a comment and letting the assist write out the code.
      - When working with an unfamiliar API it can give you a big head start

      Now, it can still be awful and frustrating for a few reasons:
      - If it was trained on the wrong version of the API it can give you bad results.
      - When trying to integrate with your existing code it c

    • I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.

      Who said AI had to meet a certain arbitrary qualification to be considered AI? What definition of AI have you used to arrive at this conclusion?

      ChatGPT and its ilk are not AI they are merely predictive text generators - more sophisticated, certainly,

      Decision trees and even simple feedback loops have been widely regarded as "AI" for decades. The term "AI" without qualification is extremely nebulous. I find it a bit strange the thing with a remarkable ability to process language, ingest and process complex natural language instructions and carry on discussions on a wide range of topics should not be considered

  • AI may be asked to generate problems, write code to solve them, test this code, and learn from the results. The good thing with code is that either it works or it doesn't.
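The loop that comment describes - generate code, run it against tests, keep only what passes - is easy to sketch. A toy harness (everything below is invented for illustration; `generate_candidates` merely stands in for a code-generating model):

```python
# Sketch of a generate-and-test loop: candidate implementations are executed
# against known test cases, and only a candidate that passes every test is kept.
# generate_candidates() is a stand-in for an AI model; here it is hard-coded.

def generate_candidates():
    # Pretend these came from a code-generating model: one buggy, one correct.
    return [
        "def add(a, b):\n    return a - b",   # buggy candidate
        "def add(a, b):\n    return a + b",   # correct candidate
    ]

def passes_tests(source, tests):
    namespace = {}
    try:
        exec(source, namespace)               # define the candidate function
        return all(namespace["add"](a, b) == want for a, b, want in tests)
    except Exception:
        return False                          # crashing code simply fails

tests = [(1, 2, 3), (0, 0, 0), (-1, 1, 0)]
working = next(c for c in generate_candidates() if passes_tests(c, tests))
```

The point the comment makes is exactly the property this exploits: the tests give an objective pass/fail signal, so no human judgment is needed to discard the bad candidate.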
  • It is entirely useless and anyone using it should be considered a liability and fired.

    I don't understand why it is even a thing.

    • by znrt ( 2424692 )

      think of it as a streamlined version of stackoverflow, which is what a whole lot of junior (and not so junior) programmers have routinely been using already.

      as a sort of sophisticated google search it can be a nifty tool ... as long as you can interpret the results correctly. then again, stackoverflow answers have been used straight away with very little or no understanding too, and made it happily to production. this is just more of the same.

      • think of it as a streamlined version of stackoverflow

        Yep. Definitely better than sitting for half an hour on stackoverflow trying to find useful answers.

    • by fazig ( 2909523 ) on Sunday July 23, 2023 @08:27AM (#63708518)
      It's fairly easy to understand why it "is a thing".
      Investors and shareholders who've got more greed than brains (thus a lot) seeing an opportunity to finally cut out the specialized and costly egg heads in order to maximize profits.

      There's that weird notion among some people that scientists and engineers are just being lazy if they don't easily do what was requested, and that explanations of why it's not feasible or even possible are just excuses.
      • Scarcity drives salaries up and by using tech at or near the bleeding edge, companies are limiting the number of experts they can easily employ... rather than scale back expectations though, bosses are constantly looking for loopholes - how to get something usable as cheap as possible. For a while the magic bullet was outsourcing and now it could be AI. Bosses don't care who does the work as long as it's cheap!
      • by sapgau ( 413511 )
        +1
        "There's that weird notion among some people that scientists and engineers are just being lazy if they don't easily do what was requested, and that explanations of why it's not feasible or even possible are just excuses."
        Oh this is so true.
  • "Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?"

    Where's the "generative leaps" even if they do, and who cares?

    A tool is a tool, if you have it then you have it. Why is there an assumption that we also need "generative leaps"?

    "Where does AI go after it's "perfected itself"?"

    What if it's nowhere? So what?

    "Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that cod

  • A lot of them are outdated or were wrong in the first place.

    Like how to do a gallery picker: you can use like 3 lines of code to get the bytes, OR you can copy like 80 lines of code that still doesn't work with all uri sources. And that's only after you find the proper new way to receive the result for the launched intent.

    So stuff like that will happen, and they're already training the AI with such solutions. Now would it be nice if the relevant docs just had the right way explained and how the permissions in

  • People are lazy.
    Programmers are lazy.
    Great programmers are extra lazy...
    Tools allow programmers to be extra extra lazy.
    AI tools allow programmers to become morons.. or "prompt engineers"

    How we got here:
    Nobody can spell anymore or even use words they WANT to use because....
    spellcheck is easier to use and when it changes the word you WANT to use to another word, it's often easier to just use that word, than it is to correct it.

    So: no need to know how to spell or do grammar (Grammarly, anyone?)

    By definition, a
  • Because it ain't looking like coming even close so far.

    It just regurgitates online examples that, at best, still need a skilled hand to fit a use. No better than existing search engines, really.

    • by Zarhan ( 415465 )

      This. The way I see it, AI tools (Copilot etc.) help in the use case where you would have previously copy-pasted code from stackoverflow. In essence, if you are a junior programmer starting out and just learning tricks (or a mediocre one who never learned anything), the AI might shine in the sense that it provides better "search results" than googling with site:stackoverflow.com.

      For experienced ones, not so much - perhaps as a verifier it might work. For me personally, I've drawn up some pretty complicated S

    • I think the point is to at least discuss this BEFORE it comes true, not afterwards. Even if it takes 50 trillion years, it is worthwhile to discuss. Waiting until after it occurs is the normal human thing to do (reactive instead of proactive), but thought experiments have their purpose.
  • It is a probabilistic language model - for a given prompt it generates the "most likely" text according to its training data. It has no actual understanding of the problem you are solving, or programming in general. It has no common sense and no reasoning abilities. I can only see it being useful for simple discrete problems someone has already solved, or for generating large amounts of boilerplate. But in the former case not only are you reinventing the wheel, you are also not learning anything. In the lat

    • by HiThere ( 15173 )

      It's not that straightforward. It hasn't (AFAIK) been trained to do so, but it *could* consider problems like "extract the BTree module from SQLite and insert it in my code *here*".

      Now that "BTree module" is definitely a discrete module, but it's not all that simple. One can do a remarkably huge amount by composing existing things and then optimizing them. To a large extent that's what program libraries are. It's just that the libraries aren't properly selected to be a "universal set of opcodes".

      This is

  • by mrsam ( 12205 ) on Sunday July 23, 2023 @09:24AM (#63708612) Homepage

    The push and hype for AI-based developing mostly comes from 2nd-rate developers who see AI as their great hope for finally mastering the craft and becoming rock star uberhackers. They also watch Star Trek too much, and are having multiple orgasms at the thought of saying "Computer: write me a node.js web server that implements a shopping cart", and have the code appear instantly before their eyes.

    All the AI tools I've seen are little more than glorified predictive type-ahead tools. I can see how they can save quite a bit of time with rote typing, but that's pretty much it, nothing more beyond a glorified menu. But, guess what: you still have to use your brain to figure out which option to pick. An AI is not going to make the choice for you.

    The remaining AI tools boil down to nothing more than producing fill-in-the-blanks templates. The starting template is nothing special, and nothing that requires a lot of intelligence to write. But it still takes the same amount of intelligence to figure out what goes into all the blank spots.

    So, sorry, all of you who hope that AI will turn you into a supercoder. It's not going to happen. And the few of you who are worried about the AI taking your job: there's nothing to worry about. It should not take a great thinker to conclude that before an AI can surpass a human brain in some measurable way, someone has to actually demonstrate that an AI surpasses a human brain in some measurable way. Where's the evidence?

    (Disclaimer, I also watch Star Trek too much, but I also watched Bill Shatner's SNL skit)

  • Let's posit a couple of things:
    - Libre code is good.
    - Free as in speech doesn't need to be free as in beer.

    As programmers, we want to use any code we want. Code that's shared means that we don't have to reimplement things just because of some copyright or "copyleft". Because we can't do this, a lot of programming time is invested in reinventing the wheel. Sometimes we also need to waste a lot of time just getting some esoteric API to work because there's not enough documentation or help.

    Imagine if we could

  • ChatGPT is being trained on Stackoverflow, not quality code bases that have been annotated to hell and back. The level of effort to simply create a training set out of large Apache Java products, to say nothing of large C/C++ code bases like the Linux kernel, BSDs, KDE, GNOME, LibreOffice, etc. is something no one in their right mind would ever set out to do without having a highly compensated, full time job.

    That's why ChatGPT and such are awesome at generating boilerplate code, but you don't see them going

    • ChatGPT is not good at boilerplate, e.g. having it write a Hadoop program where you have to say the same thing over and over (because of all the generics used, the key and value types for each stage have to be mentioned all over the place, and it's not great at being consistent). What it is good at is annotating code.

      Or, in other words, the data scientists would gesture vaguely towards the wikipedia pages for "semi-supervised learning" and "pseudo-labeling"

      I gave ChatGPT 3.5Turbo a couple Racket macros and i

  • The use of AI is in principle not much different than what one has done before (dinosaur here speaking). When I started programming, I would use core language specifications and lecture notes of classes (profs would actually still write language specifications on the board). There was no internet, just the manuals. I started to program on programmable calculators and in Basic with just the language specifications. Then we used to consult books like "Pascal in 100 examples" or "Kernighan and Pike" and the
  • 1. has a cell phone. We thought work would be reduced; instead it went up, because your boss can call you any time. People can call you all the time and ask you to do stuff.
    2. has a record player. We thought nobody would learn to play the piano anymore.
    3. has a car. We thought we would spend less time on the road and going on trips.
    4. has a calculator. We thought engineering would be a breeze.

    • by HiThere ( 15173 )

      Yeah, we're pretty bad at predicting 2nd and higher order effects. But engineering *IS* a breeze compared to equivalent projects in earlier times. It's just that now we're optimizing a lot more and taking on much more complex tasks.

      Also, currently just about nobody learns to play the piano. A few do, but the percentage is trivial compared to what it was, even though pianos are a lot cheaper and more portable. (I'm not sure what your baseline is, or I'd say "but we didn't predict the rise of the studios"

      • Regarding engineering .. yes we're taking on more complex tasks .. that's what will happen with AI. If we're going to build rockets and biodomes to colonize the solar system we're going to have to.
        Regarding piano .. I don't know the statistics of piano specifically, but I believe the number of people making music overall has increased. Whether it's an instrument or via synths/computers.

        • by HiThere ( 15173 )

          The number of people making music may have increased, but the proportion certainly hasn't. It used to be EXPECTED that everyone would play some instrument or other, though not necessarily that well. This became a lot less true after phonographs became common.

    • by jbengt ( 874751 )

      1. has a cell phone. We thought work will reduce, instead it went up because your boss can call you any time. . . .

      Fortunately, my bosses pretty much know not to call me when I'm off work, with few exceptions.

      2. has a record player. We thought nobody will learn to play the piano anymore.

      I can't speak specifically to pianos, because those are and have been expensive, but I'm sure that far fewer people learn musical instruments than did before phonographs, movies, TV, and the like made entertainment more o

  • Assuming that LLMs continue to propagate and become the basis for the majority of dissemination of information, one logical result is that less new and valuable information will be made available to the public because of the desire to have one's model contain information that's not in others' models. Therefore the only publicly available training data which is not poisoned will be whatever the information-wants-to-be-free crowd makes available. This means that the reputable sources will narrow.

    This is alrea

  • Using chatbots to write simple code that is simply a remix of existing code it was trained on is not an exciting advance. It's just a tool for endlessly regurgitating the code of the past

    The real exciting development will be when some sort of future AI can help us manage complexity by finding security vulnerabilities, unintended interactions, rare edge cases and hidden bugs

    Designing reliable complex systems is hard, really hard, especially when they are too big to fit into a single mind. Managing complexity

    • I would have thought a more interesting use case for AI would be to write comments and analysis into existing code. and to write test cases.

      Maybe even accept written specifications and highlight deficiencies and contradictions

  • What Happens After Every Programmer is Using AI?

    Conversations with rideshare drivers and complaints about food delivery are going to get really pedantic.
  • Markov processes were first used in two domains: text generation and weather prediction. Does it bother you that the typical meteorologist uses Markov methods to predict the weather?
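The text-generation half of that history is easy to demonstrate. A toy first-order Markov chain picks each word from the observed successors of the previous word - not how modern LLMs work, but the same "predict the next token" framing (the corpus and names below are invented for illustration):

```python
import random
from collections import defaultdict

# Toy first-order Markov text generator: learn word -> next-word transitions
# from a tiny corpus, then sample a chain of words from those transitions.
corpus = "the cat sat on the mat and the cat ran".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)      # duplicates preserve the frequencies

def generate(start, length, seed=0):
    random.seed(seed)                  # fixed seed so runs are repeatable
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:                # dead end: word has no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Every emitted word pair is one the model actually saw in training, which is both why the output looks plausible locally and why nothing genuinely new can come out of it.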

  • Ten years ago you could've asked "what if every programmer subcontracted their work to cheap overseas labour?". I expect the answers and outcome would be the same.
  • It costs money to use, at least the coding-specific AIs do. Also, if the AI can't explain to me why the code works, then I don't really want to use it. I don't like putting code into my programs when I don't understand why it works.

  • Commodity programming of endless completely derivative functions stops being a thing? You don't have to pay somebody huge money to modify the logo on a page? Small businesses at the right point in their growth can afford software that works well for them?

    Rather than making progress in making software faster, cheaper, and easier to develop, we have regressed. The idea that a crud application needs a general purpose language is just fucking stupid. When all you have is a hammer everything looks like a nail.

    An

  • It's not as if the error message that software spits out at you is always the root problem vs. a symptom. AI isn't going to know that your adblock is getting in the way of your test script, or that you accidentally installed the 32-bit version of a compiler. Not unless the AI is on your comp..
  • Ex falso quodlibet
  • Hrumph. Been writin' code for decades now. Still use a text editor (jed, np++). Even for the Arduino IDE, I prefer to write outside and just debug inside. Don't like/need tooltips, autofill, or any of the other stuff that interrupts my mental flow. I'll keep track of the braces and parens for myself, thank you. We had to learn positions and parens coding RPG and Lisp. Grateful we don't have to do that anymore.

    Back when I started in '81 in industry I was coding bal360 on paper forms. These would go to a Data

  • we live in a world where code is free but the book about it is $50. So AI just transfers the income from the book publisher/author to the service.

  • MAD happens. https://futurism.com/ai-traine... [futurism.com] There seems to be some evidence that feedback loops like this will magnify flaws in the learning models, resulting in degraded outputs. It's like inbreeding, sort of.
  • It also reads official documentation and digests it. Presumably, that won't go away. So even if nobody posts on Stack Overflow any more (which is unlikely near term), there will still be plenty of source data for AI to regurgitate.

  • Where does AI go after it's "perfected itself"?

    Given that ChatGPT is actually getting worse, not better [searchenginejournal.com], I wouldn't hold my breath.

    There have been other studies showing that the "quantum leap" that ChatGPT claims only exists when you very carefully pick your statistics parameters; looking at it objectively, it's merely a linear increase brought on by large numbers. Or, in other words, there's no "perfection"; it's simply a matter of throwing money at a problem.

    That also means there's no "perfecting itself". On the contrary - as all these LLMs are fed essent

    • by ledow ( 319597 )

      All AI plateaus.

      It's been the same since the early days, back in the 60's where it was mostly just ideas.

      There's always an assumption that it got better the first day, so it must get better every day, and it's simply not true. There's also always an assumption that when it was on one computer it was okay, two computers made it slightly better, so 10,000,000 computers must make it a genius. Also not true.

      We are missing a critical element for AI (inference) and rather than seek it out, we think that throwin

  • Existing AI writes abysmal code. However, a solution to this has been proposed for many decades now - 5th-generation programming languages, whereby people write a SPECIFICATION for a program (as opposed to a prompt) and the AI uses the specification to write the code.

    A specification would lead to superior code, at least in theory, and would guarantee an actual match between the "prompt" and the code generated, at least in theory.

    This would seem to be the correct approach to AI-generated code. And I do see th
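One way to read the specification idea in today's terms is to express the spec as an executable check and accept generated code only if it satisfies it. A minimal sketch (the `spec_sorted` property and both candidates are invented for illustration):

```python
# Sketch: a specification expressed as an executable check. A "generated"
# program is accepted only when it satisfies the spec, which guarantees the
# match between specification and code that a free-form prompt cannot.

def spec_sorted(fn, samples):
    """Spec for a sorting function: output equals the sorted input."""
    return all(fn(list(xs)) == sorted(xs) for xs in samples)

samples = [[3, 1, 2], [], [5, 5, 1], [2]]

# Two stand-ins for AI-generated candidates: one wrong, one right.
candidate_bad = lambda xs: xs             # returns input unchanged
candidate_good = lambda xs: sorted(xs)    # meets the spec

accepted = candidate_good if spec_sorted(candidate_good, samples) else None
```

The spec only guarantees behavior on the sampled inputs, of course; a full 5GL-style specification would need to cover the whole input domain, which is exactly the hard part.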

  • Handcrafted, organic, mono-type, getters and setters for any need. Syntactically complete functions, hewn from only the finest ionic pulses with fully descriptive long-form variables, each one declared and instantiated to your most exacting need and purpose. Macros and headers all drawn from real synapses and responsive muscle memory. Level up from achieving syntax sugar zen and go assembly! Delve deep into the soul of the box, let us add your bits and move data into your registers, and then out again, and
  • What happens when everybody travels via flight?
