
Can a Code-Writing AI Be Good News For Humans? (indianexpress.com) 90

"A.I. Can Now Write Its Own Computer Code," blares a headline in the New York Times, adding "That's Good News for Humans. (Alternate URL here.)

The article begins with this remarkable story about Codex (the OpenAI software underlying GitHub Copilot): As soon as Tom Smith got his hands on Codex — a new artificial intelligence technology that writes its own computer programs — he gave it a job interview. He asked if it could tackle the "coding challenges" that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes? It did both instantly, before completing several other tasks.

"These are problems that would be tough for a lot of humans to solve, myself included, and it would type out the response in two seconds," said Mr. Smith, a seasoned programmer who oversees an A.I. start-up called Gado Images. "It was spooky to watch." Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond a knack for answering canned interview questions. It could even translate from one programming language to another.

Yet after several weeks working with this new technology, Mr. Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computers, by showing them how to write simple pieces of code, almost like a personal tutor.

"This is a tool that can make a coder's life a lot easier," Mr. Smith said.

The article ultimately concludes that Codex "extends what a machine can do, but it is another indication that the technology works best with humans at the controls."

And Greg Brockman, chief technology officer of OpenAI, even tells the Times "AI is not playing out like anyone expected. It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once."
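
To make the interview tasks concrete: both are small enough to sketch in a few lines of Python. The snippet below is only an illustration of the kind of code such a tool emits (assuming plain five-digit U.S. ZIP codes), not Codex's actual output:

    import re

    def dashify(sentence):
        # Replace every space in the sentence with a dash.
        return sentence.replace(" ", "-")

    def is_invalid_zip(zip_code):
        # Flag anything that is not exactly five digits.
        # (Real-world ZIP validation is much harder; see the comments below.)
        return re.fullmatch(r"\d{5}", zip_code) is None

    print(dashify("good news for humans"))  # good-news-for-humans
    print(is_invalid_zip("9021"))           # True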

Comments Filter:
  • The article ultimately concludes that Codex "extends what a machine can do, but it is another indication that the technology works best with humans at the controls."

    Autonomous vehicles.

  • by joe_frisch ( 1366229 ) on Saturday September 11, 2021 @11:46AM (#61785335)
    Can I say: "generate code that will give the phi and theta coordinates of a disk of angular radius R, centered at theta0 and phi0 in spherical coordinates with a resolution of Tn points in theta direction, and Pn points in phi direction?. This is the most recent bit of programming I had to do.

    Or, "create a database with a user friendly gui to track systems composed of the following hardware, firmware and software options for each of the following n subsystems". (the problem here is that by the time I've defined the problem, I've 90% solved it

    If all it can do is fill in code for a few types of problems it already knows how to solve, then it really is just a high-level language with poorly defined syntax (which will surely cause problems when someone doesn't ask for exactly what they want).
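
    As a point of reference, here is one hand-written reading of that first request - a minimal numpy sketch treating theta as the polar angle, not anything an AI produced:

        import numpy as np

        def disk_coords(theta0, phi0, R, Tn, Pn):
            # Sample a Tn x Pn grid around (theta0, phi0) and keep points
            # whose angular separation from the center is <= R (radians).
            theta = np.linspace(theta0 - R, theta0 + R, Tn)
            phi = np.linspace(phi0 - R, phi0 + R, Pn)  # too narrow near the poles
            T, P = np.meshgrid(theta, phi, indexing="ij")
            # Spherical law of cosines for the angular separation.
            cos_d = (np.cos(T) * np.cos(theta0)
                     + np.sin(T) * np.sin(theta0) * np.cos(P - phi0))
            inside = np.arccos(np.clip(cos_d, -1.0, 1.0)) <= R
            return T[inside], P[inside]

    Even this sketch bakes in judgement calls (the phi bounding box, what "resolution" means) that the one-sentence spec leaves open - which is rather the point.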
    • Couldn't Wolfram Alpha [wolframalpha.com] be thought of as a precursor to this system?

      • by Anonymous Coward

        Couldn't Wolfram Alpha be thought of as a precursor to this system?

        How about Stackoverflow? Isn't "the cloud" as magical as AI?

        • Code-writing is intense and laborious. It also often leads to insecure code, because humans are error-prone. Also, because (good) code writing is difficult, it is in short supply. Code-writing AI could theoretically be designed to write better code faster. That would increase the supply, drive down the cost, and increase the quality. If you consider code like any other marketable good or service, more quality, more supply, and lower costs are all better for people overall (save for the guy who used to be able…
          • by Jeremi ( 14640 )

            Code writing AI could theoretically be designed to write better code faster.

            I don't see how. Either you specify the task in 100% full detail, in which case you've just written the code, and all the AI is doing (at best) is recompiling your high-level code into a lower-level language; or you specify only a rough high-level outline of what needs to be done and let the AI fill in all the details, in which case you'll invariably end up with a program that does something like what you want but needs to be modified to match the implicit requirements that you didn't include in the outline…

            • You don't specify the task in full detail. You give the AI results and it comes up with the functions to deliver said results.
              • by Bengie ( 1121981 )
                Make it pass these unit tests? Then all you need is someone who is good at writing unit tests. I could see this as a net win.
                • by dvice ( 6309704 )

                  You need someone who is PERFECT at writing unit tests. If you are missing even one condition in the tests, you will get a broken application that passes the tests.

                  It usually takes me 5 minutes to write code and 2 hours to write tests.
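
                  To make that concrete, here's a toy example (hypothetical, not from TFA): a test suite that looks reasonable, plus an implementation that passes it while still being broken.

                      def is_valid_zip(zip_code):
                          # Buggy: int() accepts a leading "+", so the
                          # five-character non-ZIP "+1234" slips through.
                          try:
                              return 0 <= int(zip_code) <= 99999 and len(zip_code) == 5
                          except ValueError:
                              return False

                      assert is_valid_zip("90210")
                      assert not is_valid_zip("9021")
                      assert not is_valid_zip("hello")
                      # The missing condition: is_valid_zip("+1234") returns True.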

                  • It usually takes me 5 minutes to write code and 2 hours to write tests.

                    I like to test everything, to make sure it works, but this will never be 100%. My field is mainly electronic design, rather than software. I test all my designs at the prototype stage, which can be quite a slog. But I have a theory about how the design works, so what I test is the extremes, according to my theory. Otherwise, I would never get the job done.

                    The basic point is that you can't just write tests completely blind.

                    There was a software testing method years ago, where you inflict your creation on the office…

                  • by Bengie ( 1121981 )
                    I already have this problem with real humans. Few people truly understand their code. It's common for me to get pulled into a project complex enough that a team of people can't figure out a problem, and I take a read through the code only to find more bugs than what was found from code review, QA, and testing put together. And I consider our company above average.
                    • I already have this problem with real humans.
                      Few people truly understand their code.

                      That's why I insist on hiring unreal humans.

                • by Jeremi ( 14640 )

                  Make it pass these unit tests? Then all you need is someone who is good at writing unit tests. I could see this as a net win.

                  The problem is, most non-trivial software would require an infinite number of unit tests to verify the AI's output. For example, for an AI-written Microsoft Word you'd need to test every possible Microsoft Word document to make sure the program handles them all correctly. (With human programmers, you can get away with spot-checking just a few examples, to some extent, because your human programmers [hopefully] understand what the intent of the program is and code accordingly... an AI, on the other hand, does…

            • ...it is handled by going back to the programmer with an explanation of what you really want (vs what you got), and the programmer then sits down and modifies the program to suit, and you repeat the cycle until you're happy.

              That's called the "Boeing Effect".

              That's when you give the customer a finished deliverable built to spec and they say, "Yes, that's what I asked for, but it's not what I wanted."

              • by dvice ( 6309704 )

                It usually takes a day for a customer to write an A4 page of specifications. It then takes me 2 hours to write questions about those specifications. And then it takes 2 weeks for the customer to answer those questions. It then takes about an hour or two for me to write code based on that, and about a week to write tests for it. Then testing engineers spend another week writing their own tests for it. After this comes a few hours of testing, a few hours of bug fixing and test adjustments, and meetings where specifications are…

            • ...Either you specify the task in 100% full detail, in which case you've just written the code, and all the AI is doing (at best) is recompiling your high-level code into a lower-level language; or you specify only a rough high-level outline of what needs to be done and let the AI fill in all the details...

              Probably the most difficult job in writing software is to define the problem you intend to solve. AI does nothing to help with that, as far as I can see. Some customers have a rather vague idea of what they want the magic software to do, so you have to flesh that out with practical proposals, and see if the customer agrees. That does not sound like the kind of thing an AI algorithm can do.

              When it comes to automatic code generation, C++ templates have a mixed reputation. They bloat executables, by generating…

            • Code writing AI could theoretically be designed to write better code faster.

              A real AI system could theoretically solve all of humankind's problems.
              Too bad there is no such thing.

    • I've seen these kinds of "AI" for over thirty years. You have to define the problem well enough and give it some structure and context and then it will "program" the solution. You might as well code it yourself. It will take less time. No one is asking for spaces to be replaced by dashes. That's a one-liner in many languages. Determining if a zip code is valid is also trivial. These are solutions no one is looking for.

      At best these kinds of fake AI attempts will raise the level at which we program, but…

      • You have to define the problem well enough and give it some structure and context and then it will "program" the solution.

        Kind of the role of a software architect. [ncube.com]

          • If you spend a lot of time doing things that have been done before, writing boilerplate code, you are programming wrong. That is what functions are for. Most of what we do should be novel (at least, situation-specific; otherwise you could just buy a package that already does what you need).

            • I think the real goal is to give a set of inputs and outputs, and have the computer write the function as needed. Really though, the issue is the decentralized nature of a lot of what we code. I spend half my programming time integrating APIs.
              • I think the real goal is to give a set of inputs and outputs, and have the computer write the function as needed.

              This part is easy, the hard part is getting the computer to extrapolate correctly. Neural networks are extremely bad at extrapolating because they don't understand what is implied.
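
              A tiny illustration of that failure mode, using a polynomial fit as a stand-in for a learned model (hypothetical code, just to show the shape of the problem):

                  import numpy as np

                  rng = np.random.default_rng(0)
                  x = np.linspace(0, 1, 20)
                  y = 2 * x + rng.normal(scale=0.05, size=x.size)

                  coeffs = np.polyfit(x, y, deg=9)  # badly over-parameterized fit
                  print(np.polyval(coeffs, 0.5))    # close to 1.0, inside the data
                  print(np.polyval(coeffs, 10.0))   # typically absurd, far outside it

              Inside the training range the fit looks fine; outside it, the model has no idea what was implied.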

      • by Bengie ( 1121981 )
        The bar will rise over time, but there is a lot of yak shaving that could benefit from this. Coding is the act of defining a problem in a complete and unambiguous way. Libraries and frameworks have caused the level of abstraction at which we code to go up, but until an AI can identify a problem and implement a solution to it entirely on its own, there will always be a "programmer".
      • by dvice ( 6309704 )

        Validating ZIP codes is REALLY hard. First you need to answer these questions:
        - Which country's ZIP codes are we talking about? Or should it be global?
        - If a ZIP code was terminated last year, should it be considered valid? Should the function tell the user what ZIP code to use now, auto-correct it, or just reject it? Or should we take the date as a parameter and validate based on the date, assuming the user can input historical data.
        - Do we allow special syntaxes that are often used in the local culture, or do we accept…
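
        Sketched as an interface, those questions imply something closer to the following than to a regex (a hypothetical signature, just to make the point):

            from datetime import date

            def validate_zip(code, country="US", as_of=None):
                # Hypothetical interface reflecting the questions above.
                # Only a US five-digit shape check is sketched here; real
                # validation needs per-country, per-date postal data sets.
                if country != "US":
                    raise NotImplementedError("needs per-country postal data")
                ok = len(code) == 5 and code.isdigit()
                return {"valid": ok, "country": country, "as_of": as_of or date.today()}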

      • by Bengie ( 1121981 )
        This could be useful for a one-off throw-away project that needs to do a single simple thing. This happens a lot when supporting large complex systems where something went wrong and you have to clean up the mess.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      I hope you shot whoever gave you that first problem specification, because it's terrible. Spherical coordinates only make sense in three dimensions, but a disk is two dimensional, which means you need orientation information and radius-of-sphere (more likely ellipsoid) to determine the angular coordinates.

      Plus you need to know how the disk is projected or mapped onto the three-dimensional surface, at least if you want to be very precise in your answer.

      Finally, "resolution" doesn't make sense like that. Fo

      • It's a problem *I* need to solve for my project. You are in fact correct that it's not simple to specify EXACTLY what I want in a way that any AI could possibly interpret: I have a radio telescope survey map of the sky in coordinates that are easily translated into theta and phi. (We really are looking at angles; distances are all astronomical.) That map has some input resolution in phi and theta (something like 0.05 degrees). I have a radio-telescope-like device that observes some part of the sky in a few deg…
    • It's important that we understand its output. Otherwise the machine will invent its own language that no human can understand, and then, when the machines start their own little gossipy social networks, watch out!

      • Already happened:

        Jul 31, 2017
        https://www.forbes.com/sites/t... [forbes.com]

        "Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input."

    • Every single computer program ever written has been made with the purpose of affecting the user's mind. This is no different: as long as the process of AI-generated code can be shaped by the mind of someone who understands the end user's mind, human-to-human, it can create something useful. Left to its own devices it will create nonsense.

  • It depends on how it actually works, and also how well.

    • by bussdriver ( 620565 ) on Saturday September 11, 2021 @12:01PM (#61785405)

      Your job may not be replaced, but your productivity will eliminate the need to keep your coworkers. May the best employee survive, until we replace them with a cheaper youngster from anywhere in the world that has internet.

      • by joe_frisch ( 1366229 ) on Saturday September 11, 2021 @12:05PM (#61785431)
        In the short term technology displaces jobs, but in the long term the increased productivity means that the company can afford to hire more workers. At least in idealized economics, each worker is paid the marginal amount that they contribute in productivity - more productivity -> more pay and more workers.

        This seems to work in real life - while the auto industry put a lot of people who used to deal with horses out of work, it created a huge number of new jobs. Same for computers - people whose lives were spent doing sums by hand lost their jobs, but a whole new industry was created.
        • increased productivity means that the company can afford to hire more workers.

          This is known as the Jevons Paradox [wikipedia.org].

          When productivity improvements allow a resource to be used more efficiently, demand for that resource often counter-intuitively goes UP.

          Labor-saving technology usually leads to higher wages rather than unemployment.

          • Yes but what if we consider automation to be the resource in question? Wouldn't demand for automated processes coincidentally increase? This does make perfect sense as, after all, whatever a human can do, a machine will eventually learn to do better.

            Ultimately, the great problem of automation isn't that it increases human productivity. This "new" kind of automation makes human workers completely obsolete. The AI in the article isn't doing the software engineering I am doing. It's writing the code I would hire…

            • If automation develops to the point where it can replace humans in most jobs - then you are correct. So far though it's nowhere near that level, and I would expect automation to create more jobs in building, training and repairing automation than it destroys.

              Imagine it's 1980, when someone might worry that computers can "think" and will soon replace all human jobs that involve thinking. Instead they just created an enormous computer industry that employs a lot of people.
            • Yes but what if we consider automation to be the resource in question?

              The Jevons Paradox is not universally applicable. It is most common when a resource is a bottleneck. In most human endeavors, the primary bottleneck is the availability of skilled labor.

              Wouldn't demand for automated processes coincidentally increase?

              Possibly. Which will increase demand for people who can create and maintain those automated processes. These people are called "programmers".

              This "new" kind of automation makes human workers completely obsolete.

              No. Not at all. That is complete nonsense.

          • But this is not considering more complex models - the workers are not unskilled assembly-line workers, the skills and jobs are not fungible, and so forth. In the real world, we are still unable to measure productivity effectively, especially with software. I've seen too many instances where someone really bad is praised for productivity, because he churns out bad code very fast, or churns out useless and unnecessary code, or gets the features out fast but also has a long stream of bug fixes afterwards.

            I.e.,…

        • during the Industrial Revolution before tech caught up (and WWII blew up enough infrastructure that there were jobs rebuilding).

          So yeah, after 2 generations of poverty and war the survivors will thank us. Doesn't help us now.

          Meanwhile automation killed about 70% of the middle-class jobs since the 80s [businessinsider.com] (strictly speaking it wasn't just automation; process improvement played a role). This is why we've got 57 million people in the gig economy [forbes.com]: they can't find stable, full-time work that pays for rent…
          • I'll ask you the same thing I ask everyone who makes this point: what jobs will replace those ones automated away?

            This machines-replacing-labour process has been going on for centuries. But there were still jobs. They were different jobs: there was not so much demand for nailing iron shoes on horses' hooves, but more demand for gas stations, because mechanised transport replaced horse-drawn transport.

        • Same for computers - people whose lives were spent doing sums by hands, lost their jobs, but a whole new industry was created.

          It is something of a conundrum that there are so many ways to use machines to avoid drudgery, and yet most people are working their butts off.

      • Often they get replaced by 2 or more cheaper workers, who end up being less productive overall; more code checked in, but ultimately it is a churn of fixing their own bugs, making bad designs, being unable to debug customer issues, etc. Good programmers are not merely assembly-line workers. And an "AI" that does this is not helping the matter.

        Now, give me the AI that tells me how to shove 1MB of object code into a 20KB code space, I'll pay attention then. Or if it can analyze a protocol and tell me what's wrong…

  • As long as it also debugs its own code.
  • They can use this as an excuse to lower tech wages. Which is why this story is coming out now.
    • Excuses only work if the results back them up. That's not this story.

      • Not exactly. Corporations have been scaremongering about automation for decades. It impacts policy as well as making workers feel insecure. For example, it's often argued that raising the minimum wage will force firms to automate low-paying jobs. Most politicians need only the emptiest excuses to make policy that favors the rich, because they do it without excuses already.
    • They can use this as an excuse to lower tech wages.

      They don't need an excuse.

      If companies could pay lower wages and still recruit and retain the employees they need, they would already be doing so.

  • by ITRambo ( 1467509 ) on Saturday September 11, 2021 @12:00PM (#61785393)
    Science fiction has led some people to believe that AI does its own thinking. It doesn't. It's a software program that does exactly what a programmer wrote. There is no magic in it. Write code that does a task that is tedious for a human and call it AI. Done.
    • by Anonymous Coward

      Write code that does a task that is tedious for a human and call it AI. Done.

      Children are AI.

    • No, we can make programs that take in data that no human brain can handle, and then have them write software. You can't call the resultant code "exactly what a programmer wrote." There is code produced that way that no human understands.

      • The "AI" is still doing only what it was coded to do. All computers run tasks that are difficult, or impossibly time consuming, for humans to do. It is not thinking on its own when it does the complex tasks. That's its only job.
        • by Kremmy ( 793693 )
          The AI engine is only doing what it is supposed to do, which involves creating a massive web of constraints from arbitrary input and producing a solution. It's a model of a neural network, a literal simulation of a purpose-built brain. The lines are a lot blurrier than you'd like.
        • You miss the point; that was only the first step. What about the program written by a program, from data no human could understand and with code no human understands? No human could write that code; no human understands that code. A human only set things in motion, but the result was a system beyond comprehension.

          • by hazem ( 472289 )

            Reminds me of a passage from Asimov's short story, "The Evitable Conflict":

            [have some people check out the "machines" anyway]
            "No, he said that no human could. He was frank about it He told me, and I hope I understand him properly, that the Machines are a gigantic extrapolation. Thus- A team of mathematicians work several years calculating a positronic brain equipped to do certain similar acts of calculation. Using this brain they make further calculations to create a still more complicated brain, which they use again to make one still more complicated and so on. According to Silver, what we call the Machines are the result of ten such steps."

        • by Jeremi ( 14640 )

          The "AI" is still doing only what it was coded to do.

          Well, yes and no. In modern AI, the AI is coded to learn, and (if successful) it learns how to do something. But at the end of the training process, it can now do something that no programmer ever coded it to do [economist.com].

    • AI systems might not be thinking, OK. But when you say "It's a software program that does exactly what a programmer wrote," that's not a good description of such systems. AI systems can recognize objects after being trained, yet nowhere will you find a line of code written by the programmer that tells it exactly how to recognize those shapes. Then you can re-use the same system to recognize other types of objects. It's quite different from a program that calculates the sum of 2 numbers, for example.
    • It's a software program that does exactly what a programmer wrote.

      The behavior of a DL system is determined far more by the training data than by what the programmer wrote.

      Claiming that an AI's behavior is "just programming" is as silly as claiming that human behavior is "just DNA".
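
      A toy demonstration of that point: the code below is identical for both runs; only the training data differs, and so does the learned behavior (a minimal perceptron sketch, nothing to do with Codex itself).

          import numpy as np

          def train(inputs, labels, epochs=20, lr=0.5):
              # A single neuron trained with the perceptron rule; the last
              # weight is the bias. The code is fixed - only the data varies.
              w = np.zeros(inputs.shape[1] + 1)
              X = np.hstack([inputs, np.ones((len(inputs), 1))])
              for _ in range(epochs):
                  for x, t in zip(X, labels):
                      y = float(w @ x > 0)
                      w += lr * (t - y) * x
              return lambda a, b: float(w @ [a, b, 1.0] > 0)

          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
          learned_or = train(X, np.array([0, 1, 1, 1]))   # trained on OR labels
          learned_and = train(X, np.array([0, 0, 0, 1]))  # trained on AND labels
          print(learned_or(1, 0), learned_and(1, 0))      # 1.0 0.0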

    • Science fiction has led some people to believe that AI does its own thinking. It doesn't. It's a software program that does exactly what a programmer wrote.

      I think the problem is that some people believe that brains are computers, and so all you have to do is create a sufficiently powerful computer and it will be as good as a brain. But this is like saying motor cars are improved horses, because cars go faster and for longer than horses. I am pretty sure, by introspection, that most of what my brain does is not computation. I am currently thinking of a recipe for leek and potato soup, that might involve Stilton cheese, and maybe some butternut squash. Is this…

  • It's a tool. (Score:4, Insightful)

    by Gravis Zero ( 934156 ) on Saturday September 11, 2021 @12:03PM (#61785417)

    Anything that assists in the completion of the program (without actually understanding the goal) is a tool for the programmer. Call it a "code-writing AI" if you like, but it lacks the ability to comprehend the context of the code it spits out, so it's heavy on the 'A' and light on the 'I' in AI. This could be a useful tool, but it could also be a disastrous tool, all depending on how good the programmer using it is. You can give someone a CNC machine which does all the machining for you, but if the craftsman doesn't know its capabilities and limitations, then you could end up with a shoddy product. It's not the tool, it's the craftsman that matters, because the tools just make it easier.

    • If a tool causes a disaster, then the logical conclusion is you are using the wrong tool... but my main point is this: technology is supposed to make things easier by taking a set of decisions out of our lives, be it socially, logically, morally, i don't know(ly)... The problem is we do not use said "ease" of decisions to make better decisions. We instead create paradoxes, failures of logic that point to the technology which obviously failed to "revolutionize" our lives. Think of this: we no longer have to know…
  • by srichard25 ( 221590 ) on Saturday September 11, 2021 @12:26PM (#61785495)

    Writing the code is normally the easy part. The tougher parts are understanding the requirements, digging deeply enough into them to identify all the possible edge cases, and then figuring out the best way to break up the solution so that it's easy for other humans to maintain. Writing code that replaces all the spaces in a sentence with dashes is fairly trivial, which is why we normally just use existing libraries to do that kind of stuff instead of building it from scratch.

    • Right on the money: give the user exactly what he/she asked for, and they go, "yeah, but this is not what I meant...".

      Till AI can figure out what the user means high end coding is safe.

      Routine SQL query jocks pretending to be coders will be out of their jobs. Lower grade imported H1Bs for example. Higher end will survive.

    • by Tom ( 822 )

      My thoughts when I looked at OpenAI Codex:

      Yeah, impressive. When a human has done the hard work of splitting the whole up into easily describable pieces, the AI can turn their verbal description into code.

      That's a nice amount of language processing. But it doesn't even touch what actual software development work is like.

    • Mod parent up!

      Abstraction layers and fully inclusive specs are HARD

  • It's asking the right questions of the person requesting the software solution that is hard.

  • Not for those employed as programmers!
  • Does it solve the age old problem of interpreting human expression? Computer languages are all about allowing humans to translate our thoughts into machine code. Writing that code is helpful but doing a better job understanding humans is even more helpful.

  • Which bathroom is he/she/it going to use?

  • ... with all the ensuing problems

    Including cybersecurity problems - "GitHub Copilot AI Is Generating And Giving Out Functional API Keys" - https://fossbytes.com/github-c... [fossbytes.com]

    Legal and copyright problems - "Analyzing the Legal Implications of GitHub Copilot" - https://fossa.com/blog/analyzi... [fossa.com]

    Code quality problems - "GitHub's Copilot may steer you into dangerous waters about 40% of the time" - https://www.theregister.com/20... [theregister.com] ... and more.

    Github Copilot is a piece of crap and a complete legal minefield - and n…

  • by lamer01 ( 1097759 ) on Saturday September 11, 2021 @03:34PM (#61786081)
    Also, the obligatory: https://en.wikipedia.org/wiki/... [wikipedia.org]
  • 1. Programmer makes analog computer
    2. Programmer codes bits directly into a digital computer
    3. Programmer creates a language
    4. Programmer creates a standard library
    5. Programmer creates AI
    6. Programmer tells AI what to code
    7. Programmer tells the AI what problem to solve
    8. Programmer identifies areas for the AI to improve
    9. Programmer monitors AI
    10. AI is self monitoring

    It's going to take a bit before getting rid of programmers. Phase 10 is a weee bit out. And until phase 10, the output of the AI…

  • The best thing this will be able to do is write an algorithm for me to process some inputs and deliver the output -
    so for example choosing the best sorting algorithm,
    or traversing a network of nodes.

    Yeah, OK, so what? We already have libraries for that.

    The A.I. is not going to be able to set up the whole stack of junk needed: buy the funky DNS name, load-balance it, secure it with my favorite OAuth, pick a proper UI framework, use it to create the human-specific workflows, stand up an API in front o…
  • "Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes? It did both instantly, before completing several other tasks.

    "These are problems that would be tough for a lot of humans to solve, myself included,"

    "tough for a lot of humans to solve..." Seriously? These seem like two very easy problems to solve, especially the first one- a line or two of code would do it in most programming languages.

    Checking zip codes is onl

  • by Tom ( 822 ) on Saturday September 11, 2021 @08:33PM (#61786741) Homepage Journal

    My day job is security, and a good part of it is secure software development (and I co-published a whitepaper on secure AI development recently).

    From that perspective, I don't want any code written by an AI, thank you.

    Human coders are sloppy, they make mistakes, they are often not trained as well as they should be. But I can question them, I can teach them, I can audit their dev process and review their code.

    AI is incomprehensible. The explainable-AI approaches are in their infancy and are likely to be left in the dust by the rapid development of new AI systems.

    From a security perspective, whatever exploits the AI puts in the code won't be found until a creative attacker rolls out his 0-day.

    I would like AI to check code and point out coding issues, to assist the human developers and code reviewers. I'd like AI to help in making judgements, by adding its capability to access vast amounts of data and patterns. I'd like AI to tell me when I write code that there's a library function for that or that my loop can go outside the range or if it thinks I didn't sanitize my input properly.

    I've seen the code-writing AI examples and they're fun to watch - but I most definitely wouldn't want to run a business on whatever they create. There's something to be said for human judgement, accountability and responsibility. The AI doesn't know what's behind it and what depends on it.

  • Entity Framework is an ORM that is supposed to intelligently, automatically "write code" in SQL without the programmer even needing to know SQL. For a lot of trivial CRUD operations, and even some more advanced database programming, it works OK. But when performance is important, SQL engineers can testify that the SQL code written by Entity Framework is _awful_ and often written in a badly-performing manner. Worse, when it does write bad SQL, it's almost impossible for the programmer to change the C# code…

  • Oh it can't just "do that" for you? Hm. Strange.

  • by gweihir ( 88907 )

    Because it does not exist, will not exist any time soon and it is unclear whether it is even possible.

    Please stop the AI bullshit.

    • I am with you in spirit, but we have lost that war.

      Anything that does jobs that we think of as requiring intelligence is now considered AI.

      Maybe this is for the best, because that's a fairly valid way to look at things, even if it isn't what was originally meant by the phrase.

  • What about license? (Score:4, Interesting)

    by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Sunday September 12, 2021 @02:59AM (#61787305)

    So what's the license of the code the AI generates?

    If it was trained with GPL code, does the GPL apply since it was GPL derived? What happens if it was trained with code under incompatible licenses? What happens if it was trained with 3-clause BSD code, requiring the advertisement?

    This is likely going to be a bigger hindrance than anything else, because the last thing anyone wants is to accidentally taint their code.

  • Now just hide malicious code in the ML model, and when any non-programmer asks the AI to write code for a specific problem, the output will include some code added for malicious purposes. Maybe a bit simply stated, but I see possibilities.
  • Take the blue pill, go to sleep, and wake up believing whatever you want to believe.
