Ask Slashdot: What Happens After Every Programmer is Using AI? (infoworld.com) 127
There have been several articles on how programmers can adapt to writing code with AI. But presumably AI companies will then gather more data from how real-world programmers use their tools.
So long-time Slashdot reader ThePub2000 has a question. "Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?" Let's posit a couple of things:
- First, your AI responses are good enough to use.
- Second, because they're good enough to use you no longer need to post publicly about programming questions.
Where does AI go after it's "perfected itself"?
Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that code comes at a price?
Slashdot will have unicode support? (Score:5, Funny)
#$!
Re: (Score:2)
Never! :)
Seriously, I see Slashdot gaining support for UTF-32 because to hell with what everyone else uses.
Very likely, nothing much (Score:5, Insightful)
Remember when the internet was young and we all thought that within a few years, maybe decades, we'd all have access to all the information in the world and it would be awesome? Never again would it be possible to bullshit people into believing lies because they now can easily see just how they're being deceived. We thought that we'd all become accomplished philosophers, because we'd engage in meaningful discussions and the marketplace of ideas would sort out all the bad ones because people would latch onto those that mean progress and reject those they identify as superfluous.
Why, I have to ask, do you think this will be any different? Do you think AI is any better at spotting people trying to fill it with false information, bully it, troll it, and play havoc with its learning model?
Re: (Score:2)
Remember when the internet was young and we all thought that within a few years, maybe decades, we'd all have access to all the information in the world and it would be awesome?
Yes.
Never again would it be possible to bullshit people into believing lies because they now can easily see just how they're being deceived. We thought that we'd all become accomplished philosophers, because we'd engage in meaningful discussions and the marketplace of ideas would sort out all the bad ones because people would latch onto those that mean progress and reject those they identify as superfluous.
I never heard of this claim, but I will accept that some people believed it at the time. You are coming to a conclusion, which may or may not be accurate, from the original statement. We have POTENTIAL ACCESS to all sorts of information. If people are drawn to misinformation, then that is, by definition, LACK of information. I agree that you need some sort of filter to determine the difference, but that is the purpose of this thought experiment. What happens when the AI is smart enough? I suspec
Re:Very likely, nothing much (Score:4, Interesting)
Well, my time with the internet dates back to when it was mostly an academic's toybox. Back in the days before AOL and before the Eternal September. The average internet user had a considerably above average IQ.
AOL should have been a warning. Our main failure is that we ignored it. We let the masses in. We have nobody to blame but ourselves.
Re: (Score:3)
Our main failure is that we ignored it. We let the masses in. We have nobody to blame but ourselves.
I blame ourselves and thank ourselves every day. I'm beginning to wonder if you're in fact an AI yourself, "hallucinating" a rosy past with such a narrow point of view that you think it was better.
The internet was ... more accurate back then, but not even remotely as useful. Letting the masses in undoubtedly changed the entire world for the better, regardless of what you think when you forget to take your meds.
Re: (Score:2)
The internet was ... more accurate back then, but not even remotely as useful. Letting the masses in undoubtedly changed the entire world for the better, regardless of what you think when you forget to take your meds.
What's better about the world today because of the Internet? I would say free Information but students are still dropping an easy G on textbooks. Perhaps social media has had positive effects on the mental health of the masses... oops... apparently just the opposite.
I know being online is better than mindlessly watching the idiot box! It will promote thinking and raise the standard for discou... oh never mind.
Free international calling? I guess that's mostly true.
Gaming! Because playing with AIM bottin
Re: (Score:2)
This is the same sort of pearl clutching commentary that was made back in the 1950s when television was rapidly replacing radio in the home. It holds no more relevance now than it did back then.
The Internet is a tool. As with any tool, how it's used is dependent entirely on the user. Yes, it does get used for things that are of no benefit to society and, in some cases, to the detriment of society. But it also gets used for the good of society as well. That you are unable to see the good side of it does not
Re: (Score:2)
This is the same sort of pearl clutching commentary that was made back in the 1950s when television was rapidly replacing radio in the home. It holds no more relevance now than it did back then.
Best to stick to the facts rather than rely on unfalsifiable purely inductive arguments.
The Internet is a tool. As with any tool, how it's used is dependent entirely on the user. Yes, it does get used for things that are of no benefit to society and, in some cases, to the detriment of society. But it also gets used for the good of society as well.
I don't subscribe to the notion tools are inherently neutral. Capability enabled by the presence of a tool results in attractors that influence the world affecting the balance and distribution of power and the operation of society. This influence is measurable and to some extent predictable.
That you are unable to see the good side of it does not indicate that the tool itself shouldn't exist.
I posed an open question "What's better about the world today because of the Internet?" you could answer it instead of putting wor
Re: (Score:2)
Yes, no, at least not really. We were pompous assholes, sure, but we at least contributed to the general progress.
This here is just a cesspool circling the drain.
Re: Very likely, nothing much (Score:5, Interesting)
I was around the Internet around the same time as you, and I must tell you that most (not all of them, but surprisingly the majority of them) of those people with above-average IQs were the most trollish and pompous asshats I ever met in all my life. So much for the Internet utopia.
I've been on the net since the 1970s (first it was the ARPANET) and we discussed the future when, someday, regular people would have access to what we referred to as the "World Net". We kind of did think it would be more like an information utopia.
That fantasy fully ended when AOL came along.
I was also an AI researcher in the early 1980s, and automated programming was a hot topic. The approach was based on the system actually understanding how the program worked and trying to model what the programmer was thinking. Let's just say that didn't turn out like we thought, either.
Re: (Score:2)
Me too!
Re: (Score:2)
You are projecting human attributes onto ChatGPT that aren't there. ChatGPT is not self-aware enough to be a narcissist.
Re: (Score:2)
Who thought that? I was online via telnet in 1991 and I don't remember anybody drawing your conclusions about the future. I don't recall anybody describing a factual utopia of unassailable truth. Ever.
I do, however, remember lots of spooked people who didn't much care for the direction this would lead.
Re: Very likely, nothing much (Score:2)
I'd expect large chunks of unmaintainable code, especially if the AI that created the code goes out of business or no maintenance is needed for several AI generations.
Re: Very likely, nothing much (Score:2)
Re: (Score:3)
If people can't tell fictional stories apart from real ones, as we can observe quite often, why do you think AI would be able to?
Re: (Score:3, Funny)
Re: (Score:2)
Actually, that shouldn't be sarcasm. ChatGPT was trained to write stuff *like* the stuff it encountered. The specific intention was that it not be the same stuff. (Of course, it's not perfect at that.)
That the AI would always be correct is a very faint hope, but if it were designed to be correct, it would be correct a lot more often. It would (as currently designed) also be a lot worse at creative writing.
Re: (Score:2)
I'm talking more about something that I've noticed sometimes being used in science fiction, where you have some society that has come to absolutely trust their computer by elevating it to some kind of god level of infallibility, with the justification that it was programmed to be infallible and thus everything that it puts out must necessarily be correct. And if there's evidence that contradicts it, then that
Re: (Score:2)
No. I mean it was intentionally designed to NOT say the stuff it had been trained on, but only to say things similar to it. This isn't "fuzzy logic", this is a design choice. It's an attempt to avoid copyright problems AND to seem more creative. But creative means less reliable when you don't have a solid base to test against.
GIGO (Score:2)
Re: (Score:2)
Re: (Score:2)
If your goal is to create an AI that is as stupid as the average human, well, mission accomplished.
I thought we're aiming higher.
Re: Very likely, nothing much (Score:4, Insightful)
You haven't been using the internet for too long, have you?
Or did you stop using the internet back when it was mostly a tool for the academia and only just now started using it again?
If anything, we have the blatant proof at our hands that people not only cannot but mostly don't want to be trained how to avoid disinformation. It's way more comfortable to simply confirm already existing presupposed assumptions and "be right", no matter what bullshit you believe.
Unless you can train AI to not enjoy having its biases confirmed (because that's pretty much the problem with humans here, we actually enjoy learning something, unfortunately we don't care if what we learn is actually true), it will do the same. And that's the thing, you pretty much have to give AI the requirement to "enjoy" (whatever this may mean for AI) learning something new if you want it to stay curious and learning.
You'll end up with the same bullshit believing artificial idiot and you don't even replace the natural idiot, you just pile on.
Re: (Score:2)
Re: (Score:3)
As with humans, it all depends on where you start. If you start out with garbage, they will seek out garbage to confirm their already existing garbage, rejecting factual information as wrong because it contradicts what it already knows as "true".
You can't even show an AI that it is factually wrong because it has no senses.
It has no way to determine whether it is fed bullshit. The best it can do is to take conflicting information and gauge that conflict against other information it has, try to weigh the qual
Re: (Score:2)
If anything, we have the blatant proof at our hands that people not only cannot but mostly don't want to be trained how to avoid disinformation.
The vast majority of people do not have the energy to question bullshit, so whatever vaguely aligns with their views will be sufficient to get them to act as if it were true.
Democracy has been fully slain at this point.
Re: (Score:2)
I remember a report about an early attempt at AI (quite a few years ago, mind you) where the AI they trained was "genuinely" happy when introduced to the janitor of the facility because, according to the AI's standards, he must be a very, very special and invariably very important and interesting person. The reason the AI drew that conclusion was that they trained its knowledge about humans from celebrities and people of the academia, and since the janitor was the first person that didn't match either group
Re: (Score:2)
No, AI can't be trained to avoid disinformation. Do you know why? AI doesn't understand anything.
It's a bunch of impressive mathematical tricks. There is no there there, but we humans are very willing to assume there is.
Re: (Score:2)
I think the main problem with it is that ChatGPT isn't and was never intended to be strictly truthful about things, and the lawyers goofed by assuming it was and not verifying the output. In theory, there's nothing keeping people from training an AI model on case histories and legal codes and the like and making sure it's not outputting fake or invented info.
I think we're still a ways away from being able to trust AI with whole ass briefs, with or without human verification, since if it's basing the brief off
Re: (Score:2)
Re: (Score:2)
In theory, there's nothing keeping people from training an AI model on case histories and legal codes and the like and making sure it's not outputting fake or invented info.
I'm sorry, but no.
You're talking about ChatGPT, and imagining that it has "facts" in it, and theorizing that if it was "trained" only on correct facts, that it would output correct information. That is not how ChatGPT works - that's not what it does, at all. It contains NO FACTS, and there is no way to put facts into it. It will always and forever output "hallucinated" wrong information, in blatant and subtle ways. There is just no way to fix that, because that is the essence of what ChatGPT does.
There are
Re:Very likely, nothing much (Score:5, Informative)
It didn't give accurate summaries of fictional lawsuits, it fabricated everything. Here's an example: it created a citation for "Shaboon v. Egypt Air" complete with case number and selected quotations. There's no such lawsuit, either in reality or in a TV show or movie. If there were, that's all anyone would be talking about: that it can't tell the difference between TV and reality. But that's not what happened. It "hallucinated," as the ML folks call it.
You've got an inaccurate view of what this software is. ChatGPT is a Transformer. BASICALLY, it's a really big neural network with a few thousand inputs. Each input is a "token" (an integer representing a word or part of a word), including a null token. The output is a probability distribution for the next token. Because the input is null-padded, you can pick a likely next word and replace the next null with it. Since only part of the input changed, this can be chained efficiently, and it keeps generating until a special "end of text" token is produced, or until all nulls have been replaced with tokens.
That's the basics. Under the hood are a lot of moving parts. But an important component is a subnetwork that's repeated several times, called an "Attention Head". These subnetworks are responsible for deciding which tokens are "important" (This is called "self-attention" as the model is calling its own "attention" to certain words). This mechanism is how it can get meaningful training with so many inputs: You might give it 1200 words, but it picks out the important ones and predicts based largely on them. This is also how it can make long-distance references to its own generated text. Proper nouns tend to keep attention on themselves. Earlier techniques couldn't do that. The further away a word was, the less important it was to the next.
So, it doesn't know about cases at all. It just knows, e.g., if you ask about SCO v. IBM, that those tokens are ALL important, and then it (hopefully) has been trained on enough descriptions of that case that the probability distribution shakes out to a coherent summary. Now if you ask for relevant case law and it hasn't seen any, it HOPEFULLY will say so. But it's been trained on a lot more cases that exist than on "don't know" refusals, so it can "hallucinate" (note that it now HAS been trained on a lot more refusals, which is annoying because it's now very prone to say things don't exist when they do). It knows the general form is "X v Y", so, absent any training indicating a SPECIFIC value for X and Y would be relevant, you'll just get a baseline distribution where it invents "Shaboon v. Egypt Air" because it knows X should be a last name, and, since it was asked about injuries during air travel, that the defendant would be an airline (and presumably it picked Egypt Air because generation is left-to-right, and it had generated an Arabic surname already). Now here is where self-attention gets really dangerous. Just like it would recognize SCO v. IBM as important in a user query, it will recognize Shaboon v. Egypt Air as important. Now this case doesn't exist, so the pretraining will not do much with that per se, but it's going to focus on those tokens. And, if asked for excerpts, it will generate SOMETHING related to a passenger being injured during air travel. Or it will say it doesn't know. It almost always says it doesn't know or that no such case exists, in large part because after the bad press ClosedAI has been very busy fine-tuning it on "I don't know" responses.
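The autoregressive loop described above can be sketched in a few lines. This is a toy illustration only: the bigram table stands in for the transformer's learned next-token distribution, and every token and probability here is invented for demonstration.

```python
import random

# Hypothetical next-token probabilities, standing in for the model's output layer.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"court": 0.5, "case": 0.5},
    "a": {"case": 1.0},
    "court": {"</s>": 1.0},
    "case": {"</s>": 1.0},
}

def next_token_distribution(context):
    # A real transformer conditions on the whole context via self-attention;
    # this stub only looks at the last token.
    return BIGRAMS[context[-1]]

def generate(max_tokens=10, seed=0):
    random.seed(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        tok = random.choices(choices, weights=weights)[0]
        if tok == "</s>":          # the "end of text" token stops generation
            break
        tokens.append(tok)
    return tokens[1:]

print(generate())
```

The key point the parent makes falls out of this structure: the loop always emits *some* plausible-looking token sequence, whether or not anything in the training data corresponds to it.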
Here's an example of it dealing with fictional cases. I asked it what the case was called in the Boston Legal episode "Guantanamo by the Bay". It said there is no such episode and I likely am thinking of fan fiction. I told it it's real, it's S3E22. It said of course, yes, it's the twenty-second episode of the third season, and is about Alan Shore arguing Denny Crane is not fit to stand trial due to dementia, but there are no case names mentioned. I told it that's wrong (but I didn't elabo
Re: (Score:2)
I agree that "hallucination" is a silly description, but that's the ML jargon, like it or no...
I don't like it, as it's a stupid term, and I refuse to use it. The correct term is "malfunction". If you replace "hallucination" with "malfunction", the stories make a lot more sense. If you replace "hallucinated" with "malfunctioned", the stories also make a lot more sense.
The AI people are a bunch of snake oil salesmen, and they do not deserve the respect of creating terms for the rest of us to adopt. Machine learning is interesting and useful, but AI is neither.
Re: (Score:2)
The correct term is "malfunction". If you replace "hallucination" with "malfunction", the stories make a lot more sense.
"malfunction" implies that something went wrong. You might similarly consider it a malfunction when a car engine produces pollutants from its tailpipe, but most people would disagree -- in both cases, it's behaving as designed, and it's just that the design isn't really what we want it to be.
Re: (Score:2)
"The uh.. artificial person MALFUNCTIONED, and a few death were involved."
Weyland-Yutani coporate sleaze-speak at your service.
Re: (Score:2)
Seems to me like we've programmed the attitude of a teenager into this thing. Always needs to be perceived as right, even when its error is pointed out. Wants to be "liked", and so will make up shit to be "useful" when it just causes problems that straightforwardness would solve. Lies in order to be liked. I'm misanthropomorphizing, but it is very human-like.
Re: (Score:2)
Then again, back then photographic proof did actually mean something. Today I can prove that Donald Trump gives blowjobs to Putin with deepfakes that can't even be debunked anymore, so a picture has become totally worthless.
What is and what is not true has become pretty much meaningless anyway. Everyone just believes what they want and there will be no shortage whatsoever of pictures, texts and videos to prove whatever anyone wants to believe. And even conclusive proof of the opposite is not going to sway p
Downside.. (Score:2)
Re: (Score:3)
They are probably both crazy and right.
Rhetorical question (Score:5, Insightful)
I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.
ChatGPT and its ilk are not AI; they are merely predictive text generators - more sophisticated, certainly, but not much different from the spellcheck/suggestions on your mobile phone. They can generate code, given specifications, but the quality of the code is generally abysmal. Even if the generated code compiles without error (and that's certainly a rare case), it is frequently full of logical errors that will generate incorrect results or just explode at runtime. ChatGPT is much like an outsourced developer - you'll spend most of your time reviewing and fixing the garbage code that was returned to you.
Is it good enough to be generally useful? No, not currently. In five years, or ten years? Maybe. In the meantime only the very worst programmers have to worry about losing their jobs to AI.
Re: (Score:2)
Indeed. https://www.johndcook.com/blog... [johndcook.com] (along with its predecessor post on ChatGPT and Bard) shows how badly these systems fail on anything that requires logic. The cases on Slashdot where a LLM hallucinated court cases are other examples, and the case where an LLM defamed a professor who shared a first and last (but not middle) name with a criminal. Code generation is not different in substance than those kinds of tasks.
lawyer: problem understated (Score:2)
> LLM hallucinated court cases
the more I think it over, the more I doubt that it concocted court cases.
Rather, I suspect that it was a complete failure to recognize, perhaps even a complete inability to do so, that the cases cited were real, unchangeable items which it had to rely upon.
Instead, I think it simply took that as a kind of text, and blithely generated.
And this takes us back to not being a type of intelligence, but rather predictive text.
[you also have the problem that it takes a *complete* id
Re: Rhetorical question (Score:5, Insightful)
Re: (Score:2)
I think you underestimate the power of generative text. It is NOT AI. When it's used as a tool to generate text, it's amazingly powerful. It can save a ton of time getting a framework and outline in place prior to you putting the meat on the bones.
I also think the proper way to use this tool will be akin to
Re: (Score:2)
Re: (Score:3)
Business is going to be very, very interested in a tool that can more accurately detect things generated by ML, and attorneys more so.
Detecting AI-generated text is in the same problem area as putting "guard rails" on ChatGPT to detect incorrect (ie. factually wrong) output.
If it were possible for a program to look at the output of ChatGPT and do those things, the program would be better than ChatGPT itself. And there would be no need for them, because the AI could do it by itself in the first place.
What I'm trying to say is: "No, sorry. Can't be done."
Re: (Score:3)
AI-based computer programming is a problem which needs AGI (artificial general intelligence). If you think programming is all about APIs and while loops, then you don't understand what programmers do. APIs are simply the tools programmers use.
What programmers actually do is take an ill conceived description of a problem and then transform that poor description into a series of logical steps needed to solve the actual problem. One of the harder parts of programming is figuring out exactly what problem you are
Re: Rhetorical question (Score:2)
Re: (Score:3)
Is it abysmal? Really? I've only used it maybe ten times, and I've always had to tweak the output to get exactly what I wanted. But the code structure was generally okay.
ChatGPT is better at using Java generics than most Java programmers I know.
Re: (Score:3)
I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.
ChatGPT and its ilk are not AI; they are merely predictive text generators - more sophisticated, certainly, but not much different from the spellcheck/suggestions on your mobile phone. They can generate code, given specifications, but the quality of the code is generally abysmal. Even if the generated code compiles without error (and that's certainly a rare case), it is frequently full of logical errors that will generate incorrect results or just explode at runtime. ChatGPT is much like an outsourced developer - you'll spend most of your time reviewing and fixing the garbage code that was returned to you.
Is it good enough to be generally useful? No, not currently. In five years, or ten years? Maybe. In the meantime only the very worst programmers have to worry about losing their jobs to AI.
I think there's a few ways in which it works well:
- Integrated assistants like CoPilot give a significantly improved autocomplete.
- Best results come from clearly describing the code/concept in a comment and letting the assist write out the code.
- When working with an unfamiliar API it can give you a big head start
Now, it can still be awful and frustrating for a few reasons:
- If it was trained on the wrong version of the API it can give you bad results.
- When trying to integrate with your existing code it c
Re: (Score:2)
I'm not saying AI will never be good enough to be used like this, but what people are currently calling AI certainly is not.
Who said AI had to meet a certain arbitrary qualification to be considered AI? What definition of AI have you used to arrive at this conclusion?
ChatGPT and its ilk are not AI they are merely predictive text generators - more sophisticated, certainly,
Decision trees and even simple feedback loops have been widely regarded as "AI" for decades. The term "AI" without qualification is extremely nebulous. I find it a bit strange that the thing with a remarkable ability to process language, ingest and process complex natural language instructions, and carry on discussions on a wide range of topics should not be considered
Generate code and test it by itself as we do? (Score:2)
Generative AI for code is useless (Score:2)
It is entirely useless and anyone using it should be considered a liability and fired.
I don't understand why it is even a thing.
Re: (Score:3)
think of it as a streamlined version of stackoverflow, which is what a huge lot of junior (and not so junior) programmers have routinely been using already.
as a sort of sophisticated google search it can be a nifty tool ... as long as you can interpret the results correctly. then again, stackoverflow answers have been used straight away with very little or no understanding too, and made it happily to production. this is just more of the same.
Re: (Score:2)
think of it as a streamlined version of stackoverflow
Yep. Definitely better than sitting for half an hour on stackoverflow trying to find useful answers.
Re:Generative AI for code is useless (Score:5, Insightful)
Investors and shareholders who've got more greed than brains (thus a lot) seeing an opportunity to finally cut out the specialized and costly egg heads in order to maximize profits.
There's that weird notion among some people that scientists and engineers are just being lazy if they don't easily do what was requested, and that explanations of why it's not feasible or even possible are just excuses.
Re: Generative AI for code is useless (Score:2)
Re: (Score:2)
"There's that weird notion among some people that scientists and engineers are just being lazy if they don't easily do what was requested, and that explanations of why it's not feasible or even possible are just excuses."
Oh this is so true.
better question (Score:2)
""Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?"
Where's the "generative leaps" even if they do, and who cares?
A tool is a tool, if you have it then you have it. Why is there an assumption that we also need "generative leaps"?
"Where does AI go after it's "perfected itself"?"
What if it's nowhere? So what?
"Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that cod
have you searched net for some android solutions l (Score:2)
A lot of them are outdated or were wrong in the first place.
Like how to do a gallery picker: you can use like 3 lines of code to get the bytes, OR you can copy like 80 lines of code that still doesn't work with all uri sources. And that's only after you find the proper new way to receive the result for the launched intent.
So stuff like that will happen and they're already training the ai with such solutions.. Now would it be nice if the relevant docs just had the right way explained and how the permissions in
dumber people, dumber programmers (Score:2)
Programmers are lazy.
Great programmers are extra lazy...
Tools allow programmers to be extra extra lazy.
AI tools allow programmers to become morons.. or "prompt engineers"
How we got here:
Nobody can spell anymore or even use words they WANT to use because....
spellcheck is easier to use and when it changes the word you WANT to use to another word, it's often easier to just use that word, than it is to correct it.
So: no need to know how to spell or do grammar (Grammarly, anyone?)
By definition, a
Let me know when the first posit comes true (Score:2)
Because it ain't looking like coming even close so far.
It just regurgitates online examples that, at best, still need a skilled hand to fit a use. No better than existing search engines, really.
Re: (Score:2)
This. The way I see it, AI tools (copilot etc) help in the use case where you would have previously copypasted code from stackoverflow. In essence, you are a junior programmer starting out and are just learning tricks (or a mediocre one who never learned anything), the AI might shine in the sense that it provides better "search results" than googling with site:stackoverflow.com.
For experienced ones, not so much - perhaps as a verifier it might work. For me personally, I've drawn up some pretty complicated S
Re: (Score:2)
"AI" is advanced autocomplete (Score:2)
It is a probabilistic language model - for a given prompt it generates the "most likely" text according to its training data. It has no actual understanding of the problem you are solving, or programming in general. It has no common sense and no reasoning abilities. I can only see it being useful for simple discrete problems someone has already solved, or for generating large amounts of boilerplate. But in the former case not only are you reinventing the wheel, you are also not learning anything. In the lat
Re: (Score:2)
It's not that straightforward. It hasn't (AFAIK) been trained to do so, but it *could* consider problems like "extract the BTree module from SQLite and insert it in my code *here*".
Now that "BTree module" is definitely a discrete module, but it's not all that simple. One can do a remarkably huge amount by composing existing things and then optimizing them. To a large extent that's what program libraries are. It's just that the libraries aren't properly selected to be a "universal set of opcodes".
This is
Re: (Score:2)
It *could* be reliable, if it were trained to be reliable more than to be creative. ChatGPT is unreliable because it's been explicitly trained to be creative and to NOT duplicate things that it's learned exactly. (And, yeah, it's not always successful at that.)
There's a strong tension between "being creative" and "being reliable". Optimize for one and you drastically weaken the other.
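The "creative vs. reliable" tension described above corresponds loosely to the sampling temperature used when decoding from a language model: low temperature concentrates probability on the top choice, high temperature spreads it out. A toy sketch, with made-up logit values purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (more repetitive, "reliable");
    # higher temperature flattens it (more varied, "creative").
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # hypothetical next-token scores
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
print(cold)   # top token dominates
print(hot)    # probabilities much more even
```

Optimizing either end of this dial weakens the other, which is the trade-off the comment points at.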
No, AI will not turn you into an uberhacker (Score:3)
The push and hype for AI-based developing mostly comes from 2nd-rate developers who see AI as their great hope for finally mastering the craft and becoming rock star uberhackers. They also watch Star Trek too much, and are having multiple orgasms at the thought of saying "Computer: write me a node.js web server that implements a shopping cart", and have the code appear instantly before their eyes.
All the AI tools I've seen are little more than glorified predictive type-ahead tools. I can see how they can save quite a bit of time with rote typing, but that's pretty much it: nothing more than a glorified menu. But, guess what: you still have to use your brain to figure out which option to pick. An AI is not going to make the choice for you.
The remaining AI tools boil down to nothing more than producing fill-in-the-blanks templates. The starting template is nothing special, and nothing that requires a lot of intelligence to write. But it still takes human intelligence to figure out what goes into all the blank spots.
So, sorry, all of you who hope that AI will turn you into a supercoder: it's not going to happen. And for the few of you who are worried about the AI taking your job: there's nothing to worry about. It should not take a great thinker to conclude that before an AI can surpass a human brain in some measurable way, someone has to actually demonstrate that it does. Where's the evidence?
(Disclaimer, I also watch Star Trek too much, but I also watched Bill Shatner's SNL skit)
How is that a dystopian world? (Score:2)
Let's posit a couple of things:
- Libre code is good.
- Free as in speech doesn't need to be free as in beer.
As programmers, we want to use any code we want. Code that's shared means that we don't have to reimplement things just because of some copyright or "copyleft". Because we often can't do this, a lot of programming time is invested in reinventing the wheel. Sometimes we also need to waste a lot of time just getting some esoteric API to work because there's not enough documentation or help.
Imagine if we could
It's only impressive to a point (Score:2)
ChatGPT is being trained on Stackoverflow, not quality code bases that have been annotated to hell and back. The level of effort to simply create a training set out of large Apache Java products, to say nothing of large C/C++ code bases like the Linux kernel, BSDs, KDE, GNOME, LibreOffice, etc. is something no one in their right mind would ever set out to do without having a highly compensated, full time job.
That's why ChatGPT and such are awesome at generating boilerplate code, but you don't see them going
Re: (Score:2)
ChatGPT is not good at boilerplate, e.g. having it write a Hadoop program where you have to say the same thing over and over (because of all the generics used, the key and value types for each stage have to be mentioned all over the place, and it's not great at being consistent). What it is good at is annotating code.
Or, in other words, the data scientists would gesture vaguely towards the Wikipedia pages for "semi-supervised learning" and "pseudo-labeling"
I gave ChatGPT 3.5Turbo a couple Racket macros and i
clean slate approach (Score:2)
What happens when everyone.. (Score:2)
1. has a cell phone. We thought work would decrease; instead it went up, because your boss can call you any time. People can call you all the time and ask you to do stuff.
2. has a record player. We thought nobody would learn to play the piano anymore.
3. has a car. We thought we would spend less time on the road and going on trips.
4. has a calculator. We thought engineering would be a breeze.
Re: (Score:2)
Yeah, we're pretty bad at predicting 2nd and higher order effects. But engineering *IS* a breeze compared to equivalent projects in earlier times. It's just that now we're optimizing a lot more and taking on much more complex tasks.
Also, currently just about nobody learns to play the piano. A few do, but the percentage is trivial compared to what it was, even though pianos are a lot cheaper and more portable. (I'm not sure what your baseline is, or I'd say "but we didn't predict the rise of the studios"
Re: (Score:2)
Regarding engineering: yes, we're taking on more complex tasks, and that's what will happen with AI. If we're going to build rockets and biodomes to colonize the solar system, we're going to have to. I don't know the statistics of piano specifically, but I believe the number of people making music overall has increased, whether it's an instrument or via synths/computers.
Regarding piano
Re: (Score:2)
The number of people making music may have increased, but the proportion certainly hasn't. It used to be EXPECTED that everyone would play some instrument or other, though not necessarily that well. This became a lot less true after phonographs became common.
Re: (Score:2)
Fortunately, my bosses pretty much know not to call me when I'm off work, with few exceptions.
I can't speak specifically to pianos, because those are and have been expensive, but I'm sure that far fewer people learn musical instruments than did before phonographs, movies, TV, and the like made entertainment more o
Speculation (Score:2)
Assuming that LLMs continue to propagate and become the basis for the majority of dissemination of information, one logical result is that less new and valuable information will be made available to the public because of the desire to have one's model contain information that's not in others' models. Therefore the only publicly available training data which is not poisoned will be whatever the information-wants-to-be-free crowd makes available. This means that the reputable sources will narrow.
This is alrea
Wrong approach (Score:2)
Using chatbots to write simple code that is simply a remix of existing code it was trained on is not an exciting advance. It's just a tool for endlessly regurgitating the code of the past.
The really exciting development will be when some sort of future AI can help us manage complexity by finding security vulnerabilities, unintended interactions, rare edge cases and hidden bugs.
Designing reliable complex systems is hard, really hard, especially when they are too big to fit into a single mind. Managing complexity
Re: (Score:2)
I would have thought a more interesting use case for AI would be to write comments and analysis into existing code, and to write test cases.
Maybe even accept written specifications and highlight deficiencies and contradictions.
Delivery (Score:2)
Conversations with rideshare drivers and complaints about food delivery are going to get really pedantic.
Re: (Score:2)
What happens when every meteorologist is using AI? (Score:2)
Markov processes were first used in two domains: text generation and weather prediction. Does it bother you that the typical meteorologist uses Markov methods to predict the weather?
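For readers who haven't seen one, a Markov weather model is tiny — the next state depends only on the current state. A sketch with made-up transition probabilities:

```python
import random

# Toy Markov chain in the weather domain: tomorrow depends only on
# today. The transition probabilities here are invented for the demo.
transitions = {
    "sunny": (["sunny", "rainy"], [0.8, 0.2]),
    "rainy": (["sunny", "rainy"], [0.4, 0.6]),
}

def forecast(today, days, seed=42):
    rng = random.Random(seed)
    out, state = [], today
    for _ in range(days):
        states, probs = transitions[state]
        state = rng.choices(states, weights=probs)[0]
        out.append(state)
    return out

print(forecast("sunny", 7))
```

The same machinery, pointed at words instead of weather states, was an early text generator — which is the parallel the comment is drawing.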
This is not new (Score:2)
Probably won't use it (Score:2)
It costs money to use, at least the coding-specific AIs do. Also, if the AI can't explain to me why the code works, then I don't really want to use it. I don't like putting code into my programs when I don't understand why it works.
I don't know... but hopefully... (Score:2)
Commodity programming of endless completely derivative functions stops being a thing? You don't have to pay somebody huge money to modify the logo on a page? Small businesses at the right point in their growth can afford software that works well for them?
Rather than making progress in making software faster, cheaper, and easier to develop, we have regressed. The idea that a CRUD application needs a general purpose language is just fucking stupid. When all you have is a hammer everything looks like a nail.
An
Re: (Score:2)
AI won't replace stackoverflow (Score:2)
From anything false... (Score:2)
I'll do my own thinking, thank you (Score:2)
Hrumph. Been writin' code for decades now. Still use a text editor (jed, np++). Even for the Arduino IDE, I prefer to write outside and just debug inside. Don't like/need tooltips, autofill, or any of the other stuff that interrupts my mental flow. I'll keep track of the braces and parens for myself, thank you. We had to learn positions and parens coding RPG and Lisp. Grateful we don't have to do that anymore.
Back when I started in '81 in industry I was coding bal360 on paper forms. These would go to a Data
Just a change in who makes money off of free code (Score:2)
We live in a world where code is free but the book about it is $50. So AI just transfers the income from the book publisher/author to the AI service.
AI will be trained on AI outputs...and then (Score:2)
AI doesn't just read stack overflow (Score:2)
It also reads official documentation and digests it. Presumably, that won't go away. So even if nobody posts on Stack Overflow any more (which is unlikely near term), there will still be plenty of source data for AI to regurgitate.
"perfected itself"? (Score:2)
Where does AI go after it's "perfected itself"?
Given that ChatGPT is actually getting worse, not better [searchenginejournal.com], I wouldn't hold my breath.
There have been other studies showing that the "quantum leap" that ChatGPT claims only exists when you very carefully pick your statistical parameters; looked at objectively, it's merely a linear increase brought about by large numbers. Or, in other words, there's no "perfection": it's simply a matter of throwing money at a problem.
That also means there's no "perfecting itself". On the contrary - as all these LLMs are fed essent
Re: (Score:2)
All AI plateaus.
It's been the same since the early days, back in the '60s when it was mostly just ideas.
There's always an assumption that it got better the first day, so it must get better every day, and it's simply not true. There's also always an assumption that when it was on one computer it was okay, two computers made it slightly better, so 10,000,000 computers must make it a genius. Also not true.
We are missing a critical element for AI (inference) and rather than seek it out, we think that throwin
5th Generation Programming Languages (Score:2)
Existing AI writes abysmal code. However, a solution to this has been proposed for many decades now: 5th Generation programming languages, whereby people write a SPECIFICATION for a program (as opposed to a prompt) and the AI uses the specification to write the code.
A specification would lead to superior code and would guarantee an actual match between the "prompt" and the generated code, at least in theory.
This would seem to be the correct approach to AI-generated code. And I do see th
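One lightweight reading of "write a specification, not a prompt" is an executable spec: properties the generated code must satisfy, checkable by machine. A sketch, where `candidate_sort` is a hypothetical stand-in for AI-generated code:

```python
# Sketch: a "specification" expressed as executable properties, against
# which any generated implementation can be checked. The spec, not the
# prompt, is the contract.
def candidate_sort(xs):  # hypothetical AI-generated implementation
    return sorted(xs)

def meets_spec(impl, cases):
    for xs in cases:
        out = impl(xs)
        # Property 1: output is in non-decreasing order.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: output is a permutation of the input.
        assert sorted(xs) == sorted(out)
    return True

print(meets_spec(candidate_sort, [[3, 1, 2], [], [5, 5, 1]]))  # True
```

A real 5GL specification would be richer than a few properties, but this is the direction: the human states *what*, and conformance of the generated *how* can be verified rather than taken on faith.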
I'll be taking it to another level (Score:2)
Flight (Score:2)
What happens when everybody travels via flight