
 



Programming

Samsung Software Engineers Busted For Pasting Proprietary Code Into ChatGPT (pcmag.com) 65

Multiple employees of Samsung's Korea-based semiconductor business plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot's future responses to other people around the world. PCMag reports: One employee copied buggy source code from a semiconductor database into the chatbot and asked it to identify a fix, according to The Economist Korea. Another employee did the same for a different piece of equipment, requesting "code optimization" from ChatGPT. After a third employee asked the AI model to summarize meeting notes, Samsung executives stepped in. The company limited each employee's prompt to ChatGPT to 1,024 bytes.
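The 1,024-byte cap described above is a byte limit rather than a character limit, so Korean text (three bytes per Hangul syllable in UTF-8) hits it after roughly 340 characters. A minimal sketch of such a check (the function name is hypothetical, not from any Samsung tooling):

```python
MAX_PROMPT_BYTES = 1024  # the cap Samsung reportedly imposed

def within_limit(prompt: str) -> bool:
    """Check the UTF-8 byte length of a prompt, not its character count."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

# 1024 ASCII characters fit exactly; 400 Hangul syllables (1200 bytes) do not.
assert within_limit("a" * 1024)
assert not within_limit("a" * 1025)
assert not within_limit("\uac00" * 400)
```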

Just three weeks earlier, Samsung had lifted its ban on employees using ChatGPT over concerns around this issue. After the recent incidents, it's considering re-instating the ban, as well as disciplinary action for the employees, The Economist Korea says. "If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network," reads an internal memo. "As soon as content is entered into ChatGPT, data is transmitted and stored to an external server, making it impossible for the company to retrieve it."

The OpenAI user guide warns users against this behavior: "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." It says the system uses all questions and text submitted to it as training data.

  • Each company an LLM (Score:3, Interesting)

    by Njovich ( 553857 ) on Saturday April 08, 2023 @05:10AM (#63434628)

    After the mining bubble, where miners were buying $2,000 RTX cards like hotcakes, NVidia now gets to enjoy an AI bubble where companies buy $20k cards like hotcakes.

  • by gweihir ( 88907 ) on Saturday April 08, 2023 @05:31AM (#63434654)

    Quite an achievement to be sure.

    • by luvirini ( 753157 ) on Saturday April 08, 2023 @05:55AM (#63434664)

      No. If you have a large number of people in a group, have them do things for a few days, and then pick the three who did the most stupid things, you will get some incredibly stupid results.

      It does not necessarily mean that those three people who did the most stupid things in my example are more stupid than normal people (though they might be), as everyone does something stupid from time to time.

      • by gweihir ( 88907 )

        Pasting confidential info into a website? You have to be _very_ stupid to do that.

        • by gbjbaanb ( 229885 ) on Saturday April 08, 2023 @07:34AM (#63434764)

          not at all.

          Remember, the summary says "Samsung allowed chatGPT to be used after deciding this wasn't an issue"

          So the employees used chatGPT as they were allowed by higher ups.

          The higher ups then decided that this was an issue, so they punished the employees for doing what they had been told was OK.

          It's not stupid, it's ignorance. Not everyone knows that ChatGPT keeps your prompts as part of its training data; privacy and datamining are things, and we expect websites (which ChatGPT effectively is) not to steal your private data and use it as if it belongs to them. Expect more lawsuits over this.

          • by gweihir ( 88907 )

            Well, ignorance is certainly a part of it. But the reason for that ignorance really needs to be either stupidity or apathy.

        • by malx ( 7723 )

          Really? Companies allow - even require - their employees to paste confidential information into cloud services like Gmail, Office365 and Salesforce. There’s no reason in principle why generative AI is any different, except OpenAI’s rapacious privacy policy.

          • Well "no reason" if one ignores the differences between the two, but that's too damned nuanced this early in the morning.

          • by gweihir ( 88907 )

            Any sane employee IT security policy will explicitly forbid that and things like Office365 become only allowed with a specific exception.

    • You're going to use every tool you can to get ahead. When a CEO pulls this crap by cheating on regulations for quick short-term profits, they get away with it and get a huge bonus. It's only a problem when the rank-and-file worker breaks the rules to try to make his job easier. When that happens, the hammer comes down hard.
      • by gweihir ( 88907 )

        Actually, it is always a problem, but I agree that we have decidedly too many CEOs that should be behind bars or at the very least get fired immediately and then be made permanently unemployable.

        That said, I recently had the pleasure to give a regulated financial processor a nice big "red" audit finding because they did not require their C-levels to use 2FA when travelling, but everybody else had to do it. They did not even try to argue that one, I wonder why.

  • by sonoronos ( 610381 ) on Saturday April 08, 2023 @05:54AM (#63434662)

    Did OpenAI tell Samsung that the code was pasted into the chat window?

    Did Samsung itself detect that an employee was pasting code into a window?

    • Did OpenAI tell Samsung that the code was pasted into the chat window?

      Did Samsung itself detect that an employee was pasting code into a window?

      It's possible the employees told someone, which then set off the firestorm. They probably thought it was OK since Samsung had lifted the ban, and Samsung probably just assumed employees would know what not to put into ChatGPT.

      • Deletion (ripped from OpenAI documentation):
        "Can you delete my data?"
        "Yes, please follow the data deletion process."

    • by Entrope ( 68843 )

      I would bet money that Samsung itself detected this. It is pretty common for companies to have mandatory web proxies -- direct HTTP and HTTPS traffic outside the company is blocked unless it goes through a proxy that logs both directions of transfer. This protects against going to suspicious or unauthorized web sites, allows for malware scanning, and lets the company monitor for exfiltration of sensitive data.
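A logging proxy of the kind described above can run simple content rules against outbound request bodies. A minimal sketch of that idea (the patterns and function name are made up for illustration; real DLP products ship far richer rule sets):

```python
import re

# Illustrative-only patterns; real deployments use vendor rule sets.
SENSITIVE = [
    re.compile(r"(?i)\bconfidential\b|\bproprietary\b"),
    re.compile(r"(?i)api[_-]?key\s*[:=]"),
]

def flag_outbound(body: str) -> bool:
    """Return True if an outbound request body trips any DLP rule."""
    return any(p.search(body) for p in SENSITIVE)

assert flag_outbound("/* CONFIDENTIAL - Samsung internal */ int main(){}")
assert not flag_outbound("print('hello world')")
```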

  • "We are not able"? Impossible is not a word that judges are going to take seriously.

    If someone enters someone's personal data into a GPT system and it comes out elsewhere or it's all part of some longwinded revenge doxxing troll, and then a lawsuit prevails where a judge orders OpenAI to remove someone's personal data from the system, OpenAI -are- going to do it. They don't get a choice.

    • by ac22 ( 7754550 )

      Do you suppose that Samsung have the time and resources to file a lawsuit in the US every time one of their engineers posts some random chunk of code into a chat window? Would they be happy with the publicity that dozens of lawsuits created?

      And if a huge multinational like Samsung won't do it, who will? Remember, it was Samsung that said "data is transmitted and stored to an external server, making it impossible for the company to retrieve it" - not ChatGPT.

    • and then a lawsuit prevails where a judge orders OpenAI to remove someone's personal data from the system, OpenAI -are- going to do it. They don't get a choice.

      Maybe. The system doesn't actually store someone's personal data, so they can't be expected to remove it. It stores information about how their personal data is similar to other data, but it doesn't retain the data itself.

    • "We are not able"? Impossible is not a word that judges are going to take seriously.

      If someone enters someone's personal data into a GPT system and it comes out elsewhere or it's all part of some longwinded revenge doxxing troll, and then a lawsuit prevails where a judge orders OpenAI to remove someone's personal data from the system, OpenAI -are- going to do it. They don't get a choice.

      While I agree they would make a good faith effort to comply, it may be easier ordered than done. Depending on how the algorithm combines data, much of the information used in the response may not be directly related to the doxxing target. OpenAI could easily remove information such as names, addresses, etc., but it is conceivable the information could be created without actual personal data of the target, based on inferences and public data sources. Directly linked data, such as X Y lives at Z, could be deleted.

    • I think if the training was that direct/verbatim then ChatGPT would have become Tay already.

  • Company lifts ban on employees doing dumb things. Employees proceed to do dumb things.

    You couldn't make this shit up.
  • I don't know how anyone can think pasting any proprietary code into any web service you don't control is a good idea. You don't know their data retention policy or implementation. Did they not understand that web services do not run inside the web browser, and that there are servers on the other end? What kind of software "engineer" doesn't understand this?
    • Maybe it depends on the code and the person writing it. I'm guessing that people asking ChatGPT for code are not good at writing it themselves; TFA mentions finding bugs and optimizing code. Maybe ChatGPT is not the best mentor in this case.

      The last noob I worked with wrote terrible code, and his variable and function names had nothing to do with anything (literally stuff like $thing =). You could paste his 'proprietary' code into ChatGPT, you are not leaking anything or helping anyone.

      • It doesn't matter what the web service is called. It literally doesn't matter that it was ChatGPT in this case.

        You can be using wandbox or godbolt, and you STILL shouldn't paste any company code on there. It's called professional responsibility.
        • Not arguing *for* this kind of activity; you're right, people should not be pasting private code into any browser. I'm just suggesting that it's kind of a noob move and might reveal less than you think.

          I suspect Samsung does some kind of DLP to detect this.

    • "It's right there in the EULA. You did read the EULA in its entirety, right? Just like you do for every web site that you use? Really???? You don't???? What kind of moron..." Frankly, it isn't moronic behavior to use a tool you've been told you can use, especially when most of the other cloud tools that can be used at work have "enterprise" versions that properly isolate and protect a company's data. Unless the access to ChatGPT came with some serious training on data handling, this was a pretty predictable outcome.

      • No, it literally isn't.

        And no, you don't need to read the EULA. You ASSUME that there's nothing that will protect you.

        I write C++ for a living. wandbox and godbolt are the go-to tools of most C++ programmers. I don't paste any company code into wandbox or godbolt, and I don't read their EULA. I don't care what their EULA says. I only ever type new code, or some code from StackOverflow, into them.

        especially when most of the other cloud tools that can be used at work have "enterprise" versions that properly isolate and protect a company's data.

        I literally said, in the FIRST SENTENCE:

        I don't know how anyone can think pasting any proprietary code into any web service you don't control is a good idea.

        Learn to read.

          • Most people are not going to be able to distinguish which services the company does not control from the ones the company does control. It used to be really simple to just say "do not post our proprietary information online". But now it is all online, and without training, it's impossible to expect most people to unravel which is safe or not, so we block things at the firewall and whitelist the systems that are OK. Samsung whitelisted ChatGPT. This result should have been expected even from completely competent employees.

          • by flux ( 5274 )

            > Most people are not going to be able to distinguish which services the company does not control from the ones the company does control.

            I seem to have more faith on the mankind than you do. Companies like Samsung probably don't hire a lot of people below the center of the bell curve. There are various ways to identify if a service is company-provided or not, such as is it company-branded, is it hosted under company domain, does it have company login.

            Not blocked by the proxies? I guess that makes it look

            • > I seem to have more faith on the mankind than you do.

              Heh. Perhaps. My particular faith is backed by a lot of evidence. My company hires really high on the skill level for each particular job, but to maintain security, we've had to go to explicit allow lists for pretty much the whole Internet. Sites are banned by default until review. There's some unexpected loopholes (Stack Overflow being the big one), but mostly it's pretty hard to upload company data to a non-approved site without really being intent on doing so.

  • by Required Snark ( 1702878 ) on Saturday April 08, 2023 @07:33AM (#63434758)
    Any AI that crunches massive amounts of data will inevitably be misused. No set of unenforceable operation rules or promises by corporate spokes-liars will make any impact. It will leak personal data or corporate information, and no technological fix is possible. And it will produce flawed results at random, and these will end up causing real world horror stories.

    The current version of AI has been sold as a magic fix for every problem, and laziness and stupidity do the rest. Technical types are some of the worst offenders because they are true believers who can't imagine that something so high-tech and cool is fundamentally flawed and untrustworthy. You don't think so? Just look at the drivel that permeates every topic here on Slashdot.

  • To think that AI is going to be anything but hijacked by businesses to promote their business is hopelessly naïve.

    A much simpler but similar effect is keyword gaming to make certain the results you want show up first in the search window. Things like ChatGPT will just have the intentional pecuniary or political bias hidden deeply, nowhere the average user can see or understand it.

  • Pretty inevitable (Score:5, Insightful)

    by cascadingstylesheet ( 140919 ) on Saturday April 08, 2023 @07:46AM (#63434776) Journal

    Code monkeys gonna do what code monkeys gonna do.

    "Please debug this for me, and figure out why the include file with the API password isn't getting loaded properly. I'll paste the whole file so you have all the info."

    • Exactly. HR better start requiring in-person coding tests for prospective hires.

      • Hehe. I did that once. It was six stapled pages of intentionally dumbed down and buggy c++ that was peripherally related to the task we were hiring specifically for.

        Some guys got most of it, some made arithmetic mistakes or didn't quite meet the reqs of the prompt, but one dude took a look at it and proudly declared, "This is terrible code. You can get someone at half my rate to write this kind of code!"

        He was an older white dude who I infer from context didn't actually know how to code, but maybe he jetted

        • by tlhIngan ( 30335 )

          Which is odd, since Samsung generally makes good stuff. Oh well.

          Samsung makes good hardware, but generally speaking, the software is of questionable quality.

    • Samsung would be better off setting up their own local version, just like some set up their own private cloud, administered the same way for both.

  • by davide marney ( 231845 ) on Saturday April 08, 2023 @08:16AM (#63434790) Journal

    I've looked more at images and conversations rather than code, but to me the difference between model output and human output is really quite obvious. Model output is all corporate-speak. It's entirely like reading something written by Marketing. I suppose code might be more human-like, given that code is such an intentionally dumbed-down language.

  • So the same crime as putting the code into Microsoft Word or sending it with Gmail?

  • by Visarga ( 1071662 ) on Saturday April 08, 2023 @09:54AM (#63434944)
    I thought "Open"AI promised not to use the chat user data for training anymore.
  • "As soon as content is entered into ChatGPT, data is transmitted and stored to an external server, making it impossible for the company to retrieve it."
  • I wonder if ChatGPT would be able to fix, or at least find proper workaround for the #GSOD problem [youtube.com], since @Samsung @SamsungMobile #Samsung #SamsungMobile have long forgotten what a client is, and won't acknowledge their responsibility.

  • I'm pretty sure a lot of company coders didn't read the disclaimer and went ahead with using ChatGPT for fixing/doing their code. I've never used ChatGPT yet, but I'll bet my boss wouldn't be happy if I shared our code like that.
  • by dsgrntlxmply ( 610492 ) on Saturday April 08, 2023 @06:06PM (#63435728)
    If Samsung leaked any of their Android tablet application UI code into ChatGPT, the result would best be modeled by the movie "Brainstorm", where brain recordings allowed one person's experiences to be replayed into another person. One category of recordings was labeled "Toxic": taken from the brains of psychotics, they would induce insanity when played back.
  • You don't know your changes have fixed the bug.

    I had a programmer on my team once who would get frustrated trying to find the source of a bug. Once, he was writing code that spit out PDF documents. He was trying to fix a problem that sometimes caused the top margin to be 1 inch *above* the top of the page. So he added a line that said "If the margin is -1 inches, make it +1 inch." Bug fixed, right? After all, the problem stopped happening in his test case!

    Nope, the problem was still there, it was just buried.
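The "fix" in the anecdote above special-cases the one bad value seen in testing instead of asking why a negative margin was computed at all, so other bad values still slip through. A sketch of the difference (hypothetical names, not the original code):

```python
def margin_masked(margin: float) -> float:
    """The anecdote's 'fix': patch the single symptom seen in testing."""
    if margin == -1.0:
        margin = 1.0
    return margin

def margin_checked(margin: float) -> float:
    """Treat any negative margin as the bug it is, so the real cause surfaces."""
    if margin < 0:
        raise ValueError(f"negative margin computed: {margin}")
    return margin

assert margin_masked(-1.0) == 1.0    # the test case now "passes"...
assert margin_masked(-0.5) == -0.5   # ...but other bad values sail through
```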

  • How is that going to help anyone?
