Programming

Meta Using OpenAI's GPT-4 in Internal Coding Tool Despite Llama Push (fortune.com) 11

Meta is using OpenAI's GPT-4 alongside its own Llama AI model in Metamate, an internal coding assistance tool, Fortune reported Tuesday. The dual-model approach has been in place since early 2024, despite CEO Mark Zuckerberg's public promotion of Llama as a leading AI model.

Metamate, previously known as Code Compose, serves Meta's developers and employees with coding support. The Chan Zuckerberg Initiative, Zuckerberg's philanthropic organization, is separately developing an educational AI tool using OpenAI's technology, with OpenAI CEO Sam Altman joining CZI's AI advisory board.

Comments Filter:
  • Dual-pathing feels like a really clear cut way to compare your product against the competition.
    • A focused comparison would be useful, but I can't see getting two parallel suggestions every time I want to do some little thing and bothering to read the second if the first looks alright.
  • "despite"? Fortune really likes to skewer companies.

    Does this dual support really imply that Meta has no confidence in their own model and that it's not good enough for work? Maybe they want to be able to easily compare the two models' results to see how they're doing.

    • Given Google's recent issues and many other countries imposing fines for not allowing alternatives, any company that isn't ensuring that something else could be substituted for their own implementation is asking for trouble down the road.
  • It makes perfect sense to use a competitor's product along with your own. Comparisons are very useful for development.

  • This is how competition works, and presumably we all benefit in the future.

  • by substance2003 ( 665358 ) on Wednesday December 04, 2024 @02:18PM (#64990925)
The real losers are the coders, and I don't mean in the sense of job loss. I mean in the sense that they will see their competence drop as they no longer retain all manner of coding tricks and obscure APIs to pull the most out of the hardware. I don't care how good AI is at coding; it doesn't have the drive to create on its own, and I suspect that while code quality and legibility will increase, creative ways to program will plummet.

    Maybe someone will tell me I'm wrong about the latter but for the former, it's gonna happen.
    • Couldn't you make a lot of the same arguments about improvements in compilers and programmers losing their understanding of hand-tuning whatever the compiler spit out to optimize it?
  • Llama has some code fine-tunes, but it was never trained specifically for that purpose.
    Even among the open-weight LLMs, there are much better code-oriented LLMs out there, such as DeepSeek and Qwen-2.5-Coder.
    I would not judge them if they used Claude 3.5 Sonnet either. Don't waste the time of $250-$400K engineers by giving them anything but the current best one, a title that shifts rapidly anyway.
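
The side-by-side comparison several commenters describe can be sketched as a thin routing layer that fans one prompt out to multiple backends. This is a minimal illustration, not Metamate's actual design; the backend functions here are hypothetical local stubs standing in for real provider API calls:

```python
# Minimal sketch of a dual-model comparison harness.
# stub_llama and stub_gpt4 are placeholder backends; a real
# tool would call the respective provider APIs instead.

def stub_llama(prompt: str) -> str:
    """Placeholder for a Llama-backed completion call."""
    return f"[llama] suggestion for: {prompt}"

def stub_gpt4(prompt: str) -> str:
    """Placeholder for a GPT-4-backed completion call."""
    return f"[gpt4] suggestion for: {prompt}"

def compare_models(prompt: str, backends: dict) -> dict:
    """Send the same prompt to every backend and collect the
    responses side by side for review or offline evaluation."""
    return {name: call(prompt) for name, call in backends.items()}

# Fan one prompt out to both models and show the answers together.
results = compare_models(
    "write a function that reverses a linked list",
    {"llama": stub_llama, "gpt-4": stub_gpt4},
)
for name, suggestion in results.items():
    print(f"{name}: {suggestion}")
```

Collecting paired responses like this also yields a natural dataset for the offline model-vs-model evaluation the thread speculates about, without forcing users to read two suggestions for every keystroke.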
