Mojo, Bend, and the Rise of AI-First Programming Languages (venturebeat.com)
"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development."
Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities...
At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation...
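To make the static-typing point concrete, here is a plain-Python sketch (not Mojo code; the function name and shapes are made up for illustration). The type hints mark what a statically typed Mojo port would pin down at compile time, letting the compiler specialize the loop and reject type errors early; in CPython they are unchecked.

    # A pure-Python hot loop of the sort a Mojo port would compile.
    # The hints below are unchecked in CPython; a statically typed
    # language enforces them at compile time and specializes on them.
    def saxpy(a: float, x: list[float], y: list[float]) -> list[float]:
        """Return a*x + y elementwise (single-threaded reference)."""
        return [a * xi + yi for xi, yi in zip(x, y)]

    print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
    # -> [12.0, 24.0, 36.0]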
Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network acceleration. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.
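As a rough illustration of what "direct control over the mapping of computations onto the hardware" means, here is a hand-partitioned parallel reduction in plain Python (again, not Mojo; Mojo exposes far finer-grained control than this). The partitioning of work across cores is spelled out explicitly rather than left to a runtime.

    # Hand-mapping a reduction onto CPU cores with an explicit
    # partitioning scheme -- the "direct control" style, sketched
    # in Python using only the standard library.
    from concurrent.futures import ProcessPoolExecutor
    import os

    def partial_sum(bounds: tuple[int, int]) -> int:
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def parallel_sum_of_squares(n: int) -> int:
        workers = os.cpu_count() or 1
        step = -(-n // workers)  # ceiling division: chunk size per core
        chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        assert parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000))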
According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.
Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.
The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion...
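The shape of code Bend claims to parallelize automatically is ordinary divide-and-conquer recursion. Here is a sequential Python rendering of the idea (illustration only; Bend's syntax differs, and its HVM2 runtime is what finds the parallelism):

    # In Bend/HVM2 the two recursive halves below are independent, so
    # the runtime can evaluate them on separate cores or GPU threads
    # with no thread, lock, or atomic annotations in the source.
    # Here they simply run one after the other. Assumes a non-empty list.
    def tree_sum(xs: list[int]) -> int:
        if len(xs) == 1:
            return xs[0]
        mid = len(xs) // 2
        left = tree_sum(xs[:mid])    # independent subproblem
        right = tree_sum(xs[mid:])   # independent subproblem
        return left + right

    print(tree_sum(list(range(1, 101))))  # -> 5050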
The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more.
The future of AI development, and of computing itself, is being reshaped by the languages and tools we create today.
In 2017, Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.
usual BS justification chock-full of lies (Score:4, Insightful)
"the resurgence of AI-first languages signifies..."
Not a resurgence because there has never been any such thing as "AI-first languages".
"... a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs..."
What demands are those and why do they require specialized languages? What are these domain-specific needs and how has AI done without solutions so far?
"... designed from the ground up to address the specific needs of AI development."
Funny how the rest of the summary is about these tools not being "ground up" designs at all. It's just another grift.
Re:usual BS justification chock-full of lies (Score:4, Informative)
Not a resurgence because there has never been any such thing as "AI-first languages".
Prolog, at least, was a specifically AI-related language.
Re:usual BS justification chock-full of lies (Score:5, Informative)
Lisp, not really; it was general-purpose, with a focus on math and high-level abstractions. It quickly became very popular for AI experiments because of this, and because there weren't any other such high-level languages around.
Prolog was indeed created specifically with logical inference in mind, which was one of the two main fronts of AI research at the time (expert systems). The other was neural networks, which couldn't really have benefited much from a special-purpose language at that moment.
Then again, symbolic processing was soon found to be a dead end (for pursuing AI; it did have some industrial application), so Prolog slipped quickly into a niche and into oblivion. Lisp, on the other hand, thanks to its flexibility and high level of abstraction as a general-purpose language, found and spurred many applications (e.g. AutoCAD and Emacs) and implementations, still to this day.
Re: (Score:2)
"the resurgence of AI-first languages signifies..."
Not a resurgence because there has never been any such thing as "AI-first languages".
"...
What about LISP and Prolog? No doubt Logan might have developed along appropriate lines.
Not AI (Score:4, Informative)
Downside: it takes an absurd amount of time to compile.
It's also not designed exclusively for AI; it's just a language that breaks every little expression that can run in parallel down into code that runs on either CPU threads or directly on GPUs (which are massively parallel).
Wal-mart programming language effect (Score:4, Insightful)
The old advice to "buy the new consumer technology only when it's available at Wal-mart" may hold for programming languages as well.
Treat the programming language as a toy until there are dozens, hundreds (?) of jobs listing it as a required primary skill in online job posts. Only consider the programming language for production environments once it gets that many job listings.
As a check: search online job postings for languages that were trendy during 1997-2010, and note how few have more than 5 listings.
It's cool, it's a toy, don't rely on toys for production systems.
Specialised? (Score:2)
I'll bet it can all be compiled down to C code, so doesn't that indirectly make C an AI-specialised language too?
Hype (Score:1)
Programming languages do math and access memory. That's all they do and that's all they need to do.
Re:Hype (Score:4, Insightful)
I think new languages (or at least language features) make sense for new kind of hardware. Plain old C doesn't lend itself very nicely to modern parallel computing hardware such as GPUs, where you don't just loop over vector components. Fortran designers understood this already in the 80s/90s, and added native matrix/vector syntax for SIMD operations.
As for GPUs, we've had GPGPU for about 20 years, and of course the current AI craze relies on these tools already. I don't see anything in the summary that isn't useful for GPU programming in general.
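For readers who don't write Fortran, here is the same contrast sketched in Python/NumPy terms (an illustration, not Fortran 90 syntax): express the operation on whole arrays and let the library dispatch to vectorized kernels, instead of looping over components yourself.

    import numpy as np

    x = np.arange(100_000, dtype=np.float64)

    # C-style: loop over vector components one at a time.
    y_loop = np.empty_like(x)
    for i in range(x.size):
        y_loop[i] = 2.0 * x[i] + 1.0

    # Array-style, in the spirit of Fortran 90's whole-array syntax:
    # one expression over the whole array, SIMD-friendly under the hood.
    y_vec = 2.0 * x + 1.0

    assert np.allclose(y_loop, y_vec)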
Re: (Score:1)
Fortran designers understood this already in the 80s/90s, and added native matrix/vector syntax for SIMD operations.
That is not correct.
At that time FORTRAN compilers recognized the structure of the loop, and from that they constructed SIMD instructions.
No idea if modern FORTRAN has new syntax for SIMD instructions.
Disclaimer: I was a member of a vector-computing research group at KIT from 1987 until ca. 1992.
Re: (Score:2)
Fortran designers understood this already in the 80s/90s, and added native matrix/vector syntax for SIMD operations.
That is not correct.
I first learned Fortran around 2000; it was the Fortran 90 version, and it had the native matrix/vector syntax. I figured the "90" referred to the year 1990, so it would have been under development in the 80s. Wikipedia confirms [wikipedia.org] that it was released in 1991, and it also mentions "Ability to operate on arrays (or array sections) as a whole".
Re: (Score:2)
Fortran 90 became a standard in 1992.
The vector machines I worked on (at that time) did not support Fortran 90.
All programming languages converge to Lisp (Score:3)
Re: (Score:2)
Yeah, just like Algol 68. It introduced a plethora of new ideas that no subsequent language ever dared to touch. And so we ended up with boring languages like C, Pascal and Ada.
Re: (Score:2)
Lisp is just too damned hard to read. The ugly syntax of "real" languages provides visual cues, just like ugly buildings make the best landmarks to help your memory when driving (or will that make any sense to GPS-addicted youngbies?)
Really?!? (Score:2)
Swift for TensorFlow (Score:2)
My understanding is that this project is dead as a doornail; the last commit was over three years ago, so I am surprised that it gets lumped in here.
Hype (Score:2)
Change the channel, Marge.