Python Programming

'Codon' Compiles Python to Native Machine Code That's Even Faster Than C (mit.edu) 124

Codon is a new "high-performance Python compiler that compiles Python code to native machine code without any runtime overhead," according to its README file on GitHub. Typical speedups over Python are on the order of 10-100x or more, on a single thread. Codon's performance is typically on par with (and sometimes better than) that of C/C++. Unlike Python, Codon supports native multithreading, which can lead to speedups many times higher still.
Its development team includes researchers from MIT's Computer Science and Artificial Intelligence lab, according to this announcement from MIT shared by long-time Slashdot reader Futurepower(R): The compiler lets developers create new domain-specific languages (DSLs) within Python — which is typically orders of magnitude slower than languages like C or C++ — while still getting the performance benefits of those other languages. "We realized that people don't necessarily want to learn a new language, or a new tool, especially those who are nontechnical. So we thought, let's take Python syntax, semantics, and libraries and incorporate them into a new system built from the ground up," says Ariya Shajii SM '18, PhD '21, lead author on a new paper about the team's new system, Codon. "The user simply writes Python like they're used to, without having to worry about data types or performance, which we handle automatically — and the result is that their code runs 10 to 100 times faster than regular Python. Codon is already being used commercially in fields like quantitative finance, bioinformatics, and deep learning."

The team put Codon through some rigorous testing, and it punched above its weight. Specifically, they took roughly 10 commonly used genomics applications written in Python and compiled them using Codon, and achieved five to 10 times speedups over the original hand-optimized implementations.... The Codon platform also has a parallel backend that lets users write Python code that can be explicitly compiled for GPUs or multiple cores, tasks which have traditionally required low-level programming expertise.... Part of the innovation with Codon is that the tool does type checking before running the program. That lets the compiler convert the code to native machine code, which avoids all of the overhead that Python has in dealing with data types at runtime.
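The type-checking point above is easy to see in a small example. The following is plain Python, sketched here as a hypothetical benchmark: because every variable's type can be inferred statically, an ahead-of-time compiler like Codon can lower it to native code without per-operation runtime type dispatch (the README describes compiling files with a `codon build`-style CLI; treat the exact invocation as an assumption).

```python
# Plain Python with statically inferable types: `n`, `a`, and `b` are always
# ints, so no runtime type checks are needed once the code is compiled.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    print(fib(10))  # prints 55
```

The same file runs unchanged under CPython, which is the pitch: no rewrite, just a different compiler.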

"Python is the language of choice for domain experts that are not programming experts. If they write a program that gets popular, and many people start using it and run larger and larger datasets, then the lack of performance of Python becomes a critical barrier to success," says Saman Amarasinghe, MIT professor of electrical engineering and computer science and CSAIL principal investigator. "Instead of needing to rewrite the program using a C-implemented library like NumPy or totally rewrite in a language like C, Codon can use the same Python implementation and give the same performance you'll get by rewriting in C. Thus, I believe Codon is the easiest path forward for successful Python applications that have hit a limit due to lack of performance."

The other piece of the puzzle is the optimizations in the compiler. The genomics plugin, for example, performs its own set of optimizations specific to that computing domain, which involves working with genomic sequences and other biological data. The result is an executable file that runs at the speed of C or C++, or even faster once domain-specific optimizations are applied.

This discussion has been archived. No new comments can be posted.

  • Quant Finance (Score:5, Interesting)

    by igreaterthanu ( 1942456 ) on Sunday March 19, 2023 @12:05AM (#63381737)
I work in the HFT world, and at my previous employer we had a system basically just like this, one that took Python code using numpy, etc. and compiled it to heavily optimized native binaries that could be called from C++. We had it years ago. Of course stuff like this is a competitive advantage, so obviously it's not published.
    • That's it, that's all the HFT stuff is?
      Here I thought I'd be impressed. Should've hired gamedevs, who measure optimization times in microseconds. The original Last of Us was partially hand coded in assembly just to get it to run. Somehow I pictured the HFT guys as being equivalent because, surely, right?
      • > Should've hired gamedevs, who measure optimization times in microseconds.

Many of the Linux drivers were fixed or tuned by HFT firms to get faster order execution. At 10Gbps we're into nanoseconds, if my tired math is right.

        We all benefit from that work.

      • by tippen ( 704534 )

        That's it, that's all the HFT stuff is?

        Lol, no. Even years ago, serious HFT performance work moved down into FPGAs to do hardware acceleration.

      • Re:Quant Finance (Score:4, Insightful)

        by igreaterthanu ( 1942456 ) on Monday March 20, 2023 @12:03AM (#63384043)
Typically there's a heavily optimized model written in Python but compiled into native code that generates prices, but even with all the optimization it's too slow to be used for trading directly; e.g. it might come up with a price every 50-2000ms depending on the model. The model also produces "greeks", which are just derivatives, in the mathematical sense, of price with respect to another factor. It then emits this price plus the greeks to an FPGA as often as it can generate them, and the FPGA uses math no more complex than linear regression to compute a price to buy low and sell high at, until the model is updated. There's a lot of complexity in the FPGA, but all the magic happens in the pricing engine; they have some crazy smart people coming up with new ways to predict what will happen next.
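A toy sketch of the linear step described above (all names and numbers are hypothetical, not from any real trading system): the slow model emits a base price plus greeks, and the fast path extrapolates linearly from observed factor moves until the next model update.

```python
# Hypothetical illustration: the slow pricing model emits a base price and
# greeks (first derivatives of price w.r.t. market factors); between model
# updates, a fast path extrapolates the price linearly from observed moves.
def extrapolate_price(base_price, greeks, factor_moves):
    """greeks and factor_moves are dicts keyed by factor name."""
    return base_price + sum(g * factor_moves[f] for f, g in greeks.items())

# delta 0.5 against a +2.0 underlying move, vega 0.1 against a +0.3 vol move
price = extrapolate_price(100.0, {"delta": 0.5, "vega": 0.1},
                          {"delta": 2.0, "vega": 0.3})
print(round(price, 2))  # prints 101.03
```

The point of the split is that this linear update is trivial enough to run in an FPGA, while the expensive part (producing the base price and greeks) stays in software.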
  • no (Score:5, Funny)

    by Anonymouse Cowtard ( 6211666 ) on Sunday March 19, 2023 @12:19AM (#63381745) Homepage
    I'm waiting for the peer review. Nothing is faster than c.
    • I wonder if someone wrote a compiled BASIC if it would run faster than the interpreted version? /s

      • VMS had a compiled basic. It worked great.

        • There are lots of compiled basics. I even remember one designed for numerical analysis.
There were even things like the Hauppauge hardware/libraries for the old ISA machines.

        • I think you're missing his point, which is that of course a compiled version is going to run faster.

      • Yes. I vaguely remember one for the Apple ][ although I can't remember the name.

        I also remember playing Akalabeth and digging into the code. It was a combination of Integer BASIC and assembly. It was the first complex code I ever saw and I wonder what I would think of it now if I found a copy. Was Lord British a genius or a mad adventure game software developer?

        Whatever he is it was one factor in my desire to pursue a career in IT.

        • There were many BASICs for the Apple 2, some compiled, some not. In alphabetical order:

          * AppleSoft
          * Beagle Compiler
          * Blankenship BASIC
          * CBASIC (CP/M)
          * Hayden Basic Compiler
          * MD-BASIC (Morgan Davis)
          * Micol Advanced BASIC
          * Microsoft BASIC
          * TASC [microsoft.com] (Microsoft's The Applesoft Compiler)
          * ZBASIC Compiler

          > Was Lord British a genius or a mad adventure game software developer?

          Both. His Ultima series had a huge influence on Western RPG game design. Sadly lately he has turned into a grifter [reddit.com] with Shroud of the Avatar

I used TurboBasic back in the day; it was a BASIC compiler and got pretty good performance. The cool thing about it was that you could also do inline assembler for when you really needed to tweak some speed. It was with TurboBasic that I learned that data types really matter: going from the default (undefined) numeric type, which was a single float, to defined integers for some graphics code enhanced performance by about 2000% or so.
Request: that Codon be a drop-in replacement.
I like being able to debug inline.
Converting my solution to a Python-callable binary would be icing on the cake.

      • by hawk ( 1151 )

        it was done on various systems in the 80s, and possibly before.

Someone had an AppleSoft compiler, and Microsoft itself had an almost-compatible compiler for MBASIC.

        CP/M had CBAS2 which was not, iirc, compatible with anything else but the most generic BASIC.

        TOPS-20 (and I presume -10) compiled before executing, even in the interactive mode.

        On older BASIC implementations, just not having to scan through the lines on every GOTO or GOSUB was an easy performance gain. I think that MBASIC 5 started sticking refer

    • by Dwedit ( 232252 )

      Straight assembly doesn't have the overhead of conforming to someone's ABI. You have the freedom to have functions share registers directly.

      But once you call C code, you're back to following the ABI.

      • by drnb ( 2434720 )

        Straight assembly doesn't have the overhead of conforming to someone's ABI. You have the freedom to have functions share registers directly.

        But once you call C code, you're back to following the ABI.

Whether it is C calling assembly or assembly calling C, only the assembly at the interface needs to comply with the ABI; your own assembly is still free to interact however you want.

    • FORTRAN

    • Traditionally fortran is considered to be faster than C. As in all things though, it really depends on the application, the programmer, and whichever language is able to best express the particular problem being solved.

      That said, it wouldn't be that surprising if it was faster than C for some very specific things. The name codon in particular hints to me that the compiler maintainer probably have applications specific to biology in mind, so it might be best at optimizing around problems specific to biology,

I read through the summary, and from what I could glean, their claim is that they have "domain-specific optimizations", which strikes me as talking about specific libraries being very specifically optimized, maybe even talking about utilizing GPU or other highly parallel code paths, etc. So in that *very* specific case, you might be able to claim "faster than C", but that's not a very apples-to-apples comparison, really.

        It really makes zero sense for them to claim "faster than C" in the general sense, beca

        • Re: no (Score:4, Informative)

          by real_nickname ( 6922224 ) on Sunday March 19, 2023 @03:12AM (#63381883)

          that's claiming to beat out modern C/C++ compilers and back-ends, which have decades and decades of collective optimization work going into them.

They use LLVM, so their native code generation has the same level of optimization as Clang, for example. From the FAQ:

          Codon can sometimes generate better code than C/C++ compilers for a variety of reasons, such as better container implementations, the fact that Codon does not use object files and inlines all library code, or Codon-specific compiler optimizations that are not performed with C or C++.

C++ has countless container implementations (they're comparing to the STL here, I guess), linkers can also do global optimizations between object files, and the Codon-specific parts (i.e. GPU, OpenMP) are irrelevant too (many C/C++ tools allow the same kind of semi-automatic vectorization).

          • They use LLVM so their native code generation has the same level of optimizations than clang for example.

            Thanks, I read the summary but missed that. Sort of an important point, as LLVM leverages a lot of existing micro-optimization work. Also, STL containers are famously not optimal. They're decent in the general case, but it's not hard to out-perform them.

            Reading more, the headlines are a lot more click-bait-ish (why am I not surprised). They tend to claim more modest speedups in general, and only claim "faster than C" in some very specific cases. The summary and headline, of course, make it sound like t

    • Re:no (Score:5, Informative)

      by v1 ( 525388 ) on Sunday March 19, 2023 @01:12AM (#63381785) Homepage Journal

      I'm waiting for the peer review. Nothing is faster than c

      Assembly can be faster than C. HOW much faster is entirely dependent on the compiler, and the structure of the C. If you don't know what you're doing, it's possible to write C that the compiler can't optimize well and will run substantially slower than assembly, but usually the C compiler does at least a pretty good job.

I've written a lot of assembly in my time, and back then there were no optimizing compilers. 100% of the optimization was done by the programmer. And as a result, assembly can be screaming fast (and mind-bogglingly efficient on program size and RAM required) even on slow hardware. When you're "programming with sticks and rocks" as I used to say, you can squeeze out every last unnecessary CPU cycle. (while also making use of pretty much every bit of memory) There's a reason old programs were measured in kilobytes and RAM was measured in megabytes.

      So this all depends on the quality of the compiler. MOST other languages nowadays compile to C, and let the (very old, VERY well optimized) C compiler generate the assembly for final compilation to machine code. IN THEORY this means they themselves don't have to do much optimization, they leave it to the C compiler to clean up the mess they make. So, eliminating that go-between has the potential to be faster, although I'm still a bit skeptical. Those C compilers have been around for so long, and have been tweaked so heavily, it's a tall order to make something that compiles directly to assembly that can match their optimization. The only way you're going to pull that off is if you know the source language and can optimize from one step back, leading to what could be more efficient and faster assembly. But it's a lot of work to get there, and you're up against decades of work that's been put into that C compiler. Not saying that you can't win, but the C compiler definitely has the home field advantage over you.

      I also don't buy that "we're optimizing for the specific hardware" benefit. You can already do exactly that with the correct compiler switches, and any good IDE will offer you the option to optimize those switches as it hands it over to the C compiler to generate your object code. Your IDE should have pages of switches and options you can tweak if you know your target platform and are willing to generate object that will only run on the exact platform you intend to use.

      tl;dr: I don't agree with you, but I also don't agree with them :P

      • by vyvepe ( 809573 )

        MOST other languages nowadays compile to C, and let the (very old, VERY well optimized) C compiler generate the assembly for final compilation to machine code.

I doubt that is still so.
Nowadays code is compiled to some kind of intermediate language that is specific to a compiler suite.

        E.g. LLVM frontends compile to LLVM IR (intermediate representation) which is a kind of strongly typed assembly where e.g. calling conventions are still abstract. GCC suite front ends compile to GENERIC, GIMPLE or RTL; all still higher level than assembly which is specific for some target architecture.

        The intermediate languages are then compiled to the specific target by backend par

      • Comment removed based on user account deletion
IBM had this sorted decades ago. Write in your target language, whether C, COBOL or FORTRAN. See https://www.ibm.com/docs/en/op... [ibm.com]. Better yet, in the link phases there are memory pool options (IBM MVS/ZOS has hardware-keyed memory) to pinpoint any memory leaks. Even better: MIT and IBM go back a LONG way, before 1968. See https://en.wikipedia.org/wiki/... [wikipedia.org], so nothing new here.
    • by sageres ( 561626 )

      Pure assembler?

    • Re:no (Score:5, Interesting)

      by Dutch Gun ( 899105 ) on Sunday March 19, 2023 @01:58AM (#63381823)

      I'm waiting for the peer review. Nothing is faster than c.

      These claims often mean "faster than C if written naively, without even a basic eye for optimization, etc"

      I'm pretty dubious about the whole "hey, you don't have to worry about performance - it just magically turns into maximally optimal code", which seems wildly over-optimistic. Even when writing in C or C++, there's a vast difference between naively written code and hand-optimized code. Unless they've somehow managed to invent the world's most amazing optimizing compiler, beating the absolute pants off of, say, gcc, MSVC, and LLVM and ALL the myriad optimization work done on those over many, many years of work, this claim is hard to take seriously.

      It sounds like cool tech, but honestly, a little too breathlessly optimistic to be easily believed. Those of us who have written a lot of highly optimized native code tend to understand how many different factors must come together to make that happen. Maybe I'm being a little too cynical for my own good, but extraordinary claims need to be backed up with a LOT of proof, and "faster than C" is a hell of a bold claim.

      • The actual claim is "sometimes better than C", which is possible, because not all C programs were written for extreme efficiency.

        But every compiler for a dynamic language has claimed this, and the hype dies down when they can't actually beat real world workloads on a consistent basis.
        • But every compiler for a dynamic language has claimed this

          Oh boy yes they have, I think to the point where it's really poisoned things.

          I remember every year for about 15 years reading how THIS YEAR, Java was faster than C/C++. I like how they were always tacitly admitting that the previous year it wasn't in fact faster. And then it turned into, well, if you write the certain kind of algorithm that the JVM can optimize well (e.g. ignoring complex containers), and write it in a rather mangled, non idiomatic s

          • Even last year, I debated somebody here who claimed some fancy Java VM has put Java on par with C++.

            Surely, if Java were so fast, and safe dynamic languages were so productive, someone would have developed an all-Java, no native library, web browser with acceptable performance by now.
    • Statically typed languages are fast, dynamically typed languages are slow. CPython is ultra slow. Nothing new.
      • by strombrg ( 62192 )

        Statically typed languages are fast, dynamically typed languages are slow. CPython is ultra slow. Nothing new.

Actually, for sufficiently large inputs, AOT implementations and JIT'd implementations are the same, performance-wise. Keep in mind that a JIT has access to runtime info that an AOT optimizer (usually) doesn't.

        But there's nothing that prevents an AOT implementation from inserting a JIT, and there's nothing that stops a JIT from doing whole-program analysis.

        Again: for sufficiently large inputs.

    • I'm waiting for the peer review. Nothing is faster than c.

      Java's faster than C this year, every year since 1995. The only thing that ever reliably beats C is FORTRAN.

FORTRAN mainly beats C because the main, non-awkward way to use multi-dimensional matrices in C results in the matrix being allocated in non-contiguous memory. This is performance-relevant, because it stalls the execution pipeline.

        Fortran fixes this. If you use the right keyword in C#, it also fixes it. The NumPy library, which uses an external C library, also fixes it, but that library is built using the "awkward" method I mentioned.
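The layout difference being described can be sketched in Python (a stand-in here, since the effect is really about C and Fortran memory layout): a list of lists scatters each row as a separate heap object, while a flat buffer indexed as `row * ncols + col` is one contiguous block, which is what Fortran arrays and NumPy's default layout give you.

```python
# Two representations of the same 3x4 matrix.
nrows, ncols = 3, 4

# "Jagged": each row is its own object; rows need not be adjacent in memory
# (analogous to C's pointer-to-pointer style).
jagged = [[r * ncols + c for c in range(ncols)] for r in range(nrows)]

# "Flat": one contiguous buffer, addressed with index arithmetic
# (analogous to a Fortran array or a single C allocation).
flat = [r * ncols + c for r in range(nrows) for c in range(ncols)]

def at(buf, r, c):
    return buf[r * ncols + c]

# Both layouts hold the same logical matrix:
assert jagged[2][1] == at(flat, 2, 1) == 9
```

The contiguous version is the cache-friendly one: a row-major traversal touches memory strictly in order, with no pointer chase per row.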

    • I'm waiting for the peer review. Nothing is faster than c.

      What do you need a peer review for if you already know that nothing is faster than C?

    • Re:no (Score:5, Funny)

      by The Evil Atheist ( 2484676 ) on Sunday March 19, 2023 @05:26AM (#63382019)
      c is, after all, the speed of light in a vacuum, so yes, nothing is faster than c.
      • They recently found out that the speed of light in East Palestine, Ohio is greater than its speed in a vacuum. Apparently, even photons don't want to stick around and suffocate any longer than they must.
      • by kiore ( 734594 )

        c is, after all, the speed of light in a vacuum, so yes, nothing is faster than c.

        Sounds like time to introduce the tachyon language to the mix ... the worse the source code, the faster it runs.

    • Or you could read their README. It says "on a par with C and sometimes faster".

      The headline is a poor representation of the claim.

    • by Zobeid ( 314469 )

      Back in the Glory Days, Forth was reputed to be faster than C. I don't know how that would play out with today's hardware and compilers, though. I guess it's moot since Forth is now a footnote.

      • Forth code was supposedly smaller than C code back in the days where code size actually mattered. Guess that could have improved cache access or whatever. However in terms of raw computation Forth, to my knowledge, was considered slower than C, albeit not by much (about 10% I think?)

        • by Zobeid ( 314469 )

          Well, what I can vaguely remember is that those 8-bit processors had few registers to work with, but they were pretty good at stack operations, which is what Forth is built around. And since you, the programmer, were manipulating the stack deliberately and explicitly, you tended to think a lot about the best way to do it.

          By default, C also allocates variables on the stack, though all the stack manipulation is concealed from the coder. Then as processors with more registers arrived (68000, etc.) the "regis

          • The register keyword still exists, but I think it's rarely used now, since compilers have become smart enough to make those decisions for us.

I feel like it's less "smart enough" and more x86 poisoning. When the 32-bit Intel ISA ruled the streets, a register keyword wasn't going to achieve much. You didn't have many registers to work with, they had specific purposes, and any attempt to hold a value in a register for more than a few instructions was likely counterproductive. Better to let the compiler handle it, especially since the CPU would also be doing register renaming to pipeline instructions, and IMHO it's definitely better to let the compiler optimise i

    • by HiThere ( 15173 )

      The system says it can do automatic multi-threading in some contexts, so in those contexts it may well be "faster than c" unless you put a HUGE amount of work into tuning that C.

      (Note "faster than C" was the headline claim, not the claim I read in the linked to docs.)

I'm wondering what the catch is. Does it forbid certain Python instructions or idioms? Does it flail if an unexpected data type is presented to a compiled subroutine? Does it have cases where it produces a different answer than Python?

Having worked with Julia, I can believe you can compile an untyped language to be faster than C. Julia sometimes is faster than Fortran. Julia, however, was rigged from the start to do just-in-time compiling when new data type signatures are presented to a function's arguments.

      • by jma05 ( 897351 )

        This is more like a newer Cython + Numba.
        It is restricted Python, like Cython. It emits LLVM code instead of C/C++.

        > Does it forbid certain python instructions or idioms. Does it flail if an unexpected data type is presented to a compiled subroutine?
        > Does it have cases where it produced a different answer than python?

        Yes.

    • by gweihir ( 88907 )

      They probably compared incompetently written C to the output of their compiler. Speed of C code very much depends on who designs and writes it. A competent C coder can almost always beat a compiler from another language.

      • by strombrg ( 62192 )
        People used to say the same about hand-coded assembler.
        • by gweihir ( 88907 )

          Still true, but C compilers have gotten to a level where you only very rarely benefit from doing or embedding assembler.

    • by Dausha ( 546002 )

      Since c is the speed of light, you are correct, sir.

    • I'm waiting for the peer review. Nothing is faster than c.

That has always been a myth. Fortran consistently outperforms C, for instance. (In both theory and practice. There's a reason that crusty old language is still found extensively in high-end simulation, HFT and so on.)

And here's a kicker: under some circumstances Java *can* outperform it due to run-time optimization (although in practice that's usually not true).

      And then theres the whole wild world of GPU languages but thats a whole different conversatio

    • by hawk ( 1151 )

      >Nothing is faster than c.

      starships and gossip.

  • by mazinger ( 789576 ) on Sunday March 19, 2023 @01:05AM (#63381779) Journal
    Looks like it's for non-commercial use only. The license is some modified Apache(?) license.
    • That's an interesting one. Their licence is for non-production usage.

Their FAQ claims that any given release reverts to Apache after three years. Still, with that licence, I think it's effectively unusable.

      • by butlerm ( 3112 )

        No doubt they are hoping to commercialize it and don't expect (almost) anyone to use it for anything other than an academic exercise until then. It is such a great idea though, someone should consider making an open source (i.e. OSI compliant) implementation or something along similar lines.

  • Marketing (Score:5, Insightful)

    by istartedi ( 132515 ) on Sunday March 19, 2023 @01:15AM (#63381789) Journal

These guys are out there on some other tech sites too. I think there's some proprietary stuff going on, so it's not just regular Python. Aside from that, if it's a DSL that's embedded in Python, it's no more Python than inline assembly in C or C++ is C or C++. If that inlined code takes advantage of something non-standard like a GPU then, duh! Of course it's going to be faster than whatever it's embedded in. If you blast all the way down the stack from a scripting language to a GPU, then duh! Of course it's going to do whatever the GPU can do, which is faster than standard C or C++.

    Anyway, good marketing but if it's a proprietary development tool they've got an uphill battle and maybe this whole thing is a marketing push to see if they can actually generate enough sales to keep them going. They probably can't.

    • by HiThere ( 15173 )

      It's definitely not all of Python. E.g. the strings are pure ASCII, and there are several other limitations. Still, they aren't bad or unreasonable. It could probably (my wild guess) handle 95% of Python code. And it appears (at a first glance) that it could import Python code to handle functions that it couldn't compile....but that needs to be managed by the programmer, which is what they're trying to avoid.

      IIUC saying it's "a DSL that's embedded in Python" is a bad description. But it's "sort of" lik

    • by butlerm ( 3112 )

      It is not a DSL so much as a restricted dialect of Python. I suppose they could use the technology to make true DSLs though, and it no doubt reduces confusion to refer to this one as a different programming language rather than as a general purpose drop in replacement for Python (which it most definitely is not).

      The main reason I wouldn't consider it a domain specific language is because it is suitable for a number of different domains.

  • by jopet ( 538074 ) on Sunday March 19, 2023 @01:52AM (#63381817) Journal

    This is a major flaw. How can you take the step back to ASCII in 2023?

After all the work Python did to fully support Unicode, now we have a new Python that takes us back to the ASCII-incompatibility days again.
  • by The Evil Atheist ( 2484676 ) on Sunday March 19, 2023 @02:38AM (#63381831)

    Codon's performance is typically on par with (and sometimes better than) that of C/C++.

    We've heard this many times before. They often benchmark against some horrible bit of C/C++ code that goes out of its way to do the inefficient thing.

Yet time and again, on real workloads, these wins are never borne out.

  • Nope.. (Score:4, Funny)

    by SuperDre ( 982372 ) on Sunday March 19, 2023 @03:35AM (#63381917) Homepage
It might compile to fast native binaries, but the biggest problem is that Python is still a fugly language. It's a shame it's getting so popular.
Python, fugly? Have you seen the extreme amount of stuff you need to type in Java? Or Go? Python is just so much easier for most things.

    • by jma05 ( 897351 )

Compared to what? R? Matlab? Perl? Ruby?
It was the best-looking syntax when it was designed.
Usually, it's just that some hate the indentation syntax, like others hate the parentheses of Lisp. It's subjective.

  • How do you identify bottlenecks, can you use inline assembly, can you use vtune?

The technical path is OK, as already validated by Julia, Crystal (a compiled Ruby-like language), or Emacs Lisp, for example.

A subset of Python? What percentage of PyPI packages compile with it?

My two complaints with this announcement:
1. How does it compare to Julia? It is the main contender for "performance with simple syntax".
2. They use OpenMP annotations for parallel computation. What level of expertise does it take to use them correctly, and how does the programmer understand the memory management of their runtime, a central part of HPC (eg.

    • I logged in for that question. How does it compare to Julia?
      • by jma05 ( 897351 )

        They are quite different, although both get their speed from LLVM.

Julia is an excellent JIT language with a fairly large ecosystem, but not anywhere near as large as Python's. It interops well with Python, but isn't syntax-compatible with it.

Codon is an AOT language with no real ecosystem as yet. Codon is a Python subset, minus the dynamic parts, and also interops well with Python. Its standard library is a subset of Python's.

        If you want a better language for data science overall, use Julia. If you want a fast

        • Oh yeah I meant Codon vs Julia. More out of curiosity than anything.

          I fall exactly in the category you describe though, data-science type usage and I'm more of an SME than an expert programmer (although I do hold a CS degree). I learned Julia about 6 months ago and have been slowly moving my Python code to it. I found Julia to be more concise, more elegant, faster (aside from startup time), although I can occasionally get stuck (lack of 3rd party lib or community help). I now write Julia code faster than P

          • by jma05 ( 897351 )

            My experiences are about the same. 1.9 will fix the startup times somewhat.

I have used Python for over 2 decades. I would agree that if you need performance beyond what a smattering of Cython or Numba can give, it's better to look for a faster alternative, and Julia is a fine one. Cython and Numba do indeed complicate things beyond a certain point.

            Julia is just superior for data science workflows since it was specifically designed for it. I use the entire Jupyter ecosystem. I think of them as complementing communities a

Does it support the Python C API, so it works with current extensions? If not, how can Codon code call Python extension code?
    • by butlerm ( 3112 )

      According to the documentation it is ASCII only internally for now, but supports Python extensions like NumPy with certain limitations. It has its own C FFI as well.

      I strongly suspect that if you want maximum performance you should use the native C FFI rather than the support for Python extensions (although that apparently all works as long as you stick to ASCII strings), because there is a conversion across the Codon-to-Python boundary and Python extensions are probably supported from the Python side. Of c

  • Some of the reason that many Python programs are slow is because the authors choose slow, but easy to understand, algorithms.

    An O(n^2) algo will still be slower than an O(n log n) one, no matter how it's compiled or translated.

    I'm sure Codon can re-write the instructions to be faster, but I'm skeptical it can transform the underlying algorithm from a slow technique to a faster technique!

    If non-experts in programming (regardless of their domain-specific subject-matter expertise) keep choosing slow algorithms in Python, the resulting
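    The point can be made concrete with a hypothetical example: no compiler turns the first function below into the second, because that is an algorithmic change, not a code-generation change:

    ```python
    # Illustrative only: two ways to detect duplicates in a list.

    def has_duplicates_quadratic(xs):
        # O(n^2): compare every pair -- the "slow but easy" version
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if xs[i] == xs[j]:
                    return True
        return False

    def has_duplicates_fast(xs):
        # O(n) expected: hash-set membership -- a different algorithm
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False
    ```

    A compiler like Codon can make each loop iteration cheaper, but only the programmer can swap the quadratic pair-comparison for the set lookup.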

    • by jma05 ( 897351 )

      In this space (data science, AI research, data pre-processing etc), those are non-problems.
      Domain experts don't care about optimizing code to perfection. It's the Pareto principle.
      They can hire a professional programmer in the rare event they need to optimize something. Usually though, slow is quite tolerable.

      Python can be 40-100 times slower when working outside native extensions, which isn't usually an issue. This can accelerate some of those parts.

    • by strombrg ( 62192 )

      An O(n^2) algo will still be slower than an O(n log n) one, no matter how it's compiled or translated.

      That's mostly what I was taught in school - but there was a brief aside, one day, saying that sometimes a worse algorithm could be faster if it, for example, stayed all in memory instead of hitting disk.

      E.g., Python has a list type, which is kind of like an array, but the elements can be heterogeneous, and it resizes automatically. They're much faster than a linked list in Python, even though repeatedly appending to a list is amortized O(n) overall. Underneath it all, they're O(n^2), because to re
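      A minimal sketch of the contrast being described (the linked-list code is illustrative, not how anyone should write it in practice):

      ```python
      # Python's list is a resizable array: it over-allocates on growth,
      # so n appends cost amortized O(n) total work even though any one
      # resize copies the whole buffer. A naive linked list without a
      # tail pointer pays O(n) per append, O(n^2) total.

      class Node:
          def __init__(self, value):
              self.value = value
              self.next = None

      def linked_append(head, value):
          # walk to the end every time: O(n) per append
          node = Node(value)
          if head is None:
              return node
          cur = head
          while cur.next is not None:
              cur = cur.next
          cur.next = node
          return head

      head = None
      for v in range(5):
          head = linked_append(head, v)   # O(n^2) total for n appends

      xs = []
      for v in range(5):
          xs.append(v)                    # amortized O(1) each
      ```

      This is the "worse big-O can win" effect: the array version also stays contiguous in memory, so it is friendlier to caches than chasing pointers.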

  • by willkane ( 6824186 ) on Sunday March 19, 2023 @11:26AM (#63382561)
    The language war is over: Python+Codon is the solution to all problems.

    Sorry for the Rust devs. You were close, though.
  • ISTR hearing that Codon costs money for commercial use. Also, ISTR that it doesn't support much of the Python standard library.

    Also, it's not sounding that different from Shedskin and Cython, which do have at least some standard library support.

    Also, when a program running on CPython is running too slowly (which isn't that common), you just profile and run the hot spot on something like Shedskin, Cython or C - it's very rare to need to rewrite the whole program in another language.
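    The profile-first workflow described above can be sketched with the stdlib's cProfile; `hot_spot` here is a made-up stand-in for whatever the profile actually surfaces:

    ```python
    # Profile a program, then look at where the time goes before
    # deciding what (if anything) to port to Cython/Shedskin/C.
    import cProfile
    import io
    import pstats

    def hot_spot(n):
        # stand-in for the expensive function profiling would reveal
        return sum(i * i for i in range(n))

    profiler = cProfile.Profile()
    profiler.enable()
    result = hot_spot(100_000)
    profiler.disable()

    # render the top entries sorted by cumulative time
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    report = buf.getvalue()
    ```

    Only the functions at the top of that report are worth rewriting; the rest of the program can stay as ordinary CPython.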
  • Comment removed based on user account deletion
  • Typically compilers like this have a catch, most commonly that only a subset of the high-level language can be used if you're going to compile it. Will this compile any project written in Python, as-is, into an executable which will run faster?
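    For illustration, here is the kind of dynamic code that runs fine under CPython but that statically typed AOT compilers typically reject; whether Codon accepts any of it should be checked against its docs:

    ```python
    # Dynamic constructs that a static, ahead-of-time type checker
    # usually cannot compile, even though CPython handles them happily.

    mixed = [1, "two", 3.0]   # heterogeneous list: type varies per element
    mixed.append(len)          # even a builtin function object

    def call_or_keep(x):
        # runtime type dispatch, resolved per element
        return x("abc") if callable(x) else x

    resolved = [call_or_keep(x) for x in mixed]
    ```

    This is the usual shape of the "subset" catch: the static parts of a codebase compile, and the dynamic parts either need rewriting or stay in the interpreter.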
  • Testing it (Score:4, Interesting)

    by Genrou ( 600910 ) on Monday March 20, 2023 @09:28AM (#63384709)

    I tried it to see what kind of results I could get with it. It happens that I have several different implementations of the Fast Fourier Transform that I use to benchmark this kind of thing. What I found out is:

    It doesn't implement every Python module. I couldn't get the array module to work. But this might be in their future plans.

    It can get a little picky with variable types. For example, multiplying an integer by a complex number won't work, nor will printing an integer using a floating-point format. Maybe they're working on that too.
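    For comparison, the operations described here do work in stock CPython via numeric promotion; a static checker may require the explicit forms instead (this is a sketch of the general pattern, not Codon's actual rules):

    ```python
    # Mixed-type operations that CPython promotes automatically.
    z = 3 * (1 + 2j)                   # int * complex -> complex
    f = "%.1f" % 3                     # int printed with a float format

    # The explicit conversions a static type checker might insist on.
    z_explicit = complex(3) * (1 + 2j)
    f_explicit = "%.1f" % float(3)
    ```

    In CPython both forms are equivalent; a statically typed compiler has to resolve the operand types up front, which is where the pickiness comes from.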

    It can take some time to compile and run.

    While the scripts did run faster, it was nothing close to a 10x speedup, much less 100x. In general, I got a 2.5x speedup. It was outperformed by PyPy in every test I made.

    Just for the record, I have the same algorithms implemented in C, and PyPy performs comparably to C. Disclaimer: they are not optimized; instead, I made an effort to perform the same operations as closely as possible, with the intent of comparing speeds. Also, this is not a scientific assessment, so take it with a grain of salt.
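    The poster's benchmark code isn't shown; a minimal recursive radix-2 FFT of the kind typically used in such comparisons might look like the following (checked here against a direct O(n^2) DFT):

    ```python
    # Recursive radix-2 Cooley-Tukey FFT; input length must be a
    # power of two. Pure Python -- the sort of tight numeric loop
    # where compiled implementations pull far ahead of CPython.
    import cmath

    def fft(xs):
        n = len(xs)
        if n == 1:
            return list(xs)
        evens = fft(xs[0::2])
        odds = fft(xs[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
            out[k] = evens[k] + t
            out[k + n // 2] = evens[k] - t
        return out

    def dft(xs):
        # O(n^2) reference used to sanity-check the FFT
        n = len(xs)
        return [sum(xs[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for j in range(n))
                for k in range(n)]
    ```

    The inner loop is all float/complex arithmetic with no attribute lookups, which is also why a JIT like PyPy does so well on it.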
