Python in 2024: Faster, More Powerful, and More Popular Than Ever (infoworld.com) 45

"Over the course of 2024, Python has proven again and again why it's one of the most popular, useful, and promising programming languages out there," writes InfoWorld: The latest version of the language pushes the envelope further for speed and power, sheds many of Python's most decrepit elements, and broadens its appeal with developers worldwide. Here's a look back at the year in Python.

In the biggest news of the year, the core Python development team took a major step toward overcoming one of Python's longstanding drawbacks: the Global Interpreter Lock or "GIL," a mechanism for managing interpreter state. The GIL prevents data corruption across threads in Python programs, but it comes at the cost of making threads nearly useless for CPU-bound work. Over the years, various attempts to remove the GIL ended in tears, as they made single-threaded Python programs drastically slower. But the most recent no-GIL project goes a long way toward fixing that issue — enough that it's been made available for regular users to try out.
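
As a rough illustration of why the GIL makes threads unhelpful for CPU-bound work, here is a minimal sketch (not from the article; the workload and thread count are arbitrary) that times the same pure-Python computation run serially and then across four threads. On a GIL build the two timings come out roughly the same; a free-threaded build can actually run the threads in parallel:

    import threading
    import time

    def burn(n):
        # Pure-Python CPU-bound work: no IO, no C extensions that release the GIL.
        total = 0
        for i in range(n):
            total += i * i
        return total

    N = 2_000_000

    start = time.perf_counter()
    for _ in range(4):
        burn(N)
    print("serial:  ", time.perf_counter() - start)

    start = time.perf_counter()
    threads = [threading.Thread(target=burn, args=(N,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("threaded:", time.perf_counter() - start)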

The no-GIL or "free-threaded" builds are still considered experimental, so they shouldn't be deployed in production yet. The Python team wants to alleviate as much of the single-threaded performance impact as possible, along with any other concerns, before giving the no-GIL builds the full green light. It's also entirely possible these builds may never make it to full-blown production-ready status, but the early signs are encouraging.
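
For anyone trying the experimental builds, here is a small sketch of how to check what you are actually running. It relies on the Py_GIL_DISABLED build flag and the sys._is_gil_enabled() helper added in CPython 3.13; both may change while the feature remains experimental:

    import sys
    import sysconfig

    # True on the free-threaded builds (e.g. the python3.13t executable).
    free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # True if the GIL is actually disabled in this running process.
    gil_disabled_now = hasattr(sys, "_is_gil_enabled") and not sys._is_gil_enabled()

    print("free-threaded build:", free_threaded_build)
    print("GIL disabled in this process:", gil_disabled_now)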

Another forward-looking feature introduced in Python 3.13 is the experimental just-in-time compiler or JIT. It expands on previous efforts to speed up the interpreter by generating machine code for certain operations at runtime. Right now, the speedup doesn't amount to much (maybe 5% for most programs), but future versions of Python will expand the JIT's functionality where it yields real-world payoffs.
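
Figures like that are easy to sanity-check informally: run the same small workload under different interpreter builds and compare the timings. A minimal timeit sketch (the workload below is arbitrary and chosen only for illustration, not the benchmark behind the article's numbers):

    import timeit

    setup = "data = list(range(10_000))"
    stmt = "sum(x * x for x in data)"

    # Run this unchanged under 3.12, 3.13, and a JIT-enabled build to compare.
    print(timeit.timeit(stmt, setup=setup, number=2_000))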

Python is now more widely used than JavaScript on GitHub (thanks partly to its role in AI and data science code).


Comments Filter:
  • by 93 Escort Wagon ( 326346 ) on Sunday December 29, 2024 @04:22AM (#65047111)

    * Compared only to previous versions of Python.
     

    • by serviscope_minor ( 664417 ) on Sunday December 29, 2024 @05:03AM (#65047149) Journal

      Yes? They aren't claiming it's a speed demon compared to anything else.

      It's not news to anyone in the Python community that Python is quite slow.

      I'm glad they are really working on the speed of it. Getting rid of the GIL is good: speeding up Python across multiple cores is currently not much fun. Also the JIT.

      • I'm glad they are really working on the speed of it. Getting rid of the GIL is good: speeding up Python across multiple cores is currently not much fun. Also the JIT.

        Speed is good. What I'd really like to see is the packaging story get sorted out. That finally seems to be happening with the new uv tool [saaspegasus.com], which is getting towards being both complete and acceptably fast, unlike past tools that were one or the other. uv is written in Rust, which is not a problem, though you fear some people might make it one (for good reasons, Python itself is mostly not written in Python). Seeing the Python Packaging Authority start to update their tutorials to match with

        • Ah yes, uv, and its precursor rye. And also ruff.

          Having tried them, I love them compared to the previous tooling, and the high speed is a huge boon.

          I don't have any strong opinion on Rust; native code is good for fast things and Rust appears to be a reasonable choice. I hope it sticks. In the past there has been a strong push for "pure Python" in the Python world. I appreciate the desire to eat one's own dog food, but it made a lot of the ecosystem glacially slow and for years there was very little

      • by Creepy ( 93888 )

        Steaming piece of shit language, IMO, lol. OK, not that bad; I just had a bad introduction: someone tried to write a production app in Python 1, when it was still a shit programming language, though not bad for scripting. I still hate whitespace significance, but 200 hours spent debugging a Makefile issue is entirely to blame (and yeah, modern editors would've found it in minutes, but vi on Solaris, not so easy, and no, emacs wasn't available, but don't get me going on emacs).

        • by dfghjk ( 711126 )

          "...but vi on Solaris, not so easy, and no, emacs wasn't available, but don't get me going on emacs)."

          Why is your solution the two oldest and shittiest editors on the planet?

    • Speed is only a requirement for maybe 5% of low-level computing. In most real-life cases, IO transfer speeds and latency are going to be your bottleneck anyway.
      • ... until you write a nested for loop or two.
        • I have written bubble sorts for 10 to 20 items, but if you are sorting more than that you should probably be using something else. In other words, a long-running nested loop is probably indicative of you doing it wrong.
          • by dfghjk ( 711126 )

            "I have written bubble sorts for 10 to 20 items,..."
            masturbates wildly.

            "but if you are doing them for more than that you should probably be using something else."
            So you think sorting 10-20 items is indicative of 95% of programming? Do you listen to yourself?

            "In other words, a long running nested loop is probably indicative of you doing it won't"
            Thanks for the other words, really cleared things up. Nice to hear your definition of a long running loop, not at all surprising.

          • For 10-20 items, I use the built-in sort function, and I don't concern myself about how it goes about doing it.
            For millions of items, I dump them in a database, and I do a database query that returns the results sorted as required.
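
            A minimal sketch of both approaches, using only the standard library (sqlite3 stands in here for whatever database you actually use):

              import sqlite3

              items = [("carol", 42), ("alice", 17), ("bob", 99)]

              # Small list: just use the built-in sort.
              print(sorted(items, key=lambda row: row[1]))

              # Large data set: let the database do the sorting.
              conn = sqlite3.connect(":memory:")
              conn.execute("CREATE TABLE items (name TEXT, score INTEGER)")
              conn.executemany("INSERT INTO items VALUES (?, ?)", items)
              print(conn.execute("SELECT name, score FROM items ORDER BY score").fetchall())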

            • That's what I do too. The couple of times I have used bubble sort were in very specific circumstances where there was no standard sort available. But almost always it is a better idea not to reinvent the wheel.
            • ... Why not use the built-in function for millions of items?

        • by ceoyoyo ( 59147 )

          One of the very first things they teach in interpreted programming is that if you're writing a loop, never mind a nested one, you're probably doing it wrong.

          • That's silly. If you have a set of data and you need to do something to each item, then you're going to have a loop of some sort somewhere. And if your data has multiple dimensions, you're going to have nested loops of some flavor in your code, even if they aren't the classic "for i in (level1) { for j in (i.level2) {} }" sort of thing. The looping always exists, even if it's hidden in a language's operator.

            • by ceoyoyo ( 59147 )

              The looping always exists, even if it's hidden in a language's operator.

              Yes. I didn't say the looping didn't exist. I said, very specifically, "if you're writing a loop, never mind a nested one, you're probably doing it wrong."

              Emphasis added because apparently it was too subtle without.

              You avoid writing loops in interpreted languages by:

              (1) using a library (e.g. numpy) or a language feature (e.g. list comprehension, generator, map, etc.)

              (2) writing your loop in a compiled language and calling it as a function.
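
              A minimal sketch of point (1), assuming numpy is installed: both versions compute the same thing, but the vectorized one does its looping inside numpy's compiled code:

                import numpy as np

                data = list(range(1_000_000))

                # Explicit interpreted loop.
                total_loop = 0
                for x in data:
                    total_loop += x * x

                # Vectorized: the loop runs in compiled code inside numpy.
                arr = np.asarray(data, dtype=np.int64)
                total_vec = int((arr * arr).sum())

                assert total_loop == total_vec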

      • by dfghjk ( 711126 )

        Most programming is low-level computing; you just don't know any better. And speed is a requirement for more than just that.

        "In most real life cases, IO transfer speeds and latency is going to be your bottleneck anyway."

        Complete bullshit. But at least we know that your programming is neither performant nor useful.

        • As an example, what is the point of raw performance in a web server? The latency of the Internet is going to dictate the user experience. If you are developing a GUI app then, yes, performance in the GUI layer is important or it will be choppy as hell, but as long as long-running tasks are kept in the background where they belong, milliseconds aren't going to matter.
          • by gwjgwj ( 727408 )
            So why was Facebook optimizing its string class to get a 1% improvement [youtu.be] then?
            • I assume because the large companies are hiring more developers than they actually need and someone was looking for anything they could do to stand out.
            • Because for them, that saves a lot of money. But the vast majority of programmers in the world do not work at Facebook, or at any other company with the kind of use case Facebook has. They're an outlier among a group of outliers, and what they need is not in any way representative of what the majority of companies need, and thus of what the majority of developers spend time doing.

              Heck, even at Facebook, work like this is an outlier. The majority of developers there won't ever touch this kind of code

        • Most programming is business logic, where speed is at best a distant fifth on the priority list. For every programmer doing low level work, there are dozens working at large corporations building and maintaining the internal systems that keep them running.

          And for them, even when speed does matter, the bottleneck is usually IO.

      • You'd think so. Having worked on mixed C++ / Python data projects I found it was much worse.
        Unless the C++ modules did exactly what the program needed, there would be a tonne of data marshalling back and forth. Often this was exacerbated by algorithms implemented by bolting together a long, convoluted sequence of operations because the right operator wasn't available, or inevitably someone would loop through all the pixels directly.
        My predecessor tried to solve that with a sort of visitor pattern where
        • In my experience, there's a trade-off between time to design and implement and running speed. C++ is horrendous to add complex libraries to: last time I tried, I had to obtain all the dependencies for the library myself and then hope it would compile with the versions I got. With Python you just "pip install pandas" and you have everything you need to do data mining.
        • So you ended up using Python as a scripting language and C++ as your general purpose language.

          Using Python as a general purpose language is like using pliers as a hammer, or a slot-headed screwdriver as a chisel. Effective, but not recommended.

      • That was true 30 or 40 years ago.

    • Python is today's version of BASIC. Any system that uses an interpreter or JIT compiler is a bit cromulent and the users will eventually grow tired of it and move on to a real compiled language.
  • What are the options to compile Python to machine code? Wouldn't that be an obvious way to increase its performance? Or possibly translate it to C or something and then compile the C?

    • by caseih ( 160668 )

      Certainly a subset of Python can be compiled into high-speed code; Cython is an example. Other parts of the language are much harder to compile due to Python's dynamic nature. There are projects that make self-contained binaries, but the speed-up is minimal. JIT code generation in the interpreter is still the best way to get a nice speed-up.
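
      As a rough illustration, here is a sketch in Cython's "pure Python" mode (the function is made up for this example). It runs as ordinary Python if the Cython package is installed, and compiling it with cythonize is what turns the typed locals into C-level arithmetic:

        import cython

        def integrate_f(a: cython.double, b: cython.double, n: cython.int) -> cython.double:
            # The annotations are ignored by the plain Python interpreter, but Cython
            # uses them to generate typed C code when the module is compiled.
            dx: cython.double = (b - a) / n
            s: cython.double = 0.0
            i: cython.int
            x: cython.double
            for i in range(n):
                x = a + i * dx
                s += x * x - x
            return s * dx

        print(integrate_f(0.0, 2.0, 1_000_000))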

      • by ceoyoyo ( 59147 )

        Decently written Cython code can run as fast as hand-written C, and efforts to improve the interpreter have more or less doubled Python's speed over the last few versions. Interpreter improvements aside, a JIT is the most convenient way for the end user to get a speed-up.

  • "It's also entirely possible these builds may never make it to full-blown production-ready status, but the early signs are encouraging."

    So they're experimenting and have no solution. Really great news?

    "...JIT. It expands on previous efforts to speed up the interpreter..."

    So you'll have the fastest version of shitty global lock single threaded crap.

    Python's popularity is a mirage; the moment performance matters, programmers must switch to a different language via a library.

  • The no-GIL or "free-threaded" builds are still considered experimental, so they shouldn't be deployed in production yet.

    Who do you think you're talking to? You can't tell them what to do [imgur.com].

  • There's also Mojo, for compiling python (and AI extensions) into high-speed binaries: https://www.modular.com/mojo [modular.com].

  • Not impressed (Score:4, Interesting)

    by boulat ( 216724 ) on Sunday December 29, 2024 @10:22AM (#65047501)

    I've tested this against Python 3.12 with GIL, Python 3.13 noGIL, and Go 1.23.

    TL;DR:

    - Python 3.13 noGIL did better than Python 3.12 with GIL on 1 benchmark (pidigits) by about 30%
    - 3.13 noGIL actually did a lot worse on binary-trees benchmark, and slightly worse on fasta benchmark
    - Golang smoked it out of the water on every single test

    • by larkost ( 79011 )

      I don't understand why you are surprised by this. The no-GIL version is expected to be slower, just as Golang is expected to be faster.

      No-GIL will probably never be faster than "regular" Python on most code. It is only when you have a lot of truly parallel work (i.e., little interaction between threads) that anyone is expecting an eventual speedup. The version that exists in the Python mainline now is mostly there to allow for the gradual shifting of some fundamental aspects of Python toward the eventual speed-up in that highly parallel case.

  • Not GIL-cup.
  • There are three times more Python jobs than PHP jobs around here, and nobody is going to hire a 45-year-old for a Junior Python Developer position XD
