Python

Python 3.11 Performance Benchmarks Are Looking Fantastic (phoronix.com) 205

"Besides new language features and other improvements, Python 3.11 performance is looking fantastic with very nice performance uplift over prior Python 3.x releases," writes Phoronix's Michael Larabel. From the report: Python 3.11 has been baking support for task groups in asyncio, fine-grained error locations in tracebacks, the Self type for returning an instance of the enclosing class, TypeVarTuple for variadic generics, and various other features. Besides changes affecting the Python language itself, Python 3.11 has been landing performance work from the "Faster CPython Project" to speed up the reference implementation. Python 3.11 is 10-60% faster than Python 3.10 according to the official figures, averaging a 1.22x speed-up on their standard benchmark suite.

The Python Docs cover some of the significant performance improvements made for this upcoming release. The formal Python 3.11.0 release isn't expected until early October; multiple betas will arrive through July, followed by at least two release candidates in the months before the final release.
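The 1.22x figure refers to the geometric mean of the official pyperformance suite. As a rough, hedged sketch (this is not one of the official benchmarks), you can compare interpreters on a workload of your own with the stdlib timeit module by running the same script under python3.10 and python3.11:

```python
import timeit

def fib(n: int) -> int:
    # Deliberately recursive: stresses the interpreter's function-call
    # overhead, one of the main targets of the Faster CPython work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Run this same file under python3.10 and python3.11 and compare.
elapsed = timeit.timeit("fib(20)", globals=globals(), number=100)
print(f"fib(20) x100: {elapsed:.3f}s")
```

Absolute numbers vary by machine; only the ratio between interpreters is meaningful.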

This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Tuesday June 07, 2022 @09:33PM (#62602096)

    The performance increases are nice, but, as a comparison, I have a program used in my laboratory that I actively maintain. It's part of a hard real-time system. It's written in VB6. Yes, VB6. And yes, I continue to develop it under Win 10. The IDE isn't perfect, but it works. I decided recently that it was time to move forward by a couple of decades and re-implemented the core of the application in Python 3.10.

    The result?

    Python is about 50% slower than compiled VB6, even after throwing every efficiency trick and JIT package I've found at it, and has far more timing irregularities (from the GC, I presume). It's bad enough that it cannot be used for the application.

    Python with all the modern tricks is less efficient than VB6 with its crapola compiler that doesn't even do common subexpression elimination. I'm going to repeat that to make it sink in: VB6's compiler is so bad that it doesn't do even the most basic source-to-source optimizations like CSE. But it produces faster code than Python. If you use straight Python without the de rigueur efficiency packages like Numpy, it would be far, far less efficient than VB6.

    Curious, I had a look at the intermediate C emitted when compiling Python with one of the JIT systems. Oh. My. God. It's like these people never studied. Sure a performance boost of 10-60% from 3.10 to 3.11 is nice, but to be competitive, Python needs an order of magnitude improvement to just keep up with one of the least efficient languages ever.
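One common, hedged mitigation for GC-induced jitter in Python is pausing the cyclic collector around latency-sensitive sections; CPython's reference counting still frees most objects immediately, so a short pause is usually safe. An illustrative sketch (not a fix for hard real-time):

```python
import gc

def run_critical_section(work):
    """Run `work` with the cyclic garbage collector paused.

    CPython frees most objects promptly via reference counting; the
    gc module only collects reference cycles, so pausing it briefly
    is usually safe and avoids collection pauses mid-loop.
    """
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        return work()
    finally:
        if was_enabled:
            gc.enable()

result = run_critical_section(lambda: sum(range(1000)))
print(result)
```

This reduces jitter from cycle collection only; it does nothing about scheduler latency or other interpreter overhead.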

    • Python is about 50% slower than compiled VB6, even after throwing every efficiency trick and JIT package I've found at it

      Does that include running it in PyPy? Considering that PyPy is at 3.9 language level, I'm assuming not.

    • by jma05 ( 897351 )

      I am surprised that anyone still develops in that dated language, last released in 1998. I am glad to be rid of it.

      > Python is about 50% slower than compiled VB6

      VB5 onwards had native code compilers. VB6 wasn't slow at all; it was closer to managed C++. Benchmark a p-code-compiled app against Python, since they are both interpreted. That would be an apples-to-apples comparison. It might still be faster than Python, since it was more statically typed.

      The problem was that VB was a terrible scripting language. Very in

    • I have played a bit with real-time programming in Python, and in my experience there is no sane way to do it in Windows (I expect you were running it on that OS since you started from VB6). If anyone has any tips & tricks to add here, please feel free. I have a feeling that the problem was in the Windows scheduler. Possibly there are Windows-specific APIs for more fine-grained control over execution, but by default if you e.g. sleep for 1 ms, you return 15+ ms later.

      Under Linux, the performance was m
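The 15+ ms figure is consistent with the default Windows timer tick (~15.6 ms). A small, hedged way to measure sleep overshoot on whatever platform you're on:

```python
import time

def sleep_overshoot(requested: float, samples: int = 20) -> float:
    """Average wall time of time.sleep(requested) over several calls.

    On stock Windows the scheduler tick is ~15.6 ms, so a 1 ms sleep
    typically returns much later; on Linux the overshoot is usually
    well under a millisecond.
    """
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested)
        total += time.perf_counter() - start
    return total / samples

avg = sleep_overshoot(0.001)
print(f"requested 1.000 ms, measured {avg * 1000:.3f} ms on average")
```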

      • To my knowledge, hard realtime has never been practical on stock Windows, though I believe they may make special editions that are suitable.

        Some definitions of soft realtime have been possible for a while, although I'd never have done it in VB6 even in the 90s, because of nondeterministic garbage collection.

        Please correct me if I'm wrong. I've only ever been a consumer of realtime software (mostly audio creation), never a creator. And if I were, I would strongly prefer to use Linux or QNX, where I can get

    • Re: (Score:2, Insightful)

      by blahabl ( 7651114 )

      The performance increases are nice, but, as a comparison, I have a program used in my laboratory that I actively maintain. It's part of a hard real-time system. It's written in VB6. Yes, VB6. And yes, I continue to develop it under Win 10. The IDE isn't perfect, but it works. I decided recently that it was time to move forward by a couple of decades and re-implemented the core of the application in Python 3.10.

      The result?

      Python is about 50% slower than compiled VB6, even after throwing every efficiency trick and JIT package I've found at it, and has far more timing irregularities (from the GC, I presume). It's bad enough that it cannot be used for the application.

      Python with all the modern tricks is less efficient than VB6 with its crapola compiler that doesn't even do common subexpression elimination. I'm going to repeat that to make it sink in: VB6's compiler is so bad that it doesn't do even the most basic source-to-source optimizations like CSE. But it produces faster code than Python. If you use straight Python without the de rigueur efficiency packages like Numpy, it would be far, far less efficient than VB6.

      Curious, I had a look at the intermediate C emitted when compiling Python with one of the JIT systems. Oh. My. God. It's like these people never studied. Sure a performance boost of 10-60% from 3.10 to 3.11 is nice, but to be competitive, Python needs an order of magnitude improvement to just keep up with one of the least efficient languages ever.

      Wow. Screw, meet hammer.

      I mean really? You wrote a part of a "hard real time system" in Python? And you're wondering what went wrong? And complaining Python might not be the best tool for that job?

      Well, no kidding. Protip: Javascript might not be what you should be looking at either.

    • You're comparing a compiled language designed for 25 year old computers, with a (usually) interpreted language, that is vastly more powerful and flexible, and supports many different programming paradigms, but, for that very reason, requires things like the global interpreter lock, garbage collection (exists in VB6 but primitive), and so forth. And which therefore also can't support some of the optimizations that existed even 25 years ago. It sucks that Python is slow as a result, even by scripting langua

  • by fleeped ( 1945926 ) on Tuesday June 07, 2022 @09:35PM (#62602100)
    Python is a scripting language. Why butcher it beyond recognition? I blame modern data scientists: because they found some easy-to-use libraries in Python (pandas, scipy, etc.), they think Python is the best thing since sliced bread, and people/companies seem to be jumping on that bandwagon. And you can't even blame them
    • by SQL Error ( 16383 ) on Tuesday June 07, 2022 @09:46PM (#62602136)

      Python is far more widely used than just as a "scripting language".

      You can certainly argue that Java and Go are better suited to large applications, but that doesn't stop people writing large applications in Python, and it is vastly superior to PHP or Node.js.

      • Vastly superior to PHP or Node.js is not a high bar... People writing large applications in Python, well let's say I'd never want to read or work with that code. It's the wrong tool for that job. Just because you can, doesn't mean you should, in a professional setting.
        • Most "big applications" these days are web apps, and there really aren't a lot of alternatives to Django/Python outside of horrid, productivity-murdering J2EE (or whatever the C# equivalent is) behemoths. Ruby on Rails never really scaled to that kind of complexity, PHP... just no, and NodeJS is genuinely ill-suited to anything beyond a fairly rudimentary degree of "enterprise" complexity.

          And people have been doing just fine with Django for well over 15 years now. I recently was part of a team that did a big sweep throu

          • by Shaeun ( 1867894 )

            Most "big applications" these days are web apps, and there really aren't a lot of alternatives to Django/Python outside of horrid, productivity-murdering J2EE (or whatever the C# equivalent is) behemoths. Ruby on Rails never really scaled to that kind of complexity, PHP... just no, and NodeJS is genuinely ill-suited to anything beyond a fairly rudimentary degree of "enterprise" complexity.

            And people have been doing just fine with Django for well over 15 years now. I recently was part of a team that did a big sweep through a bunch of crusty old Java/Oracle apps at a govt department I was working at, replacing them with Django apps, and management was absolutely floored that we were taking 15-year-old apps that they had spent millions on, and with a team of 5 guys replacing them completely in the space of two to three months. One of them had a 70-page form (I know, I know, governments lol) that required a complicated workflow of authorizations through multiple departments, physics simulations (it was for controlled burnoffs) and other stuff. Two months it took to replace it. And it took me a week to hand over my corner of the code base to the guy replacing me after I golden-parachuted out of that place. He picked it up straight away.

            Python is just fine for big apps in a professional setting

            Python is good at Rapid Application Development.
            That does not make it good at everything. That does not make it CPU efficient. Pick your poison: fast to develop, with more resource cost and an upgrade treadmill where the Python WILL break after a Python upgrade; or slow to develop, with less chance of toolchain upgrades causing an error.

            Not every tool is good for every situation. You can put lipstick on a pig, but it's still just lipstick. I mean there could have been a framework written in C that had a

      • >Python is far more widely used than just as a "scripting language".
        But that's what he's saying: using it as more than a scripting language is the issue, because no matter how much people want it to be more than a scripting language, that's exactly what it is.
        As a scripting language, it's OK. It has powerful syntax and lots of cool third-party libraries, but it runs slow as fuck and is crippled by using whitespace as flow control.

      • by narcc ( 412956 )

        You can certainly argue that Java and Go are better suited to large applications,

        Have you used Python? I could argue that any language is better suited to large applications.

        but that doesn't stop people writing large applications in Python,

        Those people are unbalanced. Python is the worst choice for any application, let alone large ones! Aside from the bone-headed design and endless breaking changes, it's also the slowest and least efficient language I've ever seen.

        and it is vastly superior to PHP or Node.js.

        I find it telling that you picked two broadly-hated niche languages to compare against a general purpose language. Still, in their respective domains, Python lags far, far behind.

          You can certainly argue that Java and Go are better suited to large applications,

          I dunno about that, there's some real stinkers out there.

      • You can certainly argue that Java and Go are better suited to large applications

        I'd argue that recent C++ is better than either of those for large applications.

        (And by "recent" I mean C++11. C++20? Even better...)

        • C++20? Even better

          Cool; so once we have compliant C++20 compilers in 2026, we'll all enjoy it. ;)

          • by pjt33 ( 739471 )

            I tend to assume that everyone left on /. is a jaded cynic, so it's nice to run into an optimist occasionally.

          • C++20? Even better

            Cool; so once we have compliant C++20 compilers in 2026, we'll all enjoy it. ;)

            The chart is fairly complete since about 2019:

            https://en.cppreference.com/w/... [cppreference.com]

            Some compilers already have C++23 support:

            https://en.cppreference.com/w/... [cppreference.com]

            • Modules: partial...partial...none...none

              Come on, catching up with Modula-2 from the 1970s (!!!) in toolchain design quality is literally the most important change that will have happened to C++. You can't possibly say that support for C++20 is "fairly complete" without modules any more than a living person is "fairly complete" without any intestines.

              • Nobody ever wrote big programs in Modula-2.

                (or any other language in the 1970s - there wasn't really the disk space)

                One look at the size of the precompiled-header folder in a big Visual C++ project will tell you where the problem is. It's about 4 GB on the program I'm working on right now. I usually put it on a separate drive to avoid constantly backing up 4 GB of regenerable data.

      • You can, but I generally don't. There are things about Python that don't scale well to larger systems, such as dependency management, lack of static typing, breakage between Python versions, the global interpreter lock, and so forth. I know that there are workarounds for most of these, and they work well for smaller scripts and systems, but become more and more painful as a system grows in size, scope, usage, and importance.

        Once any of these things starts being a serious issue, I tend to want to transitio

    • by Bite The Pillow ( 3087109 ) on Tuesday June 07, 2022 @09:52PM (#62602152)

      What's being butchered? If it's faster, how does that hurt you?

      • We got faster python with numpy/cython at the mega expense of readability, simple code etc. I'm just curious about the sacrifices in this case
        • The sacrifice is greater code complexity in the eval loop (not visible at Python level unless you're digging into things with the `dis` module). This isn't a special purpose thing to speed specific uses of Python, it's a speed up for the same Python you've been writing, with no changes, for years.
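The eval-loop level this refers to can be peeked at with the stdlib `dis` module, which disassembles a function to bytecode (a hedged sketch; on 3.11, `dis.dis(..., adaptive=True)` additionally shows the specialized instructions, while the plain call below works on older versions too):

```python
import dis

def add(a, b):
    return a + b

# On 3.11 the BINARY_OP instruction in this function's bytecode is
# what the new specializing adaptive interpreter rewrites at run
# time (dis.dis(add, adaptive=True) shows the specialized form).
dis.dis(add)
```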
          • Well if it's invisible I can live with that! And maybe that will stop people using other arcane/unreadable ways to optimize it. E.g. with numpy, yes, vectorization is cool and all, but if I really want vectorized performance I'll write a GPU kernel, if not I'll write a for loop.
            • by jma05 ( 897351 )

              You have no idea how ridiculous it sounds if you advise a scientist to use a "GPU kernel" instead of tensorflow or cupy.
              In maybe 0.01% of the cases, they may farm that job out to coders like you, but for everyday scientific analysis, a GPU kernel is arcane and completely unnecessary.

                • Ok, I might not have expressed myself very well. Of course use tensorflow or cupy if they do the job, but these are wrappers, no? It's not really Python code that does the hard work. But for your average joe's high-level scripting, doing hard triple-loop math in Python is ... unsuitable. Like, I wouldn't write a C++ program to do some basic text processing over a bunch of files in a folder; Python is a miracle cure for that, even for basic crappy-looking GUIs via tkinter. As a sidenote, it's sad to put ad
                • by jma05 ( 897351 )

                  They are indeed wrappers... transparent wrappers. There is no code difference between tensorflow running on the CPU vs. GPU. Where and how it runs is an implementation detail. A scientist or mathematician thinks in terms of matrices, not machine instructions. It is the job of a software engineer to abstract that away so that they can focus on what's important to them.

                  No one does heavy math in pure Python. Everyone uses numpy etc. Numpy is more expressive than regular Python, which barely just got matrix mul
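The per-element overhead this subthread is arguing about is visible even without NumPy. As a hedged, stdlib-only sketch: a hand-written loop pays bytecode-dispatch cost on every iteration, while a C-implemented builtin like sum() drives the iteration in C:

```python
import timeit

N = 100_000

def loop_sum():
    # Every iteration pays bytecode-dispatch cost in the interpreter.
    total = 0
    for i in range(N):
        total += i * i
    return total

def builtin_sum():
    # The generator still runs interpreted, but the loop driving it
    # lives in C inside sum(), trimming some per-item overhead.
    return sum(i * i for i in range(N))

t_loop = timeit.timeit(loop_sum, number=20)
t_builtin = timeit.timeit(builtin_sum, number=20)
print(f"explicit loop: {t_loop:.3f}s  builtin sum: {t_builtin:.3f}s")
```

NumPy widens the gap much further by moving the whole computation into compiled code operating on contiguous arrays.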

    • by Junta ( 36770 )

      Depends: if it's implementation details causing the speed-up, then absolutely, it doesn't drive any change for the programmer. Nothing to lose there.

      There are of course language features to use that muddy the 'easy' waters a bit to let developers accept more restrictions for the sake of better performance, but the 'simple' world (no types, mix-typed lists, classes based on dictionaries, etc) is still supported. You could make the argument that by the time you start heavily using the features that try to pr

    • by shanen ( 462549 )

      Attempting to answer your question seriously: it's because of bad metrics of "performance". The time to develop the solutions should be the real focus, but because they want to reuse those solutions many times, the execution time then gets dragged into it.

  • by cas2000 ( 148703 ) on Tuesday June 07, 2022 @09:49PM (#62602144)

    That's great!

    I'm looking forward to the day when python devs understand that backtraces are a debugging tool, not an acceptable error message in a released program... not even for an early or beta release.

    A traceback is just far too many lines of useless noise to an end user: it doesn't tell them what went wrong or why, or even hint at what they might do to fix or avoid the problem in future. A user wants to see something like "Division by Zero error on line nnn of filename.py", not 50+ lines of noise hiding that error msg.

    Python's default should be sys.tracebacklimit = 0, explicitly overridden by the programmer only while they're actively debugging their code, or via a --debug or --verbose option. That would, hopefully, encourage python devs to write useful error messages, like programmers do for other languages.
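A hedged sketch of that behaviour (the function and hook names here are invented for illustration): a program can install sys.excepthook so users see a one-line error while developers keep the full traceback behind a flag:

```python
import sys
import traceback

def format_user_error(exc: BaseException) -> str:
    """One line: error type, message, and the failing location."""
    frame = traceback.extract_tb(exc.__traceback__)[-1]
    return (f"Error: {type(exc).__name__}: {exc} "
            f"(line {frame.lineno} of {frame.filename})")

def quiet_hook(exc_type, exc, tb):
    # Users see one line; run with --debug (hypothetical flag) to
    # fall back to the default hook and the full traceback.
    if "--debug" in sys.argv:
        sys.__excepthook__(exc_type, exc, tb)
    else:
        print(format_user_error(exc), file=sys.stderr)

sys.excepthook = quiet_hook
```

The hook only fires for uncaught exceptions, so caught-and-handled errors are unaffected.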

    • No, the stack trace should be the default. Only in a serious application might you put that to a log file instead. There is no such thing as a fully debugged program.

      • No, the stack trace should be the default. Only in a serious application might you put that to a log file instead. There is no such thing as a fully debugged program.

        Yes, there is such a thing, it's called "what the users run". They won't be debugging it, so it is "fully debugged", as in, won't be subjected to any further debugging (note: that's very different from bug-free). If they report bugs you can debug them on the development build which does output stacktraces.

        • When you get that support call "I gone and done somefin and it don't go" having that stack trace is invaluable.

          I actually do a lot of VBA work for which I spent some effort adding an end user stack trace facility. Very useful in practice.

          • by Entrope ( 68843 )

            That's why it should be logged to a file rather than to the screen. You want your hypothetical semiliterate user to send you a file rather than read off the stack trace from a console or text box somewhere.

            And arguably, if an exception backtrace is enough detail to figure out where your bug is, you shipped crap code. Ideally, the logic flaw is more subtle and history-dependent than that, so you should need the kind of specific context that a log file would provide.
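A hedged sketch of that log-to-file approach with the stdlib logging module (the file and logger names are invented for illustration):

```python
import logging
import os
import tempfile

# Hypothetical setup: the traceback goes to a log file the user can
# attach to a bug report instead of being dumped on their screen.
log_path = os.path.join(tempfile.gettempdir(), "myapp.log")
handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)

try:
    1 / 0
except ZeroDivisionError:
    # logger.exception records the message plus the full traceback.
    logger.exception("operation failed")
```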

            • Actually, users ship screen shots.

              And most bugs are with libraries being used rather than simple bugs in code. A stack trace is invaluable.

              • by Shaeun ( 1867894 )

                Actually, users ship screen shots.

                And most bugs are with libraries being used rather than simple bugs in code. A stack trace is invaluable.

                You mean the screenshots where the whole error is truncated because it didn't fit in the box, and you can't tell what it is because you didn't bother to add proper logging to your application? An application is not finished until the logging is done. Just use log4j; it's easy to set up and mature. And having easy-to-read error codes allows users to fix configuration errors based on a knowledge base. This reduces support cost and makes YOU money.

                • Nope. Because it is easier for users than finding the log files. And because I make the box as big as needed and do not clutter it up.

            • by Junta ( 36770 )

              And you can do that as a developer, but you have to actually set that up. Languages can't make any assumptions about writing to disk until the developer designates where to put it.

              So if a programmer was lazy and there's a stack trace, might as well dump it to the screen, it doesn't have a better idea what to do by default and 9/10 times the error by itself is useless.

              For comparison, the answer in C in *nix systems was to dump core to wherever, but that's almost unheard of by default now, and generally ulim

  • they are changing from print("Hello world!") to print[{"Hello World"}] .. python developers will switch within days, I am sure of it.

    • by narcc ( 412956 )

      I hope you're kidding. Why they can't figure out something as simple as 'print' I'll never understand...

  • by MostAwesomeDude ( 980382 ) on Wednesday June 08, 2022 @01:02AM (#62602504) Homepage

    CPython is still 3x or 4x slower than PyPy.

    • by Junta ( 36770 )

      For the subset of python ecosystem that PyPy supports, which is why we continue to care about CPython, which supports the entire ecosystem.

  • by ZiggyZiggyZig ( 5490070 ) on Wednesday June 08, 2022 @05:03AM (#62602846)

    Finally Python 3.11 For Workgroups!

    I can't wait for Python 95!

  • Finally, a Python good enough for workgroups.

  • When evaluating languages, syntax, package ecosystem and community are just a subset of the criteria.

    One should check its type system, runtime requirements, memory handling, library maturity, optimization capabilities, standardization committee members, licensing restrictions, etc.

    Generally, the adoption of a language based on "speed of feedback" on a successful implementation is the Sugarbomb Candy Cereal of programming. All bright colors, marketing pitch and self-taught hard drivin' fans of the co
