Python Programming

Proposed Change Could Speed Python Dramatically (infoworld.com) 97

"One of Python's long-standing weaknesses, its inability to scale well in multithreaded environments, is the target of a new proposal among the core developers of the popular programming language," reports InfoWorld: Developer Sam Gross has proposed a major change to the Global Interpreter Lock, or GIL — a key component in CPython, the reference implementation of Python. If accepted, Gross's proposal would rewrite the way Python serializes access to objects in its runtime from multiple threads, and would boost multithreaded performance significantly... The new proposal makes changes to the way reference counting works for Python objects, so that references from the thread that owns an object are handled differently from those coming from other threads.

The overall effect of this change, and a number of others with it, actually boosts single-threaded performance slightly — by around 10%, according to some benchmarks performed on a forked version of the interpreter versus the mainline CPython 3.9 interpreter. Multithreaded performance, on some benchmarks, scales almost linearly with each new thread in the best case — e.g., when using 20 threads, an 18.1x speedup on one benchmark and a 19.8x speedup on another.
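
For readers wondering what "handled differently" means, the general technique is biased reference counting: the thread that owns an object updates a cheap, unsynchronized counter, while every other thread goes through a slower, synchronized one. Below is a rough conceptual sketch in plain Python; the class, field names, and lock are illustrative assumptions only, since the real work happens in C inside the interpreter using atomic instructions.

import threading

class BiasedRefCount:
    """Conceptual sketch only, not the actual CPython change."""

    def __init__(self):
        self.owner = threading.get_ident()  # thread that created the object
        self.local_count = 1                # touched only by the owner: cheap, unsynchronized
        self.shared_count = 0               # touched by every other thread
        self._lock = threading.Lock()       # stand-in for atomic CPU instructions

    def incref(self):
        if threading.get_ident() == self.owner:
            self.local_count += 1           # fast path, no contention
        else:
            with self._lock:                # slow path for foreign threads
                self.shared_count += 1

    def decref(self):
        if threading.get_ident() == self.owner:
            self.local_count -= 1
        else:
            with self._lock:
                self.shared_count -= 1
        # The object is dead once both counters reach zero; merging the two
        # counters safely is where much of the real complexity lives.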

  • So (Score:4, Funny)

    by MemoryDragon ( 544441 ) on Saturday October 16, 2021 @11:49AM (#61898213)

    they are adding multiline comments finally?

  • by dbrueck ( 1872018 ) on Saturday October 16, 2021 @11:54AM (#61898231)

    This change could provide real benefits, but it's not an automatic silver bullet, as many multithreaded Python programs rely on the inherent thread safety of data structures.

    For example, consider a dictionary object that is read from in the main thread, while one or more worker threads are computing results and writing them to that same dictionary. In a typical implementation in Python, you wouldn't explicitly lock access to the dictionary because it's completely unnecessary (due to the GIL): it's impossible for any of the threads to corrupt the Python interpreter because nobody is /really/ accessing the dictionary concurrently. With the proposed change, however, such programs would likely blow up.

    That said, I'd love to see this thing be added to Python as some sort of runtime option that can be enabled or via a separate python executable that you can drop in, so that by default you'd still get the automatic thread safety but then have the option of much better performance if you explicitly agree to take on the burden of thread safety yourself.
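
    A minimal sketch of the shared-dictionary pattern described above, with a hypothetical explicit lock added the way you might under a no-GIL interpreter (the worker and data are made up). Today the lock is redundant because the GIL already serializes the updates, and the proposal's per-collection locking (see the "Collection thread-safety" reply below) may keep plain dict updates safe too, but the explicit lock makes the intent unambiguous either way:

    import threading

    results = {}                      # shared between the main thread and the workers
    results_lock = threading.Lock()   # only needed once the GIL no longer protects you

    def worker(item):
        value = item * item           # stand-in for the real computation
        with results_lock:            # explicit locking instead of relying on the GIL
            results[item] = value

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(results)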

    • Yes, there are certain actions in Python that are already considered to be atomic, and they may be able to keep that. I doubt it though. Python has had locking and semaphores for quite some time. Programs that *really* needed multi-threaded performance just used the multiprocessing module instead and then used queues to pass the information around. My guess is they'll implement this by adding an option to the threading module to allow for something like globalLock=False when the thread is instantiated.
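
      A minimal sketch of that queue-based workaround, using the standard multiprocessing module (the worker and the squaring task are made up for illustration):

      from multiprocessing import Process, Queue

      def worker(tasks, results):
          # Each worker is a separate interpreter process, so one process's GIL
          # cannot block the others.
          while True:
              item = tasks.get()
              if item is None:              # sentinel: no more work
                  break
              results.put((item, item * item))

      if __name__ == "__main__":
          tasks, results = Queue(), Queue()
          procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
          for p in procs:
              p.start()

          items = list(range(20))
          for i in items:
              tasks.put(i)
          for _ in procs:
              tasks.put(None)               # one sentinel per worker

          for _ in items:                   # drain results before joining
              print(results.get())
          for p in procs:
              p.join()
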
      • Historically python is pretty good about maintaining backwards compatibility unless absolutely necessary.

        Funny, that's not what I've read everywhere about Python 2 vs Python 3. In fact, before that I had never read of a language that broke compatibility with its earlier versions.

        • by BadDreamer ( 196188 ) on Saturday October 16, 2021 @05:27PM (#61898993) Homepage

          Visual Basic broke *utterly* when going from VB6 to VB.NET. It wasn't fully backwards compatible before that either.

          Perl routinely broke backwards compatibility, up to and through 5.10.1.

          Java has broken lots. INT anyone?

          It's been the norm since the 90's. Before that, sure, not so much. COBOL written in the 70's will generally run fine today. But in the big languages, entrenched in enterprises, it's the norm, and has led to immense amounts of legacy.

          If you haven't heard of it, you haven't been in the trenches much.

          • Perl routinely broke backwards compatibility, up to and through 5.10.1.

            Huh? What features of Perl were changed so that a piece of code written for 5.6 wouldn't work on 5.8 or 5.10 without edits?

            • Perl 4 vs Perl 5 was a major break and required substantial code changes. It happened earlier in the language's history than the Python 2 vs Python 3 break, so the community shifted to Perl 5 more quickly. Since then, almost every release has a short list of changes that may cause incompatibilities with older versions, right up through the current 5.34.
    • I have not had good experiences with using multi-threading to improve computation speed. I did some work on a custom pool memory allocator in C++, and it went like the clappers. Then I modified it to be thread safe, with locks and mutexes and so on, and the performance collapsed. In the end, it was not worth the faff of working out what was going on with multiple threads.

      • Yeah, in current Python you can't really get more computing capacity via threads unless the thread ends up releasing the GIL (e.g. worker threads compressing images by calling out to a C library). Threads are also handy for other GIL-releasing scenarios like disk or network I/O, waiting on user input, monitoring for hardware events, etc.
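
        For the I/O case, a small sketch of where threads already pay off in current Python, because the GIL is dropped while a thread waits on the network (the URLs are placeholders):

        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        URLS = [
            "https://example.com/",
            "https://example.org/",
            "https://example.net/",
        ]

        def fetch(url):
            # The GIL is released while this thread blocks on the network,
            # so the downloads genuinely overlap.
            with urlopen(url, timeout=10) as resp:
                return url, len(resp.read())

        with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
            for url, size in pool.map(fetch, URLS):
                print(url, size, "bytes")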

      • Generally, multi-threading is done to isolate the parts doing the heavy lifting to allow the program to still handle input (via UI or socket or wherever) and output. Doing it to improve computation speed beyond that requires that the load can be predicted and handled well, which is rare, or a heavy amount of hand crafting and tuning to get it working right.

        This will be very useful for using Python as a "filter", and for running continuous calculations on streaming or varying input data. I doubt it will do m

        • Generally, multi-threading is done to isolate the parts doing the heavy lifting to allow the program to still handle input...

          This is where I did score a hit with multi-threading. My real-time sound synthesis code was in one high priority thread, and the MIDI musical score interpreter was in another. I had another thread doing GUI stuff, which was lowest priority, compared to rendering the music.

          I have to say that the biggest benefit I have seen from multi-core hardware is that a bad process consuming 100% CPU cannot lock you out of your machine. However, all the web browsers I have experimented with launch multiple processes, so

      • Multi-threading can produce excellent results, with a big if: the problem must fit. I've had some very good results, 2x for some problems, nearly Nx (where N is the number of available processors) in others. And I've also had cases where things got worse. A memory pool allocator just sounds like one that would not go well.
    • by Jeremi ( 14640 ) on Saturday October 16, 2021 @04:38PM (#61898901) Homepage

      This change could provide real benefits, but it's not an automatic silver bullet, as many multithreaded Python programs rely on the inherent thread safety of data structures.

      Have a look at the section titled "Collection thread-safety", starting on page 6 of the proposal. The authors of the proposal are aware of the issue and have implemented mechanisms to allow for thread-safe multithreaded access to lists and dictionaries, at least.

    • by sjames ( 1099 )

      The big obvious win would be a state where a thread in a function need not take the GIL when accessing locally scoped variables.

      Possibly a win, possibly a train wreck: allow a function to declare read-only access to variables in larger scopes. No lock needed as long as all users have declared read-only.

    • Agreed. More Python pain. That last line in the article about breaking everything gives me PTSD flashbacks of the 2.7 to 3.0 transition.
  • by RightwingNutjob ( 1302813 ) on Saturday October 16, 2021 @11:58AM (#61898241)

    I was working on something...possibly a robot, I don't remember at this point...and this Python fanboi I was working with swore that Python was the way to go 'cuz it had this great library for just what we needed to do.

    My position was that it should have been done in C on account of it was a tiny embedded CPU and we couldn't spare the overhead.

    He was all like...naw man, it'll be fine with the jit compilation and it'll be great!

    And then he's looking through the manual for this library and it turns out it's not only not threadsafe, it'll actively fail if called from a thread.

    I don't say a word.

    He looks at me and tells me to shut up.

    • by caseih ( 160668 )

      And yet, MicroPython is increasingly being used in robotics with a wide variety of microcontrollers. It's not necessarily a replacement for C, but if I want to prototype something quickly, MicroPython works very well indeed. Having a REPL on the serial port is pretty darn useful too.
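
      For anyone who hasn't tried it, a MicroPython prototype really can be this short; this is a minimal blink sketch typed straight into the REPL, where the pin number is an assumption (25 is the onboard LED on a Raspberry Pi Pico) and differs from board to board:

      from machine import Pin
      import time

      led = Pin(25, Pin.OUT)   # board-specific pin number

      for _ in range(10):
          led.value(1)
          time.sleep(0.5)
          led.value(0)
          time.sleep(0.5)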

      • Back in the day, you could get a thing called a Javelin, which was a BASIC Stamp microcontroller with a Java bytecode interpreter. It had its uses, but it had its limitations too.

        It came from a time when java was still new enough and hyped enough for people to think it was Fred Brooks's silver bullet.

      • I like to think of MicroPython as an API layer.

        Whenever you have (new) hardware that MicroPython doesn't yet support on your MCU, you can go ahead and write a Python object backend in your language of choice (C, of course) and have the API exposed in Python.

        Yes, MicroPython needs some extra resources compared to plain C, but that's not an insupportable amount. Besides, you mostly are going to waste resources anyway by whatever "OS" replacement you're going to use on your PLC (STM32 libraries? Some R

  • It's about time! (Score:3, Insightful)

    by slotcanyon ( 7348400 ) on Saturday October 16, 2021 @12:06PM (#61898255)
    Literally. Now if they would also use curly braces instead of indentation for logical blocks, it would finally be a language worth using. Messed-up indentation is constantly screwing up logic when code is merged or refactored, and editors/IDEs that auto-"fix" indentation silently mess up program logic.
    • Literally. Now if they would also use curly braces instead of indentation for logical blocks, it would finally be a language worth using. Messed-up indentation is constantly screwing up logic when code is merged or refactored, and editors/IDEs that auto-"fix" indentation silently mess up program logic.

      Indeed!

    • You don't indent *between* the curly braces all the time for readability? That makes the curly braces unnecessarily redundant.
      • by ceoyoyo ( 59147 )

        Probably a perl programmer. They think line feeds are unnecessarily redundant.

      • Re: (Score:3, Insightful)

        by Dan East ( 318230 )

        Indenting as a matter of style, and indenting with perfection because it dictates the actual program logic, are two entirely different things.

        Also...

        if (failure) { puts("Failure!"); return;}

        Is perfectly fine. Again, it's a matter of style, and style should be flexible and superficial to function.

        • But even though it's perfectly fine, it should almost never be done, since it makes the code much less consistent and readable than Python.
        • No, that is absolutely NOT "perfectly fine". That will make it incredibly easy to miss that the line actually performs a return when you're scanning the code on a late Friday afternoon, trying to figure out what is making the program exit early.

          Code is written once, and read hundreds of times. It NEEDS to be consistent, clear and without clever formatting exceptions.

          You have just illustrated exactly why Python's approach is so vastly superior to curly bracing.

          • No, that is absolutely NOT "perfectly fine".

            This! And then some more of this.

            It's even nicer if you do it like:

            if (failure) { puts("Failure!"); return;}
            free(ptr);

            And then wonder where your memory goes. Or somebody smart goes through your code and amends:

            if (failure) { puts("Failure!"); return; } else if (!failure && refcnt==0) {
            puts("Outa here");
            }

            ...because that will happen. Nobody will go ahead and format your code before applying changes to it.

            Finally, someone who writes that, would also write this, sooner or later:

            if (failure) return;

            ...and then some idiot comes along and says:

            if (failure) puts("Failure"); return;

            Or even worse:

            if (failure) return;
            do_work();

            while fixing a bug sometime at 2 AM. All because this, to someone, was somehow unbearable:

            if (failure) {
                puts("Failure!");
                return;
            }

          • by leptons ( 891340 )
            Except when you merge two branches and everything suddenly goes to shit because the indentation doesn't work.
            • No solution is perfect, sadly. But in my decades of work, such merge issues have been rare.

              On the other hand, a clause ending up outside of a scope by mistake in curly brace languages has happened a lot more often, often with results which are not caught until (at best) integration test, or worse, deployment. Yes, unit tests SHOULD handle that, in a perfect world.

            • That's actually an argument in favor of it. Git and most version control systems do dumb, text-based comparisons, so merges are a great way to introduce bugs. The more checks you can do at compile time, the greater the likelihood that the result will be as intended. Indentation correlates with the prior logic statements, so you're describing a condition where the logical flow of a function has changed for a decent block of code, and while Python lacks proper compile time checking, it's good at checking for this type
      • I put the curly braces where I need them, sometimes making several new if-blocks or removing/merging several for loops, and then I have emacs auto-indent the whole file for me...using the curly braces I put in there.

      • High-level languages are also unnecessarily redundant. We *could* just code in 0s and 1s. But unnecessary does not mean less practical. Yes, I usually do indent between the curly braces for readability. But editors and IDEs really don't reinterpret curly braces as completely different characters (square brackets for example), or silently change curly braces to 4, 2, or 8 square brackets, or multiple square brackets to curly braces, or treat them as unimportant and interchangeable. And when I'm refactoring,
        • I have never had these problems. But then I don't use very many IDEs for Python, just PyCharm. And I use Vi in a pinch. Sounds like you haven't used the right editors.
      • The curly braces represent a syntax element, while the whitespace represents human readability decoration.

        Having whitespace elements, which a very large fraction of IDEs and copy-paste buffers will casually and automatically change, be syntactically important is a terrible idea since this now represents "quietly and invisibly corrupting your code."
    • by Rei ( 128717 )

      Performance is only half the problem. Python's memory footprint is insane, and it has many times forced me to rewrite programs in C. (In some situations numpy is a solution, but in far too many I've found myself in, it wasn't.)

      • Performance is only half the problem. Python's memory footprint is insane...

        Well no it isn't, not for my last Python project, anyway. I initially over-engineered the job, using an SQLite database. This created a need for SQL to access the data, which was a bit of a pain in the butt. It turned out that I could write much simpler code using pipe-separated-values format. It was less efficient than the SQL approach I suppose, but who cares? I did not notice any significant processing delay. The stupid simple approach got the job done. I did some calculations to work out if the dumb lin

    • Literally. Now if they would also use curly braces instead of indentation for logical blocks, it would finally be a language worth using. Messed-up indentation is constantly screwing up logic when code is merged or refactored, and editors/IDEs that auto-"fix" indentation silently mess up program logic.

      Huh? That's one of my favourite features. You should be indenting already for readability, and once your code is indented the curly braces are just extra characters reducing readability.

      Sure, once in a while the wrong indentation breaks the logic. But the other side is when the indentation goes wrong and doesn't change the logic. In that case what the dev sees (indents are easier to see than braces) and what the program does are completely different, which in general turns out worse.

      • by caseih ( 160668 )

        I also love the whitespace syntax. It feels more comfortable to me, and I feel like being close to executable pseudocode makes development a bit faster. But I can certainly understand why people don't like it, and why there can be issues at scale, especially when you're dealing with different developers on a team, cutting and pasting, etc. It's a trade-off. One I happen to like, but it's not for everyone.

        It may seem like a joke, but a braces dialect might not be so bad for those that want to use it. I once s

      • Huh? That's one of my favourite features. You should be indenting already for readability, and once your code is indented the curly braces are just extra characters reducing readability.

        There is a pernicious kind of typo that afflicts a language that makes indentation syntactically significant. When you refactor Python code by cut and paste, or copy in stuff from existing code, you have to take extra care that you adjust the indentation correctly, or you can end up with code that is syntactically valid, but does nothing like what you expected.

        I have found this with programming in C. In conditionals, the body can be a single statement, without the need to create a block enclosed in curlies.
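
        A small made-up example of that failure mode: both versions below parse and return the same count, but in the second one a pasted line has silently ended up one indent level out, so the logging call runs once after the loop instead of once per batch.

        def log_batch(batch):
            print("processed", len(batch), "records")

        def total_errors(batches):
            count = 0
            for batch in batches:
                for record in batch:
                    if record.get("error"):
                        count += 1
                log_batch(batch)      # intended: runs once per batch
            return count

        def total_errors_after_paste(batches):
            count = 0
            for batch in batches:
                for record in batch:
                    if record.get("error"):
                        count += 1
            log_batch(batch)          # one level out: runs only for the last batch
            return count

        batches = [[{"error": True}], [{"error": False}, {"error": True}]]
        print(total_errors(batches), total_errors_after_paste(batches))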

      • by leptons ( 891340 )
        > and once your code is indented the curly braces are just extra characters reducing readability.

        This is false. The curly braces provide extra readability - they clearly show where a block starts and ends instead of having to infer that from whitespace. And significant whitespace can absolutely become a problem when merging code.

        There's no rule saying you can't indent C code meaningfully, and most C code I see is indented perfectly. It's lack of discipline that leads to not indenting code and I don't
        • > and once your code is indented the curly braces are just extra characters reducing readability.

          This is false. The curly braces provide extra readability - they clearly show where a block starts and ends instead of having to infer that from whitespace. And significant whitespace can absolutely become a problem when merging code.

          There's no rule saying you can't indent C code meaningfully, and most C code I see is indented perfectly. It's lack of discipline that leads to not indenting code and I don't need an interpreter to bug out if I don't indent code perfectly every time, because I have curly braces to keep the compiler from crashing if I merge two pieces of code with slightly different indentation. The indentation shouldn't be necessary, the program logic is what is key.

          More code isn't necessarily clearer code. When the code is properly indented the braces become redundant, and redundant code is generally clutter you need to mentally filter out when reading.

          • by Anil ( 7001 )

            I guess it is a matter of taste, but there are many cases in Python where bracketing would make things much more readable.

            The lack of brackets leads to a hodge-podge of ways to denote the start/end of blocks of code; even PEP8 has no consensus
            for multi-line 'if' statements, function calls, or function definitions.

            Double-indent, to prevent it from blurring into your actual code;
            or add a comment;
            or add parens (and then match the function indenting).

            Maybe just put a bracket there, and it doesn't

    • by Kremmy ( 793693 )
      It wouldn't be so bad if people would be consistent about using tabs for tabulation. But for some reason, even in the Python guidelines, people want spaces for tabulation. It's particularly unfortunate, because tabs let your editor display them at whatever width you want: the file stays consistent, but everyone editing it sees it the way they prefer, taking as much or as little space as they like.
  • that I never use it. I spawn processes and use pipes and queues and shared memory like in ancient times - essentially trading memory - and massive amounts of it - for performance. It's one of my main beefs with it.

    It's high time this got fixed.

    • by znrt ( 2424692 )

      It's high time this got fixed.

      as far as i can tell, it won't. this change will only add some performance, which is good, but it can't make python magically thread safe, nor can it add any real threading paradigm, meaning you will have to continue doing the same.

      which isn't a bad approach at all, imo. kids these days! ;-)

    • by piojo ( 995934 )

      I was so surprised when I read someone write that python lacked effective multithreading. How could such a mature language not do something so fundamental? I tested it and found that indeed, a program with 4 threads doing work took 4x as long. (I don't remember what kind of data they were accessing in the work.)

      Apparently fixing this is a big challenge [python.org], though it's hard to not be judgmental that the language designer didn't think this important enough to fix a decade ago.
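
      The effect is easy to reproduce. A minimal benchmark along these lines (the numbers are machine-dependent and the spin function is just a stand-in for work) shows four CPU-bound threads taking roughly four times as long as one, because only one thread can execute Python bytecode at a time under the GIL:

      import time
      from threading import Thread

      def spin(n):
          # Pure-Python busy work; it holds the GIL the whole time.
          while n:
              n -= 1

      N = 10_000_000

      start = time.perf_counter()
      spin(N)
      print("1 thread :", round(time.perf_counter() - start, 2), "s")

      start = time.perf_counter()
      threads = [Thread(target=spin, args=(N,)) for _ in range(4)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print("4 threads:", round(time.perf_counter() - start, 2), "s")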

      • by caseih ( 160668 )

        Multithreading on a single core is not so efficient in any language anyway. So when you need performance in Python, you want to use multiprocessing anyway. On Linux there's really very little difference in overhead between a thread and a process, so maybe it doesn't matter as much. And many times multi-threading is used when asynchronous programming might be the better option, such as scaling up server programming. Multi-threaded programming has really fallen out of favor of late because it doesn't scale pas

        • by piojo ( 995934 )

          I'm aware that async is a good paradigm for network communications, and for ease of use. Though I tried it for a program that was I/O bound (in Rust) and found it hurt performance compared to using a small number of worker threads. As for threads and cores--except in python, single process multi-threaded programs I've written have always seemed to be able to utilize multiple cores. At least they maxed out my CPU. If cores aren't being used efficiently, would I still see Task Manager or top claiming high uti

          • by caseih ( 160668 )

            You are correct. In Python multi-threaded programs occupy one core only, because of the GIL. This work will hopefully change that a bit. In other languages, or when talking about OS-level threads, those are farmed out to many cores, making them truly run simultaneously, which is why multi-threading on multiple cores is a good way of increasing computational performance. Totally agreed. The question is, does Python need multi-threading performance? On Linux, that could be no because on Linux there's vi

        • There is an enormous difference in overhead between a thread and a process. Start-up times for a thread are typically hundreds of cycles, versus hundreds of thousands or more for a process. More importantly, data access times across threads are only a few cycles to perhaps a hundred. For processes this is many thousands at least.

          The simplest, safest and lowest hanging fruit for multiple cores is to split up a big loop across threads. This can typically be done with one OpenMP directive, has virtually no overhea

      • by ceoyoyo ( 59147 )

        If you're looking for high performance, you're probably better off using one of the multithreaded libraries or writing your intensive code in C, where you can use threads just fine, and calling it from Python.

        It would be nice to have a Python interpreter that utilized multiple cores better, but there are lots of more effective options already.

    • that I never use it. I spawn processes and use pipes and queues and shared memory like in ancient times - essentially trading memory - and massive amounts of it - for performance. It's one of my main beefs with it.

      It's high time this got fixed.

      Try out the multiprocessing library [python.org]. That's essentially what it does. It's definitely got some limitations in how you pass memory around, but you can take full advantage of multiple cores.
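
      The highest-level entry point is a process pool; a minimal sketch with a made-up CPU-bound task:

      from multiprocessing import Pool

      def crunch(n):
          # Stand-in for a CPU-bound task; each call runs in its own process,
          # so the work spreads across cores despite the GIL.
          return sum(i * i for i in range(n))

      if __name__ == "__main__":
          with Pool(processes=4) as pool:
              results = pool.map(crunch, [2_000_000] * 8)
          print(len(results), "results")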

    • Badly thought out multi-threading is pathetic in any language. I have found this in C++. I had a whole bunch of threads rendering real time audio for a synth, but managing these threads and collecting the results clobbered the performance, probably because I divided the task too finely. Multi-threading is not a magic potion.

      • You want to use something mature, like OpenMP, that can manage this kind of scheduling and data collation for you. Using whatever threading library is being pushed this week just makes these issues harder.

  • ... involve switching to a more scalable, performant language? Because it should.
  • Finally! (Score:4, Funny)

    by fahrbot-bot ( 874524 ) on Saturday October 16, 2021 @01:32PM (#61898517)

    Proposed Change Could Speed Python Dramatically

    They're going to re-write it in Perl. :-)

  • by John_The_Geek ( 1159117 ) on Saturday October 16, 2021 @01:44PM (#61898539)

    Python performance limitations have led too many people to desperate hacks, the foremost of which is the completely misapplied multiprocessing.

    As the name suggests, it is multiple processes. That means expensive launch time per process and, more importantly, expensive communication across totally different memory spaces. That is acceptable, and at times necessary, for multiple nodes. Most languages have a mechanism for those cases (MPI for the pros). While Python has an MPI binding, the community has largely decided to pretend that multiprocessing=multithreading out of embarrassment that there is no multi-threading.

    But, it is no substitute for multi-threading - for when codes should be sharing data on a single node, in a single memory space. Multi-threading is orders of magnitude faster to communicate data, and the thread coordination and creation mechanisms are critical for performance. More importantly, what might be a single directive to multi-thread a loop instead becomes a totally unnatural, contrived code to accommodate the wrong, distributed memory, paradigm.

    We are not talking minor differences. This is truly multiple orders of magnitude kind of stuff, and it is fortunate that most of the important Python libraries can be written in C or other languages (Fortran, lol) that can access multi-threading, usually via OpenMP.

    And as every major processor these days is very multi-core, this would be a huge step forward in making Python more efficient.

    • by theCoder ( 23772 )

      I think that it depends a lot on your workload. If each execution unit is mostly independent and only needs to coordinate with other units so that work isn't duplicated, then multiprocessing will probably be as fast as multithreading. It might even be faster because you might be able to avoid some underlying locks. In addition, it allows for easier migrating to running on multiple hosts and also prevents a crash in one execution unit from taking down the entire system.

      Unless you are starting a new thread

      • The startup time for a new process is always much, much more than for a new thread, in any circumstance. Perhaps you are saying that you don't care because of how long they live relative to the compute time, which could be true.

        Even if each unit is 100% independent, MT will still be faster than MP. There is no locking, mutexes, semaphores, etc. necessary in that case, so all you do is increase the context switch time. As the dependencies increase, the thread coordination mechanisms (mutexes, etc.) will alw

        • by theCoder ( 23772 )

          There could be locks in your lower level libraries that could make MT threading slower than MP. For example, calls to malloc() could take a lock (depending on your libc implementation). Those locks could add up in your MT implementation due to contention that wouldn't happen in the MP implementation. But you're right that without knowing specifics of a workload, either solution could be better.

          The MP solution that I am most familiar with was called the Sun Grid Engine. Before their demise, Sun open sour

  • So decades ago when I wanted multithreading performance from python and was literally kicked off to C++ land (love it btw), they now change their tune?

    Thought the "science was settled" about how the GIL is the best thing since sliced bread and I am just a hipster who, oh wait, correctly spotted what would be important in the future and asked for it earlier... like pre-Python 3, you tools.

    Go rewrite your rewrite because you didn't listen to us last time...

    • by tdelaney ( 458893 ) on Saturday October 16, 2021 @05:17PM (#61898971)

      No, the GIL has always been seen as at best a necessary evil, not a desirable feature. The GIL is not considered part of Python but only part of the CPython implementation - other implementations are free to (and do) use other methods.

      There have been many attempts to remove the GIL in the past, but they've all suffered from one major flaw - significant performance reduction on single-threaded code. This new implementation appears to not suffer from that same flaw, so for the first time in a long time people are getting excited about the possibility of finally fixing what most consider an undesirable (but practical) implementation in CPython.

    • by Sloppy ( 14984 )

      Thought the "science was settled" about how the GIL is the best thing since sliced bread

      Not on my planet. For the last 20 years or so, the most praise I have heard about the GIL is that it's here and it works.

  • Yes, we could compensate for Python's abysmal performance by using more processors. But first off, that's not possible for all types of problems, and secondly, that's just a horrifyingly wasteful solution.

    Python is dog slow. A 10% improvement is not lipstick on a pig, it's chapstick.

    • Because it's very, very important to wait for the database quickly.

    • by gweihir ( 88907 )

      Python is excellent at being glue-code. That it is not suitable for heavy lifting is pretty obvious. On the other hand, doing that heavy lifting in C and embedding that in Python is pretty easy to do.
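
      As a tiny illustration of how low the bar for that glue is, ctypes can call straight into a C library with a few lines (this assumes a Unix-like system where the C math library can be located):

      import ctypes
      import ctypes.util

      # find_library may return None on some platforms (e.g. Windows).
      libm = ctypes.CDLL(ctypes.util.find_library("m"))
      libm.cos.restype = ctypes.c_double
      libm.cos.argtypes = [ctypes.c_double]

      print(libm.cos(0.0))   # 1.0, computed in C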

  • I made a mistake translating some code from Jython 2.5 to CPython. Completely forgot about the GIL. Jython doesn't have a GIL at all, so you get Python syntax but proper multi-threading.
  • One of the mistakes frequently made by non-CS experts is to expect multithreading to give massive speed-ups. In actual reality, there are few things where that is true. Most real-world software can benefit from a few parallel activities (say, up to 3 or 4), but that is it. For some software, single-threading is the fastest option. In addition, even where parallelizing is beneficial, this is not a beginner's game. Doing multi-threaded software right is tricky and you need to deal with some new problems

  • I can't even conceive how difficult it would be to do reference counting across multiple cores spread across multiple machines spread across multiple locations with different caching strategies, interconnect speeds and latencies.
