Python Programming

Microsoft Funds a Team with Guido van Rossum to Double the Speed of Python (zdnet.com) 153

ZDNet reports: Guido van Rossum, who created popular programming language Python 30 years ago, has outlined his ambitions to make it twice as fast — addressing a key weakness of Python compared to faster languages like C++.

Speed in Core Python (CPython) is one of the reasons why other implementations have emerged, such as Pyston.... In a contribution to the PyCon US Language Summit this week, van Rossum posted a document on Microsoft-owned GitHub, first spotted by The Register, detailing some of his ambitions to make Python a faster language, promising to double its speed in Python 3.11 — one of three Python branches that will emerge next year in a pre-alpha release... van Rossum was "given freedom to pick a project" at Microsoft and adds that he "chose to go back to my roots".

"This is Microsoft's way of giving back to Python," writes van Rossum... According to van Rossum, Microsoft has funded a small Python team to "take charge of performance improvements" in the interpreted language...

He says that the main beneficiaries of upcoming changes to Python will be those running "CPU-intensive pure Python code" and users of websites built in Python.

The Register notes that the faster CPython project "has a GitHub repository which includes a fork of CPython as well as an issue tracker for ideas and tools for analysing performance."

"According to Van Rossum, there will be 'no long-lived forks/branches, no surprise 6,000 line pull requests,' and everything will be open source."
Comments Filter:
  • Sigh. (Score:5, Interesting)

    by Excelcia ( 906188 ) <slashdot@excelcia.ca> on Sunday May 16, 2021 @11:41PM (#61391866) Homepage Journal

    Gotta love how the Python people say it's already fast enough, that you don't need to go to compiled languages. And then stories like this come along.

    A comment made by one of the developers of PiTiVi (an open source video editor written in Python) on that project's how-to-contribute page [pitivi.org] sums up Python in general, and extends quite well to my thoughts on Microsoft's current efforts. They suggest that if you want to contribute to the project, one area to work on is:

            GStreamer Editing Services, the C library on which Pitivi depends to do all the serious work. If you want to work on the backend, this is the way to go.

    Which is a great summation for Python. Great for quick and dirty little tools, good for a project framework perhaps, but if you want to do serious work, go to a C library. This will always be the case. Ruby and Python and Perl and even mighty Java, they will have their niches supported by adherents who expound some aspect of their garbage collection, or ease of use, or type safety, and sure, if you need a jack-knife then it's great. But for the real work, people will always turn to native code. In a few years Microsoft will be done extending and extinguishing their work for Python, and there will be some new debutante... there will always be some new debutante, bright and beautiful in that sparkling ball gown, that will draw the ooohs and aahhhs of the boys in the crowd and which will rally the people to cry "this... THIS is the one that will kill native code dead once and for all", and yet it will never happen. It will never, ever happen. You can put a turbocharger on an original Volkswagen Beetle and it's still a VW Beetle. Cool to drive, and you gotta love that put-put engine sound they make, but that's all it will ever be.

    • by Ostracus ( 1354233 ) on Sunday May 16, 2021 @11:58PM (#61391900) Journal

      Which is a great summation for Python. Great for quick and dirty little tools, good for a project framework perhaps, but if you want to do serious work, go to a C library. This will always be the case.

      Well if it will "always be the case" then why should anyone even work on this project?

    • Re:Sigh. (Score:4, Informative)

      by Randseed ( 132501 ) on Monday May 17, 2021 @12:14AM (#61391926)

      You have it. The reality is that we can't. Native code -- the lower the level you go -- is always going to be better, faster, and smaller. I remember coding in assembly of all things because that took the overhead off and I could do amazing (at the time) graphics. Now if you look at what has happened over the last twenty years or so, the trend has been toward convenience and functionality. That has left us with this bad environment of programs which are 100GB but should be 1GB. They include all sorts of extraneous crap in them that the program never uses, some idiot's DRM, and a bunch of redundant libraries. This has even happened in the Linux world -- and I can't believe I'm having to say this -- where people write things that use one function each out of fifteen libraries, requiring all of those to be installed. Granted, this works seven hells of a lot better than Windows, but unfortunately people using these kinds of toolkits are here to stay.

      Twenty years ago, there was this dream of a world where somebody could just string applications together using prebuilt components (objects) and it would greatly improve software development. Instead, what happened was that most of the supposed toolkits were basically crap and different groups tried to compete against each other (e.g., KDE and GNOME).

      What a mess. But on Slashdot, we're still typing in HTML 1.0.

      • I'm not sure when exactly it was that libraries stopped being composed of small independent modules that the linker could intelligently include and exclude... but they did...

        I think that's a big part of it. Link in any part of a library and expect to get the entire thing these days.

        The simplicity of it all is over.
        • Re:Sigh. (Score:4, Informative)

          by Randseed ( 132501 ) on Monday May 17, 2021 @02:34AM (#61392110)
          That's part of the thing. It used to be that you'd do something like "#include <foo.h>" and you'd get those functions. Now it's like you "#include <foo.h>" and you pull in a bunch of other crap. But that isn't even the problem. Okay, so you pulled in a bunch of crap you aren't using and increased your compile time. Big deal. But now every other damned program needs its own version of a dynamic library because they aren't compatible with each other. I'm describing DLL Hell.
          • But now every other damned program needs its own version of a dynamic library because they aren't compatible with each other. I'm describing DLL Hell.

            No, you certainly are not. It's DLL hell when you need multiple versions of libraries and it becomes a problem. In Unixlikes it is never a problem like it used to be on Windows because everything's a file. Dynamic libraries aren't special on Unix, they're just a file that gets loaded from a path, and the only drawback to having multiple versions of the same library loaded from differing locations is that you're not sharing memory.

            Unixlikes don't ever get into DLL hell. They just get disk bloat.

            • While the idea of shared libraries and DLLs sounds good on paper, I have found they are often lacking in real-world use. We can say that code was programmed poorly, or was designed by an idiot. But the point of having computers and software is to solve the business use case, which is often unique and not part of the standard set. The .NET certification by Microsoft was all about making software that already exists, basic CRUD-based products, where we as programmers were called on to make something tha

            • Re: (Score:3, Informative)

              But the description you just supplied is EXACTLY how DLLs work on Windows. A DLL is a file, and your application loads it into memory. The notion of DLL hell is not caused by Windows; it is an issue that any shared library can face when the developers of that library release new versions with breaking changes. If your app always loads the default version of the DLL that is registered in the Registry, and there is a breaking update, it can be an issue. The resolution on Windows is to keep a separate copy, or

              • Statically compiled binaries FTW.

              • [description of DLL hell on windows] This is the exact same way it is handled on Linux.

                No, it just isn't. True DLL hell on Windows used to come when one application required one version of a library and another required a different one. You were not able to have both installed. On Linux, major/minor versioning in library file names means not only can you have both installed, but you can also choose which is the default. Sure, you can still, theoretically, have problems - if you need an old version of a library that has been released twice with the same minor version, for example, but t

      • Re:Sigh. (Score:5, Insightful)

        by djinn6 ( 1868030 ) on Monday May 17, 2021 @05:17AM (#61392376)

        Now if you look at what has happened over the last twenty years or so, the trend has been to convenience and functionality. That has left us with this bad environment of programs which are 100GB but should be 1GB.

        This is the "good enough" problem. The 100 GB program is good enough because people can get a 2 TB hard drive for $60, therefore it's not worth spending any more time optimizing it to be 1 GB. Few if any of your customers would pay more for you to do that. They would however, pay your competitor instead if they got the product to market before you.

        • Now if you look at what has happened over the last twenty years or so, the trend has been to convenience and functionality. That has left us with this bad environment of programs which are 100GB but should be 1GB.

          This is the "good enough" problem. The 100 GB program is good enough because people can get a 2 TB hard drive for $60, therefore it's not worth spending any more time optimizing it to be 1 GB. Few if any of your customers would pay more for you to do that. They would however, pay your competitor instead if they got the product to market before you.

          It's not money expensive, it's time-expensive.

          Memory access to L2 cache is more time-expensive than L1; main memory is even more time-expensive. SSDs, spinning disks, and the internet are all even worse.

          The bigger the program, the less likely its components will fit into L1, L2, or main memory.

          Bigger program = Slower Program.
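          You can even see a shadow of this from Python, where the interpreter hides most hardware detail. A rough, machine-dependent sketch (sizes arbitrary, invented for illustration) that sums the same list in sequential versus shuffled order -- same work, different locality:

          import random
          import time

          n = 5_000_000
          seq = list(range(n))        # indices in sequential order
          rnd = seq[:]                # same indices, shuffled
          random.shuffle(rnd)
          data = seq                  # the list we sum out of

          for label, order in (("sequential", seq), ("shuffled  ", rnd)):
              start = time.perf_counter()
              total = 0
              for i in order:         # only the access pattern differs
                  total += data[i]
              print(label, time.perf_counter() - start)

          On most machines the shuffled walk comes out measurably slower, even through all the interpreter overhead.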

          • Not necessarily. I have seen programs that run off of the disk run faster than the RAM equivalent. We have the 20/80 rule, where 20% of the data is used 80% of the time. Having to load 100% of your data into RAM is costly, but if you can pull from disk the data you need when you need it, then you are flying much faster.

      • Re:Sigh. (Score:5, Insightful)

        by jythie ( 914043 ) on Monday May 17, 2021 @07:32AM (#61392660)
        On the other hand, I've seen projects come together with a handful of developers over a handful of sprints that, a few decades back, would have taken a team of 30 'real programmers' a couple of years. We have traded program efficiency for developer time and scaling... not a tradeoff that is always the best, but I think on the whole it has opened things up dramatically.
    • Comment removed based on user account deletion
      • by bored ( 40072 )

        Really inefficient native code, because the Java bytecode model isn't really designed as an IL for modern register-oriented machines. Further, all the dynamic functionality has to either be trapped and interpreted or turned into pretty inefficient native code to deal with the dynamic nature. Plus garbage collection etc. tends to fragment up the heap, so one gets poor memory placement, which destroys cache locality, etc., etc., etc.

        This isn't really just a java thing, python has similar problems without all th

    • by Chas ( 5144 )

      For the majority of projects currently in the Python space, yes. It's more than fast enough.

      But for a small subset of "great giant beastie" projects, it isn't, it's causing people to have to jump through flaming hoops to get what they want.

      Sure, more average projects won't notice a difference without benchmarking software.

      But if you can introduce a meaningful speedup in very large, complex applications without screwing up functionality or introducing security issues, why not?

      • But for a small subset of "great giant beastie" projects, it isn't, it's causing people to have to jump through flaming hoops to get what they want.

        Ya. That's pretty much my experience with it.
        I don't give a shit about small CLI programs written in Python. I mean really, who gives a shit what the resource usage of short-lived processes are?

        But anything big... Is such. a. fucking. nightmare.
        Zenoss, I'm looking at you.

    • Re:Sigh. (Score:5, Insightful)

      by tmmagee ( 1475877 ) on Monday May 17, 2021 @01:21AM (#61392028)

      I think this has always been true. Look at video games. The game engine (where the "serious" performance intensive work happens) is in C/C++, all of the game logic is in a scripting language. It is exactly where the balance between the convenience of scripting languages and the power of system languages has always lain, and probably exactly where it will always lie.

      • Re:Sigh. (Score:5, Informative)

        by phantomfive ( 622387 ) on Monday May 17, 2021 @02:18AM (#61392090) Journal

        I think this has always been true. Look at video games. The game engine (where the "serious" performance intensive work happens) is in C/C++, all of the game logic is in a scripting language.

        That has definitely NOT been always true. In the past, video games were written completely in assembly for speed and compactness.

        • Reminds me of the dudes at Future Crew. https://www.youtube.com/watch?... [youtube.com]
          • Reminds me of the dudes at Future Crew.

            Well, that (native code and assembler), but also knowing fucking well the hardware you're running on.

            A ton of the effects in their demos owe to the hardware capabilities of the VGA:
            - the weird planar mode of the Tweak-mode/Mode-X video modes (every 4 pixels share the same memory address but occupy different planes); this way, you can fill up to 16 pixels with a single 32-bit write (e.g., useful for quickly filling large flat polygons).
            - the VGA latches (allow you to blit 4 pixels around with a single byte copy) (

        • by jythie ( 914043 )
          That is pretty 'far past', and even then not as true as you might think. Even in the 'pure assembly' games you tended to find attempts to make things data-driven when possible, including things like control logic -- in other words, early scripting systems.
        • Re:Sigh. (Score:4, Informative)

          by Curupira ( 1899458 ) on Monday May 17, 2021 @09:48AM (#61393122)

          I think this has always been true. Look at video games. The game engine (where the "serious" performance intensive work happens) is in C/C++, all of the game logic is in a scripting language.

          That has definitely NOT been always true. In the past, video games were written completely in assembly for speed and compactness.

          You're right, it wasn't always true, but it became true pretty quickly for some genres. The main example is perhaps adventure games: they didn't really need the same speed and responsiveness that action games needed, therefore adventure game programmers quickly learned to detach the game engine from the game logic.

          And so Infocom developed Z-machine (a virtual machine for its text adventure games like Zork) in 1979, which was followed by Sierra's AGI in 1984 and by LucasArts's SCUMM in 1987, among many others.

    • by Laxator2 ( 973549 ) on Monday May 17, 2021 @03:55AM (#61392252)

      You'd be surprised how many people are not aware of this trivial fact, that software runs on hardware. There are times when all you want from a language is to expose the functionality provided by the hardware and run as fast as the hardware allows. Most scientific simulation code has a very simple structure: load data from file, run nested loops, write result to file. FORTRAN is still used for this kind of work as it is good at what its name says: FORmula TRANslator. Also, the resulting code is very fast.
      Then there are situations where you want to think about the problem and not worry about the hardware. Here is where languages like Python truly shine. As long as these two opposite requirements exist, we will have specialized languages that address one situation or the other.
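      That load/crunch/store shape is easy to sketch in Python too. A minimal, hypothetical example (file names invented) where numpy pushes the nested loops down into compiled code -- exactly the division of labor described above:

      import numpy as np

      grid = np.loadtxt("input.dat")              # load data from file

      for _ in range(100):                        # run nested loops (in C, via numpy)
          # Jacobi-style averaging of each cell's four neighbors.
          grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                     grid[1:-1, :-2] + grid[1:-1, 2:])

      np.savetxt("output.dat", grid)              # write result to file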

    • Re: Sigh. (Score:3, Insightful)

      by orlanz ( 882574 )

      ... but if you want to do serious work ...But for the real work, people will always turn to native code....

      Sorry, but I hear stupid, arrogant, or ignorant (usually the last) statements like this all the time in this debate. You are arguing with a fool if they are saying Python is almost as fast as C/C++.

      But the same could equally be said when the opposition argues that "serious work" can't be produced with the "tool for the job", based on the singular dimension of "speed of execution".

      I am not arguing that embedded systems, OS, hardware drivers, micro-controllers, etc aren't "serious work". Just that this arena is op

      • Re: Sigh. (Score:5, Interesting)

        by Joey Vegetables ( 686525 ) on Monday May 17, 2021 @11:45AM (#61393496) Journal

        Addressing mainly the GP:

        Part of my "real work" this morning consisted of extracting data from two, um, "interestingly" formatted Excel spreadsheets, into a relatively standard data format that these spreadsheets did not come close to resembling.

        I used Python + openpyxl to get this job done, in a couple hours' time. Note that I had never used openpyxl before, so a good bit of that time was simply familiarizing myself with the library.

        It would have been possible to use C or C++ or for that matter assembly, but why bother? It would have taken easily 10x as long, maybe more.

        Not all "real work" requires being close to the metal. A lot of it requires being closer to the problem domain instead, and in that space, higher-level languages such as Python tend to be not only useful, but, at least arguably, optimally so.

      • Exactly. Besides, what is "real work" anyway? If one gets paid, then shit, that's real work.
    • by Junta ( 36770 )

      I generally hear the argument that, in many scenarios, Python is fast enough given the time saved compared to writing in C.

      You are mostly I/O bound, and your computational drawback won't even be noticed compared to the latency penalty of your network? Then Python may be good enough.

      You are doing some significant number crunching, but all the operations you are doing fall within the scope of numpy? Python will probably be fine for your work, since you are merely supervising C code.

      You have a larger project in Py
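      To make the numpy point above concrete, a rough sketch (exact timings vary by machine, but the gap is typically one to two orders of magnitude):

      import time
      import numpy as np

      data = list(range(1_000_000))
      arr = np.arange(1_000_000, dtype=np.float64)

      start = time.perf_counter()
      total = sum(x * x for x in data)      # bytecode dispatched per element
      py_time = time.perf_counter() - start

      start = time.perf_counter()
      total_np = float(np.dot(arr, arr))    # one call; the loop runs in C
      np_time = time.perf_counter() - start

      print(f"pure Python: {py_time:.4f}s   numpy: {np_time:.4f}s")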

    • by jythie ( 914043 )
      I think a big part of the problem is that many of the core Python devs are obsessed with 'relevance' and with being everything to everyone, which has led to a language with a frustrating number of silos and upgrade instability.

      Thing is, Python is good enough for 'serious work', just not _all_ serious work, and I think the steering people would be better off accepting that.
    • Gotta love how the Python people say it's already fast enough, that you don't need to go to compiled languages. And then stories like this come along.

      I don't know what the Python people say, but if they say that, they're full of shit.
      Python is a fucking dog. I don't know if it's the interpreter itself, or just shitty programming design, but every major Python app I have to get running for the various middle managers in my org requires its own server to do fucking simple work, and requires regular restarts of the application because it eventually grinds to a fucking halt as memory use increases.

      I truly fucking hate having to deal with shit written in t

      • If it's too slow for what you need to do, use something else.

        If someone handed you a pile of crap code, which one can write in any language, it may not be the fault of the language; it may be the fault of the people who wrote the crap code.

        I'm a software developer with enough experience to know that there are lots of tools out there, and that some are better suited for particular classes of problems than others.

        And for me, for what I use Python for, it is good, if not optimal.

        If I were usually doing video d

        • If it's too slow for what you need to do, use something else.

          I thank you for that pointless advice.

          If someone handed you a pile of crap code, which one can write in any language, it may not be the fault of the language; it may be the fault of the people who wrote the crap code.

          Pretty sure I said exactly that.
          However, I'm intimately aware of at least 2 facets of the interpreter itself that make it practically impossible for a large project to perform well.
          First: It's literally one of the slowest language interpreters in existence after QuickBasic.
          Second: The way the interpreter handles object allocation will eventually cause your program to become a stalled-out quagmire, with the GC desperately trying to keep things going.

          I'm a software developer with enough experience to know that there are lots of tools out there, and that some are better suited for particular classes of problems than others.

          Good for you.
          I

          • Parsing data in a text file? This was a very similar job (it was Excel, not a text file). If it had been 400TB, then I assure you I would not have used Python. (Probably C# or Java.) But it was a few MB. It took about 120x longer to write than to run.

            BTW, I also could have competently used C#, Java, or a handful of other tools to do this job. I chose Python (plus a library I had not previously used) because it just seemed like the right fit. I'm sure that is partly because of my familiarity with it;

    • Python is fast enough, but it could always get faster. We use interpreted languages because they are easier to maintain. I have made my career fixing and maintaining old code. When something was programmed in an interpreted language, my life is much easier, as I can just go into the code and fix it. However, for that 20+ year old VB executable, or that C code, I have to hunt for the source code, and occasionally, I needed to go into a hex editor and change some values around in the executab

    • by caseih ( 160668 )

      Python has always been an excellent glue language. It's good for some kinds of large projects, but that usually involves putting together components from libraries, which is really handy.

      Python is heavily used in supercomputing, where it competes with Matlab. In both cases, these interpreted languages use high-speed math libraries that are compiled. It's the expressivity of these languages that appeals to the mathematicians and physicists who use them.

      Python may or may not be appropriate for your use cases (

    • The age of frameworks.
    • Yeah, the argument for Python being good enough performance has always been based on the idea that Python isn't being used to do much besides set a few object properties and perform minimal data transformations; all the heavy lifting has to go in a C library. By that logic, any language with C bindings is fast enough.
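      And that logic is trivially available: the stdlib ctypes module binds any C shared library, so "fast Python" often just means a thin Python skin over C. A sketch, assuming a Unix-like system where the C math library can be located:

      import ctypes
      import ctypes.util

      # Locate and load the C math library (name and path are platform-specific).
      libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

      libm.cos.argtypes = [ctypes.c_double]
      libm.cos.restype = ctypes.c_double

      print(libm.cos(0.0))   # 1.0, computed by C, merely dispatched from Python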

    • Gotta love how the Python people say it's already fast enough, that you don't need to go to compiled languages. And then stories like this come along.

      It is fast enough for a lot of things, and not fast enough for others. This sounds like an effort to reduce the list of things it's not fast enough for.

      But for the real work, people will always turn to native code.

      If your only definition of "real work" is maximum performance while executing the lines you type, sure. But I've seen a lot of other code doing "real work" for which other languages, including python, are a better choice than C/C++.

    • Gotta love how the Python people say it's already fast enough, that you don't need to go to compiled languages. And then stories like this come along.

      A comment made by one of the developers of PiTiVi (an open source video editor written in Python) on that project's how-to-contribute page [pitivi.org] sums up Python in general, and extends quite well to my thoughts on Microsoft's current efforts. They suggest that if you want to contribute to the project, one area to work on is:

      GStreamer Editing Services, the C library on which Pitivi depends to do all the serious work. If you want to work on the backend, this is the way to go.

      Which is a great summation for Python. Great for quick and dirty little tools, good for a project framework perhaps, but if you want to do serious work, go to a C library. This will always be the case. Ruby and Python and Perl and even mighty Java, they will have their niches supported by adherents who expound some aspect of their garbage collection, or ease of use, or type safety, and sure, if you need a jack-knife then it's great. But for the real work, people will always turn to native code

      WTF are you talking about? Do you realize how many high-performance/big-data systems are written in Java? Very rarely do you need to go native with Java (which sports a JIT that compiles to native almost right off the bat, often more efficiently than most hand-tuned code).

      I get the criticism with Ruby and Python (and who the hell uses Perl anyway? Have you seen what's out there since, I dunno, the mid-2000s?). But Java (or C# for that matter)?

      All I can see is "cool story bro."

  • Guido is going to make you an offer you can't refuse. He's gonna speed Python up. And if you don't sign on, you're gonna sleep with the fishes, capiche?
  • by 93 Escort Wagon ( 326346 ) on Sunday May 16, 2021 @11:53PM (#61391894)

    I suspect they picked a compiled language to avoid mentioning the more embarrassing fact that python is slow compared to other scripting languages like perl or even php.

    • Right, and making CPython twice as fast would just be a drop in a bucket of the performance gap.

      • It compiles to Java bytecode. Except ... no static typing...

        • Jython was pretty cool during the days of Python 2.2+. But development has pretty much halted, and they never made the jump to Python 3. So sadly, for all practical purposes, Jython is dead in the water.

          AFAIR, performance-wise Jython wasn't faster than CPython, because it could not use the C parts of the standard library and had to fall back to pure Python for a lot of otherwise C-optimized functions.

          And as you already stated, many optimizations of the JVM could not be applied due to the lack of static typing.

    • Is Python really slow compared to Perl?

      • by bill_mcgonigle ( 4333 ) * on Monday May 17, 2021 @07:32AM (#61392658) Homepage Journal

        Very. Perl may be a pain in the ass, but a website running on mod_perl is very fast.

        Horizontal scaling has allowed Python-based websites to become more tenable. You can get more decent Python developers for less money than excellent Perl programmers, so you throw hardware at it.

        Python's decent object model has also allowed non-experts to contribute decent libraries and then better programmers can contribute fixes. CPAN has a higher bar to entry.

        Perl got too popular in the 90's and the raw object model was left exposed, and Larry wandered off into Perl6 land, which he never delivered, leaving the language without an obvious designer for evolution. Quite tragic.

        If Perl6 usability had continued on the Perl5 base it would have been a big change, but, hey, updating my perl4 code was a pain too. Instead, new computer science inventions were chased and even two decades later they're still at it. But most people moved on.

        Python isn't great, but it's delivered, even the hard stuff. Projects like Julia left the hard stuff to the end and never got to it. Python is slow and memory-intensive, but CPUs are fast, and Java is worse on memory and very verbose (being closer to C++/Rust, with Groovy inheriting its infrastructure woes). Nobody in commercial tech wants to be associated with Ada, and PHP is a fractal of bad design. Go is weird enough that most people skip it, even though it has value. Ruby is elegant but much slower than even Python. And those are just the popular ones I know.

        So Python wins by default as the least-problematic decent language in the dynamic space. Might as well make it more efficient.

        • Very. Perl may be a pain in the ass, but a website running on mod_perl is very fast.

          Seconded.
          I wrote a mod_perl implementation of an IP management system because everything else out there sucked.
          It was faster, more capable, and had a laughably smaller working set.
          Though I don't find Perl a pain in the ass, I do understand why some people don't like it.
          I suspect I love it for the same reason I love C, javascript, and lua.
          I like freedom in my language.

      • Oh god yes.
        At least in any project of substantial size.
        I don't know that a $a++ while($a < 10000); loop is any quicker, but any python project of substantial size becomes a monster that its GC can't keep up with, and you *will* have to restart that thing, and build service monitors to keep it alive (including waiting 8 minutes for it to start back up), and you will need about 12x as much RAM as you should need for whatever it does... which isn't a problem if you've got infinite resources to hand everyt
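        When one of those monsters starts ballooning, the stdlib tracemalloc module at least shows where the allocations pile up. A minimal sketch (the ever-growing cache here is invented to stand in for whatever the real app leaks):

        import tracemalloc

        tracemalloc.start()

        cache = []
        for i in range(100_000):
            cache.append("request-%d" % i)    # stand-in for state that is never freed

        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics("lineno")[:3]:
            print(stat)    # file:line plus block count and total size of top allocators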
        • I love Python, but I don't tend to use it for huge apps. That's really not its forte, plus, in that realm, strong and static typing become more and more useful.

          You can get a lot of the same ease-of-use plus nearly-native speed by using modern C# on modern .NET, both of which today are open-source and cross-platform.

    • by egr ( 932620 )
      I feel that a 2x speedup is not really that impressive for Python; it will still remain one of the slowest interpreted programming languages.
  • Give us control over the GIL and parts of my code could be 32x faster.
    • See this. [python.org]

      However frustrating it might be, the GIL is there for a reason. Over-used, maybe, but trust me, you don't really want to be without it if you're writing multithreaded code, or using libraries that do.

      The point at which it starts to become a significant problem is the point at which I would tend to delegate to something else. Like pretty much any other tool, Python is great for some things, and not so much for others.
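      For a feel of what the GIL costs on CPU-bound code, here's a minimal sketch (the work function is invented for illustration): the thread pool serializes on the GIL, while the process pool sidesteps it with one interpreter per worker.

      import time
      from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

      def burn(n):
          # Pure-Python CPU work: holds the GIL for its whole run.
          total = 0
          for i in range(n):
              total += i * i
          return total

      def timed(pool_cls, label):
          start = time.perf_counter()
          with pool_cls(max_workers=4) as pool:
              list(pool.map(burn, [2_000_000] * 4))
          print(label, time.perf_counter() - start)

      if __name__ == "__main__":
          timed(ThreadPoolExecutor, "threads:  ")    # roughly serial under the GIL
          timed(ProcessPoolExecutor, "processes:")   # roughly parallel, one GIL each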

  • They'll just re-write it all in 10 very, very - very - ugly lines of Perl. :-)

    [said as Perl fan]

    Of course, there's also an Emacs function for that. :-)

    [said as an Emacs fan]

    • Ugly is in the eye of the beholder...
      I find those 10 lines to be a thing of beauty.
      Now the Lisp driving the Emacs macro... No. I can't make myself like that.
      My brain revolts against every fucking evaluated statement/block needing to be in parentheses.
  • by DrXym ( 126579 ) on Monday May 17, 2021 @02:41AM (#61392120)
    ... if Python hadn't been tearing itself apart trying to maintain two versions of Python simultaneously for 10 years longer than necessary. Aside from splitting development & focus, it meant packages used kludge compatibility layers that certainly didn't help with speed. Should have torn that band-aid off fast and moved on to optimizing performance.
  • Implement this and get 4x faster SSL downloads: https://bugs.python.org/issue3... [python.org]

    Quite possibly more with some extra optimization.

  • I assume current Python code has had all the low-hanging performance fruit picked already. I mean, this is not version 0.1 just following a proof-of-concept. I also assume they wouldn't fix the GIL/green threads in a dot-release; that is too much of an undertaking, touching too many lines.

    So, how do you optimize such a code base for speed? JIT seems out of the question, too slow at startup. What's left? Gotta change the language a bit, again. Look for it, new constructs so that the language bec

    • Oh come, come, was it really so hard to migrate to python3?

      • YMMV. It wasn't for me, but I use Python mainly for small-ish scripting tasks.

        The bigger difficulty a lot of folks had was that many popular libraries were not upgraded until long after Python3 was out.

        I guess I should admit that we have a huge Heroku app, written by a contractor, that requires Python2 and that we will likely retire rather than port to Python3, because it honestly doesn't seem worth the effort (the fact that it's still Python2 is among the least of its problems).

  • by gawdonblue ( 996454 ) on Monday May 17, 2021 @05:56AM (#61392460)
    Python for Workgroups :)
  • Is it necessary to keep mentioning that Microsoft owns GitHub? I mean, before they bought it, I had no clue who the owners were, or their political/philosophical leanings, and I didn't care. Now it's MICROSOFT-OWNED GitHub? Geez, why not just shout WITCH and be done with it, Dice.com-owned Slashdot Media-owned Slashdot?
    • It's been BizX and not Dice for ages now.

      And it's even worse than it was under Dice, where at least mobile worked.

      I haven't been able to use mobile for weeks now. With or without javascript enabled it tells me javascript is disabled, and to use the desktop site. I hit the link and it tells me the same thing in a different font. What a fucking shit show.

  • Lack of types, lack of encapsulation, obsession with making everything "the python way" even when that way is confusing and unintuitive - all of these make Python a language in which it is very difficult to write large projects.

    C++ and Java can scale to 20 devs working on the same codebase. With Python it always ended up an ugly mess and/or a struggle for control.

    • by tomhath ( 637240 )
      You obviously don't know Python very well.
      • You obviously don't know Python very well.

        Classical pythonista answer. Here's what you should understand: when a programmer solves a problem, his main domain is the problem domain, not the specifics of the language he is working in.

  • So that, instead of being exasperatingly slow, Python will become annoyingly slow.
  • Garbage in, garbage out. Can you polish a turd?

  • There have already been projects like PyPy that successfully speed up Python programs. PyPy is currently 4.2x faster than CPython (the Python version that almost everyone is running) according to their benchmarks.

    What's new here is that a core member of the CPython team is now finally working on addressing speed with JIT techniques with the intention of getting the changes merged. So there's a high probability that everyone running Python will benefit.

    I don't think Guido has any special qualifications when
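    The honest way to check a figure like that 4.2x against your own workload is to run the same pure-Python kernel under both interpreters, e.g. once with python3 and once with pypy3. A minimal sketch:

    import sys
    import time

    def kernel():
        # A pure-Python inner loop: the kind of code a JIT helps most.
        total = 0
        for i in range(10_000_000):
            total += i % 7
        return total

    start = time.perf_counter()
    kernel()
    print(sys.implementation.name, time.perf_counter() - start)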

  • I remember Guido saying many years ago that Python didn't need to be "fast" (when discussing the GIL), and that all we needed to do was throw more hardware/servers at it. I think it was on SE Radio; I can't remember, it was a while ago, maybe 12 years. It was wrong back then, and it is wrong now. I don't know if there's enough money to throw at this problem to fix it.

"Facts are stupid things." -- President Ronald Reagan (a blooper from his speeach at the '88 GOP convention)

Working...