Why ESR Hates C++, Respects Java, and Thinks Go (But Not Rust) Will Replace C (ibiblio.org)

Open source guru Eric S. Raymond followed up his post on alternatives to C by explaining why he won't touch C++ any more, calling the story "a launch point for a disquisition on the economics of computer-language design, why some truly unfortunate choices got made and baked into our infrastructure, and how we're probably going to fix them."

My problem with [C++] is that it piles complexity on complexity upon chrome upon gingerbread in an attempt to address problems that cannot actually be solved because the foundational abstractions are leaky. It's all very well to say "well, don't do that" about things like bare pointers, and for small-scale single-developer projects (like my eqn upgrade) it is realistic to expect the discipline can be enforced. Not so on projects with larger scale or multiple devs at varying skill levels (the case I normally deal with)... C is flawed, but it does have one immensely valuable property that C++ didn't keep -- if you can mentally model the hardware it's running on, you can easily see all the way down. If C++ had actually eliminated C's flaws (that is, been type-safe and memory-safe) giving away that transparency might be a trade worth making. As it is, nope.
He calls Java a better attempt at fixing C's leaky abstractions, but believes it "left a huge hole in the options for systems programming that wouldn't be properly addressed for another 15 years, until Rust and Go." He delves into a history of programming languages, touching on Lisp, Python, and programmer-centric languages (versus machine-centric languages), identifying one of the biggest differentiators as "the presence or absence of automatic memory management." Falling machine-resource costs led to the rise of scripting languages and Node.js, but Raymond still sees Rust and Go as a response to the increasing scale of projects.
Eventually we will have garbage collection techniques with low enough latency overhead to be usable in kernels and low-level firmware, and those will ship in language implementations. Those are the languages that will truly end C's long reign. There are broad hints in the working papers from the Go development group that they're headed in this direction... Sorry, Rustaceans -- you've got a plausible future in kernels and deep firmware, but too many strikes against you to beat Go over most of C's range. No garbage collection, plus Rust is a harder transition from C because of the borrow checker, plus the standardized part of the API is still seriously incomplete (where's my select(2), again?).

The only consolation you get, if it is one, is that the C++ fans are screwed worse than you are. At least Rust has a real prospect of dramatically lowering downstream defect rates relative to C anywhere it's not crowded out by Go; C++ doesn't have that.


  • by Anonymous Coward on Monday November 27, 2017 @03:43AM (#55628107)

    He seems to think he has some great insight into why C is C, why C++ is C++. But really, he is so fucking clueless I don't know where to start.

    • Re: (Score:3, Interesting)

      by e70838 ( 976799 )
      I have studied C++98 in depth. I try to avoid C++ and favor Java whenever I can. I have been programming since 1994 and I agree with every single word ESR said. I do not know if Rust or Go are the future. I think we can do better, and that there is a lot of insight to be gained from Ada (and its Ravenscar profile).
    • Re: (Score:3, Funny)

      by Hal_Porter ( 817932 )

      People that met him claim he has halitosis too. That's what happens to people who oppose the C/C++ binarchy. Their heresy festers inside them, causes severe halitosis and they are driven out of society to live in the mountains. Where, from the look of him, ESR is headed.

      Praise Kernighan, Ritchie and Stroustrup! Death to the heretic ESR!

  • business code (Score:5, Insightful)

    by afidel ( 530433 ) on Monday November 27, 2017 @03:47AM (#55628113)

    There's enough business logic programmed in C++ and Java to keep both languages around until my kids retire, and they're not yet in the workforce. Rust and Go? Yeah, I doubt there's a single company of any size running its business processes on either.

  • by Roodvlees ( 2742853 ) on Monday November 27, 2017 @04:08AM (#55628147)
    He's a person, not a technical description.
    We have enough abbreviations in tech.
    Hate it when people do that.
  • by Opportunist ( 166417 ) on Monday November 27, 2017 @04:10AM (#55628149)

    The higher the level of abstraction in your language, the higher the overhead it will create. Now, it needn't be as absolutely stupidly overengineered as .NET is, but the rule still holds: the more safeguards and handrails your language comes with, the higher the overhead it incurs to have them. Admittedly, this is not really a huge problem in today's working environment, because our computer speeds are far greater than our needs.

    Still, somehow it feels silly that I need increasingly powerful computers just to run the same kind of program, only because programmers can't be assed to learn their trade and instead rely on ridiculously overblown frameworks. It's the equivalent of delivering a pack of soda with a semi, because you have to bring a soda factory along with its workforce, since the framework doesn't know how to deliver a single soda.

    • by JaredOfEuropa ( 526365 ) on Monday November 27, 2017 @04:39AM (#55628245) Journal
      Anything that allows us to reduce errors, increase functional complexity, reduce development time, improve readability and maintainability, and/or make it easier to code for a greater number of people, is progress in my book. Working at a higher abstraction level achieves some or all of those goals.

      And good frameworks help with that. When I build a house, I don't want a craftsman who takes time to learn how to use an adze so he can plane down lumber to the correct size for the job; I want a builder who knows he can get lumber of the correct dimensions right at the store. The skills to build instead of buy are useful in many trades (both building and programming), but they are expensive and a possible source of additional errors. Frameworks are often a good answer to that... as long as the developer understands the nature of the framework, its limitations, the licensing model, its viability, and thus can assess the consequences of using it.
      • by Opportunist ( 166417 ) on Monday November 27, 2017 @05:23AM (#55628355)

        Personally, I think way more problems arise from the terse syntax and high symbolic abstraction that C/C++ and its derived languages like so much. I mean, I'm as lazy as the next programmer and that's why I like C (and its derived languages), but even I cannot ignore that
        { (a!=1)?b=!b:b=0}

        is way less readable than

        begin
        if a is not equal 1 then set b equal complement of b else set b equal 0
        end

        You'd immediately spot an error in the second because the sentence would look "wrong".

        • by religionofpeas ( 4511805 ) on Monday November 27, 2017 @07:04AM (#55628617)

          With some extra spaces, and the whole thing changed to an expression (which is how ?: is supposed to be used), it's a lot easier to read.

          b = (a != 1) ? !b : 0

          The advantage of the ternary operator is that you only need the LHS part once, which helps if it's a more complex variable.

          • by coofercat ( 719737 ) on Monday November 27, 2017 @10:01AM (#55629473) Homepage Journal

            A colleague and I were joking around one day, when a hardcore-dev (with a lot less humour, and chronic flatulence, as I remember) overheard us. He maintained that super-terse code is easier to read than the alternative. Since we were just messing about, we both just let him say his piece and then stated that the One True Language was of course Turbo Pascal 6 (which sort of ended the conversation).

            My take on it is that the terse syntax does make sense (more quickly) to someone who knows the syntax really well. If you don't know it quite as well, then the long form is better because, as OP says, "the sentence would look wrong". Also, actual words are 'googleable', whereas it's hard to look up the meaning of "~->" or whatever. Thus, the long form plays to more average programmers.

            The question then becomes... who should a language be for? For the super-expert, or for the midrange programmer, or possibly even the junior? IMHO, midrange is a good place to aim at because that's where the majority are, and if they're using your language then you'd want them to be able to do so reasonably easily and safely. That way, of all the billions of opcodes executed around the world as a result of your language, the majority of them will be reasonably safe and sensible.

        • I think you meant: b = (a==1) ? 0 : ~b
          • Depends whether b is expected to be only 1/0 at the end of this function even if it *may* be some other non-zero value (why this would be the case I do not know); also, !b is going to be faster than ~b in many cases. But! Write it as makes sense and profile to see if that speed is an issue or not.

            Given the a b operators (yeah I know this is an example, but I'm running with it) this is likely an inline function that will be called very heavily in a nested loop or somesuch... as a result the speed of operators can

        • by robkeeney ( 1061032 ) on Monday November 27, 2017 @12:39PM (#55630707)

          People who do electrical engineering learn to read and understand the funky symbols they use in electronics. We don't expect them to write out everything in plain English. It's the same with programming. Your Pascal-y pseudo-code took how much more space and time to convey no extra information? Your pseudo-code actually took longer for me to parse and understand than the C version.

      • by DrYak ( 748999 ) on Monday November 27, 2017 @06:02AM (#55628461) Homepage

        And good frameworks help with that. When I build a house, I don't want a craftsman who takes time to learn how to use an adze so he can plane down lumber to the correct size for the job; I want a builder who knows he can get lumber of the correct dimensions right at the store.

        On the other hand, when all you want to build is a garden shed, you can do it yourself in a quick weekend-afternoon project by nailing a few planks together. You definitely don't want a several-month-long adventure involving half a dozen sub-contractors (and each, further down, their own individual group of a dozen sub-sub-contractors), plus hiring a few special planning managers (because sub-contractors D and Y each outsource their screws to a different sub-sub-contractor. Incompatible ones.), all of which will require two hectares of work space around your shed. And somehow the garden shed needs to be connected to an industrial three-phase 380V power connector in order to function.

        Sometimes, over-reliance on frameworks and helpers means that some very simple project that would be handled by a few dozen lines of C or C++ (perhaps a couple of hundred, tops) suddenly needs to pull more than 20 MiB of libraries into the package and is dependent on 200 different GitHub repositories (hoping that they won't be blocked on a dev's whim - see Node.js and string alignment). And you need to use special command-line settings to tell the VM to allocate 2 GiB of memory for the process.

        • Quite. Which is why the inclusion of a framework should be a matter of design, not coding. This decision should be left to a software architect rather than a developer.
      • by CptLoRes ( 4510239 ) on Monday November 27, 2017 @08:23AM (#55628889)
        But what happens when everybody buys lumber at the store? There still must be somebody who makes sure the lumber is the right size and quality for your project. This problem is exactly why we today need gigahertz CPUs and gigabytes of RAM just to view a web page. Nobody knows how to deal with the details any longer, and so they end up building a new house every time there is a new problem.
    • by TheRaven64 ( 641858 ) on Monday November 27, 2017 @05:48AM (#55628425) Journal

      It's not always so clear cut. What you say is definitely true for naive compilers, but higher-level abstractions also often mean more information for the compiler and more freedom for the compiler. These can translate to better optimisations. To give a trivial example, languages like Java provide an abstraction that looks like a C struct, but don't require that the memory layout be visible to the programmer. Imagine that you create a struct-like Java object with RGB values to represent a colour and you do the same in C. Now you put them in an array and try to do some processing on them. The C version is constrained to lay out the objects as three fields with no padding (this is visible in the language with sizeof and will break ABIs if it dynamically changes). The Java version, in contrast, is allowed to put an unused padding field at the end of the struct. Why does that matter? If you want to vectorise the loop, then being able to guarantee 4-element alignment for every object in the array is a huge win. This is a legal transform for a Java compiler, but not a legal transform for a C compiler unless it can prove that no pointers to the array escape (and a few other constraints).

      The big advantage of C was that a fairly simple compiler for a simple architecture could get very good performance. The disadvantage for C is that compilers quickly hit diminishing returns and the abstract machine makes a number of desirable optimisations unsound.

      For example, if your language has a first-class notion of immutability, then this gives the compiler the opportunity to elide copies or add copies if they make sense for NUMA systems, and gives the compiler a lot more freedom with regard to reordering or eliding loads. Similarly, if your source language has higher-level notions of sharing then this means that you can avoid a lot of defensive memory barriers that you'd need for correct C/C++ code. If your language has stricter guarantees on aliasing, then a whole lot of optimisations suddenly become easier.

      Any compiler optimisation is a mixture of two things: an analysis and a transformation. The analysis must be able to tell you if the preconditions for the transform are met. The more information you can give to the compiler, the more often the analysis can prove that the preconditions hold and enable the transform.

    • by Dutch Gun ( 899105 ) on Monday November 27, 2017 @06:05AM (#55628471)

      The higher the level of abstraction in your language, the higher the overhead it will create.

      This is exactly why C++ remains popular among those who create large, complex, high-performance applications. C++ is well known for its zero-cost abstractions: you get the performance of low-level C code, but can design much safer, type-safe interfaces that let the compiler, not a runtime, validate that the code is correct and safe.

      For certain types of applications, it's an effective compromise: it retains backwards compatibility with decades-old ecosystems while providing better safety and abstractions than C.

  • In defense of C++ (Score:5, Interesting)

    by Dutch Gun ( 899105 ) on Monday November 27, 2017 @04:14AM (#55628157)

    The reason we have to say "don't do that" is because C++ remains compatible with C and older versions of C++. There are literally billions of lines of existing C++ code out there, and the language committee realizes it can't just snap its fingers and order everyone to rewrite all that old code (which is stable, functional, and debugged, btw) just because we have something newer and better now.

    It's pretty straightforward to write safe, new C++ code if you understand how to use the new features and abstractions. I wrote an entire game / game engine recently using modern C++, and it's amazing how few bugs I've had thanks to recent language improvements and techniques.

    I'm not sure where this "large projects can't enforce code discipline" idea comes from. What does he think "coding standards" are, which nearly every major company, organization, or project has? And if someone doesn't understand how to use a smart pointer instead of a raw pointer, or how to avoid class-inheritance hell at this point, then really, they shouldn't be contributing to your C++ projects.

    I get it that some people dislike or distrust C++. It's a complex language that's hard to master. They don't like that it makes a lot of compromises in the name of practicality, but that real-world practicality is why many of us use it for large, performance-critical real-world projects. I'd never argue that C++ is the right language for every project. In fact, it's a fairly specialized language at this point. But that level of hyperbole is a bit annoying.

    • by K. S. Kyosuke ( 729550 ) on Monday November 27, 2017 @04:21AM (#55628185)

      and debugged

      Amply demonstrated by the numerous memory exploits? ;)

    • by gnupun ( 752725 )

      I get it that some people dislike or distrust C++. It's a complex language that's hard to master.

      And once you master it, there are limited benefits. It's useful for large, complex programs where speed is important. Examples are games, browsers, large desktop apps, etc. That's it -- it's useful in a very small slice of software. For any other type of software, you can use C, Java, Python, Rust, Nim, etc.

      Languages like Rust (which is already used in browsers like Firefox) and Nim (which has a very efficient reference counting GC) are the future where performance is important. If you want high performance an

      • Re: (Score:3, Insightful)

        It's useful for large, complex programs where speed is important. That's it -- it's useful in a very small amount of software.

        Not sure what your frame of reference is, but that's a LOT of software. Hell, it's basically everything that isn't trivial or severely memory constrained. Had to switch from C++ to C once for a pretty heavily memory-constrained embedded application, but otherwise I've been able to get away with using C++ practically everywhere.

        • by gnupun ( 752725 ) on Monday November 27, 2017 @05:43AM (#55628407)

          Not sure what your frame of reference is, but that's a LOT of software.

          No, it's not. Few programmers work on projects that are millions of lines of code and have to be as fast as possible (real-time).

          For servers, memory is cheap ($200 extra) so you can just use Java for that 2 million LOC project.

          That leaves C++ only for AI, professional games and large desktop apps (Photoshop, browsers, office etc.). While these types of software are used a lot, no more than 100,000 programmers are working on this, at any given time.

          For in-house desktop apps of medium complexity (up to, say, 500k LOC), you can use C# or VB.NET.

          Even for games, where short development time is paramount, the engine is written by one company in C++. Then dozens of other companies use that engine plus Lua or some other scripting language to actually write the game quickly.

          The remaining 95% of programmers can use a sane programming language like C, Python, Swift, Java, Rust, Nim or even Go.

          Bottom line: programmer time is money for the company and C++ probably has the 2nd highest cost per line of code compared to other languages (assembly language is 1st in cost/LOC).

          • For servers, memory is cheap ($200 extra) so you can just use Java for that 2 million LOC project.

            You may have been right a few years ago, but for the last couple of years memory, especially server memory, has become way more expensive due to supply simply not being able to keep up with demand after much of the manufacturing capacity was shifted to making memory for mobile devices like smartphones and tablets. We're talking about a situation where it's been badly eating into server vendor profit margins and sales due to increased cost. Thus memory use is important, and so is performance, when companies

          • Re: (Score:3, Insightful)

            For in-house desktop apps of medium complexity (upto say 500k LOC), you can use C# or VB.net.

            Ok, stop right there. No one should use any of the .NET crap unless you're forced to by some Microsoft constraint.

      • Re:In defense of C++ (Score:5, Interesting)

        by TheRaven64 ( 641858 ) on Monday November 27, 2017 @05:54AM (#55628447) Journal
        I've recently started using C++ for a bunch of things where I would previously have used a scripting language. It has basically everything I want from such a language:
        • Closures (including compile-time specialisation with C++14 auto parameter lambdas).
        • A rich set of collection classes (written by people who actually care about performance).
        • Regular expressions.
        • Smart pointers (so I don't need to think about memory management).
        • Futures / promises and threads.
        • A tiny dependency footprint (so my code will run on pretty much any *NIX system).

        Most of the time, I can compile at -O0 and run in less time than it takes the Python interpreter to start, and if I find that performance actually does matter, I can quickly profile it, find the bottleneck, and replace it with something a lot more efficient.

      • by loonycyborg ( 1262242 ) on Monday November 27, 2017 @06:58AM (#55628603)
        C++ still reigns supreme due to its flexibility. While in some less pragmatic language you are very likely to hit a roadblock because the language designers wanted to enforce some principle that ended up counterproductive in your particular case, that won't happen with C++. This language has no principle other than practicality, and it will never block you from getting the job done. Even its being a superset of C ends up being another aspect of practicality, because the C ABI is still the de facto standard for language interop and system APIs, and C is the implementation language of an astronomical number of important libs.
    • I'm not sure where this "large projects can't enforce code discipline" idea comes from. What does he think "coding standards" are

      Won't help you when your code is 30 years old and has been hacked around by slave labour in the form of military conscripts and customer provided "consultants".

        Amen to that. I'm a developer on a 1.2MLOC project written almost entirely in C++. It's 15 years old by now, and has had literally a thousand pairs of hands in it. It's open source, and so the quality of code has varied mightily over those 15 years. It's a big, bloated, barely maintainable mess, and leaks memory like a sieve that's been blasted with a shotgun.

        I learned C++ hacking on that code. I also learned to hate it.

        This large project, at least, can't enforce code discipline, nor would it do any good if

    • And if someone doesn't understand how to use a smart pointer instead of a raw pointer or avoiding class inheritance hell at this point, then really, they shouldn't be contributing to your C++ projects.

      Maybe the projects aren't mine. Maybe the project is run by a business, and I'm just one of the people in the team, and the boss has hired a few idiots as well.

      • by Dutch Gun ( 899105 ) on Monday November 27, 2017 @06:44AM (#55628553)

        No computer language is going to help a project programmed by idiots.

        • by religionofpeas ( 4511805 ) on Monday November 27, 2017 @07:08AM (#55628627)

          No language can compensate for having idiots in your team, but some languages, like C++, make it worse.

          And remember: if you see no idiots on your team, you are the idiot.

        • by squiggleslash ( 241428 ) on Monday November 27, 2017 @11:05AM (#55629973) Homepage Journal

          No computer language is going to help a project programmed by idiots.

          That's just not true. Programming languages can enforce constraints that make common errors either difficult or impossible.

          I have to admit a disenchantment with software development in general these days, largely because the consensus within the community is that fast and cheap is better than reliable and secure. We pick programming languages like PHP and C++ where we know we're going to make errors we're never going to be able to debug, and often will be completely unaware of until they strike, because meh who cares I can write in {insecure language} and I like it so sucks to be users right? Besides I'm a genius and would never make those mistakes (yes you will, asshole.)

          I refuse to read another ESR article on principle, he's a jack-ass, and I seriously doubt Go is going to be taking over from C any time soon, but I generally agree with the sentiment that we made a mistake going away from Java, back to languages that are optimized towards making errors. Java is too bureaucratic, and C# is nearly as bad, and while it's overstated I do generally agree that there needs to be more control over GC for the average programmer, but there has to be a happy medium here - better than Java doesn't have to mean insane type checkers and/or going back to directly manipulating pointers.

          And yeah, I know C++ has smart pointers. But it also has regular old shit pointers. And sure, you would only use the latter in the right circumstances, but, let's be honest, all those other programmers you work with, who you are soooooo much smarter than, wouldn't...

  • Well, don't do that! (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Monday November 27, 2017 @04:17AM (#55628171) Journal

    Arguing that it's harder for large-scale projects to manage a 'well, don't do that' approach implies that he's completely missed the last 40 years of tool development. This is much more of a problem for small C++ projects than large ones. Large ones have pre-push hooks that run static checkers enforcing rules like no bare pointers and no operator new / delete. It's the smaller projects, relying on programmer discipline to do this, that are more likely to have problems.

    Go is a horrible language. It has multithreading as a core part of the language, but no memory model and no type system that can express notions of sharing or immutability. The designers clearly realised that generic types are important, and so added precisely one to the language: the built-in map, which is parameterised on its key and value types -- but which gives users no way to define what equality (or ordered comparison, or hashing) means for their own types.

    • by roca ( 43122 ) on Monday November 27, 2017 @05:46AM (#55628417) Homepage

      Mozilla has run a very large-scale C++ project for many years, with an elite team of developers. Mozilla makes extensive use of enormous test suites, static checkers, Valgrind, ASAN, TSAN, etc. Mozilla created Rust because we concluded C++ was not reliable enough or secure enough for large-scale multithreaded applications.

      • by TheRaven64 ( 641858 ) on Monday November 27, 2017 @06:03AM (#55628463) Journal

        Mozilla has run a very large-scale C++ project for many years, with an elite team of developers

        If you've ever looked at the Mozilla code base, then you'd be a lot more reluctant to describe their team of developers as 'elite'. The most positive thing I can say about it is that it's not as bad as OpenOffice.

        • by roca ( 43122 )

          I worked on it for years. Given the size and the age of the code, and the problem domain, it's not bad. As for "elite", well, almost any of Mozilla's C++ developers could get a job at Google/Facebook/Apple/Microsoft easily. Many have.

      • by Viol8 ( 599362 ) on Monday November 27, 2017 @07:12AM (#55628635) Homepage

        "we concluded C++ was not reliable enough or secure enough for large-scale multithreaded applications."

        That should have been "we concluded multithreading was not reliable enough or secure enough for large-scale applications".

        When your browser switched from multiprocess to multithreaded back in the day (presumably to make it easier to port to windows) its reliability went down the toilet. Now you're making a big deal about going back to multiprocess. Well whoop-de-doo.

        There is nothing wrong with C++ for large-scale applications; in fact, that was one of its design ideals.

  • Vectorization (Score:3, Interesting)

    by shatteredsilicon ( 755134 ) on Monday November 27, 2017 @04:33AM (#55628219)
    The big problem when it comes to using anything other than Fortran, C or C++ is that, 20 years after the first MMX and SSE instruction sets were added to CPUs, there are only a handful of compilers that know how to vectorize even loops that are hand-crafted to be vectorizable - and the ones that can do it are all commercially licensed (GCC might theoretically have some support for it, but in reality it doesn't vectorize most things). And since most of the performance advancement in silicon has for a long time now been focused in SIMD units, that means that for any performance-sensitive workload there are no feasible alternatives. If it has taken GCC 20 years to get not very far, how long will it be before much younger compilers get anywhere with this performance-critical feature?
    • Re:Vectorization (Score:5, Interesting)

      by TeknoHog ( 164938 ) on Monday November 27, 2017 @08:06AM (#55628827) Homepage Journal

      Agreed, but I'd like to take a step back. IMHO, it is idiotic to first write a loop and then vectorize it -- we should have vector types to begin with. We've had them in Fortran for over 20 years, though not necessarily in all compilers as you point out (I remember using a nice SIMD-aware commercial compiler back in 2001). Today, you can use Julia as a modern replacement of Fortran with a free compiler, though you may need to give the @simd hint in some cases.

      I guess my physics background shows here. When we manipulate vectors in physics, we generally don't think of looping over all components sequentially; the components are a matter of representation, while the physical vector concept is independent of the coordinate system. Vectors also come with certain assumptions of independent operations per component.

      Your post is also a good reminder to the folks who laud C's ability to work at the low level; in my impression, C was designed to act like a very simple processor, so as real CPUs become more complex, the low-level idea gets ugly with backward constructs like loop vectorization. To effectively deal with SIMD etc. you need a higher-level perspective of vectors/matrices, as paradoxical as that may seem.

      Similar issues apply to multiprocessor systems, which have also been used in the scientific/HPC field for decades. So it's funny how it suddenly becomes completely new and hard to program for, when the same tech is sold to the general public in the form of multi"core" systems.

  • by ( 4475953 ) on Monday November 27, 2017 @04:54AM (#55628275)

    What I find kind of annoying is that Ada fixed all these flaws decades ago with Ada 95; now it is at Ada 2012 and still gets no love, just because it's a bit more verbose than C if you use it correctly. (Though not necessarily more verbose than C++.) Sure it has some flaws, e.g. concerning aliases and their scoping rules, but these are mostly inconveniences and some of them have been fixed in Ada 2012. But it doesn't stop there; the same story can be told about dynamic languages. Take fancy new dynamic language X and you can be fairly certain that Common Lisp solved all the problems of the new language already in the 80s.

    Maybe developers are in the end less rational than they think? It seems to me that a language must have serious flaws, lots of incoherent shortcuts and tricks, or at least a cryptic syntax to become really successful.

  • ESR is incompetent (Score:4, Interesting)

    by loufoque ( 1400831 ) on Monday November 27, 2017 @05:15AM (#55628333)

    I remember I interacted with him back when he started the irker project.

    That pretty trivial piece of software, written in Python, was riddled with bugs, and no amount of bug reporting or discussing the design mistakes with him got anything fixed for a whole week, despite his actively trying. I rewrote the whole thing in C++ in two days, and it worked robustly from the get-go.

  • Drivers/Firmware (Score:5, Insightful)

    by zifn4b ( 1040588 ) on Monday November 27, 2017 @07:39AM (#55628707)
    How do I write drivers and firmware in Go? I think C is going to be around for a while.
  • by hcs_$reboot ( 1536101 ) on Monday November 27, 2017 @08:57AM (#55629061)
    A language that needs keywords like `static_cast` or `thread_local` gives you reasons to hate it.
  • by Dr. Crash ( 237179 ) on Monday November 27, 2017 @10:21AM (#55629635)

    ESR is making an early invalid assumption - that "fast transparent garbage collection will happen".

    Sorry, no. The smartest people in the CS world - possibly the smartest in the world, period (specifically those at the MIT AI Lab, Xerox PARC, BBN, TJ Watson, and Stanford) - worked the GC problem for literally 20 years, throwing hardware at it, software, tagged architectures, secondary processors, all that.

    They never cracked it. GCing at realtime speed is just a tough problem. Unless ESR can show me code that can GC in faster than O(n) time AND not have to freeze the allocator process for O(n) time, he's just pitiably wrong.

    (And no, I don't count flip-and-sweep GC as workable here, as it means a buffer that DMA hardware is writing to will move without warning. Nor is "generational" GCing; all that does is stave off the inevitable full GC for a few minutes to hours, which is fine for a hacker sitting at a terminal but no good at all for a self-driving car or a SaaS server.)

    Now, I could be wrong; if he *has* a realtime garbage collection algorithm, then he deserves the Turing Award.

    But I'm betting "not".
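    For what it's worth, the pause side of this argument is directly observable in Go: the runtime records recent stop-the-world pause durations in `runtime.MemStats.PauseNs`. The sketch below claims nothing about hard-realtime bounds; it only shows how to measure what the collector actually does:

    ```go
    package main

    import (
    	"fmt"
    	"runtime"
    )

    func main() {
    	// Allocate enough garbage to drive several GC cycles.
    	for i := 0; i < 100; i++ {
    		_ = make([]byte, 1<<20) // 1 MiB, immediately garbage
    		runtime.GC()            // force a cycle so PauseNs fills in
    	}

    	var ms runtime.MemStats
    	runtime.ReadMemStats(&ms)

    	// PauseNs is a circular buffer of recent stop-the-world pause
    	// durations in nanoseconds; PauseTotalNs is the running total.
    	n := ms.NumGC
    	if n > 256 {
    		n = 256
    	}
    	var worst uint64
    	for i := uint32(0); i < n; i++ {
    		if p := ms.PauseNs[i]; p > worst {
    			worst = p
    		}
    	}
    	fmt.Printf("GC cycles: %d, worst recent pause: %dµs, total pause: %dµs\n",
    		ms.NumGC, worst/1000, ms.PauseTotalNs/1000)
    }
    ```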

    • You have no clue about GCs, nor has ESR ... so you arguing with him about it is moot.

      Hint: I can probably describe 5 real-time garbage collection algorithms/variations off the top of my head, and implement one in about 3 days. Hm, perhaps one day ... but my C++ is rusty.

      Another hint: "real time" most likely does not mean what you think it means.

  • by snadrus ( 930168 ) on Tuesday November 28, 2017 @01:31AM (#55635017) Homepage Journal

    GC was hacked on for decades to no avail (in bringing it low-level). But in Go it now works well (very fast, concurrent).
    What changed? The language spec was made very simple.

    Compiling was a very tricky, slow business. In Go it's fast and relatively simple.
    What changed? A simpler language - smart people who knew which options to take away.

    Only painfully low-level languages could work with raw memory pointers. Now we have that in two friendly, "default-safe" languages.
    What changed? The realization that a lot of power comes from low-level operations.

    So C and its layered C++ will give way as safer variants with the same power come into existence.

    High-level languages used to depend on dozens of C libraries and libc; Go needs none of those.
    What changed? The realization that this is important.

    A fork of Go now runs without a kernel on bare-metal ARM. That's the right space from which to grow into a kernel-module-capable language. Languages aren't fast or slow; their implementations are. Go's ease of porting suggests it could show up in the kernel.
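    The "raw memory pointers in a default-safe language" point is concrete in Go: ordinary access is bounds-checked, and pointer arithmetic exists but is walled off behind the `unsafe` package, so you opt in explicitly. A minimal sketch:

    ```go
    package main

    import (
    	"fmt"
    	"unsafe"
    )

    func main() {
    	xs := [4]int32{10, 20, 30, 40}

    	// Safe by default: ordinary indexing is bounds-checked.
    	fmt.Println(xs[2]) // 30

    	// Raw access is available, but only by explicitly opting in:
    	// step a pointer 2 elements past &xs[0] with byte arithmetic.
    	p := unsafe.Pointer(&xs[0])
    	p = unsafe.Add(p, 2*unsafe.Sizeof(xs[0]))
    	fmt.Println(*(*int32)(p)) // 30
    }
    ```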
