Programming

What's To Love About C? 793

First time accepted submitter edA-qa writes "Antiquated, clunky, and unsafe. Though beloved by some, C is a language that many choose to hate. The mass opinion is indeed so negative it's hard to believe that anybody would program anything in C. Yet they do. In fact, a lot of things, even new things, are programmed in C. The standard was recently updated and the tools continue to evolve. While many are quick to dismiss the language, it is in no danger of disappearing."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • because (Score:4, Funny)

    by Anonymous Coward on Monday July 02, 2012 @12:37PM (#40518773)

    char *post = "first";

    • Re:because (Score:5, Insightful)

      by Anonymous Coward on Monday July 02, 2012 @12:38PM (#40518785)

      Using a string literal as non-const is very bad form.

      • Re:because (Score:4, Funny)

        by Anonymous Coward on Monday July 02, 2012 @12:44PM (#40518867)

        Back to C++ Land with you, rascal!

        • Re:because (Score:5, Informative)

          by localman57 ( 1340533 ) on Monday July 02, 2012 @12:50PM (#40518947)
          No, he's right. On systems where your constants exist in a different medium than your variables (such as microcontrollers where variables are in RAM but constants are in flash), declaring a string as const or not const can have a big impact on what resources you eat up. Typically there's a #pragma or non-standard keyword such as ROM that goes along with this.
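          A minimal sketch of the distinction being described; exactly which section each declaration lands in is toolchain-dependent, and the keyword examples below are illustrative rather than from any particular compiler:

          char       *writable_view = "hello";   /* pointer to a string literal: writing through it is undefined behaviour */
          char        ram_copy[]    = "hello";   /* modifiable array copy, lives in RAM (.data) */
          const char  rom_string[]  = "hello";   /* const data; many embedded toolchains can keep this in flash (.rodata) */

          /* Some microcontroller toolchains need an extra, non-standard hint to keep data out of RAM,
             e.g. avr-gcc's PROGMEM attribute or a vendor-specific #pragma/keyword such as ROM. */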
          • Re:because (Score:5, Funny)

            by Dachannien ( 617929 ) on Monday July 02, 2012 @03:15PM (#40520749)

            No, he's right. On systems where your constants exist in a different medium than your variables (such as microcontrollers where variables are in RAM but constants are in flash), declaring a string as const or not const can have a big impact on what resources you eat up. Typically there's a #pragma or non-standard keyword such as ROM that goes along with this.

            And this is particularly relevant in this case, because Slashdot runs on CowboyNeal's microwave oven.

  • One good reason... (Score:5, Insightful)

    by Anonymous Coward on Monday July 02, 2012 @12:40PM (#40518813)

    It's not the bloated obscenity that is C++.

    • Re: (Score:3, Insightful)

      It's not the bloated obscenity that is C++.

      C++ is not a bloated obscenity. It is an excellent language.

      I am not claiming it is a language without warts, but I challenge anyone who modded the parent post up to provide a coherent argument as to why C++ is bloated and what features you could therefore remove without detracting from the effectiveness of the language.

      • by AuMatar ( 183847 ) on Monday July 02, 2012 @01:18PM (#40519287)

        I like C++, but I can think of a few. Let's ignore for the moment things like macros that are outdated in C++ and kept for C compatibility. We can easily get rid of:

        *Value templates.
        *Diamond inheritance (just make it illegal)
        *The entire algorithms part of the STL. Nobody uses it and it makes for unreadable code (keep the container classes, of course)
        *Kill either structs or classes. We don't need both, with one being syntactic sugar to change the default visibility on the other
        *The iostream libraries. I don't think I've ever seen code that didn't say fuck that and just use C style stdio. They're clunky.

        That's off the top of my head, without going into things we could do better. And I like C++, it's my preferred language. The real argument here is even though C++ is bloated, it's far from the worst that way. Perl takes that crown, with its 500 special variables. And the people who complain about C++'s bloat generally like Python or Ruby, which are both just as bloated as C++, without the bonus of its simplicity.

        • by serviscope_minor ( 664417 ) on Monday July 02, 2012 @01:45PM (#40519561) Journal

          *Value templates.

          What are value templates? If you mean things templated on integers, then I respectfully disagree. I use some very nice small-vector/matrix linear algebra libraries which would be pale shadows of themselves without templating on integers.
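          A minimal sketch of such a "value template" (a non-type template parameter), not taken from any particular library: the vector's size is a compile-time constant, so there is no heap allocation and the compiler can fully unroll the loops.

          #include <cstddef>
          #include <iostream>

          template <typename T, std::size_t N>
          struct Vec {
              T data[N];

              Vec<T, N> operator+(const Vec<T, N>& other) const {
                  Vec<T, N> result;
                  for (std::size_t i = 0; i < N; ++i)
                      result.data[i] = data[i] + other.data[i];
                  return result;
              }
          };

          int main() {
              Vec<double, 3> a = {{1, 2, 3}}, b = {{4, 5, 6}};
              Vec<double, 3> c = a + b;
              std::cout << c.data[0] << " " << c.data[1] << " " << c.data[2] << "\n";   // 5 7 9
          }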

          *Diamond inheritance (just make it illegal)

          Is it a big problem? I think I had a design once where it made sense, but I can't for the life of me remember even what the domain was.

          The trouble with interfaces is that you often end up having to duplicate code because you can't pull in extra stuff like you can with inheritance.

          *The entire algorithms part of the STL.

          No way! I, for one like things like sort (and related), heap, nth_element (one of the very few language std libraries with this O(n) algorithm!), permute, random_shuffle, binary_search (and related), set_union (and related), min/max, min/max_element, equal, swap, and a few others.

          for_each is pretty useless, annoying and unclear.

          Many of the others are generally a bit more useful now with lambdas.

          The algorithms part of the STL is one of my favourite things, and something I really miss when going to other platforms.
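          For example (a minimal sketch, not from the thread): sort, nth_element and binary_search on a plain vector, with nth_element finding the k-th smallest element in O(n) average time without fully sorting.

          #include <algorithm>
          #include <iostream>
          #include <vector>

          int main() {
              std::vector<int> v = {9, 1, 8, 2, 7, 3, 6, 4, 5};

              // Median via nth_element: only guarantees v[4] holds the 5th smallest value.
              std::nth_element(v.begin(), v.begin() + 4, v.end());
              std::cout << "median: " << v[4] << "\n";                             // 5

              std::sort(v.begin(), v.end());
              std::cout << std::boolalpha
                        << std::binary_search(v.begin(), v.end(), 7) << "\n";      // true
          }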

          Kill either structs or classes.

          I think that's pretty harmless as these things go. For program structure, I use class, since I want everything private by default (for encapsulation). For template programming, the public-by-default struct is useful, since one often has many tiny structs.

          It's a small oddity, but it must add positively 5 or 6 lines to the compiler.

          *The iostream libraries. I don't think I've ever seen code that didn't say fuck that and just use C style stdio. They're clunky.

          Well, I like the type safety of the C++ library. The formatting stinks and is really painful to use. I ended up writing/finding a few helpers, e.g. ones that use printf style formatting (but with safety) where necessary.

          I've actually got pretty used to it. I find having the variables inline in the positions they appear in the output now easier to follow than jumping between a format string and list of variables.
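          A toy sketch of the kind of helper mentioned above (my own illustration, not the poster's actual code; it assumes C++11 variadic templates): printf-style placeholders at the call site, but every argument is rendered through its own operator<<, so the iostream type safety is kept.

          #include <iostream>
          #include <sstream>
          #include <stdexcept>
          #include <string>

          inline void format_impl(std::ostringstream& out, const char* fmt) {
              out << fmt;                                  // no arguments left: emit the tail of the format string
          }

          template <typename T, typename... Rest>
          void format_impl(std::ostringstream& out, const char* fmt, const T& value, const Rest&... rest) {
              for (; *fmt; ++fmt) {
                  if (*fmt == '%') {
                      out << value;                        // type-safe: uses operator<< for T
                      return format_impl(out, fmt + 1, rest...);
                  }
                  out << *fmt;
              }
              throw std::runtime_error("too many arguments for format string");
          }

          template <typename... Args>
          std::string format(const char* fmt, const Args&... args) {
              std::ostringstream out;
              format_impl(out, fmt, args...);
              return out.str();
          }

          int main() {
              // Arguments appear in printf order, but each is formatted via operator<<.
              std::cout << format("x = %, name = %\n", 42, std::string("C++"));
          }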

          I agree that other languages are more bloated, but I think that if you removed any feature you would lose something significant.

          • by AuMatar ( 183847 ) on Monday July 02, 2012 @02:12PM (#40519921)

            Value templates- yes, templating on the value, rather than the type. It's an extremely niche use case that causes difficult to read code and provides little in the way of benefits. I doubt even 5% of the user base knows it exists. Kill it.

            Diamond inheritance- it isn't used much, but when it is, it's a problem, because you don't know when the common base class will be initialized, or which constructor will be called with what values. Make it illegal to solve those problems (or add it to the specification)

            STL algorithms- sorry, totally disagree. The entire thing is a horrible hack based on using constructors to make quasi-first order functions. They were never meant to work like that. At best it's confusing, at worst it's unreadable and hard to debug. I bounce any code review I see using them. Worse, it encourages people to make more code like that, which tends to be done worse and be even more unreadable and frequently just wrong (generic programming has a lot of gotchas). I'd rather see the whole thing killed.

            Structs vs class- it is pretty harmless, but we're talking bloat. It's bloat. There's no reason to have both. I wouldn't say this is something that has to be fixed, but it's silly to have the two of them.

            Yup, I stand by my original opinion- all of these could be cut with no real loss to the language. Then again, if I were writing my own language I'd take C, add constructors, destructors, and C++ style syntax for member functions, add container classes, and call it perfect.

            • by serviscope_minor ( 664417 ) on Monday July 02, 2012 @02:21PM (#40520039) Journal

              Value templates- yes, templating on the value, rather than the type. It's an extremely niche use case that causes difficult to read code and provides little in the way of benefits. I doubt even 5% of the user base knows it exists. Kill it.

              I think you are underestimating the number of people who use it indirectly.

              I'm in the 5% who use it directly, and it is an invaluable tool which gives C++ a massive advantage over other languages. Beyond that, it is very useful inside metaprogramming, which is the key to making easy-to-use (not easy-to-write) libraries.

              Besides how is it hard to read?

              STL algorithms- sorry, totally disagree. The entire thing is a horrible hack based on using constructors to make quasi-first order functions.

              I'm not entirely sure I follow. How do constructors enter into it?

              I bounce any code review I see using them.

              What do you use to sort data? What do you do when you want a heap, or want to perform a set difference or some such operation?

              Yup, I stand by my original opinion- all of these could be cut with no real loss to the language. Then again, if I were writing my own language I'd take C, add constructors, destructors, and C++ style syntax for member functions, add container classes, and call it perfect.

              So, you would have (presumably) type generic container classes?

               

            • Yup, I stand by my original opinion- all of these could be cut with no real loss to the language. Then again, if I were writing my own language I'd take C, add constructors, destructors, and C++ style syntax for member functions, add container classes, and call it perfect.

              So do it. You wouldn't be the first person to create a language more to his liking, believe me. I've seen dozens, if not hundreds, of programming languages which originate from a developer liking some features of another language, but disliking others.

              You clearly have strong opinions; stop writing essays and start writing code! (Or write essays to form a spec, and then code. I see that, too.)

            • by GauteL ( 29207 ) on Monday July 02, 2012 @03:10PM (#40520681)

              When it comes to STL algorithms, you may have been right before C++11/C++0x. Now, with lambda functions, the standard algorithms are genuinely brilliant. Modern IDEs should support debugging the lambdas as well.

              I honestly don't care if you disagree. There are enough people around that use them extensively and they are not going away. I would most likely bounce your code for NOT using STL algorithms if they were the quickest solution.

              I completely agree on struct/class and diamond inheritance though.

        • by AndrewStephens ( 815287 ) on Monday July 02, 2012 @03:02PM (#40520559) Homepage

          Nobody uses everything in C++; I estimate that most programmers only ever use 75% of the language. The problem is that everybody uses a different 75%. For instance, diamond inheritance can be a pain, but is occasionally unavoidable and I am glad it works. STL algorithms are the best part of C++; complex problems reduce down to a few lines of code.

          Your one example that is actually bloated is iostreams, which is slow and overkill for almost any program. I wish more C++ text books would ignore iostreams and spend more time on STL.

        • by Darinbob ( 1142669 ) on Monday July 02, 2012 @03:06PM (#40520627)

          Macros aren't really outdated. Maybe 95% of their uses can be replaced with enums or inline functions, but there are also times where textual substitution can do useful things.

          Templates as a whole are overused and lead to problems, and in particular tend to not co-exist with object oriented styles. Small simple templates I like, but I see them used to completely replicate a data structure, which leads to bloat. And the bloat in C++ is not bloat in features as you sort of imply but bloat in the immense size of the executable. Templates really are just a style of textual substitution, except that unlike C macros this is hidden from the programmers, who often don't realize how much extra space can be taken up by them if they're not careful. I don't think there are any compilers yet smart enough to determine when two template instantiations only need one copy at run time, and you end up with standard "libraries" where most of the code is in a header file and recompiled often. To be fair, you can use templates to build some nice libraries (i.e. the data structures store void* and the templates just do the type-safety checks for you before casting), except that this is contrary to the STL style. I used to like the Rogue Wave libraries myself.
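          A sketch of the void*-plus-thin-template style being described (illustrative only, not Rogue Wave's actual design): the untemplated core is compiled once, and each instantiation of the typed facade adds only a few inline casts rather than another copy of the data structure's code.

          #include <cassert>
          #include <cstddef>
          #include <vector>

          // Untemplated core: compiled once, stores opaque pointers.
          class PtrList {
          public:
              void push_back(void* p)        { items_.push_back(p); }
              void* at(std::size_t i) const  { return items_[i]; }
              std::size_t size() const       { return items_.size(); }
          private:
              std::vector<void*> items_;
          };

          // Thin typed facade: nothing but casts, all trivially inlined.
          template <typename T>
          class TypedList {
          public:
              void push_back(T* p)           { impl_.push_back(p); }
              T* at(std::size_t i) const     { return static_cast<T*>(impl_.at(i)); }
              std::size_t size() const       { return impl_.size(); }
          private:
              PtrList impl_;
          };

          int main() {
              int x = 42;
              TypedList<int> list;
              list.push_back(&x);
              assert(*list.at(0) == 42);
          }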

          You need structs in C++ to be compatible with C. One of C++'s goals is to compile proper C programs and keep compatibility where possible. But if you got rid of the class keyword instead you'd have a revolt. So the typical style almost everyone uses is that struct is used as a plain C struct only, and if you need anything beyond what C can do then you use a class instead. What C++ should have done I think is to enforce that style.

          I agree, iostream is awful. Originally it wasn't too bad in some ways and early on it did a good job of adding some extra type safety to IO. But it always felt like a bit of a kludge. It is nice to have a more generic output destination like a stream buffer but it came saddled with the operator overloading and you needed to know implementation details to do a lot of useful stuff.

          Diamond inheritance is a side effect of C++ never being able to say no. It should have just said "no multiple inheritance". Once it did have multiple inheritance, it should have not allowed the cross-pollination and stuck to a style where multiple inheritance had only mixins or interfaces. But they didn't want to say "no" and figured that they could use a keyword so that the user could do what they wanted. Which resulted in a lot of confusion.

          Actually Ruby is a relatively simple language. A good job of making a textual style of Smalltalk (only with a horrible kludge of blocks). The bloated part may be the Ruby on Rails which is not the same thing at all. Ignoring libraries and looking just at the syntax/semantics of the language then Ruby is much simpler than C++.

      • by bluefoxlucid ( 723572 ) on Monday July 02, 2012 @01:36PM (#40519493) Homepage Journal

        Compare C++ to Objective-C. You might dislike the syntax of Objective-C some, but it has clear advantages:

        • It is a strict superset of C: C compiles as Objective-C (C++ doesn't, since structs have a different syntax, among other things)
        • More importantly, it is runtime-resolved. That means you can expand Objective-C classes without breaking the ABI; in C++ you can't add members to classes, at all. Class members can be in different libraries, and in different header files (a "private" member is one not exposed in the API, but you COULD access it directly by defining it or including the header for it).
        • The mangling in C++ is a serious pain and makes loading C++ libraries and programs retardedly slow.
        • Swizzling (you can replace members of classes at runtime--not just inherit, but actually overload class members) makes the language quite a bit more flexible.
        • Operator overloading and templates are an abomination, and don't exist in Objective-C.

        We can all supply better languages for a purpose; Objective-C and C++ have the same purpose (an object oriented general purpose native compiled mid-level programming language), and so this comparison is relevant. Comparing to Java or Python or C#.NET would be irrelevant.

        A loose argument can be made that Objective-C is better because of its much better polymorphism features--the main point of an object oriented language. Objective-C does indeed supply much more flexible, more useful polymorphism; C++ class inheritance is pretty rigid because of its rigid ABI. Objective-C's run time resolution enables this, and I would admit that run time resolution of class objects is a bit slower ... if it wasn't optimized by cache (i.e. resolve the first time on demand and build the PLT) AND if C++ class member mangling didn't make actually building the PLT at load time so god damn slow. Two points to Objective-C.

        Objective-C doesn't have operator overloading. Operator overloading is often claimed as a negative feature because it makes code hard to read. The effective argument AGAINST operator overloading is that everyone is used to the 'cout >>' and 'cin

        The biggest argument for operator overloading is really that nobody uses it, so we're all familiar with the corner case syntax in the standard library. Think about that: the biggest argument for it is that it never gets used.

        Also, Objective C has reference counting with cycle detectors and all. Garbage collection is a limited feature (you can create garbage collected objects intentionally).

        It's actually relatively easy to argue that C++ is an abomination. It's not unexpected either: it's an old language, and an OOP shim hacked on top of a well-designed mid-level language. That C is so well designed is surprising; but then it is the legacy of assembly, Pascal, FORTRAN, and BASIC. COBOL is circa 1959, BASIC circa 1964, C circa 1972, C++ circa 1984. C++ had only Smalltalk to learn from, really, and was trying to be C with Classes.

        • The biggest argument for operator overloading is really that nobody uses it, so we're all familiar with the corner case syntax in the standard library. Think about that: the biggest argument for it is that it never gets used.

          Also, Objective C has reference counting with cycle detectors and all

          Smart pointers are used in C++ to take care of reference counting. And dereferencing a smart pointer is achieved by operator overloading "->" .
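          A toy version of the idea, for illustration only (std::shared_ptr is the real thing): the reference count lives beside the pointer, and the overloaded operator-> and operator* are what make it read like a raw pointer.

          #include <iostream>

          template <typename T>
          class RefPtr {
          public:
              explicit RefPtr(T* p) : ptr_(p), count_(new int(1)) {}
              RefPtr(const RefPtr& other) : ptr_(other.ptr_), count_(other.count_) { ++*count_; }
              ~RefPtr() {
                  if (--*count_ == 0) { delete ptr_; delete count_; }
              }

              T* operator->() const { return ptr_; }    // overloaded ->
              T& operator*()  const { return *ptr_; }   // overloaded *

          private:
              RefPtr& operator=(const RefPtr&);         // assignment omitted in this sketch
              T* ptr_;
              int* count_;
          };

          struct Widget { void hello() const { std::cout << "hello\n"; } };

          int main() {
              RefPtr<Widget> a(new Widget);
              RefPtr<Widget> b(a);    // count is now 2; the Widget is freed when the last copy dies
              b->hello();
          }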

          I can remember back when I was reading up on Objective-C about

        • Re: (Score:3, Insightful)

          by 91degrees ( 207121 )
          Operator overloading and templates are an abomination, and don't exist in Objective-C.

          What's wrong with operator overloading? What if you happen to want a vector or matrix class, or a complex number class? Seems to be a limitation.

          But, the nice thing about C++ is the RAII. Create an object on the stack, and the constructor is called. Function ends, object goes out of scope, destructor is called. Useful for locks. Can also implement timers, resource counting, and if your class is well behaved, you
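          A minimal sketch of the lock case being described (std::lock_guard is the standard version; this toy assumes C++11 <mutex>): the constructor acquires, the destructor releases, so every exit path, including an exception, unlocks.

          #include <mutex>

          std::mutex m;
          int shared_counter = 0;

          class ScopedLock {
          public:
              explicit ScopedLock(std::mutex& mtx) : mtx_(mtx) { mtx_.lock(); }
              ~ScopedLock() { mtx_.unlock(); }        // runs when the guard goes out of scope
          private:
              std::mutex& mtx_;
          };

          void increment() {
              ScopedLock guard(m);                    // lock acquired here
              ++shared_counter;
          }                                           // destructor releases the lock, even if ++ threw

          int main() {
              increment();
              return shared_counter == 1 ? 0 : 1;
          }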
        • by stanlyb ( 1839382 ) on Monday July 02, 2012 @02:03PM (#40519793)
          And don't forget, the only reason for Objective-C to have these features at run-time is also the only reason why Obj-C is 10 times slower than any other language.
        • by serviscope_minor ( 664417 ) on Monday July 02, 2012 @02:04PM (#40519823) Journal

          * It is a strict superset of C: C compiles as Objective-C (C++ doesn't, since structs have a different syntax, among other things)

          I've never had trouble pulling C headers into a C++ compiler. Problems can exist, I suppose, but are very rare and easy to work around.

          * More importantly, it is runtime-resolved.

          Or, if you prefer, that's a huge disadvantage. It makes things very, very slow since all sorts of useful optimizations are not possible. If C++ used nothing but dynamic binding, I'd have to stick to C.

          in C++ you can't add members to classes, at all.

          That is a wart with C++. If you pimplify your classes properly, you don't break the ABI. Since it's such a common idiom, it would be really nice if there was compiler support to make it trivially easy.
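          A minimal sketch of the "pimplified" class being referred to (names are illustrative): callers only ever see a pointer to an implementation struct, so private members can be added in the .cpp file without changing the class's size or breaking its ABI.

          // widget.h -- stable interface, no private data visible
          class Widget {
          public:
              Widget();
              ~Widget();
              void frob();
          private:
              struct Impl;     // defined only in widget.cpp
              Impl* impl_;     // the size of Widget never changes
          };

          // widget.cpp -- members can be added or changed here freely
          #include <string>
          struct Widget::Impl {
              int counter;
              std::string name;    // adding this later does not break callers
          };
          Widget::Widget() : impl_(new Impl()) {}
          Widget::~Widget() { delete impl_; }
          void Widget::frob() { ++impl_->counter; }

          int main() { Widget w; w.frob(); }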

          * The mangling in C++ is a serious pain and makes loading C++ libraries and programs retardedly slow.

          How so? The mangling gives functions funny long names, but they appear to the linker as just functions with long names, and behave the same way as C programs. I don't recall ever being bitten by this...

          * Swizzling (you can replace members of classes at runtime--not just inherit, but actually overload class members) makes the language quite a bit more flexible.

          That's a potentially useful flexibility.

          * Operator overloading and templates are an abomination, and don't exist in Objective-C.

          You made coherent arguments up to this point, where you basically revert to saying it's bad because you say so. If you use + to add ints and + to add floats in ObjC, then you aren't completely against overloading. Templates and overloading are two of the best features of C++.

          I really don't see how compile time hackery (templates) is inferior to run-time swizzling in ObjC.

          Without templates, there can be no good container libraries.

          Without operator overloading, writing intuitive mathematics libraries (for example) would be impossible.

          A loose argument can be made that Objective-C is better because of its much better polymorphism features--the main point of an object oriented language.

          C++ isn't an object-oriented language. It's a multi-paradigm language where object-orientation is one of the paradigms supported. You're ignoring all other facets of C++.

          The biggest argument for operator overloading is really that nobody uses it, so we're all familiar with the corner case syntax in the standard library. Think about that: the biggest argument for it is that it never gets used.

          On what grounds is that the biggest argument for it? I use it all the time. I use operator>> on streams. I use + to concatenate strings. I, personally, use + to add all sorts of things (like ints, floats, Vectors, Matrices, automatic differentiation types, and occasionally complex numbers). I use operator= all the time to assign one container to another.

          So, your point is basically wrong. I and many others use operator overloading all the time to do useful things, and make stuff clearer and simpler.

          Also, Objective C has reference counting with cycle detectors and all. Garbage collection is a limited feature (you can create garbage collected objects intentionally).

          C++ as they say doesn't need garbage collection as it produces very little garbage. Garbage collection is very nice when you have long lived mutable cyclic datastructures. For all other cases, the arguments for and against are much more subtle.

          Personally, I find deterministic destruction useful. I find it much easier to run out of memory in Java compared to C++ when dealing with large datasets.

          It's actually relatively easy to argue that C++ is an abomination.

          I beg to differ: you have been unable to argue that so far.

        • A loose argument can be made that Objective-C is better because of its much better polymorphism features--the main point of an object oriented language. Objective-C does indeed supply much more flexible, more useful polymorphism; C++ class inheritance is pretty rigid because of its rigid ABI. Objective-C's run time resolution enables this, and I would admit that run time resolution of class objects is a bit slower ... if it wasn't optimized by cache (i.e. resolve the first time on demand and build the PLT) AND if C++ class member mangling didn't make actually building the PLT at load time so god damn slow. Two points to Objective-C.

          All well and good, but increased dynamism of Obj-C also means that it is that much slower - if I remember correctly, any Obj-C message dispatch, even in the best case (i.e. directly handled by the object), is still several times slower than a C++ virtual method call - and most method calls in C++ aren't virtual, and are typically inlined.

          Operator overloading and templates are an abomination, and don't exist in Objective-C.

          Operator overloading is arguable. Every time you have to write something like a.add(b).multiply(c) in Java when working with, say, BigDecimal, you understand why operator o

      • by betterunixthanunix ( 980855 ) on Monday July 02, 2012 @01:55PM (#40519673)

        what features you could therefore remove without detracting from the effectiveness of the language.

        Exceptions -- I personally like exceptions, but exceptions in C++ are terrible and not necessary. C++ exceptions have the following problems:

        1. No clearly defined exception type -- you could potentially catch something that has no descriptive string, no information about the stack, etc.
        2. Double exception faults. This is a subtle problem but one that seems to be easier to trigger in C++ than in other languages, and one that could be fixed by changing how exceptions are handled. For those who are not familiar, exceptions that propagate out of a destructor cause abort() to be called if the destructor was called as part of the stack unwinding process for another exception (a short sketch of this appears after this comment). If I wanted to call abort() when errors occurred, I would not bother with throw statements; I would just call abort().

          Incidentally, the problem here is the order in which things happen. Exceptions should be caught before the stack unwinding process, which guarantees that the handler will execute before another exception is thrown. This also allows for "restarts", i.e. the ability to resume execution from the point where an exception was thrown, which might make sense (e.g. for exceptions that indicate some non-critical function could not be performed, which should not prevent a critical function from completing).

        I have also seen quite a few large C++ projects that simply ban exceptions, because they wind up creating more problems than they solve.
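        A minimal sketch of the double-exception fault mentioned in point 2 above (my own illustration): one exception starts unwinding the stack, a destructor throws a second one, and the runtime calls std::terminate()/abort() instead of ever reaching the catch block.

        #include <iostream>
        #include <stdexcept>

        struct Resource {
            // noexcept(false) restores the pre-C++11 behaviour of a throwing destructor;
            // since C++11, destructors are noexcept by default.
            ~Resource() noexcept(false) {
                throw std::runtime_error("cleanup failed");
            }
        };

        int main() {
            try {
                Resource r;
                throw std::runtime_error("original error");   // unwinding begins, ~Resource throws too
            } catch (const std::exception& e) {
                std::cout << "never reached: " << e.what() << "\n";   // terminate() is called instead
            }
        }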

          There are problems with C++ exceptions (there's a current thread on comp.lang.c++.moderated about double faulting).

          I would agree that they're a long way from perfect, but they capture probably 99.9% of the use cases perfectly. You can always not use them for the other .1%. Most of the time they ensure that resources are freed when an error occurs without the need to write lots of tedious and error prone error checking code.

          Organisations that ban exceptions are frankly being silly.

          You do mention that one prob

          • Most of the time they ensure that resources are freed when an error occurs

            Except that freeing resources might cause an exception to be thrown (see, for example, the fclose manpage; note the number of errors that fclose might return, and now imagine exceptions being thrown for those errors), which is seemingly innocent (after all, it is an error case) but which is practically guaranteed to create a double exception fault if any destructor fails to catch all exceptions.

            Organisations that ban exceptions are frankly being silly.

            Not really; I have heard several software engineering researchers say that exceptions are the completely wrong

    • by UnknownSoldier ( 67820 ) on Monday July 02, 2012 @03:07PM (#40520629)

      I used to work with a team that produced a professional C/C++ compiler. We used to joke:

      There are two problems with C++
      1. its design
      2. its implementation

      As a programmer, I've come to love the sweet spot halfway between C and C++.
      C leads to overly terse code
      C++ leads to over-engineered solutions

      It's too bad the preprocessor is still stuck in the '70s :-( Every year C++ is slowly turning into a clusterfuck LISP implementation by people who don't understand how to write a safe macro meta-language.

  • by skids ( 119237 ) on Monday July 02, 2012 @12:42PM (#40518831) Homepage

    ...and not some VM? Most of the popular languages these days are dynamic. And they are very convenient and nice. But if you actually want to know what the machine is actually doing, and want to have a say in such things, C is the way to go.

    I mean, unless you want to, you know, use pascal or fortran or something.

    • by interval1066 ( 668936 ) on Monday July 02, 2012 @12:53PM (#40518965) Journal
      Although possible, and done, I have a hard time thinking of good reasons to write drivers in higher level or interpreted languages. Besides, most kernels are written in C, so it makes sense to write drivers in C. When someone trots out a kernel in Python, then I'll jump on the systems work in Python bandwagon.
    • by slew ( 2918 ) on Monday July 02, 2012 @01:01PM (#40519065)

      ...and not some VM? Most of the popular languages these days are all dynamic. And they are very convenient and nice. But if you actually want to know what the machine is actually doing, and want to have a say in such things, C is the way to go.

      I mean, unless you want to, you know, use pascal or fortran or something.

      Although "C" compiles down really very close to the metal, so does "C++" (and a host of other more modern languages). However, it's not that easy to take that next step to the metal. In between is a machine that virtualizes the registers (using register renaming techniques), virtualizes the memory (using difficult-to-predict caching, translation, and ordering operations), reorders the instructions (speculation, branch target hiding, etc), and runs in a sandbox (under the OS, which is timeslicing between tasks and faulting and swapping in memory translations and maybe even simulating some instructions).

      Knowing what the machine is actually doing is often a mythical quest down the rabbit hole. Although I'm a big fan of "C+" (what I call the c++ subset that doesn't have all the crapola that I don't use**), I'm under no illusion that all you are really doing with C is using a more predictable translation strategy (e.g., static compile time translation) rather than some magical "metal" commanding language.

      ** +objects, +const, +templates, +-stl (some are okay), -overloading, -virtual inheritance, -rtti, -boost, etc...

      • +templates implies +overloading

        For example: the STL iterators use the same interface as pointers, which then lets you use templated algorithms that can accept an STL iterator or pointers into an array.

        Another example: your templated function needs to call fabs() on some variable of type T. Overloading lets the compiler pick the right one, even if it's a user-defined type.
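        A minimal sketch of that point (Meters and magnitude are made-up names, C++11 syntax): a user-defined type supplies its own fabs overload, and the same template then works for built-in and user types alike.

        #include <cmath>
        #include <iostream>

        struct Meters {
            double value;
        };

        // User-supplied overload, found by ordinary overload resolution / ADL.
        Meters fabs(Meters m) { return Meters{ std::fabs(m.value) }; }

        template <typename T>
        T magnitude(T x) {
            using std::fabs;     // fall back to std::fabs for built-in types
            return fabs(x);      // the compiler picks the overload that matches T
        }

        int main() {
            std::cout << magnitude(-2.5) << "\n";                    // 2.5
            std::cout << magnitude(Meters{ -3.0 }).value << "\n";    // 3
        }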

        Overloading doesn't deserve the bad reputation. As long as you use it as a tool for specific reasons, it's wonderful to have. The most

      • by skids ( 119237 )

        While I agree to some point that processors do a lot of funky things these days, C is a pretty solid demarcation point for well-defined behavior that gets you close enough to the metal to do most of what you need to do down there. It's close enough that you can do some amount of optimizing for different CPU families, but not so close that you cannot write the majority of your code generically without worrying about performance.

        As to C++, it isn't just a pretty C. It may not have a VM but it does some pret

    • Re: (Score:3, Funny)

      by Penguinisto ( 415985 )

      Even better - it allows you to actually make the results efficient. While most CS grads these days likely expect hardware tech to keep up with the bloat, there are a few folks out there who know what it's like to economize like hell when it comes to CPU or memory.

      Also, when you're seriously pushing limits, it's the performance difference between a 1982 Chevy Nova with a busted head gasket (.NET, I'm looking at YOU), and compiling yourself a Porsche 911 with all the Autobahn goodies included.

      • by skids ( 119237 ) on Monday July 02, 2012 @02:47PM (#40520397) Homepage

        I think the problem these days is an entire generation of coders who have never in their life experienced a UI that was actually so responsive that it was impossible to ever perceive a delay between your keystroke and its effects. They've been raised entirely on laggy Windows textboxes and mouseclicks that just might get around to doing something anytime now.

        They don't even know how slow their kit is running. They've never seen one run fast. So they consider similar results satisfactory when they write code.

    • by ByOhTek ( 1181381 ) on Monday July 02, 2012 @01:35PM (#40519459) Journal

      That, and because of that, you can easily import a C library into any language without having to worry about compiler used, or bullshit like that.

      I have a C library. I can dynamically import it into C, Python, Java or C# fairly easily, on any platform; I don't even have to recompile the library or use the same compiler. Want to do that with any other language (including C++)? You are in for some pain and suffering.
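      The reason this works is that the library exposes a plain C ABI: no mangled names, no classes or exceptions crossing the boundary. A minimal sketch of such an interface (mathlib and its functions are hypothetical names):

      /* mathlib.h -- the interface every consumer (C, C++, Python ctypes, JNA, ...) sees */
      #ifdef __cplusplus
      extern "C" {                 /* keep C++ compilers from mangling these names */
      #endif

      int add(int a, int b);
      double mean(const double *values, int count);

      #ifdef __cplusplus
      }
      #endif

      /* mathlib.c -- plain C implementation, built into a shared library */
      int add(int a, int b) { return a + b; }
      double mean(const double *values, int count) {
          double sum = 0.0;
          for (int i = 0; i < count; ++i) sum += values[i];
          return count ? sum / count : 0.0;
      }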

  • Good habits (Score:5, Insightful)

    by Bucky24 ( 1943328 ) on Monday July 02, 2012 @12:43PM (#40518845)
    Personally the thing I like most about C is that it's not "safe". It doesn't take care of a lot of memory management for you, and you can easily eat up all the memory or introduce a buffer overflow vulnerability if you're not paying attention. It forces programmers to actually look at what they're doing and consider what it will do in the long run, and causes good coding habits to form. I think the majority of people who dismiss C as "too hard" are coming from Java programming. C gives you a lot of power, but, as the clichéd saying goes, "with great power comes great responsibility".
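    A minimal sketch of the kind of buffer overflow being described (illustrative only): the unchecked copy trusts its caller and will write past a 16-byte buffer, while the bounded version forces the programmer to state the limit.

    #include <stdio.h>
    #include <string.h>

    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);                         /* overflows if name is 16+ characters */
        printf("hello %s\n", buf);
    }

    void greet_safer(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);     /* truncates instead of overflowing */
        printf("hello %s\n", buf);
    }

    int main(void) {
        greet_safer("a string much longer than sixteen characters");
        return 0;
    }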
    • Re:Good habits (Score:5, Informative)

      by localman57 ( 1340533 ) on Monday July 02, 2012 @12:53PM (#40518967)
      Exactly. My company does a lot of different things from embedded systems to web interfaces, and, generally speaking, the C guys write better Java code than the Java guys write C code.
      • Re:Good habits (Score:5, Insightful)

        by ShanghaiBill ( 739463 ) on Monday July 02, 2012 @01:20PM (#40519305)

        the C guys write better Java code than the Java guys write C code.

        My experience is that the C guys (and gals) often write better Java than the Java guys write Java.
        Programmers who have never written in C (and/or assembly) often have a poor understanding of how computers actually work.

        • Re:Good habits (Score:5, Insightful)

          by Alamais ( 4180 ) on Monday July 02, 2012 @04:55PM (#40521799)

          Having recently had to learn TI assembly to do things on some strangely designed DSP boards, I really do think it should be more widely taught. Memory management takes on a whole new meaning once you've had to fit your program (data acquisition and FFTs) and data into one shared 10kword RAM block which has some hardware-reserved blocks and various alignment requirements. I had plenty of conceptual understanding of how computers worked beforehand, but now things like shared busses, registers, stacks, addressing, interrupts, etc. have much more direct meaning to me, and I feel a lot more grounded even when coding in higher-level languages.

          ...Not to mention I feel grateful now that I don't often have to use ASM.

    • Re:Good habits (Score:5, Insightful)

      by jmsp ( 1987118 ) on Monday July 02, 2012 @01:00PM (#40519053)
      > It forces programmers to actually look at what they're doing

      OMG! There. That's where all the hate comes from.

      --
      (Programming in C since 1984)
    • Re:Good habits (Score:5, Interesting)

      by jellomizer ( 103300 ) on Monday July 02, 2012 @01:07PM (#40519139)

      The problem is the diligence that is required. A C developer is usually a really good coder when they do their work in another language. However, for large projects C doesn't make too much sense, because you need your developers to be on their A game for the entire course of the project. A developer porting their proof-of-concept code into production right near lunch time, starving, with some of the other guys waiting on him to finish up because they are starving too, might mean some code gets copied into the production set without full thought. Because the proof-of-concept code worked, it may pass many layers of quality checks (and we all know most software development firms have very poor QA teams). Once it leaves and goes to the customer, it could be wide open to a security problem.

      Every developer thinks they are the best developer on earth; about 50% of them are actually below average. C is a great teaching tool, and I wish most colleges still used it instead of switching to .NET and Java. However, once you go into production, unless you really need the C performance (and you can code highly optimized C), going with other languages that are a little more protected is a safer bet.

      Hire a C developer, give him a higher-level language, and you will probably get really good code.
      Hire a C developer and have him program C, and you are setting up a time bomb where, once they miss a beat, you are screwed.

    • by Raenex ( 947668 )

      It forces programmers to actually look at what they're doing and consider what it will do in the long run, and causes good coding habits to form.

      It forces programmers to spend a lot of time thinking about low-level issues. As an example, anybody who has done string manipulation in C versus Perl (or Python, etc.) knows what I'm talking about.

      Rather than patting themselves on the back about how they manage to work with a high-level assembler, most developers want to use productive tools and can afford to sacrifice some memory in exchange.

  • by Anonymous Coward on Monday July 02, 2012 @12:44PM (#40518847)

    A web developer?

    C is the best tool around for low level programming. It allows very high levels of abstraction and it keeps you very close to your architecture's features.

    FORTH and C++ are used to the same ends, but I do not hear much about other tools being used for this kind of development.

    • I'm a web developer and the question was equally stupid to me. Hell, I know OkCupid [okcupid.com] runs just about everything in C or C++ (or did, not sure about now), mostly as they do an insane amount of mathematical calculation and need it uber-optimized.

  • Control (Score:5, Insightful)

    by Dan East ( 318230 ) on Monday July 02, 2012 @12:44PM (#40518849) Journal

    C offers control. If you can't handle having that much control (over memory, how the CPU acts on that memory, etc) then your software will have many problems and you will hate C.

    • Re:Control (Score:5, Insightful)

      by jythie ( 914043 ) on Monday July 02, 2012 @01:02PM (#40519079)
      Not just control, but predictability. C does exactly what you tell it to do and you can see what it is doing more easily than in other languages. You can tell C++ to do 'X', but it might slip in a 'Y' and 'Z' without telling you.
      • by Raenex ( 947668 )

        C does exactly what you tell it to do and you can see what it is doing more easily than in other languages.

        Thanks for the laugh. C is absolutely full of either undefined behavior or implementation defined behavior, by design. On top of that, it has this mantra of "trust the programmer", which in reality means don't help the programmer out, so they often end up in undefined territory by making simple mistakes.

      • Re:Control (Score:4, Interesting)

        by Thiez ( 1281866 ) on Monday July 02, 2012 @01:58PM (#40519713)

        C, predictable? Would you mind taking this http://blog.regehr.org/archives/721 [regehr.org] quiz? It's about the predictability of integers in C.

        • Fully one-fourth of these questions ask about aspects of a specific implementation, not the C standard, and "implementation defined" is not an option.

          Questions 3 and 18 on the quiz assume that int is 32-bit. Question 3 assumes that unsigned short automatically gets promoted to signed int because no values are lost, but that's true only if sizeof(signed int) > sizeof(unsigned short). It's not true on platforms with 16-bit int, for example. Question 18 states that the answer is correct "as long as the
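          A small illustration of the promotion issue under discussion (my own example, in the spirit of the quiz): what unsigned short arithmetic means depends on the width of int on the target.

          #include <stdio.h>

          int main(void) {
              unsigned short x = 0xFFFF, y = 0xFFFF;

              /* With 32-bit int (typical desktop): both operands promote to signed int,
                 and 65535 * 65535 overflows int -- undefined behaviour.
                 With 16-bit int (some embedded targets): both promote to unsigned int,
                 and the multiplication simply wraps modulo 65536 -- well defined. */
              unsigned long r = (unsigned long)((unsigned int)x * y);   /* the cast sidesteps the UB */
              printf("%lu\n", r);
              return 0;
          }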

    • by ADRA ( 37398 )

      C is the manual transmission of cars: it's a lot harder to operate, but you have a lot more control of how it drives. More managed languages are the automatic transmission. For 99% of the time, it'll get you to where you're going just fine, but will waste a little more gas to do it. Standards are more efficient and autos are simpler.

      To further expand the analogy, the new world order of standard transmission is to move into dual-clutch manuals, where you can still twist the knobs if you really want to, b

  • by heson ( 915298 ) on Monday July 02, 2012 @12:44PM (#40518861) Journal
    Close to the metal but still not specific to any machine if you do not need it to be. Easy to understand exactly what machine code the compiler will produce.
  • by Zephyn ( 415698 ) on Monday July 02, 2012 @12:45PM (#40518879)

    It's for cookie.

    That's good enough for me.

  • by codefool ( 189025 ) * <ghester AT codefool DOT org> on Monday July 02, 2012 @12:47PM (#40518897) Homepage Journal
    The power of C is - and always has been - that it is a shorthand for assembly. It compiles very small and runs very fast, making it ideal for embedded systems.
  • by localman57 ( 1340533 ) on Monday July 02, 2012 @12:47PM (#40518905)
    C is going to stay around for a long time in embedded systems. In this environment many microcontrollers still have 4k or less of RAM, and cost less than a postage stamp. In these systems there is virtually no abstraction. You write directly to hardware registers, and typically don't use any dynamically allocated memory. You use C because, assuming you understand the instruction set, you can pretty much predict what assembly instructions it's going to generate and create very efficient code, without the hassle of writing assembly. Additionally, your code is portable for unit testing or, to a lesser degree, other microcontrollers. This allows you to write a program that will run in 3.2k of RAM, rather than 4k, which allows a manufacturer to save 4 cents on the microcontroller they pick. This saves you $40,000 when you're making a million of something.
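    A minimal sketch of that register-level style (the addresses and bit positions are hypothetical, not from any particular part): a peripheral register is just a volatile pointer to a fixed address, so the compiler emits a plain load and store.

    #include <stdint.h>

    #define GPIO_DIR (*(volatile uint32_t *)0x40020010u)   /* hypothetical direction register */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020014u)   /* hypothetical output register */

    void led_on(void) {
        GPIO_DIR |= (1u << 5);   /* configure pin 5 as an output */
        GPIO_OUT |= (1u << 5);   /* drive it high */
    }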
  • Where'd the Unix go? (Score:4, Interesting)

    by burdickjp ( 2530248 ) on Monday July 02, 2012 @12:49PM (#40518929)
    It's not dead because all of your VMs and interpreters have to interact with SOMETHING, and that SOMETHING has to be written in a low-level language. I strongly believe in using only the lowest-level language necessary for the job, but for OS development that's C.
  • Simple. (Score:5, Insightful)

    by Short Circuit ( 52384 ) <mikemol@gmail.com> on Monday July 02, 2012 @12:50PM (#40518943) Homepage Journal

    Tons of people love to have something to hate. It might be because they don't like something about it...but I think it's mostly because people like to set up communities held together by rhetoric against a tool or technology perceived and portrayed as an enemy.

    "C++ sucks. We are at war with C++. We have always been at war with C++.[1]"

    Swap out "C++" for whatever language you like.

    Certainly there are going to be cases and scenarios where C is preferable over C++, where C++ is preferable over C, or where Brainfuck is preferable over either. Use the right tool for the right job,[2] and the right tool is whichever allows you to most effectively achieve your goals.

    [1] Or, at least since its inception. Or since [insert arbitrary date here].[3]
    [2] For whoever asks "what's the right job for Brainfuck?" ... just wait. Someone will eventually come along and get modded +2 Funny when they reply to you.
    [3] I see what you'll do there, Mr. Connor.

  • simplicity (Score:5, Insightful)

    by roman_mir ( 125474 ) on Monday July 02, 2012 @12:55PM (#40518995) Homepage Journal

    C is a very simple language and yet it allows operating on memory directly in a way similar to assembler. C is portable (well, it can be compiled for different platforms). I rather enjoyed the language a while back, but since about '98 I have used Java for most big development, and I am pretty sure that if I had to do everything in C that I did in Java, it would have taken me much more time.

    C is nice in another way - it allows you and in some sense it forces you to understand the machine in a more intimate way, you end up using parts of the machine, registers, memory addresses, you are more physically connected to the hardware. Java is a high level abstraction, but then again, to write so much logic, it is just better to do it at higher level, where you don't have to think about the machine.

    C is great tool to control the machine, Java is a great tool to build business applications (I am mostly talking about back end, but sometimes front end too).

    So I like C, I used to code in it a lot, but I just don't use it much nowadays. What's to love about it? Applications written in it can be closely integrated into the specific OS, you can really use the underlying architecture, talk to the CPU but also the GPU (CUDA); if I wanted to use the GPU from a Java application, I'd probably end up compiling some C code and a JNI bridge to it.

    C teaches you to think about the machine, I think it gives an important understanding and it's faster to develop in than in Assembler.

  • by fahrbot-bot ( 874524 ) on Monday July 02, 2012 @01:04PM (#40519101)

    Though beloved by some, C is a language that many choose to hate. The mass opinion is indeed so negative it's hard to believe that anybody would program anything in C.

    The masses to which you refer are idiots. C is great. It lets you do what you want, how you want. True, you're afforded enough programming rope to easily hang yourself, but you learn not to, and while most things can be more easily done in higher languages (you'll have to pry Perl from my dead, cold hands), many things can only be done in languages like C or its derivatives. C is one of those languages that separates the adults from the kids, so put on your big-boy pants, stop whinging about it and step up.

  • by bcrowell ( 177657 ) on Monday July 02, 2012 @01:05PM (#40519115) Homepage

    The mass opinion is indeed so negative it's hard to believe that anybody would program anything in C.

    Huh? What mass opinion? Where's the evidence for this?

    Pick the right tool for the job. C is the right tool for some jobs, specifically jobs like writing drivers or operating systems.

    Historically, C won by having an innovative syntax for pointers, which a lot of people liked, and it also won by being a small language that was easy to implement. Because it was small and easy to implement, it ended up being widely available. Ca. 1980, the joke was that C was like masturbation: it might not be what you really want, but it's always available. A lot of people in 2012 may not realize that in the era when C was winning popularity, people didn't usually have access to free compilers, and for many types of hardware (e.g., 8-bit desktops like the TRS-80), there simply weren't any good development tools. Another big win for C was that because it was so widely available, it became easy to find programmers who could code in it; it fed on its own success in a positive feedback loop. This is why languages like Java had C-like syntax -- they wanted to ride the coattails of C.

    IMO the biggest problems have been when people started to use C for tasks for which it wasn't the right tool. It started creeping up into higher-level applications, where it wasn't really appropriate. This became particularly problematic with the rise of the internet. Networks used to be small and run by people with whom you had personal contact, so nobody really cared about the kind of buffer-overflow vulnerabilities that C is prone to. The attitude was that if you gave a program crazy input that caused it to crash, well, what was the big deal? You crashed the program, and you were only hurting yourself.

  • by Salis ( 52373 ) on Monday July 02, 2012 @01:05PM (#40519121) Journal

    Go ahead. Argue. I dare you.

    v same sig since 2002. v

    • by Rising Ape ( 1620461 ) on Monday July 02, 2012 @01:31PM (#40519413)

      The main problem with Fortran is Fortran programmers. Or, more specifically, those who started off with F77 or earlier and carried on doing things the same way. For numerical stuff, I prefer F90 or later to C - far fewer ways to shoot yourself in the foot with memory management.

  • by Anonymous Coward on Monday July 02, 2012 @01:08PM (#40519147)

    I'm as tired of single-language zealots as I am about single-issue zealots in politics. It's a repetition of the old saw: "When the only tool you have is a hammer, everything starts looking like a nail." C has its applications. C++ has its applications. Perl has its applications. FORTRAN (remember that language?) has its applications. And so on down the list.

    The fact is, a true professional doesn't have a single tool, he has a whole toolbox full of tools from which to select one, or two, or three, or more to get a particular job done. Look at your auto mechanic: he doesn't try to use a screwdriver when a torque wrench is called for. Look at the Web developer: he doesn't try to write C code to do Web sites. And no one in their right mind would write the heart of an Internet router in C++ or PHP or (shudder) COBOL. The tool has to be matched to the job.

    Sometimes you do the job multiple times, once in an easy-to-debug language to get your algorithms down and your corner cases identified, then a second pass in a language that lets you get closer to the nuts and bolts -- hey, you already have the high-level stuff debugged.

    And then you have people who are more comfortable creating tools to get the job done. I don't know how many times I've written a LEX/YACC package to generate exactly what I need from a higher-level description... or to give a customer scripting capability suited to the particular task. I call it part of layered programming, and using multiple languages in a project to get the job done right, and in a way that can be maintained.

    Finally, programming style helps reduce mistakes, as do good development tools like IDEs that do syntax highlighting.

    OK, every language has its shortcomings. Even specific implementations of a language will drive you up the wall with its mysteries. But that's part of matching the language to the job.

    I'll grant you that string handling in C sucks. It's part of the charm, though, for some projects, because you don't have to worry about the run-time kicking in to do garbage collection at points in your code where timing is critical. But if the job is text processing without real-time constraints, C is one of the worse choices for a tool. So don't use it for that. Use Perl, or Python, or PHP, or any number of other languages where string handling is a first-class citizen. (For a price. For a price.)

    That's the difference between a true professional and a one-hit wonder: the former knows his tools, how to use them, and when to use them.

  • by ClayDowling ( 629804 ) on Monday July 02, 2012 @01:09PM (#40519157) Homepage

    Lots of people hate manual transmissions in cars, too. That doesn't mean there isn't a place for them. I bought a manual transmission truck for the same reason I use C: it lets me get more performance out of lesser hardware, gives me more control, and it's just plain fun to work with.

  • Litmus test. (Score:5, Insightful)

    by HornWumpus ( 783565 ) on Monday July 02, 2012 @01:09PM (#40519161)

    If you can't code in C you can't code.

    That said, unless you are doing low level/embedded work if you can't find a better tool for the job, you also can't code.

    C should be _every_ programmers second language at the latest.

    The other thing to love about C? Pointers! Pointers to pointers! etc. Coding without pointers might be safe, so is fapping.

  • by stox ( 131684 ) on Monday July 02, 2012 @01:11PM (#40519185) Homepage

    After all, in our National Anthem, we ask, "Jose, can you C?"

  • Because it WORKS (Score:5, Interesting)

    by gman003 ( 1693318 ) on Monday July 02, 2012 @01:31PM (#40519411)

    C and C++ (I consider them essentially the same, if only because I write them essentially the same) have a few advantages:

    They work. A language update doesn't break your actual programs. It may break your ability to compile them, but you still have a working binary compiled with the old version, which gives you time to make the code work with the new version. You never have to run around like a chicken with its head cut off because some automatic Java or Python or PHP or __LANGUAGE__ update broke __BIG_CRITICAL_APP__.

    The tools work. You have IDEs that actually work. You have debuggers that actually debug. You've got static analysis tools that actually analyze (I've seen some PHP "static analyzers" that actually just make sure you use a particular code style!). If you want, you can grab the intermediate assembly files and debug *those*.

    The coders work. Sure, C[++] has some really awful language features, but the programmers know about them, know how to use them properly (which, often times, is "never"). It's a language known by any decent programmer. Maybe not used, or liked, but pretty much everyone can read C.

    It does things many other languages can't. You cannot (last I checked) embed assembly snippets in any other major language. There are many libraries that only have C APIs.
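    For instance, a minimal sketch using GCC/Clang extended inline assembly on x86-64 (the snippet itself is trivial; the point is that it sits directly inside a C function):

    #include <stdio.h>

    static int add_asm(int a, int b) {
        int result;
        __asm__("addl %2, %1\n\t"      /* a += b      */
                "movl %1, %0"          /* result = a  */
                : "=r"(result), "+r"(a)
                : "r"(b));
        return result;
    }

    int main(void) {
        printf("%d\n", add_asm(2, 3));  /* prints 5 */
        return 0;
    }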

    It's fast. Game developers, serious ones, use C++, because even Java is still too slow. And when you have 20,000,000 vertices you need to render every 16ms, speed *matters*. That's why web servers are written in C. That's why operating systems are written in C. That's why other programming languages are written in C. Because sometimes, processor time actually *is* more expensive than programmer time.

    I *like* C. That game I program in my free time? It's C++, but it acts like "C with objects and strings". Sure, I could have done it "faster" in another language, but I know C and like C, for all the reasons enumerated above.

  • by Greyfox ( 87712 ) on Monday July 02, 2012 @02:21PM (#40520029) Homepage Journal
    One, it's simple and reasonably consistent. If I recall correctly, the language only has 24 keywords. Everything else is library. You may have some complaints about the library, but that's not really a language issue is it?

    Two, the library. You may have some complaints about the library, but in the right hands a person really can control every aspect of a UNIX system with it. I've had to write process watchdogs at a couple of different companies. These were programs that would kick off another process and monitor its status. That is remarkably easy to do in C. It's a little harder to do correctly but it's still pretty nice. I still hand-code socket servers from time to time. That's also pretty easy with C and the standard library. Actually IPC in general is quite accessible. Handling time... easy. Checking filesystems to see how many blocks are left... easy. The list goes on.
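    A minimal sketch of such a watchdog (POSIX C; /usr/bin/some_daemon is a made-up path and error handling is trimmed): fork a child, exec the watched program, block in waitpid, and restart it when it dies.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {                         /* child: become the watched program */
                execl("/usr/bin/some_daemon", "some_daemon", (char *)NULL);
                _exit(127);                         /* only reached if exec failed */
            }
            int status;
            waitpid(pid, &status, 0);               /* parent: block until the child exits */
            fprintf(stderr, "child exited with status %d, restarting\n",
                    WIFEXITED(status) ? WEXITSTATUS(status) : -1);
            sleep(1);                               /* avoid a tight restart loop */
        }
    }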

    All those utilities we take for granted are all implemented in C, and you can get at all that functionality if you need to. Once you've done some system level programming, the constraints of less mature languages become annoying very quickly. Compare the capabilities of Java's "Process" to what you can do in C, for example.

    Sure it's dangerous. Sure there are corner cases you might get caught at. But really, the corporate software development model is far more dangerous, no matter what the language. If you code small, tightly focused libraries and unit test them, whatever language you use is probably pretty safe. If you code some monolithic abomination with half a million lines of tightly coupled, untested code, you're probably going to run into problems anyway. In 20 years of software development, I've never seen anyplace actually write smaller libraries and link them in. Hell most of the places I've worked either didn't have version control or were using it incorrectly. If your company puts up with bad programming, it really doesn't matter what language you're using.

  • C is great (Score:5, Insightful)

    by Erich ( 151 ) on Monday July 02, 2012 @02:57PM (#40520497) Homepage Journal
    • It is much more portable than assembly
    • The performance overhead compared to assembly is reasonable
    • Most people find it simpler to develop in than Forth
    • It's not the horrible monster that C++ is

    This makes it very nice for all kinds of embedded environments.

    Efficiency matters. Python is great, but you don't want to use it for embedded work.
