News

Scott Meyers on Programming C++ 69

Bill Venners writes "Artima.com has published a four-part interview with Scott Meyers, author of Effective C++, More Effective C++, and Effective STL. In Multiple Inheritance and Interfaces, Scott describes how his view of multiple inheritance has changed with time, the C++ community's take on Java's interface, and a schism of focus between the C++ and other prominent development communities. In Designing Contracts and Interfaces, Scott discusses interface contracts, private data, and designing minimal and complete interfaces. In Meaningful Programming, Scott discusses the importance of saying what you mean and understanding what you say, the three fundamental relationships between classes, and the difference between virtual and non-virtual functions. In Const, RTTI, and Efficiency, Scott describes the utility of const, the appropriate time to use RTTI, a good attitude about efficiency, and Scott Meyers' current quest for general programming principles."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Meyers' books on C++ (Score:3, Interesting)

    by Henry V .009 ( 518000 ) on Monday January 06, 2003 @06:24PM (#5028562) Journal
    I own each of Meyers' books, and they are all wonderful. If you work with C++ in any sort of professional capacity, get hold of copies right now.
  • by Lumpish Scholar ( 17107 ) on Monday January 06, 2003 @06:40PM (#5028695) Homepage Journal
    Meyers: "... we have templates in C++, but there's no way to write user interfaces or talk to databases. Java has no templates, but you can write user interfaces up the wazoo and you can talk to databases with no trouble at all."
  • One thing he says [artima.com], though, is not necessarily true in Java:

    ". . .interfaces also have no data. I have come to appreciate that if you use abstract base classes and eliminate any data from them, then a lot of the difficulties of multiple inheritance that I wrote about just go away, even in C++."

    Interfaces in Java can have member variables, but they are always public (universally accessible), static (not associated with any particular instance of a class), and final (unchanging). Thus, if you use data members in interfaces you cannot hide them in the implementing class, but this is mitigated by their being read-only. I have not seen anyone use these inherited constants (which is what they amount to), and I can't think of a major drawback to using them, but I don't know of any advantage either.

    Does anyone know why interfaces have data members at all?
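    A minimal sketch of what this looks like in practice (the names here are made up for illustration): every field declared in an interface is implicitly public, static, and final, whether or not those modifiers are written out.

```java
// Hypothetical example: interface fields are implicitly public static final.
interface Limits {
    int MAX_SET_SIZE = 128;          // same as: public static final int MAX_SET_SIZE
    String DEFAULT_NAME = "unnamed"; // shared, read-only constant
}

class BoundedSet implements Limits {
    // The constant is inherited and usable without qualification...
    boolean canAdd(int currentSize) {
        return currentSize < MAX_SET_SIZE;
    }
    // ...but the implementing class cannot hide or reassign it.
}
```

    Attempting to assign to MAX_SET_SIZE anywhere, including inside BoundedSet, is a compile-time error, which is the "read only" mitigation mentioned above.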
    • Re:Good articles (Score:2, Informative)

      by Tom7 ( 102298 )

      > Does anyone know why interfaces have data members at all?

      It's so that you can define constants (you know, MAX_SET_SIZE), because Java has no preprocessor. I'm not sure why constants in interfaces are so important, but my guess is that it's the language's answer to some whiny C/C++ programmer on the design team who couldn't express his favorite idiom without it.
      • I'm not sure why constants in interfaces are so important, but my guess is that it's the language's answer to some whiny C/C++ programmer on the design team who couldn't express his favorite idiom without it.

        It's because Java doesn't have enumerated types, which are often part of an object's interface.

        Sure you can do without enums (e.g. by using an ABC and deriving the relevant constants), but you wouldn't want to.

    • Re:Good articles (Score:2, Insightful)

      by adamy ( 78406 )
      One way that they are used is to define a shortcut to a set of constants. Put all your constants in an interface, and then any class that needs those constants can implement the interface and refer to the constants without qualifying them with the full class name.

      Also, many classes use them for symbolic constants related to functions that are overly flexible. See the Calendar implementation for get/set (yes, an ABC, not an interface, but the result is the same).

      I actually think of this as an indicator of bad design. Use of symbolic constants, especially as a replacement for enums, is a serious breach of OO design in most cases. Static immutable instances of the class that the enum represents are quite often a better way to go. This was a tip I got from the other Effective book: Effective Java
      Found here at Amazon [amazon.com]
      • Re:Good articles (Score:5, Interesting)

        by fingal ( 49160 ) on Monday January 06, 2003 @07:44PM (#5029141) Homepage
        I actually think of this as an indicator of bad design. Use of symbolic constants, especially as a replacement for enums, is a serious breach of OO design in most cases. Static immutable instances of the class that the enum represents are quite often a better way to go.

        The static immutable instances get you half-way there, but they don't stop people creating new instances of the class, and then you are back where you were to begin with. A better thing to do is to forcibly limit the class to a fixed number of instances, like so:-

        public class MyEnum {
            private MyEnum() { }
            public final static MyEnum OPTION1 = new MyEnum();
            public final static MyEnum OPTION2 = new MyEnum();
            public final static MyEnum OPTION3 = new MyEnum();
        }

        Now you are guaranteed that there is only ever going to be MyEnum.OPTION1, MyEnum.OPTION2 and MyEnum.OPTION3.

        • Actually, that is covered in the section of the book to which I referred. A very good point.

          I noticed on your site you've developed a CM tool in Java. Did you use this pattern in your code? It makes for an interesting style when developing select boxes, doesn't it? I found we typically developed a static array with the required static elements. For instance:

          public static final MyEnum[] FIRST_AND_THIRD = { OPTION1, OPTION3 };

          Also, a strange thing we had to do was:

          private static final Hashtable all;

          // static block for init:
          static {
              all = new Hashtable();
              all.put(OPTION1.getName(), OPTION1); // etc.
          }

          public static MyEnum findByName(String name) {
              return (MyEnum) all.get(name);
          }

          Have you found this to be the case as well?

          • The easiest way to get a cleaner approach to storing the ranges of items in the maps (without having to remember to update lots of locations when you change the contents of your MyEnum[]) is to use the contents of the array itself. So if you just want a List then you would say:-

            public final static List FIRST_AND_THIRD_LIST = Collections.unmodifiableList( Arrays.asList( FIRST_AND_THIRD ) );

            Now in your example, you are using a Hashtable keyed on a method of MyEnum called getName(). I would be tempted to extract this getName() method into an interface (Named [fiennes.org]) and then create a dedicated Map that would look something like this:-

            public class NamedMap extends HashMap {

                public NamedMap() {
                    super();
                }

                public NamedMap(Collection collection) {
                    this();
                    put(collection);
                }

                public Object put(Named named) {
                    return put(named.getName(), named);
                }

                public void put(Collection collection) {
                    Iterator items = collection.iterator();
                    while (items.hasNext()) {
                        Object item = items.next();
                        if (item instanceof Named) {
                            put((Named) item);
                        }
                    }
                }
            }

            Then you can finish up by adding another constant to your MyEnum class that provides you with the Map functionality that you wanted:-

            public final static Map FIRST_AND_THIRD_MAP = new NamedMap(FIRST_AND_THIRD_LIST);

            Now if you change the definition of FIRST_AND_THIRD then all of the derived constants (the List and the Map) will be updated automatically, thereby preserving the main benefit of utilising constants: only having to remember to Get Things Right Once.

            • Looks good. Have you done this in the past?

              Also, one thing I had wondered: do you override toString to return the internationalized string, or do you return the key to the properties file for external use? The first controls all access in one place, but the second makes it easier to centralize internationalization in one place and turn it over to the translators.

              • Also, one thing I had wondered: do you override toString to return the internationalized string, or do you return the key to the properties file for external use? The first controls all access in one place, but the second makes it easier to centralize internationalization in one place and turn it over to the translators.

                I would generally say that something as generic as toString() shouldn't be tied to a particular meaning (especially where interfaces are concerned). Instead, I would be much more likely to provide a separate interface that described i18n (if I was using it). Another reason is that I feel toString() should be "relatively" constant for a given Object; if the value changed whenever some external entity changed the current language, that might confuse matters. I would be more likely to return the name of the key, with some additional information to make it clear that this is the key to a MyEnum Object, such as "MyEnum(key)", to distinguish it from a normal String with a value of "key". All depends on the context though...
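                A sketch of the key-returning approach described above (class and method names are hypothetical; a real application would consult a ResourceBundle, for which a plain Map stands in here):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: stable toString(), localisation via a separate lookup.
public class NamedEnum {
    private final String key;
    private NamedEnum(String key) { this.key = key; }

    public static final NamedEnum OPTION1 = new NamedEnum("option1");

    // Stable, debugging-friendly identity, independent of the current locale.
    public String toString() { return "MyEnum(" + key + ")"; }

    // The raw key, handed off to whatever i18n mechanism is in use.
    public String getKey() { return key; }

    // Display text comes from an external lookup the translators control.
    public String getDisplayName(Map bundle) {
        return (String) bundle.get(key);
    }
}
```

                The debugging output stays constant across locales, while the translators only ever touch the bundle.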

        • And with 10,000 of those defined in your code, that's 10,000 instances of slightly-relabeled Object to be constructed at startup and sit in RAM afterward, doing diddly-squat. A "public static final int CONST_FOO = 1" definition, meanwhile, occupies precisely 4 bytes and takes no time to initialize.

          Remember, even bare Object instances have behavior. You can wait on them, notify them, get their hashcode, etc etc... they are not negligible.
          • And with 10,000 of those defined in your code, that's 10,000 instances of slightly-relabeled Object to be constructed at startup and sit in RAM afterward, doing diddly-squat. A "public static final int CONST_FOO = 1" definition, meanwhile, occupies precisely 4 bytes and takes no time to initialize.

            Fair enough. Couple of points that I would bounce back at you though:-

            • If you have 10,000 constants of any type inside a program design then I reckon that you are going to have larger problems than just your boot-up time.
            • If you have to have these constants then compartmentalising them in some way into sections that can be loaded individually by the classloader (either on demand or in a background thread) will help the start-up time.
            • Try running a decent java debugger / profiler like OptimizeIt and look at the number of "system" level Objects that are created. You may well find that the overhead of your 10,000 Objects is really not a major issue after all.
            • The benefits in terms of enforced range checking provided by the private constructor / public final static instances will generally pay dividends in the long run. I know that we are all good programmers and we always check the range of any integer constant, but what happens when this range changes in the future? Do you want to be responsible for checking through all the 3rd party code that utilises your constants to make sure that they handle the new values?
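            The last point can be made concrete (the describe() methods here are hypothetical): a parameter typed with the private-constructor idiom can never receive an out-of-range value, while the int-constant version must range-check by hand in every callee.

```java
// Hypothetical sketch contrasting typesafe-enum and int-constant parameters.
public class RangeDemo {
    public static class Option {
        private Option() { }
        public static final Option FIRST = new Option();
        public static final Option SECOND = new Option();
    }

    // Typesafe version: callers can only pass a published instance,
    // so no range check is needed here at all.
    static String describe(Option o) {
        return (o == Option.FIRST) ? "first" : "second";
    }

    // int-constant version: describe(42) compiles without complaint,
    // so the range check has to be written (and maintained) by hand.
    static String describe(int o) {
        if (o != 1 && o != 2) throw new IllegalArgumentException("bad option: " + o);
        return (o == 1) ? "first" : "second";
    }
}
```

            When the range grows, the typesafe version just gains a new public instance; the int version forces every careful callee to update its checks.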
    • Does anyone know why interfaces have data members at all?

      It can be a very useful way of exposing constants. For example, consider the following:-

      public interface VariableList {
          public final static String VAL1 = "val1";
          public final static String VAL2 = "val2";
          public final static String[] VAL_ARRAY = { VAL1, VAL2 };
          public final static List VAL_LIST = Collections.unmodifiableList( Arrays.asList( VAL_ARRAY ) );
      }

      This will provide you with a slightly more flexible set of constants that are automatically acquired by anything that implements the VariableList interface.

      Alternatively, you can use mutable variables. Remember that it is only the reference to the variable that is final. If the Object itself contains information then it can be accessed by all classes that implement the interface. For example:-

      public interface CountMe {
          public Set INSTANCES = new HashSet();
      }

      We can now stipulate that all classes that want to be included in the CountMe functionality should register their instances in the INSTANCES variable, for example:-

      import java.util.*;

      public class Test {

          public Test() {
              Class1 class1 = new Class1();
              Class2 class2 = new Class2();
              Iterator instances = class1.INSTANCES.iterator();
              while (instances.hasNext()) {
                  System.out.println(instances.next());
              }
          }

          public interface CountMe {
              public Set INSTANCES = new HashSet();
          }

          public class Class1 implements CountMe {
              public Class1() {
                  INSTANCES.add(this);
              }
          }

          public class Class2 implements CountMe {
              public Class2() {
                  INSTANCES.add(this);
              }
          }

          public static void main(String[] args) {
              new Test();
          }
      }

      and there you have it - data sharing through an interface with no common base class at all. Of course, you cannot force classes to register themselves in the INSTANCES variable, but if you wanted to extend a couple of base classes then it would be trivial to write subclasses of each that implement CountMe and enforce the registration explicitly.

    • I have seen and used interface variables as constants a lot. One good example is to use interface constants as enumerated types, much like the Color class has many colors predefined (e.g. Color.black is a Color object that represents black).

    • Does anyone know why interfaces have data members at all?

      They wouldn't normally have state, but there might be constants relating to the interface itself, typically used when deciding what parameters to pass to functions in that interface. Given Java's lack of enumerated types, this is particularly important to avoid passing magic numbers or arbitrary boolean parameters everywhere.

  • One reason Java has standard libraries is that Sun pushed big time for them. C++ hasn't had anyone pushing for stuff like that. Bjarne Stroustrup says that he favors a minimal standard library, and so in C++ the standard library is minimal.

    Interfaces are the primary mechanism where Java has enforced the APIs for JDBC, EJB, Servlet etc.

    Now that Java is about to get templates (generics), I wonder how that will change the way they go about standard APIs, especially for collections.

    What libraries would make sense to have in C++? Network stuff, database access, web programming, graphics (OpenGL, DirectX), windowed apps (Qt, MFC), XML, logging, OS abstractions (memory and threading come to mind, semaphore/mutex, IPC). But the real question is, 'Does the lack of standard libraries help or hurt development in these fields?' I'd venture to say that both points of view could be argued.
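    For context, the "templates" Java was about to get are generics (the JSR 14 work, which eventually shipped in J2SE 5.0). A rough sketch of how they change collection APIs, which is the case mentioned above:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of generics applied to collections.
public class GenericsSketch {
    public static void main(String[] args) {
        // Pre-generics: raw collections hold Object, so casts are
        // required and type errors only surface at runtime.
        List raw = new ArrayList();
        raw.add("hello");
        String s1 = (String) raw.get(0);

        // With generics: the element type is checked at compile time.
        List<String> typed = new ArrayList<String>();
        typed.add("hello");
        String s2 = typed.get(0);       // no cast needed
        // typed.add(new Integer(42));  // would not compile

        System.out.println(s1.equals(s2));
    }
}
```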

    • Re:Standard APIs (Score:2, Interesting)

      by Nynaeve ( 163450 )
      Bjarne Stroustrup says that he favors a minimal standard library, and so in C++ the standard library is minimal.

      From C++ Answers From Bjarne Stroustrup [slashdot.org]:
      I think my most obvious mistake was not to introduce templates before multiple inheritance and not to ship a larger library with Release 1 of my C++ compiler

      Does he still favor a minimal library? If so, does anyone know why?

      • I was actually referring to what was written in The C++ Programming Language. I don't have it with me here at work, so I can't get an exact quote, but the discussion went something along these lines:

        What should be in some library? Everything. But what should be in the standard library? ...and he goes on to specify that it should be a pretty small subset. I'll see if I can find it tonight at home.
    • Re:Standard APIs (Score:2, Informative)

      by m8pple ( 623433 )
      There is some work towards improving the C++ standard library via boost [boost.org], which is the semi-official testing ground for candidates. Most of the libraries are simply utilities for exploiting the language efficiently and safely, but there are also some more systems-orientated libraries such as a cross-platform threading library, filesystem support, type-safe printf, etc.

      There has been talk of doing some higher level things such as networking, graphics and user interfaces, but nobody can come up with specifications that are simple enough, or that are easily ported between platforms.

      Coolest new bit is the Spirit parser generator. Who needs separate lex+yacc files? Embed that BNF in the C++ :)

  • Meyers is apparently not the only one thinking that Java needs templates: Preparing for Generics [sun.com]
  • by swillden ( 191260 ) <shawn-ds@willden.org> on Monday January 06, 2003 @08:44PM (#5029567) Journal

    ... and this article has helped me to understand precisely why we disagreed. And he's still wrong, although he's getting closer.

    I've been writing C++ code professionally since about '91, when Scott published his first book. I think I was lucky that I didn't come across his book for a few years, particularly because of his skepticism of MI. I might have followed his advice and my career would have been the worse for it.

    I have always made heavy use of MI in my code, and 80% of the time I've written C++ classes that are exactly analogous to Java interfaces; it just always seemed like a good idea to me. Also, I was a big fan of Robert C. Martin and his notion that you can analyze design quality by looking at the abstractness of the classes and the dependencies between them (dependencies on abstract classes, especially pure abstract classes(*), are much better than dependencies on classes that do a lot of stuff, for the simple reason that classes that do a lot have more potential to change, so purely abstract firewalls tend to limit the ripple effect).

    To avoid self-aggrandizement, I didn't independently invent the notion of pure abstractions. I had fiddled with Objective-C and it had a construct (whose name I forget) that allowed you to define a pure interface. You could then use that pure interface type as a function parameter and the compiler would verify that objects you passed to that function met the interface requirements -- allowing you to get compile-time typechecking in an otherwise completely dynamically-typed language.

    However, as I said, I think only about 80% of my MI usage is with pure abstractions. Probably 10% of the time I use MI, I do it to facilitate a style of programming called "mix-in" programming. The idea is that you have a bunch of purely abstract classes that define the potential interfaces, and you have a bunch of concrete classes that implement the pure abstractions in various useful ways and then you can create useful classes by mixing together appropriate base classes (with the occasional bit of glue). Mix-ins aren't appropriate for everything, but they can be a very elegant solution for many toolkit kinds of scenarios. Diamond inheritance doesn't really happen, because none of the mixed-together classes have enough code in them to make it worth inheriting from them. If you need something almost like one mixed-together class, you just mix a completely new one.

    In practice, not only do pure abstractions and mix-ins sidestep all of the "problems" that make people leery of MI, they're also not at all confusing to less competent programmers. I've found that with just the tiniest explanation of how the structure is put together, people can see immediately how it works and why it's good (well, once they've understood the idea of polymorphism, anyway).

    I think C++ MI can also be used more fully to good effect, but that must, indeed, be done judiciously.

    (*) Scott said in the interview that the C++ community doesn't have a name for interface classes. Maybe not, but I've been using the term "pure abstract class" for close to a decade and I don't think I've come across a single marginally-competent C++ programmer who didn't immediately understand the term, and I'm pretty sure I picked the term up from comp.lang.c++.

    • One way of thinking about MI that I've been finding useful recently is this:
      The ISA paradigm sees classes as nouns (e.g. an Image IS A collection of pixels.) I think that classes can also be adjectives - so for example I might have a 'Rotatable' class, so then my Image class IS A (collection of pixels) and IS rotatable

      class Image : public std::vector<Pixel>, public Rotatable
      {
      ...
      };

      In other words, there's more to objects than just a simple way of grouping together related functions. The approach I'm aiming for involves designing quite a large number of fairly minimal classes and plugging them together using MI to get the final behaviour I want. The advantage to this is that each class can be small enough to be robust, generally being only responsible for one task or resource. I admit that I'm not quite there yet, but it seems a promising line of investigation.
      • The approach I'm aiming for involves designing quite a large number of fairly minimal classes and plugging them together using MI to get the final behaviour I want.

        Yes, that's mix-in programming in a nutshell. Thanks, you stated it better than I did. You can see why "diamond" inheritance isn't a concern in that kind of a structure -- there's really never any need to inherit from a "plugged-together" class; if you need another class with slightly different behavior, you grab your toolbox and plug it together.

      • The ISA paradigm sees classes as nouns (e.g. an Image IS A collection of pixels.) I think that classes can also be adjectives - so for example I might have a 'Rotatable' class, so then my Image class IS A (collection of pixels) and IS rotatable

        class Image : public std::vector<Pixel>, public Rotatable
        { ...
        };


        Try to remember that the container classes provided in the C++ standard library were not designed for use as public base classes. In fact, Stepanov (designer of the STL) does not even like OOP [stlport.org], calling it technically and philosophically unsound. Unfortunately, this code uses inheritance to expose an _implementation detail_ (the usage of std::vector for storing its elements) that it should keep hidden. The std::vector class provides no added benefit to classes that derive from it because it has no virtual member functions and no (standard) protected members. The code, as it stands, is essentially equivalent to

        class Image : public Rotatable {
        public:
            std::vector<Something> elems;
        };

        with the difference being that one writes code like a.size() for the first example and a.elems.size() for the second example (not a huge difference, in the grand scheme of things).

        You should avoid exposing implementation details, so, at the very least, write a PixelCollection class if you want to expose a pixel collection in a class interface. The PixelCollection class would then _contain_ a private vector to store its elements. That way, you will hide the storage implementation and be able to change it later. At that point, whether or not PixelCollection should be a private member of the class or a public base is another matter that needs to be dealt with...
    • I think I was lucky that I didn't come across his book for a few years, particularly because of his skepticism of MI. I might have followed his advice and my career would have been the worse for it.

      Funny, you had the opposite experience that I had. I discovered Scott's book quite by accident when I first started programming C++. It guided me away from multiple inheritance, which I ended up never using until I turned to Java five years later.

      To avoid self-aggrandizement, I didn't independently invent the notion of pure abstractions. I had fiddled with Objective-C and it had a construct (whose name I forget) that allowed you to define a pure interface.

      Your comment about Objective-C reminded me of something James Gosling once said in one of my interviews. I went to Artima.com and searched for it, but couldn't find it. To my surprise, the comment wasn't in the article anywhere. I went back to the text file that originally came back from the transcriber in 1999, and there was his comment. Somehow it never got published. So I just published it four years after he said it. Sorry.

      Like you, Gosling found inspiration for Java's interface from the corresponding construct in Objective-C. Here's what he said in 1999:

      http://www.artima.com/intv/gosling13.html [artima.com]

      • Like you, Gosling found inspiration for Java's interface from the corresponding construct in Objective-C.

        That is very cool...

        In case anyone is wondering, the Objective-C construct in question is the "protocol". A protocol is essentially nothing more than a list of methods, with no implementations. A class can indicate that it intends to "conform" to one or more protocols, and the compiler will issue a warning if it doesn't implement everything. Methods can also specify that their parameters should conform to a certain set of protocols, and object references (id's) can specify that they can only point to objects that conform to a certain set of protocols.

        The compiler will perform compile-time type checking wherever it has enough information. So, if you create an object instance and then a few lines later try to assign it to an id or pass it to a method that has a protocol specification, and the object doesn't implement all of the methods required, the compiler will complain. I don't think it's even necessary that the class *specify* conformance to a protocol; the compiler can still check by seeing if all of the required methods are present. Of course, if the compiler doesn't have all of the information it needs, then run-time typechecks are generated.
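        For comparison, the statically-checked half of this maps fairly directly onto a Java interface used as a parameter type (the class names below are illustrative). One difference: Java's check is purely nominal, so the class must explicitly declare that it implements the interface, whereas the Objective-C compiler, as described above, can also check structurally.

```java
// Hypothetical sketch: compile-time conformance checking via an interface.
interface Drawable {
    void draw();
}

class Circle implements Drawable {
    public void draw() { System.out.println("circle"); }
}

class Renderer {
    // Only objects whose class conforms to Drawable compile here;
    // passing, say, a String is rejected at compile time.
    static void render(Drawable d) {
        d.draw();
    }
}
```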

        Protocols were primarily invented, I believe, to make it feasible to detect errors at compile time rather than at run time. In practice, though, anyone who used them quickly discovered that they're also a very effective tool for understanding and defining the structure of a complex program -- and they did it without limiting or constraining the dynamic typing of the language at all.

        It's a small step from seeing how protocols can layer structure onto a dynamically-typed language to seeing how they can define structure in a statically-typed language, where the structure must be completely verifiable at compile time because there are limited (or no!) facilities for run-time checks.

        My favorite programming language is, and maybe always will be, Objective-C++. It turns out that although Objective-C and C++ are both OO extensions to C, they take completely different approaches, both philosophically and syntactically. In fact, the syntaxes of the extensions are so completely orthogonal that you can just lump them together into a single language, without ambiguity. Since NeXT built their Objective-C compiler on top of gcc, and others were building C++ on top of gcc, the merger was natural.

        The result is a language that has all the expressive power and flexibility of a fully dynamically-typed language *and* all of the on-the-metal performance of a statically-typed language designed to be as efficient as C. The programmer gets to choose the tradeoffs between expressiveness and performance on a class by class basis, and can easily mix and match, passing C++ objects to Objective-C class methods and vice-versa.

        Of course, the result also has all of the arcane complexity of C++, and although Objective-C is very simple, the resulting design decisions are anything but, since there are two very different views of object orientation to be mixed and matched. For the programmer who has mastered both views and also understands the dusty corners of C++, the combination is extremely powerful and, IMO, wholly appropriate to everything from on-the-metal bit twiddling to rapid development of large, complex applications...

        ... as long as they can be written by this single programmer, because the odds of finding two people who can agree on enough of the myriad design tradeoffs to get any useful work done is next to nil. And don't even *think* about bringing a novice developer onto the project.

    • Scott said in the interview that the C++ community doesn't have a name for interface classes. Maybe not, but I've been using the term "pure abstract class" for close to a decade and I don't think I've come across a single marginally-competent C++ programmer who didn't immediately understand the term, and I'm pretty sure I picked the term up from comp.lang.c++.

      In the interview, Scott exhibits amazing ignorance for an author of his supposed stature. As someone else pointed out, use of interfaces has been common in the Win32 world, with COM, for at least about eight years. Smalltalk has used implicit interfaces for decades, and that's where the Objective-C construct presumably came from.

      Scott seems to have a very ad-hoc approach to programming - "I do it this way because it seems simpler and it works". That's fine, but if you're going to be writing about this stuff, you'd think you'd perhaps study it a bit, understand the formal underpinnings (type theory in this case), to be able to put your choices into a wider context - but he doesn't do this, unless he's playing dumb in the interview to avoid confusing the readership.

      Basically, Scott seems to be a step above authors like Bruce Eckel, who write intro/overview "how to program in language X" books. One shouldn't be looking to him for insight into software design.

      • Scott seems to have a very ad-hoc approach to programming - "I do it this way because it seems simpler and it works".

        My take on his books has always been that they're intended to help novices avoid common pitfalls, rather than to provide insight for experienced developers. When they're considered from that point of view, I think they're valuable books, overall. With respect to a few specific topics, like MI, I disagree with his position even for novices.

        As someone else pointed out, use of interfaces has been common in the Win32 world, with COM, for at least about eight years.

        Hehe. That strikes me as a rather odd example, since COM is so much newer than all of the other technologies we're discussing, with the sole exception of Java, which was invented at about the same time as COM. But you're certainly correct that this idea isn't new.

        • As someone else pointed out, use of interfaces has been common in the Win32 world, with COM, for at least about eight years.

          Hehe. That strikes me as a rather odd example, since COM is so much newer than all of the other technologies we're discussing,

          I should have said "use of interfaces in C++". The point is that COM's entire model is based on the use of interfaces in C++ specifically. A COM interface is basically a C++ virtual function table, by design. COM code in C++ consists of using MI to combine implementations of multiple interfaces - basically, mixins. Of course, the mixin approach was described in the C++ context at least as far back as Booch's OO book. I would expect someone writing books about C++ to be aware of all this, but Scott's comments against MI never seemed to take any of this into account.
          • Ah, I see what you meant. That is a very good example, then. I only used the COM stuff when it was new, and it basically had to be done in C at that point (and was a huge pain). That experience drove me to get a job writing code for embedded systems -- as far away from MS as I could manage...

            Of course, the mixin approach was described in the C++ context at least as far back as Booch's OO book.

            Really? Wow, I've completely forgotten that. Next time I'm in the office I'll have to grab my copy and see what Booch had to say about it. I've been working at home for two years now, but I still have to cart a box full of stuff home every time I go to the office -- it's been a slow migration :-)

            I would expect someone writing books about C++ to be aware of all this, but Scott's comments against MI never seemed to take any of this into account.

            I fully agree with this sentiment. Although I think Booch's book came after Scott's first book, an author of C++ texts should definitely be familiar with all of the major literature (and, preferably, most of the minor literature as well).

            • I don't remember specifically what Booch said, and my copy is in storage in another state, but my memory is that he described and explained mixins in general, and related that to the use of MI in C++. Of course, I might have made some obvious connections on my own.

              Afaict, Booch's 1st edition ('91) came out the year before Meyers' 1st edition of Effective C++ ('92). Not as big a gap as I remembered - I think I only came across Meyers a few years after that.

              I only used the COM stuff when it was new, and it basically had to be done in C at that point (and was a huge pain). That experience drove me to get a job writing code for embedded systems -- as far away from MS as I could manage...

              Cool! No-one ever regretted avoiding Microsoft... ;) What kind of embedded systems? Purely as a hobby, I've had fun playing with programming PIC and Scenix chips, although my conclusion based on that experience was that I'd really rather work with *slightly* more powerful processors. I realize there are plenty of those, but I haven't yet gotten around to trying any of them (it has too high a distraction potential! :)

              • Afaict, Booch's 1st edition ('91) came out the year before Meyers' 1st edition of Effective C++ ('92). Not as big a gap as I remembered - I think I only came across Meyers a few years after that.

                My confusion probably arose from the fact that I had Booch's book within a few weeks after it came out and didn't run across Meyers' book until about 1995.

                Cool! No-one ever regretted avoiding Microsoft... ;) What kind of embedded systems?

                I was working on boards that were pretty close to PCs, in terms of power: Motorola 68000 processors and, I believe, 4MB of RAM and 4MB of flash. Initially they were running pSOS, and the application (control system for routers of audio and video signals) had just been ported, more or less, to VxWorks when I got there. I've worked on a variety of Unix and embedded systems since and haven't really done any more "serious" Windows programming. And now I'm happily hacking on tiny machines running Linux, even though that means I'm mostly writing C.

                It's all fun, actually, but I like to be able to have a little deeper understanding of what's going on and it always seemed that with Windows there was just too much stuff hidden from me. Linux is, of course, ideal in that respect. How much you understand is dependent on how much time you have to invest in learning, not on how much someone will allow you to learn.

                • I've played a little with the 68HC11, and in the old days with the 6800 and 6809, but never with the 68000. I imagine there's a bit of a difference ;) Somehow they've just never crossed my path, and I haven't gone looking...

                  I like to be able to have a little deeper understanding of what's going on and it always seemed that with Windows there was just too much stuff hidden from me. Linux is, of course, ideal in that respect. How much you understand is dependent on how much time you have to invest in learning, not on how much someone will allow you to learn.

                  I completely agree. Aside from satisfying curiosity, that deeper understanding translates directly into the ability to do things more efficiently - being able to understand what's really happening, rather than having to rely on guesswork based on something halfheartedly written up by a documentation department which was being fed tidbits by the developers... You also don't have to spend time working around bugs in black boxes if you're willing to dive in and attack the problem at its source, for example.

                  I've seen this in the Java world too - people working on top of open source server products are amazingly empowered, compared to those stuck with closed source application servers and tools. I think open source at various levels is a big reason behind the corporate success of Java, but one that's often not recognized.

                  • I've seen this in the Java world too - people working on top of open source server products are amazingly empowered, compared to those stuck with closed source application servers and tools. I think open source at various levels is a big reason behind the corporate success of Java, but one that's often not recognized.

                    And one of the nice things about Java, from my point of view, is that it's pretty hard to ever *really* hide the source. There's so much information in Java bytecodes that decompilers are highly effective. I've even made custom modifications to supposedly "closed" code -- decompile, hack, recompile. It can also be very interesting to hack a class and rename it, then write your own class with the original class's name and interface that just wraps the original, giving you a chance to inspect and modify everything flowing into or out of objects of the original class. In fact, it occurs to me that you could probably write a little program to automate that -- use introspection to examine the original class and then automatically generate an instrumented wrapper that logs everything... hmmm....

                    Heck even just using javap to poke around the interfaces of classes that are not intended to be public can often tell you a whole lot about how the software works underneath.

                    You can do a lot of this stuff with object code, too, but you're disassembling, not decompiling and the difference is huge.

                    • It can also be very interesting to hack a class and rename it, then write your own class with the original class's name and interface that just wraps the original, giving you a chance to inspect and modify everything flowing into or out of objects of the original class.

                      Neat idea!

                      In fact, it occurs to me that you could probably write a little program to automate that -- use introspection to examine the original class and then automatically generate an instrumented wrapper that logs everything... hmmm....

                      ;)

                      You can do a lot of this stuff with object code, too, but you're disassembling, not decompiling and the difference is huge.

                      I developed two commercial add-on products, which were able to integrate with their host products because I had disassembled library object code, originally written in C. Java takes all the challenge out of that sort of thing, but I don't miss it at all! ;)

      • In the interview, Scott exhibits amazing ignorance for an author of his supposed stature. As someone else pointed out, use of interfaces has been common in the Win32 world, with COM, for at least about eight years.

        That's exactly correct, but make it ten years, not eight. Or perhaps even longer: COM was first given to developers with the OLE2 beta in October 1992, but the object model (without the central class registry) was introduced with MAPI, a little earlier than that.

    • What had driven me to avoid MI - and I must say that, design-wise, I struggle to get the effects of MI through other means - was simple.

      The first C++ project (and most after) involved designing for extensibility by using a base class in an executable, and derived classes created in dynamically linked libraries. The point being that the executable loads the library, links to a constructor function, calls it, and is returned an object B which can be upcast to the base class A.

      This cannot be done if B is multiply inherited because the executable module does not know about that at compile time.

      This came up a lot when I made client software for Windows, where we often wanted to do something like this. COM and other solutions were possible, but this was the simplest, requiring no special layer to manage interface instances or thunk to the real member functions.
      • The first C++ project (and most after) have involved designing extensibility by using a base class in an executable, and derived classes created in dynamically linked libraries. The point being the executable loads the library, links to a constructor function, calls it and is returned an object B which can be upcast to the base class A. This cannot be done if B is multiply inherited because the executable module does not know about that at compile time.

        This has the potential to be a really, really interesting point, but I just don't think it's true. In fact, I'm fairly certain that I have done exactly what you say in the past. Even if my memory is faulty (it commonly is), I can't think of how the dynamic linking changes things. It seems like if this were a problem, it would be a problem with static linkage as well since the units are always compiled separately, and the linker doesn't rewrite the code.

        Can you elaborate?

        • I'm replying to myself because after a bit more thought, I can see how you structured things to make it problematic. The key is that you're doing the upcast from B to A in the executable which knows nothing about B (doesn't even know B's name), rather than in the shared library which would know how to perform the upcast properly.

          I don't see that as a real limitation, for two reasons: First, there's a very simple fix -- just do the upcast in the library. Second, I'm pretty certain that what you did is not well-defined even with single inheritance. You're effectively doing something like:

          A* a_ptr = reinterpret_cast<A*>(b_ptr);

          ... but in a more complex manner. Maybe writing it as:

          void* tmp = b_ptr;
          A* a_ptr = (A*)tmp;

          ... is closer to the actuality, but they're equivalent. Now while I think both of the above would "work", assuming single inheritance, in every C++ compiler I've used, I'm pretty certain that the language does not require that it work. That is, compiler writers are permitted to choose a class layout that requires a bit of work for an upcast even in the case of single inheritance, and unless the compiler can recognize the upcast and emit the appropriate code, the results will be... bad.

          Regardless of the correctness issue, IMO it's cleaner to do the upcast in the library. I've always taken the additional step of writing a small wrapper that does the actual construction, and then placing that in the library with the class it instantiates. If you needed to be able to get a B as any of its bases (reasonable), you'd need a wrapper per base but, again, that doesn't seem like a terribly painful tradeoff for being able to (a) use MI and (b) be fairly certain that it will always work, even for SI.

          If you wanted to be able to use placement new, you'd also need another version of each helper, plus a function to tell the caller how much space to allocate.

          Please excuse me if I have completely misunderstood. If there's an issue I'm missing here, I'd very much like to understand it.

        • No it's really true, and you are right, it could affect you with static linking, but generally is less likely to.

          The issue is this... the binary picture of the object in memory. Strictly speaking the compiler needs to know the parent and descended type to make this upcast (potentially, even without MI). If you do this cast without knowing what the descended type really is, you are taking a pointer and just lying to the compiler about what the memory print looks like.

          This generally works (better know your compiler, but that's true in general, since different compilers generally arrange objects differently in memory), since with single inheritance the object does look like its parent in memory. The data at the top of the object is the same, so you can pretend it's the parent and, through virtual functions, the overridden functions will be called.

          But if the descended class uses MI, then the footprint is changed: before the data fields there is likely a new pointer to the second parent's data or its virtual functions, or the data is possibly inserted prior to the parent you are upcasting to, etc. Whatever the layout, it's changed, and to make the cast safely the compiler needs to know the child type, because it's going to have to find the part of the object that is of the same type as the parent and give you a pointer to that.

          This may be making someone shudder as I speak, because I have not mentioned many other ways you should be careful if you play these kinds of games. In general, the way to do this thing carefully is to use pure virtual objects as interfaces, and ask for those (i.e. how COM works). You have some thunking through the interface to your real classes, but everything stays clean and safe; you just have to build or use a thin interface-managing layer (again, such as COM). Real, btw, uses "lightweight" COM even on its server side... as someone pointed out, COM interfaces are just pure virtual classes... essentially tables of function pointers, so you can use the model without the OS registration-related stuff that COM also entails (especially now).

          The reason this isn't a problem, usually, with static libraries is merely that the compiler usually knows the descended type, and inserts code to do the conversion. With statically linked libraries the project is often arranged such that the compiler does know the return type is, say, class D, and is doing an actual conversion (not just reinterpreting the pointer's bits).

          There are other solutions... the library could convert the pointer the proper way before returning it, which is one way to build COM objects using MI, converting the pointer in the library to what the main code expects or was promised.

          So the real issue isn't my specific example, but the havoc that is played at the binary level. A clear structure-like layout that maps quite directly to the source code description starts to become mixed up with MI, as the compiler juggles the data and function pointers. MI is one of those things that introduces the compiler-level voodoo that puts off so many C programmers (e.g. the kind that can't stand even the fact that the 'this' pointer is pushed on the stack and slipped into your function -behind your back!!!- :) I'm not that zealous, but 8 years ago we were worried about performance complications and MI seemed a good thing to avoid.

          Mind you, so were templates (not supported) and exceptions (still badly implemented then), both of which I heartily rejoice in now that compilers support them well and efficiently. It's probably time for me to come back to MI, because I use the "mix-in" design idea anyway, which without MI involves some degree of thunking.

          One more thing about the extensibility system I created this way. There were a lot of special rules for objects created this way... how you created them, how you added data members... too many specific rules for me to argue that this kind of thing isn't a minefield. I think in a multi-paradigm language it's inevitable that you will have to understand not only C++ but the C++ philosophy of a particular project, before you really understand the project. That's the justification for having extra rules. Still, I would now probably do this much differently.

          Playing games at the level of the binary layout of your structures is one of the things that makes people now say that C++ is too low level. But the way I see it, the only problem is maybe that I should not have thought about it at that level, i.e. it's not C++'s fault that it gave me free rein. Considering the task at hand at the time, I think it was reasonable in context.
          • Strictly speaking the compiler needs to know the parent and descended type to make this upcast (potentially, even without MI).

            Absolutely, and this is the key. Unless you're doing something excessively clever, the compilation unit that performs the upcast will *always* know the derived type because, in general, it won't compile otherwise. With the COM example, you are essentially passing it through a cast to void* so that the executable can then convert it to a base* without having to know about the derived type at all.

            And that cast via void* is not guaranteed to work, even with single inheritance. As you noticed 8 years ago, the authors of your compilers structured class memory layout in the obvious way and the cast worked. But if it hadn't, the compiler would not have been in error.

            If you do this cast without knowing what the descended type really is, you are taking a pointer and just lying to the compiler about what the memory print looks like.

            Yep, and lying to the compiler is a dangerous business. It can be fun, though ;-)

            The reason this isn't a problem, usually, with static libraries is merely that the compiler usually knows the descended type, and inserts code to do the conversion.

            More precisely, the reason it isn't a problem with static libraries is that there's rarely, if ever, any good reason to pass a cast through a void*. In the case of dlopen/dlsym (or the Win32 equivalents, whose names I forget), you're constrained by this 3rd party API, and when you pass the pointer through that bottleneck, you lose knowledge of the derived type. Forgetting the derived type is cool, but you need to upcast before forgetting, not after.

            I think in a multi-paradigm language it's inevitable that you will have to understand not only C++ but the C++ philosophy of a particular project.

            This is absolutely true.

            Playing games at the level of the binary layout of your structures is one of the things that makes people now say that C++ is too low level.

            As you said, it's not the language's fault if you abuse it. The ability to do low-level work is important for a large class of programming problems, and the fact that you can (ab)use the same pointer manipulation tools that are needed for, say, device drivers, to subvert the higher-level type system is just an artifact of the language's breadth of applicability. It's up to the programmer to avoid doing silly things -- and the compiler will generally tell you if you are! I think it's adequate that C++ makes nasty, tricky code obviously nasty and tricky. It doesn't need to disallow it.

            By way of comparison, I've been writing Java Card applets recently and it is a real pain in the neck. When you're writing code for an 8-bit microcontroller with 2K of RAM and 16K of storage, you *must* write code that is jealous of every byte. Doing this in Java is like trying to swordfight while hog-tied. I like Java well enough, but it really sucks for low-level work. C++ can do a wide range of jobs effectively; just don't blow your own leg off with it.

    • by Anonymous Coward
      There is a certain zen to MI. Much like there is the goto, programming in assembly, and programming in C. At first, young grasshopper you must study. Wax on, wax off. Once you master that then you may move on to crane kick, but not until the proper time.

      I can list off 10,000 reasons why C shouldn't be used for much of anything. I have a master's in software engineering from SEI and I can list off dozens of real-world fuckups that were caused or allowed because of the C language and cost millions of dollars and at times possibly even risked lives. Then you go out into the world and you will continually stumble into situations where you want portable assembly and there is no better than C. Would I write the code that controls the 767 in C? Not if I didn't have to. Would I write a huge GUI with hundreds of dialogs and controls in C? Not unless you were holding a weapon and threatening me. Now a kernel? A low-level library? Something that talks to the metal? I don't know of anything better than C. Ada is nice at times, Forth is wonderful for certain tasks. C is amazing at most of it. You can see the opcodes that get produced as you write it; it's beautiful. Last embedded project I built from scratch, we wrote enough assembly to get a CRT0 working and then enough C to get a JVM working, and the rest was gravy.

      I can wax the same way about gotos. I can point to pieces of code that are absolutely beautiful because they have gotos and couldn't be made as elegant without them. Drivers in the Linux kernel use them from time to time. Mind you, most of this stuff is low level, but that's what it's for. Tool for the job, you know? And it's beautiful when it works.

      In the hands of a newbie it is a little frightening. I think of it like the table saw in shop class. It's the most deadly tool in the room; every shop teacher has personal stories to tell you about kids sawing fingers off and stuff. And without exception there will come a time when that's the tool that does the best job. Sure, you could rip a 2x4 into a 2x2 with a band saw or even a jig saw, but the table saw is the saw intended for that task and it will do it in seconds, perfectly.

      OOP exists because we think it's easier to model programs in the ways that we think, rather than the ways that computers need them; there is nothing you can build with OOP that you couldn't build with functional or imperative programming, only you might find it easier to do with OOP. So what happens when I can describe something with 2 concrete abstractions? You're saying I should use an abstract abstraction as an intermediate? Take a book: it is a physical collection of bound and printed pages, and it is also the collection of words on those pages, and in programming land those are both concrete (the specific words are what make the book). So what if I had a "CollectionOfOrderedWords" object and a "CollectionOfBoundPages" object? My book could be a child of both; specifically, myBook might be the mapping between the words and the specific pages. As such I also want to treat it as both. The interface world requires that I actually treat it differently because I want to look at the words rather than the pages. What if I think that abstraction is easier to think about?

      MI has its problems. It's not the safest thing to use, but it makes some things very easy. Particularly when you're building complex things rather than decomposing complex things, which seems to be the tendency among a lot of OOP programmers. If you're building a "final" object that won't be a component of anything else, MI is awesome.

  • by spongman ( 182339 ) on Tuesday January 07, 2003 @09:20AM (#5031788)
    But I would say that while the C++ community was focusing on templates, the STL, and exceptions... what they were not doing was component-based development.
    I don't know where Scott's been for the last 10 years but COM [microsoft.com] specifically uses multiple inheritance of abstract base classes without data or implementation to specify interfaces for component-based development (and I believe that CORBA does the same).

    I'd guess that win32 C++ programmers make up the largest such subset of all C++ programmers. So which C++ community is he talking about exactly?

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...