Guido van Rossum On Strong vs. Weak Typing 100

Bill Venners writes "In an earlier interview, Java creator James Gosling says, 'There's a folk theorem out there that systems with very loose typing are very easy to build prototypes with. That may be true. But the leap from a prototype built that way to a real industrial strength system is pretty vast.' In this interview, Python creator Guido van Rossum responds with 'That attitude sounds like the classic thing I've always heard from strong-typing proponents. The one thing that troubles me is that all the focus is on the strong typing, as if once your program is type correct, it has no bugs left. Strong typing catches many bugs, but it also makes you focus too much on getting the types right and not enough on getting the rest of the program correct.'"
  • More discussion (Score:5, Informative)

    by (a*2)+(ron) ( 204237 ) on Monday February 10, 2003 @07:04PM (#5275458)

    Check out the recent discussions on lambda [weblogs.com] the ultimate [weblogs.com] (a programming languages weblog).
  • by KDan ( 90353 ) on Monday February 10, 2003 @07:06PM (#5275467) Homepage
    "Strong or Weak Typing doesn't make good programs. Good programmers make good programs."

    Daniel
    • > "Strong or Weak Typing doesn't make good programs. Good programmers make good programs."

      That's probably true, but my wild guess is that the majority of so-called programmers are not good programmers. Knowing that, what tool (programming language) should be provided to programmers so they can write better (not necessarily good) programs? I think that's the question to be asked.
  • It's a great quote; sadly, I'm not enough of a programming expert to come up with an equally good comment. I did once write a parser for a Windows Media ASF stream in Java, and that was an experience I never want to repeat....
    • I did once write a parser for a Windows Media ASF stream in Java, and that was an experience I never want to repeat....



      I don't know ASF streams, but I'm still curious: what did you dislike about Java, and what do you think would have been better in other languages?

      Is Python really all the rage among (Slashdot) geeks? I've tried it recently, being a bit frustrated by Java's long-windedness, but I dismissed Python, too. Sadly, I've forgotten the reasons; I think I didn't like the documentation much, and some things I wanted didn't seem doable. Perhaps I should try it again.

      I had to do a Perl project lately, though, and I know I spent hours chasing bugs that wouldn't have been possible in Java. The annoyance of that was far worse than the little bit of extra typing I have to do in Java. Had more problems with default initialisations than with types, though.

  • by the eric conspiracy ( 20178 ) on Monday February 10, 2003 @07:08PM (#5275484)
    I think that this argument is exactly backwards. Strong typing allows mechanical type checking via compilers and incremental type checkers during the development process. Strong typing takes a lot of the load off a human developer, and puts it on mechanical systems that can be made much more reliable than humans. The important thing is to take maximum advantage of strong typing - use an IDE that does incremental compilation, etc.

    On the other hand, since it is not possible to mechanically check weakly typed code, weak typing places much more load on the programmer to make sure the types are correct than a strongly typed system does. The result is many more bugs, and bugs caught much later in the development process, when they are more expensive to correct.

    • Weakly typed languages don't detect type errors at compile time. check.

      If you actually run the program and try to concat an int and a string, you get an error. If you try to call ob.my_method() and the object doesn't define it, you get an error. Keystrokes are moved from the source to the unit tests. As long as the passed-in object defines the appropriate interface, weakly typed languages don't care whether it actually inherits from or explicitly supports the interface.

      It is a matter of taste, if you like strongly typed languages then by all means stick with them. There are even a range of weakly typed languages. Perl will try as hard as it can to get two types to play together, you can even add an int and a list and it will give you an answer. Python (Guido's baby) will throw an exception because there isn't any remotely intuitive answer to that operation.
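A minimal Python sketch of the behavior described above (Python refusing an unintuitive mix while happily duck-typing; the names are illustrative, not from the thread):

```python
def shout(obj):
    # Duck typing: any object with an upper() method works, with no declared
    # interface or inheritance required.
    return obj.upper() + "!"

print(shout("hello"))        # str defines upper(), so this just works

try:
    1 + "two"                # no remotely intuitive answer, so...
except TypeError as err:
    print("caught:", err)    # ...Python raises instead of guessing

try:
    shout(42)                # int lacks upper(); the error surfaces at the call
except AttributeError as err:
    print("caught:", err)
```

Perl, by contrast, would numify "two" to 0 and return 1 for the addition rather than raise.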
      • keystrokes are moved from the source to the unit tests.

        This also moves type errors from compile time to run time.

        • You don't run unit tests at run time! You run them before you ship.

          Sure, if you like you can give the user the option to run them, but it's not like you just write the tests, never run them, and ship them. (It's certainly a great diagnostic to allow users to run them.)

          The point is not to see errors at "compile time"; that's just defining the win for static typing on an irrelevant point. The point is to see the errors before the system is put into production. Unit testing will catch the type errors and a whole hell of a lot more.
          • You don't run unit tests at run time! You run them before you ship.

            No practical unit test provides 100 percent coverage of all special cases that can conceivably occur in a sufficiently complex system.

          • Unit testing will catch the type errors and a whole hell of a lot more.

            Why should I have to write a unit test suite that implements de facto type checking when I can offload that to the compiler vendor? A compiler vendor has far more resources to bring to the problem, and can often go to the extent of using formal methods to ensure correctness.

            The point is not to see errors at "compile time"; that's just defining the win for static typing on an irrelevant point.

            My experience has been that compile time type errors are far less expensive than run-time type errors.

            • Why should I have to write a unit test suite that implements de facto type checking when I can offload that to the compiler vendor?

              But you still have to write a unit test suite to test the behaviour of your types, and that is a thing you can't offload to the compiler. And if those tests implement the type checking as well, that's a plus :-)

              A compiler vendor has far more resources to bring to the problem, and can often go to the extent of using formal methods to ensure correctness.

              But those methods can ensure syntactic correctness only, not semantic correctness.
      • Typing is an issue of program correctness, which is defined in mathematical terms. The more rigorously you can prove properties of your application statically, the closer you are to `knowing' it's correct. Ambiguity is the enemy of math; there can be none in a proof. Dynamic typing moves in the opposite direction: the more weakly typed a language is, the more ambiguity is added. My personal stance is that program correctness, not expressiveness, is the direction Comp Sci should currently head (see Design by Contract). We can pretty much express anything we want to do now.

        Why correctness? `Knowing' (a term I use loosely) a program is correct statically is the only way to `know' a program is correct at run-time. Testing at run-time will always, always, always test a subset of code. The larger the application is, the larger the set of untested, and therefore unknown code is.

        Of course, I also doubt we can ever know with complete certainty that a program is correct statically. So we will always have testing as well. I just feel that static correctness is sorely lacking in programming languages and should be the direction CS is headed.
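A run-time flavor of the Design by Contract idea mentioned above can be sketched in Python with assertions; the function and its contract are invented for illustration:

```python
def withdraw(balance, amount):
    # Preconditions: the caller must supply a positive amount the balance
    # can cover. Violations fail loudly instead of propagating quietly.
    assert amount > 0, "precondition violated: amount must be positive"
    assert amount <= balance, "precondition violated: cannot overdraw"
    new_balance = balance - amount
    # Postcondition: the balance decreased by exactly the amount withdrawn.
    assert new_balance == balance - amount, "postcondition violated"
    return new_balance

print(withdraw(100, 30))   # 70
```

Unlike a static proof, these checks only fire on the executions you actually exercise, which is exactly the subset-of-code limitation described above.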

        > If you try and call ob.my_method() and the
        > object doesn't define it, you get an error.

        Is it possible to call an incorrect function that will appear to work correctly (won't throw an exception)? Hrm, let me explain...

        class A {
            int index;
            void foo() { index++; }
        }

        class B {
            int index;
            void oof() { index--; }
        }

        Suppose we have an object a of static type A, and dynamic type B. A has a feature foo, and B has a feature oof. Also, A and B do not conform to each other. If I call a.foo(), is it possible that the call to {B}.oof() will be (incorrectly) made instead? I know these cases would be infrequent. I'm really not trying to make a point here, just curious.
        • The question is which approach works better in practice. Is there a real difference between exhaustive unit tests and exhaustive design by contract?

          Computer Science profs lean towards the exhaustive mathematical. Engineers lean towards either (depending on personal preference) but not exhaustively. They just do it until it meets spec.

          I can't find a link to the joke, but it reminds me about the your-country-here engineer and the French engineer. The punch line is the French engineer looking at the working thingy made by the other guy and saying "Yes, but does it work in Theory?"

    • by Jerf ( 17166 ) on Monday February 10, 2003 @08:33PM (#5275967) Journal
      Sorry, but that's spoken like someone parroting the party line, not someone with experience in both.

      Strong typing allows mechanical type checking via compilers and incremental type checkers during the development process.

      OK, but A: it only allows type checking and B: it enforces type checking.

      A: Type consistency is not a guarantee of correctness. It's not even close to a guarantee of correctness. It's not even more than loosely related to a guarantee of correctness. Type 'errors' are only one class of error, and they are detectable in both paradigms, just at different times. Don't skip the focus on unit-test-based testing; the point of unit-based testing is that it tests for any kind of error you can program a test case for, including type errors, wrong output for an input, environment interaction, and all the other kinds of non-type errors that happen in real life. Unit-test-based programming asks: why pay so much for such a weak guarantee of correctness when you can spend that time writing unit tests and test for useful properties?

      Note that XP calls for unit testing in all languages, not just dynamic ones. You need to be writing them anyhow; why not lean on them and keep your program flexible and tested correct, rather than "proven type correct"?

      B: It enforces type consistency. How many times have you wanted to jump out of the type cage? When programming in a dynamically (not really 'weakly') typed language, it encourages an "interface based" style. If you implement an object that has all the necessary methods, and it'll work right, why shouldn't you be able to use it just because it's not the right "type"? Many, many things in Python present a file-like interface, or a string-like interface, or a sequence-like interface, without having to inherit from the official implementations which may be very wrong for their purposes. And many other parts of the language do that too.

      And guess what? Even in large programs, the world does not come to an end. See unit-testing, although even without that it's still usually OK.

      Because the interface style pervades the whole language, you get to easily write programs that can handle certain types of "type" violations without blowing themselves to smithereens like you might expect if you've never tried it before.
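A sketch of that interface-based style in Python (the UppercaseWriter class is invented for illustration):

```python
import io

class UppercaseWriter:
    """Not a file and inherits from no file class; it merely has write()."""
    def __init__(self):
        self.chunks = []

    def write(self, text):
        self.chunks.append(text.upper())

def log(stream, message):
    # Accepts anything presenting a file-like write() method: a real file,
    # an io.StringIO buffer, or the unrelated class above.
    stream.write(message + "\n")

buf = io.StringIO()
log(buf, "hello")                # a standard-library file-like object
log(UppercaseWriter(), "hello")  # works too; no common base class needed
```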

      On the other hand, since it is not possible to mechanically check weak typed code, weak typing places much more load on the programmer to make sure the types are correct than a strongly typed system.

      Two more things:

      One, my experience and the experience of a lot of others says the opposite. You spend an unbelievable amount of time jumping through compiler hoops in a statically typed language, and sometimes the smallest change cascades down your entire static hierarchy. You spend a lot of time worrying about things that really aren't important in the grand scheme of things. Half the time, if you're trying to write with any sort of flexible pattern like Iterator, you end up with a superclass called "Iterable" or "Object" (sound familiar?) or some equivalent, which isn't that far from dynamic typing anyhow.

      The other thing is, again, "mechanically checking" code isn't possible for anything but type correctness. No other goodness properties can be checked that way; in fact Rice's theorem [stanford.edu] proves that no non-trivial semantic property of programs can be decided mechanically. (This implies type correctness is not an interesting property ;-) )

      If type safety could give me something above and beyond type safety, I might be interested. But all it does is "prevent" a class of problems that don't exactly plague most coders anyhow. It's an incredibly steep price to pay for something that doesn't do much for you.

      I've experienced both sides of this. Sure, in theory type checking is great, but in practice the dangers of type errors are incredibly exaggerated. It's just not a problem. It's too late to convince me with theory that has completely failed to play out the way the type-safety advocates said it would.
      • Type consistency is not a guarantee of correctness. It's not even close to a guarantee of correctness.
        Type consistency may not be a guarantee of correctness, but type inconsistency is a guarantee of trouble. :-)
      • Ever tried an ML derivative?
        They are statically typed, yet type-inferred, so you save quite a lot of time, and any change cascades down automatically. Saves a whole lot of time, especially since the type system is flexible enough to do symbolic computing.
      • by shaper ( 88544 ) on Monday February 10, 2003 @11:05PM (#5276854) Homepage

        Unit-test-based programming asks: why pay so much for such a weak guarantee of correctness when you can spend that time writing unit tests and test for useful properties?

        Maybe it's just me, but I've found that many (most?) of the non-trivial systems I write are not easily unit-testable. They are usually part of, or dependent on, large libraries and infrastructure. Most of that infrastructure I did not write, nor do I have access to its source. Much of it is often poorly documented, so one does not have any clear spec upon which to build tests.

        Instead, I tend to rely on the tools to do as much as possible for me, type checking being just one tool in one phase. I expect and plan for unexpected behavior and design for graceful error handling and degradation at every level of code. I know that the developer that wrote that library is at least as lazy as me, probably even more, so I cannot depend on documentation and correct behavior and I definitely do not have the time to write test cases for every interface in every external library that I use.

        Maybe it's my early background in military systems. I just got used to disciplined development, and I don't really have to think too much about type correctness; I just tend to be careful about it out of habit.

        And finally there is the one iron-clad fundamental rule of software development: a defect found earlier is much easier to fix. I prefer to catch as many errors as I can by compile time. Everything after that starts getting much more expensive really fast.


      • > Sorry, but that's spoken like someone parroting the party line, not someone with experience in both.

        OK, I have experience in Ada, BASIC, C, and C++, and I spend far less time debugging stupid errors in the strongly typed language: Ada.

        > One, my experience and the experience of a lot of others says the opposite. You spend an unbelievable amount of time jumping through compiler hoops in a static typed language, and sometimes the smallest change cascades down your entire static hierarchy.

        Were we talking about "strong" typing or "static" typing?

        > The other thing is again, "mechanically checking" code isn't possible for anything but type-correctness. No other goodness properties can be checked that way

        But that's a rather weak reason for not doing whatever checking we can do.

        > If type safety could give me something above and beyond type safety, I might be interested. But all it does is "prevent" a class of problems that don't exactly plague most coders anyhow. It's an incredibly steep price to pay for something that doesn't do much for you.

        Strong typing is nothing more than saying what you mean and meaning what you say. I don't know where the heck the purported steep price comes from, since in my experience the strong type definitions in my programs provide me with a set of abstractions to work with. If I declare some colors, they're always colors, not just facades for numbers. And thus I immediately and consistently think of them as colors rather than as obfuscated numbers.

        You should learn to think of strong typing as a mechanism for providing additional constraints on the behavior of your programs. It baffles me that people think that is a bad thing. I find that using a strongly typed language moves bug discovery forward in the development process and makes me think more clearly about the higher-level algorithms I'm coding.

      • One, my experience and the experience of a lot of others says the opposite. You spend an unbelievable amount of time jumping through compiler hoops in a static typed language, and sometimes the smallest change cascades down your entire static hierarchy. You spend a lot of time worrying about things that really aren't importent in the grand scheme of things. Half the time if you're trying to write with any sort of flexible pattern like Iterator or something you end up with a superclass called "Iterable" or "Object" (sound familiar?) or some equivalent, which isn't that far from dynamic typing anyhow.

        Sorry, but what you describe above is the Java interface, a way of guaranteeing that an object will meet a particular group of method signatures (without requiring any of the work of creating parent-class implementations). This is something that Python lacks in a big way. For instance, all the almost-file-like objects you mention don't have the same group of file-like methods. An interface promises you that all the methods you expect will be there.

        And I really don't understand the argument against strong typing anyway, are you too fucking lazy to use an extra keyword for an argument and return type? You don't like having auto-generated documentation that you can actually use without examining the code to fill in the details?

        The other thing is again, "mechanically checking" code isn't possible for anything but type-correctness. No other goodness properties can be checked that way; in fact Rice's theorem [stanford.edu] proves no non-trivial property of programs can be proven mechanically. (This implies type correctness is not an interesting property ;-) )

        I'm sorry, but this is just wrong. I'm writing code in a weakly typed language with the most advanced IDE possible. If I'm writing inside a function or method definition, that IDE cannot provide me any intelligent auto-completion or method checking of ANY of the arguments passed in. That IDE can provide me no help when using that function or method, because it doesn't know what it returns. This might not mean much on a small API that you wrote yourself, but working with someone else's code on a large project means that you've got to have access to all source, all the time, to serve as your own documentation.

        • "Sorely lacks" may be a bit of an overstatement. The group at Zope has made an Interface [zope.org] module for Python, as a first cut [zope.org] at how interfaces could be implemented in Python. (I think the fact that they could make an interface implementation without changing the core language says a bit about dynamic languages like Python.) In their current implementation [zope.org], interfaces seem best combined with unit tests. A class can be imported even if it doesn't support the interface, but Interface.Verify.verifyClass(ImyInterface, myClass) will return true only for classes that correctly implement the interface.
      • I started in ObjC when all Objects were void* (now I think there is normally more checking). I loved it.

        Then I did 6 years of Java and C++.

        A year ago I started working with my original coworker on a large project in Java. He is still using a not very strongly typed programming style (lots of casting and parameters of type Object). It is extremely hard to understand what is going on. There are sections of code that have strong static typing and they are much easier to understand, because the type is obvious. I don't need to go digging through the code to find out if I have a list of switches, switchFamilies, or references to archivedSwitches; the type can tell me right there what I'm dealing with.

        After almost 10 years of corporate IT programming experience at 6 different companies, I find that strong typing is appropriate for large jobs with higher turnover, and dynamic typing may be more powerful for jobs with a few guru programmers who need job security.

        Joe
      • Indeed, static type checking cannot prove your program correct, but it can prove your program incorrect, without even running a test. If your type system is sufficiently powerful to express your intention, the type checker helps. If it is too weak, it will get in your way. The latter happens all the time in C, where the fix usually involves void pointers, and it happens in Java, where the fix is spelt Object and a type cast.

        Where the type system is expressive enough, I never found a need to cast. Neither in Haskell, which has a polymorphic type system, nor in C++, where the template mechanism creates a sort of meta-language with weak typing used to generate a strongly typed program.

        When programming in a dynamically (not really 'weakly') typed language, it encourages an "interface based" style.

        As does the use of templated classes and functions in C++, and as does Haskell through its type classes.

        in fact Rice's theorem proves no non-trivial property of programs can be proven mechanically.

        Indeed. You shouldn't bother writing unit tests; after all, they are a doomed attempt at mechanically proving a program correct. Even worse, the attempt at formally specifying what is correct is already doomed.

        So both tools, tests and typing, are imperfect. Choose what helps the most or combine them. I see great value in a clear error message telling me the bar was not supposed to go into a container of foo, as opposed to an error telling me a bar was missing some method I meant to invoke on a foo. The alternative is to write a test case, that checks if something that goes into my container really is a foo. Which is actually the same as worrying about typing and declaring the type.

      • in fact Rice's theorem proves no non-trivial property of programs can be proven mechanically.

        I followed that link to Rice's theorem, and it states that no non-trivial property of programs can be checked mechanically (not "proved" -- there are plenty of programs with non-trivial properties that are provable). In particular, relying on unit testing to ensure any aspect of correctness beyond doubt is doomed.

        On the other hand, if your encoding of Turing machines is a type-safe language, then type safety is a trivial property -- so you can be sure your program has it. Surely knowing something is better than knowing nothing.

        How many times have you wanted to jump out of the type cage?

        In Java, all the f*cking time. In ML, just about never.

        JV

    • [...] weak typing places much more load on the programmer [...] The result is many more bugs

      Let me guess: This is what your professor told you?

      Try it out sometime, you might find that neither is true in practice.
    • I guess I would be more inclined to believe this argument if people used:

      int a,b,c;
      float d,e,f;
      c = add_int(a,b);
      f = add_float(d,e);

      instead of

      int a,b,c;
      float d,e,f;
      c = a + b;
      f = d + e;

      I am not a fan of strong typing because it keeps me from writing to the interface an object understands. I can have two objects in two hierarchies that understand a common set of messages. I should be able to use those objects in the same code without worrying about type. Type is a poor substitute for the actual interface an object understands.

      On the second point, strict typing generally adds code to a project that need not be there, and adding lines of code causes errors. Endless type-specific collections, as opposed to one generic collection, add code.

      more code, less understanding, more errors

      You could use templates (C++), but then you really have to ask why you are trying to get out of your strong typing.
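The parent's add_int/add_float point, restated in Python: one function written to the `+` interface covers every type that understands it (a sketch with illustrative names):

```python
def add(a, b):
    # Written to the interface: anything defining + works, with no
    # type-specific add_int/add_float variants needed.
    return a + b

print(add(1, 2))          # 3
print(add(1.5, 2.5))      # 4.0
print(add("foo", "bar"))  # foobar
print(add([1], [2]))      # [1, 2]
```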

      • Have a look at languages which have type inference and type classes. Type inference means that most of the time, you do not have to declare types for your variables, the compiler will figure it out. For example using the + operator means that both arguments must be some kind of number. You get an error when the compiler infers a clash, such as using + on a variable in one place (which makes it a number) and then doing concatenation on it in another (which means it has to be a list). You can also add type declarations to make your intention clear. Declaring the type of variables is extra information which should not be necessary, but because programmers make mistakes, the redundant information can help to track down where the mistake occurs. It's a similar principle to putting assertions in the code, which are redundant properties checked at run time.

        (This point is so important I feel like repeating it. Adding lines of code does not cause errors - not if those lines are extra checking to catch errors that might have crept in elsewhere.)

        Your add_int() and add_float() example would be dealt with by type classes. For example in Haskell the (+) function has the type

        Num t => t -> t -> t

        which roughly means that it takes two arguments of type t and gives back a result of type t. But t must be an instance of 'Num'.

        In object-oriented languages like C++ or Java you would do the same with a base class of 'things that can be added'. Not with this trivial example, of course, since you have the builtin types and + operator from C, but for things like 'Printable' or 'Serializable'.

        Or with overloading you can define a foo() function taking an int and another taking a float or any other object you want. You'll be told at compile time if a suitable function can't be found for the types you are using.

        You make a good point about writing to a particular interface, and looking at the set of messages an object understands. You can get this by having a common abstract base class or interface. I don't think the example of 'two objects in two hierarchies that understand a common set of messages' is particularly likely to happen in practice, because in a statically typed OO language these objects would be written to have a common base class. However, there might be a type system that keeps track of the messages that can be received by an object, and checks that at compile time. ('You are trying to call method foo() on variable a, but this variable might be of type X, which does not have a foo() method.')
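In Python, the nearest analogue to such a common abstract base class is the standard abc module, which at least fails fast when a subclass omits a required method (a sketch; the class names are invented):

```python
from abc import ABC, abstractmethod

class Addable(ABC):
    # The "interface": every concrete Addable must provide combine().
    @abstractmethod
    def combine(self, other):
        ...

class Meters(Addable):
    def __init__(self, n):
        self.n = n

    def combine(self, other):
        return Meters(self.n + other.n)

class Broken(Addable):
    pass  # forgot to implement combine()

print(Meters(2).combine(Meters(3)).n)  # 5

try:
    Broken()  # refused at instantiation time, not at the first missing call
except TypeError as err:
    print("caught:", err)
```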
        • I don't think the example of 'two objects in two hierarchies that understand a common set of messages' is particularly likely to happen in practice

          My impression is that such things are fairly common in more dynamic OO languages, such as Smalltalk. The various collection classes are probably good candidates for that strategy, since they share little or no code but benefit from having a common protocol.

          because in a statically typed OO language these objects would be written to have a common base class.

          Right, but that sort of thing doesn't make as much sense in a language like Smalltalk or Lisp, precisely because they aren't statically typed. You probably aren't going to be able to verify at compile time that the children actually implement the interface defined by the base class, so unless the base contributes some code there isn't much benefit to having it. Inheritance in those languages is basically synonymous with implementation inheritance, and separate interface inheritance may be non-existent.
          • Yes, I agree, in a dynamically typed language there is less likely to be a common base class. My point was that _if_ you are programming in a statically typed OO language then the problem of two objects with the same interface but not sharing a base class is unlikely to happen.
            • My point was that _if_ you are programming in a statically typed OO language then the problem of two objects with the same interface but not sharing a base class is unlikely to happen.

              Mostly, I think, because the language prevents you from doing it easily. You're no more likely to have a common base just in virtue of the static typing, but since that's the only practical way to do things in a static language, you don't have much choice. I don't have a problem with that (it's not a huge burden, and it's often worth it), but it's not a very convincing reason to try static typing if you're used to the alternative (just as informal protocols probably aren't a very convincing reason to use pure dynamic typing if you're used to static languages).
  • -1 Flamebait (Score:5, Insightful)

    by clem.dickey ( 102292 ) on Monday February 10, 2003 @07:26PM (#5275595)
    So what other imperfect things should we do away with? Roads with median strips? Safety locks on guns? Metal detectors?

    I would argue that strict typing reduces one class of bugs so that I can concentrate on other, less tractable classes. Who gave Guido the idea that strong-typing programmers are satisfied with clean compiles? I will be satisfied with clean compiles when the compiler can detect *all* bugs. Until then, we (some of us, at least) need to work on improving languages and language tools toward that goal.
    • Re:-1 Flamebait (Score:5, Insightful)

      by DannyBoy ( 12682 ) on Monday February 10, 2003 @09:45PM (#5276397)
      I completely agree with you. Any software developer who doesn't have their head up their ass knows that a clean compile only means that you've made it past the simplest bugs. The good ones have unit and regression tests, the others just run a few testcases by hand. Who ships as soon as it cleanly compiles???

      The main thing that scares me about languages with run-time type checking is that the types can change at run time. If you have unit tests where a variable is an integral/scalar value, and then somehow at runtime it has a string value (because someone called your function incorrectly), you're screwed because you didn't test that. And if your response is that you should have had a unit test for that, then you'd have to have a unit test for every type (incl. user-defined types) in the language. This isn't really a problem if it is a small program or you are scripting something, because you have more control over the inputs, but in a large system, especially with a large development team, compile-time type checking has incredible value!!!

      dan
      • If you have unit tests where a variable is an integral/scalar value, and then somehow at runtime it has a string value (because someone called your function incorrectly), you're screwed because you didn't test that.

        And whose fault is it that you didn't test that?

        The predicate was invented for a reason.
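        In Python terms, "the predicate" is a guard like isinstance at the function boundary. A minimal sketch (the function name here is made up for illustration):

```python
def scale(value, factor):
    # Reject the misuse described above: a string arriving where a number
    # was expected ("2.0" * 3 would otherwise silently yield "2.02.02.0").
    if not isinstance(value, (int, float)):
        raise TypeError(f"scale() expected a number, got {type(value).__name__}")
    return value * factor

print(scale(2.0, 3))        # 6.0
try:
    scale("2.0", 3)         # caller error: rejected loudly, not silently
except TypeError as exc:
    print("rejected:", exc)
```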
          • If you have unit tests where a variable is an integral/scalar value, and then somehow at runtime it has a string value (because someone called your function incorrectly), you're screwed because you didn't test that.

          And whose fault is it that you didn't test that?


          I'm an XP practitioner, and a huge Python fan (and contributor). Initially, I agreed with the first comment here. But then, I did a little thought experiment. Once you do find a bug in big code, you HAVE to try and reproduce it. Yes, a unit test that fails to catch something will eventually let an actual bug through, but that simply means your unit testing wasn't coded well; it doesn't have much to do with whether you have strong typing or not. Once a bug happens, you have to track it down regardless of what language you are in.
          And note that Python isn't weakly typed, it's strongly typed: when you need to know the type of a variable, you call type(var_name) and you know exactly what type it is. So during the bug-hunting process you can get the exact same information regardless of whether types attach to values (e.g. Python, 'implicitly') or to variables (e.g. Java, 'explicitly').

          And if you argue that Java or some other explicitly typed language can make the bug-hunting process faster, I disagree. Again, Java or Python will show you the exact line where execution failed. Then the debugging process begins at that line regardless of language.

          • And note that Python isn't weakly typed, it's strongly typed: when you need to know the type of a variable, you call type(var_name) and you know exactly what type it is. So during the bug-hunting process you can get the exact same information regardless of whether types attach to values (e.g. Python, 'implicitly') or to variables (e.g. Java, 'explicitly').

            That's what I meant by "predicates were invented for a reason." The only weakly typed language I am familiar with that doesn't provide a mechanism for checking a variable's type (to ensure you don't misuse it) is FORTH.
      • by g1zmo ( 315166 )
        Who ships as soon as it cleanly compiles???


        My boss.
      • Who ships as soon as it cleanly compiles???

        Bah, clean compiles are for wimps. If it builds in debug configuration on one platform, that's enough to ship the release build on fifteen others, darnit.

    • Show me a C++ program - and I mean a full application, GPL'd and all - that has a clean compile. You'll see twisted typecasts everywhere, and each typecast is a signal that the type system was in conflict with the needs of the programmer. Type systems are so restrictive as to cause lots of trouble in all practical cases, for limited gain. The fact is, there are just as many non-typesafe "correct" programs as there are typesafe "incorrect" programs. Type safety and correctness are completely orthogonal.
      • You'll see twisted typecasts everywhere, and each typecast is a signal that the type system was in conflict with the needs of the programmer.

        That's not my experience. Those typecasts are effectively labels on code that was hard to get correct.

        Fixing type problems is not "satisfying the compiler", it's fixing potential errors. Every time you try to break the type system, you're doing something potentially dubious. It should require effort to do that, because it makes you assert that you're doing said potentially dubious thing consciously.

    • I'm not sure I'd agree with your analogies.

      Weak typing can be incredibly useful for those cases where you'd really like to write some routines or data structures that can ignore type.

      I find myself getting around typing issues in languages like C by using pointers, which I think is a much worse kettle of fish than weak typing, especially when you throw programmers who handle warnings about undefined pointer types by casting the pointer to J. Random Type into the mix.

      If you look at it one way, strong and weak typing are different tools for different jobs, and you should use whichever one is appropriate for the task at hand.

      If you look at it another way, who cares? The problems raised in the strong vs. weak typing argument are better solved with taking the time to design the damn program correctly in the first place, before you start cutting code, anyway.
      • The type system in C is not particularly expressive. In C++ you would be able to declare a linked list of pointers to T, and have this checked, and avoid casting things to (void *) and back again.

        In a language with type classes you could write a function to work on any kind of object and again specialize it to the particular types you are using, and have this checked at compile time. In most cases without having to write any explicit type declarations yourself.
  • Whenever I type strongly my wife complains about the noise and asks that I type more quietly.

    • I'll never give up my Model M Keyboard [yahoo.com]. Feel is everything.

    • wife complains about the noise

      I have the exact opposite problem: when my wife gets annoyed at the computer[1] she starts typing too strongly and I have to ask her to type more quietly[2], since the application in question[3] shows no improvement from strong typing.

      [1]: because it doesn't do what she expects; she expects the wrong thing
      [2]: to prevent her from damaging the keyboard
      [3]: msword

  • Say it, Guido! (Score:2, Insightful)

    by The Pim ( 140414 )
    It's a shame that this view is dismissed out-of-hand by the industry. Of course, this is an industry so conservative that a primitive language like Java is considered "state of the art". I sometimes wonder which will happen first: the industry wakes up to the possible productivity improvements of runtime-typed languages, or type-inferencing languages become convenient and accessible enough to take over. Either one would be a great day.

    Those who say "why would I ever choose not to have certain errors detected at compile time" are missing the big picture. The errors that can be caught by static typing are only the beginning of the errors that can occur in a software system. And they are the ones that will be caught easily in testing. If the uncertainties of runtime typing encourage you to write more tests, so much the better! And have you noticed how much easier it is to whip up tests, and to add the extra code to deal with corner cases, in such languages? In my experience, strongly-typed code tends to be more susceptible to unexpected inputs, just because it's such a pain to handle and test them.

    Further, runtime-typed languages are usually better at slaying the real dragons of software development: the complex errors that are beyond the scope of typing. Guido said a wise thing:

    You can also easily write a small piece of code in Python and see that it works.
    As a maintainer, I would much rather have code that I can see is logically correct with my eyes, than code that a compiler can tell me type checks. High-level languages are much better at expressing complex ideas in clear code; in C and Java, the idea gets lost in the mechanical details.

    My experience has convinced me that a strong team can produce more robust code in a runtime-typed language than a similar team using a traditional strongly-typed language, given the same amount of time. The first team also has more leeway in trading robustness for speed of completion, when that counts.

    • It's a shame that this view is dismissed out-of-hand by the industry.

      There's a reason why it's dismissed out-of-hand. That's because the industry has to work with real computers, not virtual machines. The fundamental assumption of real world systems is that data is typed. The further you get away from this assumption, the more overhead you need to make things work. Does that data fit in 32 bits or 48 bits? The CPU needs to know.

      This may sound simplistic, but work your way up the scheme of things. At a very low level, houses are made of wood, so at a very high level you're better off with strong fire codes. At a very low level, data is typed, so at a very high level you're better off with strong types.

      Of course, this is an industry so conservative that a primitive language like Java is considered "state of the art".

      Since when is Java a "primitive" language? Or is this just your bias that all strongly typed languages are primitive? p.s. I am not a Java fan.

      And in my experience, the industry most certainly does not consider Java to be state-of-the-art. Those businesses that did were the ones that led the way in the dot.bomb crash. Java serves an important role, and is an important language in some domains, but in the meantime the industry is still waiting for Java to fulfill its promised hype of "write once run anywhere", a Java office suite, and a Java OS that can actually work outside of a think tank.

      In my experience, strongly-typed code tends to be more susceptible to unexpected inputs, just because it's such a pain to handle and test them.

      That's why you only accept string input, validate it as a string, and only then convert it to your strongly typed data. I thought everyone knew this. scanf() has been deprecated for the last twenty years!

      As a maintainer, I would much rather have code that I can see is logically correct with my eyes, than code that a compiler can tell me type checks.

      Why do you keep assuming that us "strong" types think that type checking is the end of the story? Throw away your stereotypes and look at the world the way it really is. We don't toss our software over the wall just because it compiles. We all want code that we can see is logically correct without a compiler. That's why we don't bring compilers into our code reviews. But compilers are also useful tools to catch those dumb mistakes that we all make.

      My experience has convinced me that a strong team can produce more robust code in a runtime-typed language

      Use the right tool for the job. I write software for a realtime system. Is a runtime typed language appropriate? Hah! On the other hand, Python and other high level scripting languages are good tasks like complex text processing.
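      The accept-as-string, validate-as-string, then convert pattern from this comment, sketched in Python for concreteness (the function is hypothetical):

```python
def parse_port(raw):
    # Accept the input as a string and validate it as a string...
    text = raw.strip()
    if not text.isdigit():
        raise ValueError(f"not a number: {text!r}")
    # ...and only then convert it to the strongly typed value we want.
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port(" 8080 "))   # 8080
```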
      • Re:Say it, Guido! (Score:2, Insightful)

        by The Pim ( 140414 )
        There's a reason why it's dismissed out-of-hand. That's because the industry has to work with real computers, not virtual machines.

        That's a bizarre thing to say. Do you think that Python code doesn't eventually execute real machine instructions on real ints, floats, and pointers? What evidence do you have that the programmer's view of "data" should match the hardware's view of data? For certain performance-critical or low-level code this is necessary, of course. But I'd much rather have a higher-level view of the world. It's simply more productive.

        Since when is Java a "primitive" language?

        Will you accept that C is a primitive language? What does Java have that C doesn't? Garbage collection, which is old hat (what languages in use today besides C and C++ don't have garbage collection?). A simplistic object system to which it does not even adhere faithfully (int et al). It doesn't have pointers, but it does have unsafe casts, so it's not type-safe. If you add generics and boxing, I'll probably take Java out of the primitive category.

        Or is this just your bias that all strongly typed languages are primitive?

        No!! Haskell, for example, is quite advanced.

        p.s. I am not a Java fan.

        I am neither a Java fan nor a Python fan.

        That's why you only accept string input, and validate as a string, and only then convert it to your strong typed data.

        I'm glad you do. But it will take you longer to process the text than it would in Python. Is there any chance that you would do more careful validation, or at least emit more meaningful error messages, if you were programming in Python? This is a perfect example of how "scripting" features may lead to greater robustness.

        We all want code that we can see is logically correct without a compiler.

        I still contend that it is much harder to produce such "obviously correct" code in a low-level language. You lose concision, mechanical details get in the way, and related pieces are in different parts of the code. All of these things make reading harder.

        But compilers are also useful tools to catch those dumb mistakes that we all make.

        I really don't deny that. I just don't think the sacrifice in productivity and expressivity are worth it, just to catch these errors. Have you tried a language like Haskell? One of the wonderful things about it is it catches type errors without imposing a burden on the programmer.

        PS. Thanks for replying thoughtfully to an admittedly trollish message.

      • Since when is Java a "primitive" language?

        Reading between the lines on The Pim's reply, and adding my own gloss, I agree with him. Java is a primitive language.

        A "primitive" language is one which relies on one dominant "paradigm" (for lack of a better word). Java (currently) has objects and basically nothing else.

        A "non-primitive" language has more than one paradigm available. C++ has objects plus parametric polymorphism (templates), behaviour cancellation (via private inheritance), complex overloading, a source preprocessor and so on. Haskell (which The Pim mentions) has objects (with separation of interface inheritance and implementation inheritance), parametric polymorphism, algebraic data types, first class functions (and hence closures and continuations), and so on.

        For large programs, I find this important. By using a multi-paradigm language, I am not locked into one solution space.

  • by dozer ( 30790 ) on Monday February 10, 2003 @09:22PM (#5276281)
    Guido: "Strong typing catches many bugs, but it also makes you focus too much on getting the types right and not enough on getting the rest of the program correct."

    Really? Really??? This blanket statement certainly doesn't describe anybody I've worked with. I wonder what information he bases it on...

    In a production environment, I've found that writing strongly typed programs always saves time in the long run. It doesn't take much more time and, if you occasionally make a silly mistake (like using == instead of eq in Perl), it can save you hours of aggravation and headache.

    For quick one-offs, of course, loosely-typed is always the way to go.
    • Perl's use of == vs eq is just poor design and has nothing to do with dynamic or static typing.
      • Perl's use of == vs eq is just poor design and has nothing to do with dynamic or static typing.

        It has to do with strong vs. weak typing. Using a string comparison operator (similar to perl's eq) on two ints in a strongly typed system would result in a compile error. Using it on two scalar variables in a weakly typed system may or may not result in a runtime error, depending on what kind of user input was read into them, for instance.
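        For contrast, the strongly typed side of that example in Python (a minimal sketch): comparing a string to an int just yields False rather than coercing, and any conversion has to be explicit.

```python
# Perl's == numifies both operands; Python never coerces across types.
print("2" == 2)        # False: a str is never silently treated as a number
print(int("2") == 2)   # True: the conversion is explicit
```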
  • Language creator defends his language over other languages!!!! Whoa! Total mind blower
  • Dynamic typing might well be superior in practice, but it could never work in theory!
  • I notice types most when I am calling some sort of function, and when I am looking at the body of a function which has parameters.

    In a language where parameters have specified types, I can look at the signature for a function I may want to call and see at a glance what it is expecting me to pass. When I am looking in the body of an unfamiliar function at variables which were defined as its parameters, I can see at a glance what type of things those parameters represent.

    I find strongly typed languages make it much easier to provide this information I find helpful. I find these things sorely missing in real world use of languages like JavaScript, Python and Perl. That information and more can be provided through some sort of of out-of-band documentation mechanism, but I personally like having it right there as part of the language.

    Larry

  • by Phouk ( 118940 ) on Monday February 10, 2003 @10:06PM (#5276521)
    Here are some totally unscientific definitions, use at your own risk:

    Static typing: Both variables and objects have types. Type checking happens both at compile time and run time.

    Dynamic typing: Variables don't have types, but objects do. Type checking happens at run time.

    Strong typing: Strict and effective type checks; a string is a string and not a number. Often confused with static typing.

    Weak typing: Absent or ineffective type checks. E.g.: everything is a string, or everything is a pointer. Thus, a string could be used as a number or the other way round. Often confused with dynamic typing.

    Python, for example, has strong but dynamic typing.

    BTW, if you haven't seriously tried a dynamically typed language yet, maybe you should - they are simply much more fun, IMHO.
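    A quick demonstration of "strong but dynamic" in Python, following the definitions above:

```python
x = 42
print(type(x).__name__)    # int - the object has a type...
x = "forty-two"
print(type(x).__name__)    # str - ...but the variable doesn't, so rebinding is fine

# Strong typing: the objects themselves are strictly typed, so mixing
# them fails loudly at run time instead of being silently coerced.
try:
    "1" + 1
except TypeError as exc:
    print("TypeError:", exc)
```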
    • Dynamic typing: Variables don't have types, but objects do.

      I must be one of those dinosaurs, since that doesn't even parse in my experience. In an OO language, all objects are variables, and in a "purist" OO language, all variables are objects. Or at least, that's how I treat them.

      So what's the difference?
      • How about:

        Dynamic typing: object identifiers don't have types, but object instances do.

        x = new Circle();
        ...
        x = new Donkey();

        The object identifier, x, doesn't have an inherent type. It can refer to any object. Some C/C++ programmers think of it kind of like a void* -- it can refer to anything.

        In this case a Circle object is created and assigned to it. Later on, the same object identifier, x, is used to refer to a new Donkey object.
      • by AJWM ( 19027 ) on Tuesday February 11, 2003 @03:36AM (#5277785) Homepage
        Think of it this way: the variable is just a (named) bucket that can hold stuff, objects are the stuff that sometimes goes in the buckets (and sometimes is just pointed to by something in a bucket, etc.)

        If you put a Foo object in bucket 'bar', it will still be a Foo object when you take it out, so (in a strongly typed language) you'd better be expecting a Foo. But when you're done with the Foo, you can put a Baz object in that bucket 'bar'.

        The bucket doesn't have a type, the object in the bucket does.
    • by Anonymous Coward
      Actually, I would not say that in purely statically typed languages objects have types, or that type checking happens at run time. Languages like C++ are not purely statically typed (anymore...).

      Strong/weak and Static/Dynamic are orthogonal axes.

      Strong/Weak is how rigorously typing is enforced. Static/Dynamic is when it happens.

      Strong Dynamic typing: Scheme, Lisp*

      Weak Dynamic typing: Perl5 (no, really, test it, it's scary - worst of both worlds!)

      Strong Static typing: ML family languages

      Weak Static typing: C

      "Typeless" or operator-typed languages: Assembly, Forth. In these languages, looking at them positively, types are associated with operators (eg. (asm) mov.l, mov.w, mov.b, (forth) F+, +), or, more negatively, types are irrelevant.

      * Note that Common Lisp has optional Weak and Strong Static typing.
      • Weak Dynamic typing: Perl5 (no, really, test it, it's scary - worst of both worlds!)

        That's why the first thing every Perl programmer should type (ok, after the hash-bang line) is "use strict; use warnings;". (Or s/use warnings;/something else that enables warnings/). The Perl interpreter will happily process your mistakes without blinking if you don't take precautionary measures. For example:
        # use strict;
        # use warnings;

        $foo = "bar";
        $foo += 3;
        print $foo, "\n";
        If uncommented, the "use strict;" line will cause the interpreter to complain about $foo not being associated with a package (guards against typos). The "use warnings" line will cause the interpreter to print a warning message when it executes $foo += 3 ("Argument "bar" isn't numeric in addition (+) at..."). With both commented out, the interpreter happily executes the code and prints (on my system) "3". I agree that this behavior is horrendous. So remember kids, "use strict" and enable warnings!
    • No language does this that I know of, but here's how I think type systems should work.

      It's pretty much a stronger form of SML-style type-inference.

      Variables don't have types, values do, as in dynamic typing. But, variables can have type constraints.

      You can optionally specify type constraints for variables and parameters.

      From there the compiler/interpreter can go wild with type-inference trying to determine as much as it can about the structure of your program. As the type constraints are optional you can end up with inferred types for variables that are complex unions and disjunctions of types.

      Code that looks like:
      (defun foo (x) 'hello)
      could be assigned the type foo : unknown -> symbol.

      The code can then be weakly checked for inconsistencies between inferred types.

      You can write a fully dynamic program like this by specifying no types. You can write a fully static program by specifying all types. Or, you can use a loose mix of the two that lets you do weak compile-time type checking where you want it, and avoid the hassle where you don't.

      Anyway, I'm just rambling here. I'll likely try to start playing with these ideas in Lisp sometime soon, as I've been meaning to for several years now.

      Justin Dubs
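      Python's later optional annotations ended up close to this idea; a hedged sketch of the mix (the checking would come from an external mypy-style tool, not the interpreter itself, and the function names are made up):

```python
def greet(name: str) -> str:   # constrained: a static checker can verify callers
    return "hello, " + name

def anything(x):               # unconstrained: fully dynamic, as before
    return x

print(greet("world"))          # hello, world
print(anything(42))            # 42
# A mypy-style checker would flag greet(3) before the program runs,
# while unannotated code like anything() stays unchecked.
```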
      • I would recommend a look at Soft Typing (often in the context of scheme) and also Dylan's type system.

        If you give up the optional typing, foo looks very much like a polymorphic function. Haskell's type system would type it as foo:: Quotable a => a -> Symbol. Perhaps someone with more up-to-date understanding of types can compare F-Bounded Quantification to Haskell's Type Classes.

    • here are some totally unscientific definitions, do not use:

      Strong typing: Original IBM PC keyboard, requires effort, but very satisfying for coding, data entry, letter writing, or any other purpose requiring text. Your hands will get tired.

      Dynamic typing: The old Apple adjustable keyboard or the IBM Butterfly laptop. Breaks easily, but may fit your hands better.

      Weak typing: The Atari 400 membrane keyboard. Often too wimpy to handle adult hands.

      Static typing: Keyboard has loose wiring and gives you an electric shock. Ouch!

  • For strong typing I recommend an old IBM-PS2 clicky keyboard. Those who are more inclined toward weak typing can stick with the Microsoft Natural keyboard.

  • Subjective (Score:5, Interesting)

    by Tablizer ( 95088 ) on Monday February 10, 2003 @10:35PM (#5276712) Journal
    As somebody who has seen many forum-fights over this issue, I think it is probably subjective. There are complex tradeoffs, and the final score is probably greatly influenced by personal factors. What trips up person A may not trip up person B nearly as much.

    I personally like "dynamic" typing. It is easier to read, and it is easier to spot errors that may be missed if there is a lot of formal typing clutter IMO.

    However, I think it might be possible to get closer to the best of both worlds by having a "lint"-like utility that points out suspicious usage. For example, it might flag this usage:

    x = "foo";
    doSomething(x);
    y = x + 3; // suspicious line

    If you really want it to do that, then you could tell the utility to ignore that particular line to not distract future inspections. (Assume the above language uses a different operator for string concatenation.)
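    A toy version of that lint utility can be built on Python's ast module; a sketch (the heuristic here is deliberately naive, and the flagged source is the example above):

```python
import ast

SOURCE = '''x = "foo"
doSomething(x)
y = x + 3
'''

class SuspiciousAdd(ast.NodeVisitor):
    # Toy lint pass in the spirit of the comment above: remember names bound
    # to string literals, then flag additions mixing such a name with a number.
    def __init__(self):
        self.string_vars = set()
        self.warnings = []

    def visit_Assign(self, node):
        if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    self.string_vars.add(target.id)
        self.generic_visit(node)

    def visit_BinOp(self, node):
        if isinstance(node.op, ast.Add):
            left_is_str = isinstance(node.left, ast.Name) and node.left.id in self.string_vars
            right_is_num = isinstance(node.right, ast.Constant) and isinstance(node.right.value, (int, float))
            if left_is_str and right_is_num:
                self.warnings.append(node.lineno)
        self.generic_visit(node)

checker = SuspiciousAdd()
checker.visit(ast.parse(SOURCE))
for line in checker.warnings:
    print(f"line {line}: suspicious string + number")   # flags line 3
```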
  • by Tom7 ( 102298 ) on Monday February 10, 2003 @11:58PM (#5277083) Homepage Journal
    Strong typing catches many bugs, but it also makes you focus too much on getting the types right and not enough on getting the rest of the program correct.

    (Does anyone else find it a little scary that Guido confuses "strong" and "static" typing?)

    There's not much substance in this article to actually refute, but I would like to share my experience on this. I have had a lot of experience with static and dynamic, strong and weakly-typed languages, though not much with Python.

    I'm a fan of statically-typed functional languages, especially SML and O'Caml. I agree that static typing catches many bugs; ones that would not be caught at compile-time in a dynamic language. However, in my experience, spending time getting the types right is not a distraction but actually a guide in the design of the program. Static typing encourages you to think through the structure of your program up front. Even if I considered all of that time (which amounts to very little once you become good at the languages) a burden, I think static typing would still be worth it. The reason is that compile-time errors are much, much easier to track down and fix than ones that occur only dynamically (or only once you've shipped your program!).

    By the way, "strong" typing does not mean writing down a lot of types. (ML and Haskell have type-inference systems where you end up writing less than you would in C or Java, and maybe even less than in Python!) By the time you become an expert in a language like ML, you are hardly encountering type errors (except when you make a typo or actual mistake), and hardly writing down anything having to do with types -- the best of both worlds!
  • It's interesting that very few comments/analysis/quotes highlight the fact that the language molds your thinking, making one type of error more or less likely.

    If you find this strange, I would submit that speaking multiple human languages influences the way bilingual people think, so programming would draw on the same phenomenon.

    It's the whole "I got a hammer and a nail and a saw and a drill thing..." You can't talk about bas-relief... 'cuz you don't have a chisel. Or if you do, it's in your other toolbox...

    Strongly-typed languages may be more effective in preventing type errors by making the programmer think about the type of variables... than by the compiler catching any... And there is also the question of code style to consider... a weakly & dynamically typed language may have just as few bugs if it's coded with strong code guidelines... Of course, then we may just be getting back to the "good programmer makes good code" syndrome
  • Best of both worlds with pyLint? [dotfunk.com] It claims to do type-checking based on the source, but it seems to be in an early state of development.

    Also (though this doesn't check types), pyChecker [sourceforge.net] can track down a few common errors as well, and seems to be more mature than pyLint.

  • The article says that qmail and Mailman are written in Python. While I agree for Mailman, I just downloaded the qmail source (very small!): no .py file.
  • It's merely dynamically typed. E.g. you can't add strings and numbers; Python won't automatically convert one to the other for you.

    So I guess this thread can be deleted.
    • I was about to post something along those lines when I noted your post. I wonder if, perhaps, there is some miscommunication and assumption going on here? What really is the difference between strong and weak typing?
      • What really is the difference between strong and weak typing?

        Strong typing means that typing is enforced: you can't apply operations to instances of types for which those operations are undefined. Weak typing is a lack of strong typing. It doesn't necessarily mean that there is no type checking at all, just that it's incomplete. C compilers, for example, generally do a fair amount of type checking, but it easy to bypass it using, e.g., void * and casts.
  • Everyone knows documentation is crucial for code readability and maintainability. Strong typing, aside from letting the compiler catch countless developer goof-ups, is an invaluable tool for good documentation.

    Documentation is not just overarching system descriptions or code comments. In my opinion, another _crucial_ element of documentation is the code itself. But code can only be effective as documentation when it is thoughtfully designed. I find that in a well designed system, the method signature says a vast majority of what you need to know about it. The name of the method, when considered with the name of the class it is in, should give you a very good idea of what this method does. The return type, parameters, and declared exceptions should further clarify what this method does and how it does it. Of course you'll need code comments to clarify the finer points, but if you need explicit documentation to describe _what the method does_, you should be more thoughtful about the design.

    That being said, strong types are crucial. They allow me to specify _exactly_ what I want you to pass to my method, and _exactly_ what I will return. This "forced contract" is what allows huge systems to evolve. In a weakly typed system, you can document this same behavior until your face turns blue, but the compiler won't warn you when your documentation is out of date.
  • I program in Python quite a bit and love it. The economy of the syntax is wonderful, so I seem to be able to do much more with much less typing. Unfortunately, when the program starts to get big and I have to refactor something such as adding/removing a method, adding/removing arguments to a method, or changing the interpretation (type) of some of the arguments, I get very paranoid and wonder if I really did catch and fix all the calls of the method. Yes, a recursive grep will find these, but did I skip one? Did I misspell one of the variables? I can't tell you how many times I've gone blind looking at Python code for a bug that is caused by a misspelled variable.

    As a result I usually limit my Python programs to something like 1K lines (I have one that is 2K, oh well). If the program is larger than that I use Java (for static type checking). If the program is less than 2-3 lines I use Perl. If I'm writing something performance heavy (server daemon) I may write it in C/C++.

  • Nobody has noted this yet, so I guess it's up to me.

    Most large systems (and I mean large) provide some kind of scripting capability, and that scripting system is often dynamically typed. Even if the base system is written in a statically typed language, it will, in general, provide a dynamically typed interface at some level, and some of the system may even be written in that dynamically typed scripting system.

    Using this approach, you get the benefits of both. The underlying system, where correctness and performance are important, get written in a statically typed language. The upper levels, where performance is not so critical (because you're effectively just gluing together base components) and flexibility is important, get written in a dynamically typed language.

  • A substantial fraction of bugs in supposedly working programs arise where something in one place is inconsistent with something in another place. (Local trouble tends to be caught during original development.) Non-local inconsistencies are also easy to introduce when a program is being modified.

    Static checking that finds such problems is of modest use during development, substantial use during integration, and extremely valuable during modification. The bigger the program, the more valuable it is.

  • I think an advantage of static typing over dynamic typing is that it yields more efficient programs. When the type of a variable is known at compile time, the correct functions to use on it can be hard-wired into the binary, saving time (and probably memory) when the program is run. On the other hand, dynamic typing is more flexible and often saves time on the developer's part.
