Concept Programming 78

descubes writes "A recent article asked about improvements in programming. Concept programming is a very simple idea to improve programming: program code should reflect application-domain concepts. What is amazing is not the idea itself, but how often we don't apply it, and how much existing tools and techniques can get in the way without us even realizing it. To be able to represent all concepts equally well, we need tools that don't force a particular, restricted vocabulary on us. The Mozart project is a Free Software project to implement concept-programming development tools. It includes the Coda universal intermediate language, the Melody persistent program representation, the Moka Java-to-Java extensible compiler, and a fairly advanced front-end for a new programming language called XL. In the long run, Mozart can give the Free Software community a foundation for something as powerful as Charles Simonyi's Intentional Programming."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    I apply this principle all the time.

    I write lisp macros, essentially extending the language and customising the object system to incorporate the domain-specific concepts, rolled into a package.

    It's the way many forth coders work too, BTW.

    I find it interesting that Lisp, coming from the really high end, and Forth, coming from the really low end, feel so similar in this respect.

    • Yes, Lisp can do a lot of this, because it has meta-programming (reflective) capabilities built-in. What Lisp or Forth lacks is:

      - A way to adapt the syntax. In Lisp, you write (+ 1 2), not 1 + 2. So if you have the semantic ability to represent concepts, you don't have the syntactic ability.

      - A distinction between the program environment and the meta-program environment. When you create a lambda, its "namespace" is the current program. In Mozart, it need not be.
      • > What Lisp or Forth lacks is:

        Apparently you aren't aware of the reader. This [cmu.edu] is one of many implementations of infix syntax support for any Common LISP.
        • I knew it could be done, but I was not aware that this had been done already. My mistake, I'll fix the web site. And your observation partially invalidates this particular argument.

          Lisp might have the technical capabilities to do concept programming. But based on my limited experience, concept programming is not in the Lisp mindset any more than in other languages. Rather than asking themselves how code could represent user concepts, Lisp programmers spend their time finding ways to turn application concepts into lists or functions. Just like C programmers spend their time turning concepts into pointers and low-level malloc(). This is backwards in both cases.

          The core of my (+ 1 2) argument is that this form is the natural representation of the concept in Lisp. The Lisp users consciously choose mathematical purity over diversity. Saying "everything is a function" or "everything is a list" is mathematically correct. But it is not true from a concept point of view. If you have any doubt, a comment in the package you referred to reads essentially: 1+2 is equivalent to (+ 1 2), but much easier to read according to some folks (emphasis mine). This comment was most certainly written because it was expected that there would be a negative reaction to trying to mess with the sacred notational purity of Lisp. The comment really reads like "Eh, I'm not one of them guys who don't know the one and only truth!"

          Lisp is both more extensible than C, and much more capable of digesting new paradigms, like object-oriented programming. This is no coincidence. I see this as a proof by existence that concept programming can enhance the longevity and adaptability of the code.
  • Uh... (Score:3, Informative)

    by eggstasy ( 458692 ) on Wednesday November 27, 2002 @02:02PM (#4768759) Journal
    Isn't this the whole philosophy behind OO programming? ISTR my OO teacher saying something like "every class should reflect a real-world concept"... but hey, it's been a while. I could be wrong.
    • Re:Uh... (Score:4, Informative)

      by sporty ( 27564 ) on Wednesday November 27, 2002 @02:14PM (#4768888) Homepage
      Problem is, in OOP, everything is an object, or a thing.

      In AOP (aspect-oriented programming), everything is a verb and a matter of flow. I.e., after doing one thing, you do another. Or if this action is taken, that must be taken. Sorta trigger driven.

      In concept, everything seems to be more verb/adjective like. I.e. you wouldn't create a Max object or a Drive object or a Smell object in OOP. You'd create things that have a max() method, or a drive() or smell() method. You'd create the concept of smell() and prolly return something that describes the result of finding the max(), or driving() or smelling().
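
      A rough Python sketch of that distinction (hypothetical names, not from the article): a "noun-first" design hangs the ability on a class, while a "verb-first" design lets the ability stand alone and work on whatever you throw at it.

      # Noun-first: the ability belongs to a class.
      class Playlist:
          def __init__(self, tracks):
              self.tracks = tracks

          def longest(self):
              return max(self.tracks, key=len)

      # Verb-first: the ability is a free-standing concept;
      # any iterable of things with a length can be thrown at it.
      def longest(items):
          return max(items, key=len)

      print(Playlist(["a", "bbb", "cc"]).longest())   # 'bbb' -- the noun owns the verb
      print(longest(["a", "bbb", "cc"]))              # 'bbb' -- the verb stands alone
      print(longest(("wxyz", "z")))                   # 'wxyz' -- works on a tuple too
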
      • > You'd create the concept of smell() and prolly return something that describes the result of finding the max(), or driving() or smelling().

        How's this differ from polymorphism as implemented in languages with type inference?
        • Your base. You are still based on objects with nouns that have the abilities. Reverse it where you have abilities or qualities and depending on what you throw at it, they get evaluated differently.

          It's not to say you can't do it with OOP or AOP, it's just the syntax that is different. Just like you can do procedural or AOP within OOP.. typically you don't.
          • > Your base. You are still based on objects with nouns that have the abilities. Reverse it where you have abilities or qualities and depending on what you throw at it, they get evaluated differently.

            Now it's polymorphism and multiple dispatch. I recognize that every language is basically just syntactic sugar for either Turing machines or the lambda calculus, but I'm just failing to see what's revolutionary here that isn't a well-trod concept with a shiny new label on it.

            Maybe what FP needs is a shiny new label, who knows.
            • It's just syntax and organization that's new. With concept programming, things don't look like objects anymore... just verbs... kinda freaky. I think the concept looks more... Lispish... where you can take a function, throw anything at it and "do something" with what is passed.

              Know what I mean?
              • Re:Uh... (Score:3, Informative)

                by aminorex ( 141494 )
                Yeah, you mean meta-object protocol [lisp.org]
              • but what are the things that you throw at the verbs if they are not nouns?

                I think I'm going to have to read up on this as it sounds interesting.
              • A concept is not something in your code. It is something from the application space. So you don't throw something at a concept. You represent it in the code. And there are many ways to do it.

                For instance, the concept of adding A and B can be represented by:

                A+B: a built-in operation, the preferred form for C
                Add(A, B): a function call, the preferred form for C++ except for built-in types
                (+ A B): a list representation that can be evaluated, the preferred form for Lisp
                A.Add(B): a method invocation, the preferred form for Smalltalk (written A + B in Smalltalk)

                All of these concept representations have subtly different semantics. What does that mean with respect to how well the concept is represented? This is a question that good programmers always ask themselves implicitly, and that concept programming makes explicit.
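
                For what it's worth, here is a small Python sketch (purely illustrative, with made-up names) of one concept wearing three of those representations at once:

                class Money:
                    def __init__(self, cents):
                        self.cents = cents

                    # Operator form: lets callers write a + b.
                    def __add__(self, other):
                        return Money(self.cents + other.cents)

                    # Method form: a.add(b).
                    def add(self, other):
                        return self + other

                # Function form: add(a, b).
                def add(a, b):
                    return a + b

                a, b = Money(150), Money(75)
                print((a + b).cents)    # 225 -- operator representation
                print(a.add(b).cents)   # 225 -- method representation
                print(add(a, b).cents)  # 225 -- free-function representation

                Each form answers "who performs the addition, and for which types?" slightly differently, which is exactly the kind of subtle semantic difference being pointed at.
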
                • Add(A, B): a function call, the preferred form for C++ except for built-in types


                  Slight nit, but this is the basic form in non-OO languages and braindead OO ones like Java. In C++ you just overload the + operator and write A + B like you would expect. At the compiled level, yeah, it's a function call (and you can write it as A.operator+(B) if you really want to).


                  To me, the concept of addition is best represented by A + B, mainly because that best reflects the way we inherently think of the operation.

    • Re:Uh... (Score:3, Informative)

      by KieranElby ( 315360 )

      True, but not every concept is easily represented as an object.

      The 'Concept Programming vs. Objects' [sourceforge.net] page explains how concept programming relates to OOP.

      • Hmm, read that comparison and thought of python, which I'm most intimate with.
        There we have module or global functions (methods), in order to represent something like "max".

        I mean, it's trivial to find "things" which are _not_ well represented by "objects"; that's why pure object-oriented languages are not as omnipresent as not-so-pure ones. Every mathematical function comes to mind, etc. Do we really need a new paradigm for that, and if yes, why don't we just call it "not-so-pure-OO"?

        • The point of the new "paradigm" is to understand how and when to select Python, how and when to select Java, how and when an object-oriented design matches the concepts, and how and when a plain function would be better.

          Today, we do this implicitly, a bit like experienced programmers used function pointers in C to do some unnamed sort of OO.

          What if you make it explicit? Then you realize that any tool or language has built-in limits, but that we learn how to work around them, and then make the workaround part of our mental model. That is a Bad Thing.

          The reason for the Maximum example is that I don't know of a good way to write it in Java. I know of good ways to write it in many other languages. But in Java, all approaches add a lot of useless noise and concepts just to bolt it onto one of the few Java paradigms. The same is true in C or C++ as well. That is undesirable.

          The same limit exists for more complicated concepts in any non-extensible language. Lisp or Python are somewhat more extensible than C or Java, but you can still hit the ceiling pretty easily. In Lisp or Smalltalk, for instance, you would hit it doing math-intensive work, because you don't want to write (+ 1 2) when 1+2 is shorter and nicer.
          • The same limit exists for more complicated concepts in any non-extensible language. Lisp or Python are somewhat more extensible than C or Java, but you can still hit the ceiling pretty easily. In Lisp or Smalltalk, for instance, you would hit it doing math-intensive work, because you don't want to write (+ 1 2) when 1+2 is shorter and nicer.

            I don't want to do language evangelism, but I'd be interested in what is wrong with python. I know it sounds silly to insist on that, but my next question would be why you don't just use python, because I find nothing on that site (didn't read everything, though) which python couldn't do.

            And here [slashdot.org] you give an example about C++ which is a critique against the language, not the underlying "paradigm", AFAIK.

            • I don't want to do language evangelism, but I'd be interested in what is wrong with python

              Nothing is 'wrong' with Python (or C or Java for that matter). Actually, IMHO, it is a very good language (clean syntax, clean semantics). What is wrong is when you use it for what it was not designed for.

              Here are a few extreme examples: would you use Python to describe a page (using it like HTML)? Or to do numeric-intensive computing? For real-time embedded applications? To facilitate your coding of a "memcpy()" implementation in a difficult assembly language like Itanium's? If not, why not? The reasons are probably not the same in each case. Note that for all these examples, I think that XL could perform very well, given the right "plug-ins".

              To use a less extreme example from the web site: if a good fraction of your application domain was expressed using the derivative notation, how would you automate the job of turning these application derivatives into code in Python? I'm not saying that you can't, I'm really asking the question. My limited knowledge of Python doesn't indicate that it would do that very well, but I might be wrong. I'm always ready to steal ideas, you see ;-)
              • What is wrong is when you use it for what it was not designed for.

                Well, that's right with any tool ;).

                Note that for all these examples, I think that XL could perform very well, given the right "plug-ins".

                Well, python is extremely friendly to other languages, esp C, C++, Java. See for instance Extending and Embedding the Python Interpreter [python.org] . So, IMO python lends itself very well to numeric computing (see also the numpy module etc.), as it's easy to first write the code and later implement critical sections in a compiled language.
                For realtime embedded applications, see for instance this thread [google.com] on comp.lang.python. Btw. there are a lot of very friendly and competent people who are always interested in language wars^H^H^H^Hcomparisons. I bet if you asked your question about the derivative notation (which can't be translated literally to python, I'm sure), this might lead to interesting discussions.

                • Btw. there are a lot of very friendly and competent people who are always interested in language wars^H^H^H^Hcomparisons.

                  It's not about language comparisons. You can apply concept programming principles in Python. If you decide to call a method "Draw" rather than "Vhhlqkwz", it is because you apply concept programming implicitly: "Draw" is a better program representation of the concept than "Vhhlqkwz", at least for English readers.
      • The 'Concept Programming vs. Objects [sourceforge.net]' page explains...

        I read through that page but I fail to see how concept programming should be any better than OO programming. Sure, with C++ you cannot easily specify methods that take lists, but it's not because of OO but because of C++ limitations. IMO, max() should be a method of a list or a group, and C++ can do that fine with templates. Does anybody grok the Zen of Concept Programming? Is it really any better than OO programming?

          • IMO, max() should be a method of a list or a group, and C++ can do that fine with templates.

          Are you suggesting that:

          x = (new List<int>).Add(1).Add(2).Add(3).Add(4).max()

          is a better representation of the concept in your code than:

          x = max(1, 2, 3, 4)

          ?
          • Are you suggesting that:
            x = (new List<int>).Add(1).Add(2).Add(3).Add(4).max()
            is a better representation of the concept in your code than:
            x = max(1, 2, 3, 4)

            If you have code like max(1, 2, 3, 4), a simple fix would be replacing it with just 4. Usually, if you have to get max() of a group (unordered) or list, you first generate the group or list somehow. The concept here is that the list has the required method for finding the maximum value instead of some random procedure somewhere else.

            So the code becomes:
            /* thelist is of type list and is initialized somewhere */
            m = thelist.max()

            If you hardcode the list's contents you already know what the maximum value will be. No need to compute it at runtime.

            • You are not implementing the concept I talk about, but a different one, the max of a list. Trying to convince me that this is the right concept because your language makes it easier to manipulate illustrates my point very well: you are not even aware of the "concept cast" that happened there. Blue pill or red pill?

              Let me follow your suggestion for a moment. Consider now something like Max(BUFFER_SIZE, CACHE_SIZE), where BUFFER_SIZE and CACHE_SIZE are two implementation-defined constants.

              - Why do you need a list?
              - Why do you need a method call?
              - Why do you need dynamic dispatch?
              - Why would I need to box BUFFER_SIZE as an Integer?
              - Why should anybody have to implement a "LessThan" method wrapper?
              - Why should I do the computation at runtime, when, as you pointed out, the compiler (but not me) should trivially be able to compute it, for any given platform?
              - Why should I change my code if CACHE_SIZE happens to be sampleSize, a variable?
              - Why should I change the form of my code if I compare three entities and not two?
              - Why should I change the form of my code if the compared entities are not integers but strings?

              The answer to all these questions is "Because of Java", not "Because of the application concept". That is the problem.

              And now, the more important, higher-order questions:
              - Is this an unrealistic example, something that nobody would ever need? In other words, is your "Usually" justified?
              - Do you really think that this problem occurs only with Max, or am I trying to illustrate some general class of problem? Since it is obviously the latter, why try to fix my Maximum implementation? Are you trying to avoid the real issue?
              • max(...)
                max(sequence) -> value
                max(a, b, c, ...) -> value

                With a single sequence argument, return its largest item.
                With two or more arguments, return the largest argument.
                Python 2.1.3 (#1, Jun 11 2002, 10:40:20)
                [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
                Type "copyright", "credits" or "license" for more information.
                >>> max(1,2)
                2
                >>> max((1,2))
                2
                >>> max('a','b')
                'b'
                >>> max((1,2,3),(1,2,3,4))
                (1, 2, 3, 4)
                >>> max((1,2,3,2,5),(1,2,3,4))
                (1, 2, 3, 4)

                # it also works with any object, provided it knows how to compare itself, defined by __cmp__:

                >>> class my_obj:
                ...     def __init__(self,item):
                ...         self.item = item
                ...     def __cmp__(self,other):
                ...         return cmp(self.item,other)
                ...
                >>> a = my_obj(3)
                >>> b = my_obj(4)
                >>> my_max = max(a,b)
                >>> my_max.item
                4
                • Now we are getting to an interesting implementation, at last :-) So let's see what concept programming can help us uncover...

                  One nit: the implementation probably allows comparison between objects A and B of different type, as long as you can compare them. So this is not exactly the same concept, but it's close enough.

                  On the use side, nothing to say. The notation is natural. It is also the same as the one I suggested :-)

                  On the implementation side, I notice that you did not show the first line of the help, which states "built-in function". Why is it a built-in? Can't you write something like that in Python?

                  Another nit: the use of __cmp__ instead of < is not very convincing. The < concept is mapped in a relatively unnatural way.

                  A more serious issue: you made __cmp__ a member of the "my_obj" class. This is, I think, the most natural approach in Python. Back to application space, < is not defined based on the type of its left operand. Why does it matter? If you write A < B, you don't expect the left and right to behave differently with respect to types. In your approach, they do. If I write max(A, B), it might work if A has a __cmp__ even if B doesn't.

                  So the implementation is useful and practical, but concept programming still has some interesting things to teach us about it. All of these comments have interesting applications in terms of long-term maintainability. This is left as an exercise for the reader.
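
                  One way to make that last point concrete is to let the ordering travel with the use of the concept rather than with the left operand's class. A minimal Python sketch (the names are mine, not Python's or XL's):

                  def maximum(first, *rest, less_than=lambda a, b: a < b):
                      """Largest argument under an explicit, caller-supplied ordering."""
                      result = first
                      for item in rest:
                          if less_than(result, item):
                              result = item
                      return result

                  print(maximum(1, 2, 54, 4))                                # 54
                  print(maximum("def", "abc", "hg"))                         # hg
                  print(maximum("fig", "banana",
                                less_than=lambda a, b: len(a) < len(b)))     # banana

                  Neither operand's class has to know anything about the ordering, which is closer to the application-space view of < described above.
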
                  • On the implementation side, I notice that you did not show the first line of the help, which states "built-in function". Why is it a built-in? Can't you write something like that in Python?

                    "Built-in" in Python doesn't mean that you can't implement it in Python; it just means that you don't need to import a module to use it. It would be trivial to re-implement it in Python, I just didn't do that.

                    Another nit: the use of __cmp__ instead of < is not very convincing. The < concept is mapped in a relatively unnatural way.

                    I can't follow here, maybe you misunderstood me, maybe I don't understand you. I don't use __cmp__ instead of <, at most I use cmp instead of <. But
                    cmp(a,b) == (a < b)
                    in Python. I could have also written:
                    return a < b
                    To show a more esoteric example:
                    >>> (3).__cmp__(4)
                    -1
                    >>>
                    >>> (4).__cmp__(4)
                    0
                    >>> (4).__cmp__(3)
                    1
                    The brackets around the ints are there to signal that the 4 is to be used as an instance of its type (class), so we can get at its methods (don't ask me why; this would not be needed with strings etc.). Note that since version 2.2, there are more special comparison functions to distinguish every possible comparison (==, !=, <, <=, >, >=), and that if the left type doesn't have the right method, the right side might have the reflected version (i.e. < instead of >), so this is looked up on the right side, and used if found.
                    And yes, you can destroy the mathematical meaning of these operators with this power, but that's life, as long as you don't deliver a __cmp__ function which fdisks the hard disk as a side effect ;). I think this also answers your next paragraph.
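
                    For readers on newer Pythons: __cmp__ is long gone, but the reflected lookup described above is easy to see with the rich comparison methods. A small sketch (my own example, not from the thread):

                    class Meters:
                        def __init__(self, value):
                            self.value = value

                        def __lt__(self, other):
                            return self.value < float(other)

                        def __gt__(self, other):
                            return self.value > float(other)

                    m = Meters(3)
                    print(m < 5)   # True: Meters.__lt__(m, 5) answers directly
                    print(5 < m)   # False: int.__lt__(5, m) returns NotImplemented, so Python
                                   # falls back to the reflected Meters.__gt__(m, 5), i.e. 3 > 5
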

  • Nice (Score:1, Funny)

    by Anonymous Coward
    But I think I'll stick to Buzzword Programming.
  • At my first engineering job, twenty years ago, the head programmer liked to say, "Sounds like bullshit to me!" It is only when I read about things like this that I miss him...
  • by GusherJizmac ( 80976 ) on Wednesday November 27, 2002 @02:30PM (#4769027) Homepage
    The examples on the site where Concept Programming results in a better solution are quite contrived and not very convincing. They make Concept Programming out to be nothing more than a glorified bowl of syntax sugar (hold the types), and that's not always a good thing.

    The first example discusses the concept of "Maximum", and shows how you would implement that concept in Java, followed by the allegedly superior XL way to do it. The Java "class" makes no sense, and really would not be the way to go about it. You would never want to model the concept of Maximum in that way, but if you did, you would use the already-existing Comparable interface and create a static method called "Max" on some class that takes a list of comparable objects.

    Furthermore, in C, you can model it exactly as they have, since C allows multiple arguments.

    The next example was discussing taking a derivative and how you can translate some incorrect Java syntax that takes a derivative into the Java equivalent. Why not write a method to do this? What is to be gained by using a non-standard syntax? It makes it harder to write (you have to learn something in addition to Java), and harder to read (same reason).

    As for the XL language, and the notion of Concept Programming, it just wasn't explained well at all, and left me saying "what's the big deal? What does this buy me? Where is a real example?" Not every program (dare I say not many programs) is based around mathematical equations and operations. Most involve executing some logic based on input and spitting out output. Modelling that as math seems really counterintuitive (and not in line with the "concept" of your domain).

    Ultimately, seems like some typical academic wank-fest that someone can use to get their Ph.D., but not very applicable in the real world.

    • The first example discusses the concept of "Maximum", and shows how you would implement that concept in Java, followed by the allegedly superior XL way to do it. The Java "class" makes no sense, and really would not be the way to go about it. You would never want to model the concept of Maximum in that way, but if you did, you would use the already-existing Comparable interface and create a static method called "Max" on some class that takes a list of comparable objects.

      I agree that the Java implementation was idiotic at best. One would think that the author was somehow convinced that "maximum", being a noun, would therefore be implemented as its own object in an OO design. Of course, competent OO programmers do not merely select random nouns from sentences to formulate their design, because doing so leads to backwards implementations, as the author has demonstrated. Unfortunately, because the implementation chosen in the Java example was so unrealistic, and the implementation differences are the basis for the comparison, the entire comparison between Concept Programming and OO Programming is worthless.

      I find it interesting that the "conceptual" implementation looks suspiciously similar to C++'s std::max_element(). In fact, it almost seems like the comparison is really between Generic Programming and OO Programming. I think that page needs to contain better examples to illustrate its point.
      • I agree that the Java implementation was idiotic at best.

        Several comments were along these lines. Now a challenge for all you Java experts who discuss my sanity :-) Please give me a good representation of the "Maximum" concept in Java.

        Now that you have thought of something, ask yourself:

        - How large is it? What fraction of the code is useful code, what fraction is syntactic or semantic glue?

        - How difficult is it to use? How much code for the users is irrelevant glue?

        - How restricted is it? Does it work for all types? Does it work for "int", or do I need additional "helpers", like boxing/unboxing? Can I add arbitrary classes? What does it mean for my code if I need more than one "less-than"?

        - How efficient is it? Did I just create 5 objects, two lists, and invoke seven method calls just to compare two numbers?

        - How easy is it to document? Does the structure make sense, compared to the original "Max" idea? Could you explain how it works to your grandmother?

        Here are the approaches that I know about:

        - Having static functions like Max(int, int). This works for a limited number of arguments and a limited number of types. So it soon becomes very verbose. It is still the best method in many cases.

        - Having an interface like Comparable with a method like "Less". This means that I need all sort of ugly boxing, it doesn't work with the natural less-than operator, and it is a bit inflexible, for instance if you want more than one meaning for "less-than".

        - Passing lists and comparators around. It is very inefficient, and the caller has to use an unnatural interface.

        - A few less interesting solutions, like passing a string, or having a Max class where you accumulate values using one method and produce the result using another.

        Please post about any other idea you might have.
        • Several comments were along these lines. Now a challenge for all you Java experts who discuss my sanity :-) Please give me a good representation of the "Maximum" concept in Java.

          The comparator/container approach is probably the best way to implement this "concept". I agree that the Java solution is more verbose than the XL solution, but I am not convinced that this matters much. Java tends to be verbose, but it is still a very easy and readable language.

          "Concept Programming", as I understand it, mainly focuses on extensibility. The web site shows a derivation example, where the language has essentially been extended, allowing it to "properly reflect the concept" of derivation in the source code. While the concept of an extensible language is somewhat interesting, I don't think that altering the fundamentals of a language to suit the current problem is a great idea. Doing so makes the common programmer a language designer, and raises issues concerning complexity, maintenance, and readability. Elegance is nice, but it doesn't necessarily help programmers do their job any better.
          • The goal of the Maximum example is to illustrate how not having the right tool leads to having to work around it. Being more verbose is the problem: if you are more verbose by a factor of 4 on something that simple, what about something more complicated?

            Concept programming doesn't focus on extensibility. Extensibility is a consequence, not a goal.
            • The goal of the Maximum example is to illustrate how not having the right tool leads to having to work around it. Being more verbose is the problem: if you are more verbose by a factor of 4 on something that simple, what about something more complicated?

              Verbosity is not an effective measure of complexity. *cough*Perl*cough*

              I am glad you responded and updated your web site, because I feel like I have a better understanding of "Concept Programming". I still don't understand exactly what methodologies are considered "concept-oriented". I understand that, according to "Concept Programming", program code should represent application concepts, but what programming techniques are utilized to achieve that goal? Is it nothing more than a goal? For "Concept Programming", it almost seems like a programmer would have to use a language that supports every possible programming construct with a syntax that is simple and intuitive to the domain. To do this, one would probably require an extensible language. It sounds like extensibility truly is a goal, or at least a requirement to reach your goal.
    • The next example was discussing taking a derivative and how you can translate some incorrect Java syntax that takes a derivative into the Java equivalent. Why not write a method to do this? What is to be gained by using a non-standard syntax? It makes it harder to write (you have to learn something in addition to Java), and harder to read (same reason).

      Because taking a derivative is not a function. It's a metafunction; it is applied to a function and returns another function. For example: the derivative of sin(x) is cos(x), for all x. This can't be implemented as a method and so using method syntax would be nonsensical.
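
      The "function in, function out" shape is easy to sketch in plain Python once the formulas are reified as data (a toy of my own, not the site's example):

      # Expressions as data: "x" is the variable, numbers are constants, and
      # compound expressions are tuples like ("sin", "x") or ("*", "x", "x").
      def derive(expr):
          """Toy symbolic d/dx over this explicit expression representation."""
          if expr == "x":
              return 1.0
          if isinstance(expr, (int, float)):
              return 0.0
          op = expr[0]
          if op == "+":
              return ("+", derive(expr[1]), derive(expr[2]))
          if op == "*":    # product rule
              u, v = expr[1], expr[2]
              return ("+", ("*", derive(u), v), ("*", u, derive(v)))
          if op == "sin":  # chain rule
              return ("*", ("cos", expr[1]), derive(expr[1]))
          raise ValueError("don't know how to derive %r" % (op,))

      print(derive(("sin", "x")))     # ('*', ('cos', 'x'), 1.0)
      print(derive(("*", "x", "x")))  # ('+', ('*', 1.0, 'x'), ('*', 'x', 1.0))

      The catch, and the point being argued here, is that this only works because the formulas were first turned into data by hand; applying the same transformation to the notation the application domain actually uses (sin(x) written as ordinary code) needs support from the compiler or a preprocessor, which is what the Moka/XL tooling discussed in this thread is for.
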

      I agree that I just didn't get the point of the Maximum example; but the Moka thin-tool application (which is used for the above transformation) looked really cool. Program transformation is a hot topic in the functional programming world; I'd like to see what it can do in the procedural world.

      Oh, yeah, and all programs can be modelled as a function; all Turing-complete languages are equivalent, so a program can be transformed from one language to any other, including the purely functional languages. This is the entire basis behind a lot of computational theory, and so their mathematical representation makes a lot of sense.

      • Because taking a derivative is not a function. It's a metafunction; it is applied to a function and returns another function. For example: the derivative of sin(x) is cos(x), for all x. This can't be implemented as a method and so using method syntax would be nonsensical.

        That's not exactly true. It just depends on how you model it. In any case, you either are creating a generalized derivation toolkit (which you could create in Java or any language), or you do some combination using lookup tables.

        Methods in Java are objects, so you are free to pass them to other methods. Since everything is an object, including methods and classes and variables, you can pass a function to a function and have it return a function. This is also trivial in C.

        The point is that the explanation doesn't really give much motivation for Concept Programming, other than as an academic exercise.

        That transformation tool looks like a souped-up pre-processor (which C has had forever), and not only does it not seem terribly ground-breaking, but it seems like a generally bad idea if you are wanting to produce production-quality maintainable code, especially when the language in the example (Java) already has powerful facilities for you to model your solution without relying on some new syntax and translation step.

        • Methods in Java are objects, so you are free to pass them to other methods. Since everything is an object, including methods and classes and variables, you can pass a function to a function and have it return a function. This is also trivial in C.

          Oh, I'd like to see you try! Since this is so trivial, please give me a definition of a method or function "derive" in either C or Java so that I can write, for any function G, something like:

          function F = derive(G);

          In C++, you can do it for some special forms of G, using template meta-programming. People have written articles about it. It takes thousands of lines of VERY hairy C++ code. In Lisp, you can do it. In Java or C? I don't think so.
    • The examples on the site where Concept Programming results in a better solution are quite contrived and not very convincing. They make Concept Programming out to be nothing more than a glorified bowl of syntax sugar (hold the types), and that's not always a good thing.

      You missed the point of the examples. They are not about syntactic sugar in the XL or Java++ implementations, but rather about the semantic limitations that non-extensible methodologies or tools impose on us. You prove the point further with your examples, by sticking to reference models even if they don't work very well.

      The first example discusses the concept of "Maximum", and shows how you would implement that concept in Java, followed by the allegedly superior XL way to do it. The Java "class" makes no sense, and really would not be the way to go about it. You would never want to model the concept of Maximum in that way, but if you did, you would use the already-existing Comparable interface and create a static method called "Max" on some class that takes a list of comparable objects.

      In order to implement the concept, you have added a lot of noise in the implementation, which includes not only the Comparable interface, but the need to use methods rather than operators. Your method needs to take a List of objects, so you have added noise at the call site as well. None of this code is actually useful in representing the original concept; it is artificial complexity added to stick to the OO model at any cost.

      Furthermore, in C, you can model it exactly as they have, since C allows multiple arguments.

      But the C vararg interface is not object-oriented, so this doesn't invalidate the point. As an additional note, it is not type-safe either, so you need to add more noise to pass the type information, again both in the implementation (ever looked at the code for printf?) and at the call sites (beauties like "a=%s b=%d").

      The next example was discussing taking a derivative and how you can translate some incorrect Java syntax that takes a derivative into the Java equivalent. Why not write a method to do this? What is to be gained by using a non-standard syntax? It makes it harder to write (you have to learn something in addition to Java), and harder to read (same reason).

      As another poster noted, this is a meta-operation, which transforms a concept into code. You can't write a method that does that, unless your method is part of the compiler. Please show me how you would code this in Java.

      Ultimately, seems like some typical academic wank-fest that someone can use to get their Ph.D., but not very applicable in the real world.

      Isn't that exactly what they said about OO when it first came out? BTW, I don't have a PhD.

      Is it useful? If you ever wrote a perl script that scans your source code, or ever used some preprocessor, or something like that, then you have hit the limits of your tools, and concept programming tools would have helped you. On the other hand, if you never did, then you are not a Real Programmer (TM)... I can literally list hundreds of examples of such additional "build tools" in common Open Source projects.
      • Charles Simonyi (the infamous!) appears to think it's more than a wankfest. He has a startup based on what is, in the abstract, essentially the same basic idea.

        But I must disagree about the deficiencies of Java. It's all about style. Admittedly, the lack of varargs-alike syntax in Java is a bit annoying, but it's trivial to work around:

        ComparableSubclass object =
            ComparableSubclass.Maximum(makeList4(a, b, c, d));

        • Admittedly, the lack of varargs-alike syntax in Java is a bit annoying, but it's trivial to work around

          And then, you totally miss the point. It's not about Java not having varargs, it's about the fact that not having varargs forces you to work around it. The resulting code no longer maps correctly to the application concept. It generally contains a lot of noise.

          You only showed a fraction of the total noise with your example. makeList4 looks simple, but how many of these do you need? Do you need makeList27? For floating-point arguments? It gets very verbose by then...

          And then, the only reason it is trivial to work around is because I chose a trivial example to make my point. Take something more complex than maximum, and you have real problems. Compare the Java and Objective-C versions of the MacOS X frameworks to see what I mean...
          • I think there are people who get this, and people who don't. Some people define "syntactic sugar" in such a broad way that you would think there is nothing separating anything from machine language but syntactic sugar.

            Personally, I think of the "power" (ie. expressive power) of a language metaphorically as in physics: power = work / time. In this case, it's how much "work" a program can do divided by the amount of developer "time" it took to write the program.

            I think it's likely that we are not even within an order of magnitude of the power that languages will eventually possess.

            To me, syntactic sugar is an equivalent syntax for some construct that has no value but to save some finite number of keystrokes every time it is used. An example of this is infix operators in OO languages, where we write "3 + 4" rather than "3.plus(4)". (So, clearly, syntactic sugar can be important, since I don't think people would willingly use a language that mandated the latter notation.)

            In contrast, working around the lack of varargs in Java the way your parent post described would require an infinite number of keystrokes every time you do it, because you need an infinite number of makeListXXX methods in order to be truly "equivalent". Therefore, varargs is not just syntactic sugar for the makeListXXX notation in my books; it is fundamentally more powerful. And, even if you could write enough methods to make them essentially equivalent, the time required to write all these methods makes Java, by definition, less powerful when it comes to varargs.
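
            A throwaway Python sketch of that contrast (the make_listN helpers are hypothetical, echoing the makeList4 above):

            # The workaround described above: one helper per arity, forever.
            def make_list2(a, b):        return [a, b]
            def make_list3(a, b, c):     return [a, b, c]
            def make_list4(a, b, c, d):  return [a, b, c, d]
            # ... and so on, for every arity you ever need.

            print(max(make_list4(1, 7, 3, 2)))   # 7 -- noise at every call site

            # With varargs, one definition covers every arity and every comparable type:
            print(max(1, 7, 3, 2))               # 7 -- Python's max is already variadic
            print(max("abc", "def", "hg"))       # 'hg'
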

  • from the justification [sourceforge.net]:
    Projects fail not because of language defects, but because people insist on using the wrong tools. Projects succeed when people use the right tools, not because of the features of these tools.

    I'm not against this point of view, because I have problems thinking for myself ;) ...
    Isn't this the idea behind a rich OS, like a *nix?

    I have all these languages to choose from, that I can glue together many different ways... I don't have to nail myself to A solution, and thereby A drawback.
    I can get the drawbacks of everything, all at once!

    Maybe I'm sadistic, but I like having a lot of tools in my toolbox. I like that if I don't understand a concept in a language, I'm not shut out of the programming flock forever.

    I knew I should have just posted something funny and hung up. Coffee...

  • by Anonymous Coward
    The latest turn in the computer science industry is the new development process called "Market Programming"

    It is designed to make it easier for people who don't have programming experience, but who do work in the Marketing Department, to develop applications independently of a technology or programming staff.

    Market Programming requires a voice-recognition element, because people from the Marketing Department are (seemingly) much clearer when speaking, but the meaning of their thoughts is completely lost when committed to the written word, where it could be systematically analyzed.

    The process amounts to a Marketing Expert speaking into the microphone, or to an individual that the Marketing Expert will treat like a microphone. For example, he might say, "I need a robust, multi-tiered, fault-tolerant, enterprise-class, innovative, xml, j2ee, turn-key, hands-off solution ASAP."

    At this point, the Market Process begins.

    Step 1: BLORK!
    Step 2: Marketing Expert double-clicks on the setup.exe icon to receive and implement the solution.

    NOTE: Step 1 may take a while. Please be patient. The process will initially report the completion time to be 3 months, but it may eventually take 9 months to complete.

  • *Begin Rant*
    I'm not about to say this or any of the other ideas referenced are bad, at least not always. It just seems like an idea that needs more proof, evidence, or something beyond an opinion.

    In physics, chemistry, and other hard sciences, ideas are subjected to far more scrutiny than many of today's ideas in computing. We can come much closer to proving the ideal gas law than theorists have bothered to do with design patterns, OO, concepts, extreme programming, etc. Too often in computing today, an idea is "proven" when some analyst, poohbah or Slashdot has referenced it.

    Since there really isn't any hard math, evidence and such, the idea is much closer to opinion than fact, and as such won't last very long.

    I'd submit that the best approach to programming is to hire good people, and keep them once they learn the product. Give them a real budget with realistic deadlines. Listen to the developers. Think twice before shoving the methodology du jour onto the development team. A good team probably can function more efficiently and effectively without unsubstantiated opinions. When something is *really* substantiated, then developers should and would listen.
    *End Rant*
    • Since there really isn't any hard math, evidence and such, the idea is much closer to opinion than fact, and as such won't last very long.

      Two hard data points:

      - By replacing a "wrong" concept (pointers) with a "better" concept (input/output flags on arguments) for parameter passing, XL shows a 70% improvement over C++ (the reason being that it avoids loads and stores and does everything in registers). This particular fact had been documented previously for languages such as Ada.

      - By replacing a "wrong" concept (hierarchy of classes with left-shift operator) with a "better" concept (type-safe, variable argument lists), XL shows a 7x improvement over C++ for text I/O.

      Your turn: prove that having the freedom to select the right tool for the job is a bad thing :-)
      • Your post is meaningless.

        70% improvement in what? Kittens petted per minute? And is that an increase or a decrease - it rather depends on how you feel about kittens after all.

        A 7x improvement over C++ for text I/O? Again, what are you measuring? Time spent correcting programmer errors? Time spent in meetings deciding a standard syntax for text i/o functions? Time spent by the compiler building the machine code?

        What?
  • Reading around his site I think I'm impressed by the volume of work put in, and (if it is indeed as he claims), the quality of e.g. complex number manipulation.

    Mozart is a poor choice of name, unless it is very very very old - because it is already taken by a programming system (see the Oz language [mozart-oz.org]).

    Also, check out Pliant [pliant.cx] which is (relatively) mature, and does most of what is discussed on the site and a whole lot more.
  • This really is a silver bullet posting.

    Don't you love simple, two-word statements that the majority of your audience understands to mean "someone's become fixated on something that will magically 'fix everything'"?

    I could make exactly the same type of post about any technology or approach.

    When trying to make people aware of an approach that they do not know, do *not* use words such as "amazing". They simply obscure any real information, and turn off precisely those people who it may benefit.

    If I were to run your post through spambayes [sourceforge.net] I suspect it would classify your post as spam on my machine. Although it may have enough useful markers to counteract the obvious spam markers.
  • Shark Sandwich (Score:3, Insightful)

    by ENOENT ( 25325 ) on Wednesday November 27, 2002 @05:00PM (#4770198) Homepage Journal
    So, how will introducing the concept of "concepts" make it any easier to write code? I have no concept of how the concept of concepts will help me to conceptualize programming concepts. A class might be a concept, a function might be a concept, windshield degradation on your racing simulation might be a concept... It appears that "concept programming" is just a fancy way of talking about waving your hands in the air.

    From the REALLY LIMITED amount of information about "concept programming" on the linked site, it appears that the author REALLY REALLY wants to use higher-order functions (a la Scheme or Haskell), but he just doesn't know it.

    • A class might be a concept, a function might be a concept, windshield degradation on your racing simulation might be a concept... It appears that "concept programming" is just a fancy way of talking about waving your hands in the air.

      From the web site:


      Obviously, this definition is quite broad in scope. So it begs the question: what is not a concept? Well, by definition, it is either something from the environment that doesn't matter to your program, or it is something that matters to your program but is not from the application environment.

      What doesn't matter is a large set of the application domain, but it's a difficult lesson to learn for programmers. For instance, a car racing game program doesn't need to have much code dealing with the behavior of the cigarette lighter. The cigarette lighter is not a concept for that program. This example is trivial. But how simplified can the engine model be? Should tires wear out? Can the windshield be damaged? All these are non-trivial decisions.

      What is program-specific includes things like the particular syntax to use. It may also contain elements of the design that are mandated by a particular methodology. These elements can be recognized because they don't translate back. There is no word in the mechanics vocabulary for that semi-colon that terminates statements in C.
      • Uh, yeah. That's what I'm referring to.

        Notice that there is nothing in this description that says anything about what concept programming is. It just vaguely asserts that abstraction is good, which is hardly news.

        • Concept programming is about making sure that the right concepts show up in your code.

          So that is obvious? Well, let's see Hello World in C++:

           #include <iostream>
           int main() {
               std::cout << "Hello World";
           }

          And now let's see Hello World in Basic:

          PRINT "Hello World"

          Which one is closer to the "application concept" in that case?

          Hence Observation #1: Concept Programming sounds easy and obvious, but we just don't do it.

          Now, I can write a "print" in C++ and make it look almost like the BASIC one. So no big deal, right? Well, I can certainly do that for simple stuff, but not for complicated things. You cannot write in C or C++ something that is functionally equivalent to WriteLn in Pascal. In XL you can. You can't write a tasking package that gives you the kind of high-level constructs found in Ada. In XL you can. You cannot have the C preprocessor or compiler compute a symbolic derivative for you. In XL, you can.

          So that is Observation #2: For a given tool or language, there are concepts that I can write, others that I can't. They depend on the tool you select.

          This brings me to Question #1: Can we write programming tools where you can talk about any concept?

           And this is Proof By Implementation #1: XL is a language that is more extensible than even Lisp. I don't know if I can represent any concept, but I can represent, using a single language, models that up to now were natural only in a few "specialized" languages like Ada, Prolog or Lisp.

           Oh, but this is adding abstraction, so it must be slow, right? Well, no.

          Observation #3: The choice of concepts defines the performance you get.

          Observation #4: Higher concepts sometimes enable optimizations that would be impossible without the compiler knowing what it's talking about.

           Proof by Implementation #2: By applying these techniques, XL has already demonstrated, on a few "toy" but common examples, significant performance improvements compared to languages generally considered "fast", like C or C++.

          Is that enough to get you interested again? :-)
          • Forget XL for a moment. My objection is based on the idea of "concept programming". Based on the information on your site, concept programming could be anything at all.

            If concept programming really means anything, then you need to sit down and write up a detailed and exact description of what this is. If your description takes less than 5000 words, go back and see what you left out. Most importantly, don't refer to XL or any examples written in XL, as these will just confuse the issue at this point.

            After you have defined concept programming clearly and exactly, then you should write a paper on XL as an example of a language designed for concept programming. Be sure to explain exactly what you mean when you say that XL is "more extensible than Lisp". You should also compare and contrast XL with languages such as Scheme, Haskell, Dylan, and Smalltalk. Do NOT mention performance or optimization in this paper AT ALL. XL is a language, and languages do not have performance characteristics. Performance is a property of an implementation.

             Finally, when you've defined all this stuff, you can talk about the performance of code generated by the current implementation of your compiler. This document is the least interesting, because, if you are working diligently at it, performance of the next version of your compiler should be better, and so on. So this document will change rapidly, while the first two should change either very slowly or not at all.

            Sorry to be pedantic. I'm trying (now) to help you explain (to me and everybody else) why concept programming is something we should be interested in.
            • If concept programming really means anything, then you need to sit down and write up a detailed and exact description of what this is

              What is the part you don't understand in "Concept programming is the idea that application concepts should be reflected in program code" ? That's much less than 5000 words.

              Based on the information on your site, concept programming could be anything at all.

              This was the "Is it trivial?" objection in my previous reply. It sounds trivial, and we think we do it. In practice, we don't, that's my Observation #1 above. Anybody can, by looking at the C++ vs. BASIC code, and applying the concept programming one-line definition, identify that C++ has a worse "mapping" of the concept than BASIC for that example. You want a Lisp or Haskell or functional languages example? What about the concept of "addition"? Do you think that writing (+ 1 2) is the correct way to represent the concept that everybody on the planet writes as 1+2 ?

              Why can't I just "forget XL for the moment"? Because for some simple examples, XL is the only language where I can express the concept at all. Maximum is a good example. I know of no language which lets you define it "perfectly". Give me any language (and I know a few at least superficially) and write Maximum in that language, and I can probably point out why it doesn't behave enough like the mathematician's concept called "Maximum".

              After you have defined concept programming clearly and exactly, then you should write a paper on XL as an example of a language designed for concept programming. Be sure to explain exactly what you mean when you say that XL is "more extensible than Lisp".

              I gave a clear and exact definition of what concept programming is. I wrote a web site and 50000 lines of code instead of a paper, precisely showing XL as an example of a language designed for concept programming, as you suggested. As for the "more extensible than Lisp", I think you could figure it out if you actually spent the time to read and understand the web site. Ask yourself whether Lisp could really cover real-time programming or numeric computing, and if not, why not. I know the answers, and I have written them on the web site (or at least I tried to...)

              Do NOT mention performance or optimization in this paper AT ALL. XL is a language, and languages do not have performance characteristics. Performance is a property of an implementation.

              Based on several years of experience as a compiler developer, I can tell you that you are very wrong. Languages have performance characteristics. One contribution of my XL performance benchmarks is to illustrate why: because a wrong concept can prevent the compiler from doing the right thing. In Java, one "wrong concept" is the stack-based GC JVM design (which can be contrasted to the Taos/Elate virtual machine design, which did not have similar performance drawbacks, at least theoretically). In C and C++, it is the over-use of pointers which causes something called "aliasing". Lisp, Haskell, Scheme, Dylan and Smalltalk share a single performance-impacting "wrong concept", namely their reliance on a single data structure (or a few of them).

              Sorry to be pedantic.

              You sound more like someone who has been brainwashed by the Lisp school :-) Not being able to think out of the Lisp school is not better, from a concept programming point of view, than not being able to see past pointers and low-level memory management.

              And to validate my claim that if you had explored the web site, you could have found answers, I quote this page [sourceforge.net]:


              The belief in the superiority, universality and elegance of their preferred view of the world is particularly strong among users of "minority" languages, such as Lisp or Objective-C. A Slashdot comment, for instance, read something like: "People who don't know LISP are bound to reinvent it, badly." This kind of statement may be in reaction to the ignorance most people have of how well designed Lisp actually is. Lisp stood the test of time deservedly.

              But no matter how well known the quote is, the objection is easily dismissed. Try and do Prolog-style logic programming in LISP, and you'll end up with a lot of useless effort (compared to using Prolog, of course.) Try and do numeric-intensive programming, and LISP is no good either, not because of its performance, but because mathematicians write 1 + 1 rather than (+ 1 1). Naturally, you could write an expression parser in LISP, but then in C++ and Fortran, you don't have to... Using Lisp for such projects is, in most cases, bigotry.

              • Why can't I just "forget XL for the moment"? Because for some simple examples, XL is the only language where I can express the concept at all. Maximum is a good example. I know of no language which lets you define it "perfectly". Give me any language (and I know a few at least superficially) and write Maximum in that language, and I can probably point out why it doesn't behave enough like the mathematician's concept called "Maximum".
                How about Perl?

                sub Maximum {
                    my $max = shift;
                    for (@_) {
                        $max = $_ if $_ > $max;
                    }
                    return $max;
                }
                or we could just translate your own Maximum version

                sub Maximum {
                    my $N = shift;
                    return $N unless @_;
                    my $result = Maximum(@_);
                    if ($result < $N) {
                        $result = $N;
                    }
                    return $result;
                }
                • Try the following with your code:
                  print Maximum(1, 2, 54,, 4), "\n";
                  print Maximum(1, 2, "asd", 4), "\n";
                  print Maximum("abc", "def", "hg"), "\n";
                  print Maximum("def", "abc", "hg"), "\n";

                  The output is:
                  % ./max
                  54
                  4
                  abc
                  def

                  Yeah! One out of 4! No compiler warning!

                  Bzzzt, you lose :-)

                  If I were nasty, I could also insert a few snide questions here about the deep application concept represented by the subtle difference between $_ and @_ in your code...
                  • Of course, the concept of maximum can only be defined with respect to some partial ordering function. I was using the partial ordering function <, which in Perl orders numbers numerically and orders strings as being equal with 0. If you have some other partial ordering function you'd like to define, I can easily replace the calls to < with calls to some user defined less_than function that does whatever you want, and the code will always return a maximal element (x such that for all y, it is not the case that less_than( y , x )). Of course, you'd rather just say I'm wrong than properly define Maximum or notions of well-defined orderings, so just go ahead and call me wrong because my code didn't do whatever it was you had in mind.

                    Incidentally, the difference between $_ and @_ is a matter of syntax, and every programming language, even your beloved XL, has syntax that has to be learned.

                    • Of course, the concept of maximum can only be defined with respect to some partial ordering function.

                      Good point. The XL Maximum example [sourceforge.net] makes this requirement explicit and verifiable by the compiler. The Perl code you submitted doesn't.

                      I was using the partial ordering function <, which in Perl orders numbers numerically and orders strings as being equal with 0.

                      So you are saying "the problem is not in my code, but in the Perl < concept, which doesn't match the mathematician's <". Why did you decide to use Perl's < then? By the way, your definition of Perl's < is somewhat incomplete: you could have "use overload" or other operator redefinitions too.

                      If you have some other partial ordering function you'd like to define, I can easily replace the calls to < with calls to some user defined less_than function that does whatever you want, and the code will always return a maximal element (x such that for all y, it is not the case that less_than( y , x )).

                      I don't need a somewhat pedantic definition of what a partial ordering is :-) My question is why Perl forces me to write all this less_than stuff if all I want is to compare two strings?

                      Of course, you'd rather just say I'm wrong than properly define Maximum or notions of well-defined orderings, so just go ahead and call me wrong because my code didn't do whatever it was you had in mind.

                      I gave a relatively simple and precise challenge: write a Maximum that resembles a mathematician's understanding of Maximum, using any language. Better yet, I also gave you an example written in XL as a reference. It specifies very precisely "what I have in mind", which includes comparing elements of the same type, in any number, using only <, and rejecting any type for which < doesn't return a boolean.

                      You proposed some Perl code in response. It is my duty to simply point out that your code fails to behave as the mathematician would expect it to. Not because "Of course, I want to prove you wrong", but because it accepts nonsensical input (from the mathematician's point of view), such as Maximum(1, "abc").

                      Incidentally, the difference between $_ and @_ is a matter of syntax, and every programming language, even your beloved XL, has syntax that has to be learned.

                      But the syntax Perl imposes is not exactly close to what the mathematician would expect. Perl mandates the correct understanding of the difference between $_ and @_, which is totally irrelevant to the mathematician. On the other hand, XL tries very hard to ensure that any element of the syntax, if arbitrary, is also necessary. Try to look at the example [sourceforge.net] and find something that you can remove without changing the definition in a way that matters from the mathematician's point of view... I think the only irrelevant item is "C", the name of the boolean variable used for validation of "ordered". All other symbols might be replaced with some equivalent symbol, but not removed.
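
                      For readers who know C++ better than XL, here is a rough approximation of the same constraints using C++20 concepts. This is only an editorial sketch, not the XL definition; the names Ordered and Maximum below are purely illustrative:

                      #include <concepts>

                      // Reject any type whose < does not yield a boolean.
                      template <typename T>
                      concept Ordered = requires(T a, T b) {
                          { a < b } -> std::same_as<bool>;
                      };

                      // Maximum over one or more values of the same Ordered type, using only <.
                      template <Ordered T, std::same_as<T>... Rest>
                      T Maximum(T first, Rest... rest) {
                          T result = first;
                          ((result = (result < rest ? rest : result)), ...);  // keep the largest value seen
                          return result;
                      }

                      // Maximum(1, 2, 54, 4) yields 54; Maximum(1, 2, "asd", 4) is rejected at compile time.

                      It is still weaker than the XL example (string literals, for instance, would compare as pointers), but it shows the kind of compile-time rejection that the mathematician's concept calls for.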
                    • > My question is why Perl forces me to write all this less_than stuff if all I want is to compare two strings?

                      Because it's an application domain concept?

                      Perl is an extensible language. When you need a concept that's not defined in the base language, you write an extension.

                    • I don't need a somewhat pedantic definition of what a partial ordering is :-) My question is why Perl forces me to write all this less_than stuff if all I want is to compare two strings?

                      Ahh, but wait, how does XL know what to do with the < operator? Maybe it works built-in for strings and numbers, but what if I wanted to compare two arbitrary objects? I suppose I'd somehow have to make sure that my objects match the generic type ordered of your max/min code. How would I do that if not by writing a less_than function or overloading the < operator?

                      By the way, I don't understand the generic type ordered in your code at all. What I keep reading is "Something (what? two objects?) is of the generic type ordered if you have two ordered objects and can find out which of them is smaller," but this obviously doesn't make sense.

                      The most readable way of writing down the concept "ordered" in a somewhat formalized way, similar to XL's syntax, would imho be something like:

                      generic type ordered (A, B) if defined(A < B)

                      i.e. two objects A and B are ordered if the < operator is defined for these two objects. But that would be just like having an interface "comparable" in an object-oriented language.
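
                      Spelled out, that "comparable" interface alternative would look roughly like this (an illustrative C++ sketch only; Comparable and Temperature are invented names, not anything from XL):

                      // A nominal interface: every participating type must opt in explicitly and
                      // down-cast, even when a perfectly good operator< already exists.
                      struct Comparable {
                          virtual ~Comparable() = default;
                          virtual bool less_than(const Comparable& other) const = 0;
                      };

                      struct Temperature : Comparable {
                          double kelvin = 0;
                          bool less_than(const Comparable& other) const override {
                              // dynamic_cast throws std::bad_cast if 'other' is not a Temperature.
                              return kelvin < dynamic_cast<const Temperature&>(other).kelvin;
                          }
                      };

                      The structural formulation, "ordered if A < B is defined", would instead accept any type that already has a suitable <, without the explicit opt-in and the casts.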

                      Finally, let me say that I really do like some of the ideas in XL. I will certainly take a closer look at it when I have time. One thing I think you should consider changing is the name "concept programming." Look at all the confusion it generated in this /. discussion :-). What you really mean is simply "expressive programming," i.e. the goal of your project is to enable developers to write more expressive code, as opposed to e.g. forcing them to make everything an object and to sometimes write very verbose code like in Java-the-language. Of course, "expressive programming" is not a new idea, so you'd lose your new-buzzword marketing advantage ;-)

                    • My purpose here is not to expose the details of XL syntax and semantics, which are largely irrelevant. Let me answer your questions once, but if you have further questions, I suggest taking them to the XL mailing list on SourceForge.

                      How does XL know what to do with the < operator? There are operator definitions. For instance, less-than for integers is declared as:

                      function CmpLT(integer A, B) return boolean
                      written A < B

                      I invite you to look at the real thing in the CVS tree. A pragma connects this declaration to the implementation, which is in this case a built-in operation of the compiler back-end. The same mechanism, but generally without the pragma, is used for programmer-defined types.
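
                      To make the parallel concrete for C++ readers: for a programmer-defined type, supplying an operator< plays roughly the role that the "written A < B" declaration plays here. This is only an editorial sketch, and Money is an invented name:

                      struct Money {
                          long cents = 0;
                      };

                      // The "written" form for this type: how the < notation maps onto code.
                      bool operator<(const Money& a, const Money& b) {
                          return a.cents < b.cents;
                      }

                      // With this declaration in scope, Money satisfies an "ordered" constraint like
                      // the one sketched earlier, and a generic Maximum(Money{100}, Money{250}) can
                      // be checked by the compiler.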

                      The way I keep reading it is... but it obviously doesn't make sense

                      It turns out that you read it right. You are telling the compiler what an ordered entity is. Why doesn't this make sense? To implement "Maximum", this is exactly what the compiler needs. It allows the compiler to reject incorrect uses of Maximum, for instance. If you ever passed a wrong argument to a C++ template, you will understand why this is useful.

                      What you really mean is simply "expressive programming" [... which] is not a new idea

                      "Expressive programming" doesn't carry the important notion that your reference is the application domain. A pointer in C is expressive, but I still have problems with a pointer used where the application concept is not that of "reference"... for instance to return values from a function. This problems comes from the concept mismatch, not from whether the code is expressive or not.

                      Also, I'm not trying to do something new, but to do something useful. Some people are doing concept programming today, some are using the terms in a different sense. What matters is that there is a lot of code written today where "my" concept programming shows problems. As long as such code exists, I'll keep trying :-)
              • > you could write an expression parser in LISP, but then in C++ and Fortran, you don't have to...

                You don't have to in Lisp either, since it has already been written.

                In fact, the only reason you don't have to write one in C++ and Fortran is that it has already been written. It happens to be part of the compiler front-end, but that's irrelevant to the point that someone had to write it.

                > Using Lisp for such projects is, in most cases, bigotry.

                It's using a general tool for a specific job. The fact that it is such a generally applicable tool is what makes it so valuable.

                • The existence of parsers is largely irrelevant. Several Lisp engines have been written in C too. You can write in your C code something like:

                  eval("(max 1 2 3 4)");

                  If there is something you don't like in this picture, you might start to "feel" concept programming.
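
                  Spelled out a little more, the picture looks like this. The function lisp_eval is a hypothetical stand-in for an embedded interpreter's entry point, stubbed out so the fragment is self-contained; no real API is being described:

                  #include <iostream>
                  #include <string>

                  // Stub standing in for a real embedded Lisp evaluator.
                  std::string lisp_eval(const std::string& expression) {
                      return "<would evaluate: " + expression + ">";
                  }

                  int main() {
                      // The application concept ("the largest of these numbers") is buried in a
                      // string written in another language's notation: the host compiler cannot
                      // check it, type it, or optimize it.
                      std::cout << lisp_eval("(max 1 2 3 4)") << "\n";
                      return 0;
                  }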
              • > Try and do Prolog-style logic programming in LISP, and you'll end up with a lot of useless effort

                Not really. You end up with a lot less effort than using Prolog, because you get all of Prolog's functionality, plus direct access to a more general underlying framework. See, for example, Schelog [neu.edu] or Poplog [poplog.org].

                Hey, why reinvent the wheel? A dog might not be able to walk past a tree without pissing on it, but I would hope that a software developer could do better than that.... Call me a cock-eyed optimist.
              • What is the part you don't understand in "Concept programming is the idea that application concepts should be reflected in program code"? That's much less than 5000 words.

                And it's a meaningless statement that tells me nothing. Most modern middle-level and high-level programming languages make it possible to reflect application concepts in program code. Even an old despised language like COBOL (which was designed from the ground up to be readable by PHBs) satisfies your definition of "concept programming". And it's more difficult to pull off, but even C code can have this property.

                I think what you're maybe trying to say is that program code should only reflect application concepts and nothing else (such as the other stuff that creeps in regarding types, memory allocations, etc.). This is probably the biggest part of a programming language's "personality": whether it is low-level and best suited to working in the machine domain, or high-level and good for working within an abstract domain. No language seems to do both well, although most at least try.

                But think about what you're doing. You're telling a computer how an application should behave in order to interact with some problem domain that is completely foreign to it. Doesn't it make sense to use a language that lets you specify behaviors in both domains? It's nice to do programming entirely in terms of the application's own semantics, but at some point somebody or something has to generate the machine instructions. If I'm using a language that wants me to forget the underlying machine details, I might write some pretty weird programs. At the very least, I'm putting a lot of trust in the behavior of the compiler.

  • The trouble with all this conceptual stuff is that unless you can get a decent spec, or, at the absolute minimum, useful feedback from the end user, all you end up with is a conceptually elegant and/or implementationally efficient useless program rather than an inelegant and/or inefficient useless program.

    In my experience, being tied into the wrong programming metaphor, or the wrong object set, or whatever, is far worse than spaghetti code. At least with spaghetti, all options are equally possible and equally bad, or good. Once you start having to fight your metaphor or object model, you end up with kludges that make Fortran look like a really beautiful thing.

    And, again in my experience, the only way to get a decent spec is to write the code at least once, wave it at the user, list all the wrong assumptions, throw all the code away and start again. Which is not something coders like very much. But wasn't it that guru guy from Xerox (Kay?) who said that the best thing for software integrity was frequent fires in the backup store?

  • Thanks to all of you in the Slashdot community who offered interesting feedback. I've started taking this feedback into account, and updated the web site.
  • In one place on the XL site, we see "The syntax itself generally matters very little in the long term, as long as it is not completely inane or ambiguous." Elsewhere, we see "Try and do numeric-intensive programming, and LISP is no good either, not because of its performance, but because mathematicians write 1 + 1 rather than (+ 1 1)". So...which of these mutually exclusive claims would the author have us believe?
    • You should believe both, they are not mutually exclusive. Concept programming is about representing the concepts of your application domain correctly. It is a very subtle, grayish, and subjective proposition, not at all the kind of black-and-white thing you are talking about.

      If the main concept of your domain is based on what Lispers call M-exprs (infix notation for math), you should not be forced to use S-exprs (the prefix notation). In the case of Lisp, you can add a macro to use infix notation, so it is more of a strong encouragement, "we know what is best for you, my friend". But the default syntax for that particular application is close to inane. The default semantics is also inappropriate for that domain, by the way; see http://www.ai.mit.edu/docs/articles/good-news/subsubsection3.2.2.4.html, where a rather experienced Lisper has this to say: This example is bitterly sad: The code is absolutely beautiful, but it adds matrices slowly. Therefore it is excellent prototype code and lousy production code. You know, you cannot write production code as bad as this in C.

      Another example: Perl has special notation and semantics for parsing regexps. If you want to parse text, Perl will be better than Lisp: concepts are represented in a more concise way, with tons of implicit semantics like $_ which make a lot of sense in that context. Yet to do general purpose programming, the syntax of Perl is inane at best.
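
      A tiny, purely illustrative comparison: extracting the first number from a line is a one-liner with Perl's domain notation, print "$1\n" if $line =~ /(\d+)/; while a general-purpose language without that notation needs noticeably more ceremony for the same concept:

      #include <iostream>
      #include <regex>
      #include <string>

      int main() {
          std::string line = "order 42 shipped";   // invented sample input
          std::smatch match;
          // The same "find the first number" concept, expressed without any
          // domain-specific notation for text matching.
          if (std::regex_search(line, match, std::regex("(\\d+)"))) {
              std::cout << match[1] << "\n";        // prints 42
          }
          return 0;
      }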

